Now we will create our DataFrame from the SQL table and perform analysis similar to what we did with Spark SQL, but using the DataFrames API.

Step 10: Run this command in cmd8 and write a comment above it to explain what it does:

```python
df_adult = spark.table("adult")
```

Step 11: In cmd9, write a command to print the schema of df_adult and write a comment about it.

Step 12: Run the following commands in cmd10 and write a comment above each command to explain what it does (please be as specific as possible when commenting):

```python
from pyspark.sql.functions import when, trim, col, mean, desc

df_divorced_status_by_occupation = df_adult.select(
    df_adult['occupation'],
    when(trim(col('marital_status')) == 'Divorced', 1).otherwise(0).alias('is_divorced')
)

df_divorced_rate_by_occupation = df_divorced_status_by_occupation.groupBy('occupation').agg(
    mean('is_divorced').alias('divorced_rate')
).orderBy(desc('divorced_rate'))

df_divorced_rate_by_occupation.show()
```
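To see what the Step 12 aggregation actually computes, here is a minimal plain-Python sketch of the same logic: flag each row 1/0 by marital status, take the per-group mean of the flag, and sort descending. The sample rows are invented for illustration and are not from the adult dataset; no Spark session is required to run it.

```python
from collections import defaultdict

# Hypothetical sample rows mimicking the 'occupation' and
# 'marital_status' columns of the adult table.
rows = [
    ("Sales", "Divorced"),
    ("Sales", "Married-civ-spouse"),
    ("Tech-support", "Divorced"),
    ("Tech-support", "Divorced"),
]

# occupation -> [divorced_count, total_count]
counts = defaultdict(lambda: [0, 0])
for occupation, marital_status in rows:
    # Mirrors when(trim(col('marital_status')) == 'Divorced', 1).otherwise(0)
    is_divorced = 1 if marital_status.strip() == "Divorced" else 0
    counts[occupation][0] += is_divorced
    counts[occupation][1] += 1

# Mean of the 0/1 flag per group is the divorced rate
# (mirrors groupBy('occupation').agg(mean('is_divorced'))).
divorced_rate = {occ: d / n for occ, (d, n) in counts.items()}

# Sort by rate, highest first (mirrors orderBy(desc('divorced_rate'))).
ranked = sorted(divorced_rate.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

Because the flag is 0 or 1, its mean is exactly the fraction of divorced rows per occupation, which is why `mean('is_divorced')` gives a rate rather than a count.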