  • Save shravankumar147/561a7991c126f0dfe21cc62127757d7d to your computer and use it in GitHub Desktop.
This is a useful function that helps in the feature-selection step of your ML workflow: it counts the null values in every column of a Spark DataFrame in a single aggregation.
from pyspark.sql.functions import col
from pyspark.sql.functions import sum as spark_sum

def count_null(col_name):
    """Return an expression that counts nulls in the given column."""
    return spark_sum(col(col_name).isNull().cast('integer')).alias(col_name)

# Build up a list of column expressions, one per column.
exprs = [count_null(col_name) for col_name in logs_df.columns]

# Run the aggregation. The *exprs converts the list of expressions into
# variable function arguments.
logs_df.agg(*exprs).show()
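For intuition, the same per-column null count can be sketched in plain Python without a Spark session. The `rows` and column names below are made-up sample data, not from the original gist; the dict comprehension mirrors the cast-and-sum trick the Spark expression uses:

```python
# Hypothetical sample rows standing in for a Spark DataFrame.
rows = [
    {'host': 'a.example.com', 'status': 200},
    {'host': None,            'status': 404},
    {'host': 'b.example.com', 'status': None},
]
columns = ['host', 'status']

# For each column, add 1 for every row where the value is missing --
# the plain-Python analogue of sum(col.isNull().cast('integer')).
null_counts = {
    c: sum(1 for row in rows if row[c] is None)
    for c in columns
}

print(null_counts)  # {'host': 1, 'status': 1}
```

Because the Spark version builds one expression per column and passes them all to a single `agg` call, the whole scan happens in one pass over the data rather than one pass per column.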