# imports needed for this snippet (a SparkContext `sc` is assumed to exist,
# e.g. from a running PySpark shell)
from pyspark.sql import SQLContext, Row

# create the Spark SQL context
sql_context = SQLContext(sc)
# split each line of the raw CSV data into a list of fields
# (raw_data is assumed to be an RDD of CSV lines loaded earlier, e.g. via sc.textFile)
csv_rdd = raw_data.map(lambda row: row.split(','))
# inspect the top 2 rows
csv_rdd.take(2)
# cast each column to its proper datatype and wrap the record in a Row
parsed = csv_rdd.map(lambda r: Row(age=int(r[0]),
                                   blood_group=r[1],
                                   city=r[2],
                                   gender=r[3],
                                   id_=int(r[4])))
# top 5 rows
parsed.take(5)
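
# (sketch, not in the original gist) one way the Row RDD could be turned into a
# DataFrame using the sql_context created above, enabling schema inspection and
# SQL-style queries
df = sql_context.createDataFrame(parsed)
df.printSchema()
df.show(5)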