@zaloogarcia
Last active April 19, 2022 16:20
Script for converting a pandas DataFrame to a Spark DataFrame
from pyspark.sql.types import (StructType, StructField, StringType,
                               IntegerType, LongType, FloatType,
                               DoubleType, DateType)

# Auxiliary functions
# Map a pandas dtype to the equivalent Spark type
def equivalent_type(f):
    if f == 'datetime64[ns]': return DateType()
    elif f == 'int64': return LongType()
    elif f == 'int32': return IntegerType()
    elif f == 'float64': return DoubleType()  # DoubleType preserves full float64 precision
    elif f == 'float32': return FloatType()
    else: return StringType()

def define_structure(string, format_type):
    try:
        typo = equivalent_type(format_type)
    except Exception:
        typo = StringType()
    return StructField(string, typo)

# Given a pandas DataFrame, return the equivalent Spark DataFrame
def pandas_to_spark(df_pandas):
    columns = list(df_pandas.columns)
    types = list(df_pandas.dtypes)
    struct_list = []
    for column, typo in zip(columns, types):
        struct_list.append(define_structure(column, typo))
    p_schema = StructType(struct_list)
    # Assumes a SQLContext (or SparkSession) named `sqlContext` already exists
    return sqlContext.createDataFrame(df_pandas, p_schema)
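
For example, a hypothetical usage sketch (the sample data is made up, and it assumes a running Spark context with `sqlContext` defined as above):

import pandas as pd

# Made-up sample frame covering object, int64 and float64 dtypes
df_pandas = pd.DataFrame({
    'name': ['a', 'b', 'c'],        # object  -> StringType
    'count': [1, 2, 3],             # int64   -> LongType
    'score': [0.5, 1.5, 2.5],       # float64 -> DoubleType
})

df_spark = pandas_to_spark(df_pandas)
df_spark.printSchema()
df_spark.show()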
@UltraDiuve

I'd suggest updating this:

equivalent_type_dict = {
    ...
    'float64': DoubleType(),
    'float32': FloatType(),
    ...
}

Because using FloatType will drop some decimals, like this:

Pandas: 9544.145833333334 | Float64
SparkDF: 9544.1455 | float
SparkDF: 9544.145833333334 | double
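
For reference, a minimal sketch reproducing the difference (assumes an active SparkSession bound to the name `spark`; the column name `value` is illustrative):

import pandas as pd
from pyspark.sql.types import StructType, StructField, FloatType, DoubleType

# A float64 value that cannot be represented exactly in 32 bits
pdf = pd.DataFrame({'value': [9544.145833333334]})

for spark_type in (FloatType(), DoubleType()):
    schema = StructType([StructField('value', spark_type)])
    sdf = spark.createDataFrame(pdf, schema)
    # FloatType rounds to ~7 significant digits; DoubleType keeps the value intact
    print(spark_type.simpleString(), sdf.first()['value'])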

Thanks for the suggestion; I updated this part in my code.
