Useful Pandas Snippets
# List unique values in a DataFrame column
# h/t @makmanalp for the updated syntax!
df['Column Name'].unique()
# Convert Series datatype to numeric (will error if column has non-numeric values)
# h/t @makmanalp
pd.to_numeric(df['Column Name'])
# Convert Series datatype to numeric, changing non-numeric values to NaN
# h/t @makmanalp for the updated syntax!
pd.to_numeric(df['Column Name'], errors='coerce')
# Grab DataFrame rows where column has certain values
valuelist = ['value1', 'value2', 'value3']
df = df[df.column.isin(valuelist)]
# Grab DataFrame rows where column doesn't have certain values
valuelist = ['value1', 'value2', 'value3']
df = df[~df.column.isin(valuelist)]
# Delete column from DataFrame
del df['column']
# Select from DataFrame using criteria from multiple columns
# (use `|` instead of `&` to do an OR)
newdf = df[(df['column_one']>2004) & (df['column_two']==9)]
# Rename several DataFrame columns
df = df.rename(columns={
    'col1 old name': 'col1 new name',
    'col2 old name': 'col2 new name',
    'col3 old name': 'col3 new name',
})
# Lower-case all DataFrame column names
df.columns = map(str.lower, df.columns)
# Even more fancy DataFrame column re-naming
# lower-case all DataFrame column names (for example)
df.rename(columns=lambda x: x.split('.')[-1], inplace=True)
# Loop through rows in a DataFrame
# (if you must)
for index, row in df.iterrows():
    print(index, row['some column'])
# Much faster way to loop through DataFrame rows
# if you can work with tuples
# (h/t hughamacmullaniv)
for row in df.itertuples():
    print(row)
# Next few examples show how to work with text data in Pandas.
# Full list of .str functions: http://pandas.pydata.org/pandas-docs/stable/text.html
# Slice values in a DataFrame column (aka Series)
df.column.str[0:2]
# Lower-case everything in a DataFrame column
df.column_name = df.column_name.str.lower()
# Get length of data in a DataFrame column
df.column_name.str.len()
# Sort dataframe by multiple columns
df = df.sort_values(['col1', 'col2', 'col3'], ascending=[True, True, False])
# Get top n for each group of columns in a sorted dataframe
# (make sure dataframe is sorted first)
top5 = df.groupby(['groupingcol1', 'groupingcol2']).head(5)
# Grab DataFrame rows where specific column is null/notnull
newdf = df[df['column'].isnull()]
# Select from DataFrame using multiple keys of a hierarchical index
df.xs(('index level 1 value','index level 2 value'), level=('level 1','level 2'))
# Change all NaNs to None (useful before
# loading to a db)
df = df.where((pd.notnull(df)), None)
# More pre-db insert cleanup...make a pass through the dataframe, stripping whitespace
# from strings and changing any empty values to None
# (not especially recommended but including here b/c I had to do this in real life one time)
df = df.applymap(lambda x: str(x).strip() if len(str(x).strip()) else None)
# Get quick count of rows in a DataFrame
len(df.index)
# Pivot data (with flexibility about what
# becomes a column and what stays a row).
# Syntax works on Pandas >= 0.14
pd.pivot_table(
    df, values='cell_value',
    index=['col1', 'col2', 'col3'],  # these stay as columns; will fail silently if any of these cols have null values
    columns=['col4'])  # data values in this column become their own column
# Change data type of DataFrame column
df.column_name = df.column_name.astype(np.int64)
# Get rid of non-numeric characters throughout a DataFrame
# (keeps digits, decimal points, and minus signs):
for col in refunds.columns.values:
    refunds[col] = refunds[col].replace('[^0-9.-]+', '', regex=True)
# Set DataFrame column values based on other column values (h/t: @mlevkov)
df.loc[(df['column1'] == some_value) & (df['column2'] == some_other_value), ['column_to_change']] = new_value
# Clean up missing values in multiple DataFrame columns
df = df.fillna({
    'col1': 'missing',
    'col2': '99.999',
    'col3': '999',
    'col4': 'missing',
    'col5': 'missing',
    'col6': '99'
})
# Concatenate two DataFrame columns into a new, single column
# (useful when dealing with composite keys, for example)
# (h/t @makmanalp for improving this one!)
df['newcol'] = df['col1'].astype(str) + df['col2'].astype(str)
# Doing calculations with DataFrame columns that have missing values
# In example below, swap in 0 for df['col1'] cells that contain null
df['new_col'] = np.where(pd.isnull(df['col1']),0,df['col1']) + df['col2']
# Split delimited values in a DataFrame column into two new columns
df['new_col1'], df['new_col2'] = zip(*df['original_col'].apply(lambda x: x.split(': ', 1)))
# Collapse hierarchical column indexes
df.columns = df.columns.get_level_values(0)
# Convert Django queryset to DataFrame
qs = DjangoModelName.objects.all()
q = qs.values()
df = pd.DataFrame.from_records(q)
# Create a DataFrame from a Python dictionary
df = pd.DataFrame(list(a_dictionary.items()), columns = ['column1', 'column2'])
# Get a report of all duplicate records in a dataframe, based on specific columns
dupes = df[df.duplicated(['col1', 'col2', 'col3'], keep=False)]
# Set up formatting so larger numbers aren't displayed in scientific notation (h/t @thecapacity)
pd.set_option('display.float_format', lambda x: '%.3f' % x)

I spent almost three hours trying to do the things you present. Excellent.

This is purely excellent! Thanks!

Awesome :-)

Excellent reference. I have come back to it many times. Thank you!

Awesome. you rock. Thanks for doing this.

Very nice reference -- thanks for sharing!

thanks!

thanks a ton for sharing. this is awesome.

ghost commented Mar 7, 2016

Should valuelist (lines 8 and 12) and value_list (lines 9 and 13) be the same? Either valuelist or value_list?

awesome stuff!!, thank you so much

Most useful demo!!!!!!! Thank you.......

Very useful, keep them coming!

kisna72 commented May 20, 2016

This is very useful. Thanks.

cool

Nice...

mlevkov commented Jul 24, 2016 edited

From line #87, above:

Set DataFrame column values based on other column values

df['column_to_change'][(df['column1'] == some_value) & (df['column2'] == some_other_value)] = new_value

is throwing a SettingWithCopyWarning
see http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
alternatively I recommend that you change this to the following syntax, using .loc
df.loc((df['column1'] == some_value) & (df['column2'] == some_other_value), ['column_to_change']) = new_value

Owner

bsweger commented Aug 13, 2016 edited

@mlevkov Thank you, thank you! Have long been vexed by Pandas SettingWithCopyWarning and, truthfully, do not think the docs for .loc provide enough clear examples for those of us who want to re-write using that syntax.

Your re-write of the example in this gist worked great...just had to change the parens to brackets like so:
df.loc[(df['column1'] == some_value) & (df['column2'] == some_other_value), ['column_to_change']] = new_value

Really, really appreciate you taking the time to pass along this tip. Updated the gist accordingly--no doubt I'll refer back to this example many times!
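For reference, the corrected pattern from this exchange can be run end to end as a self-contained sketch; the column names and values below are made up purely for illustration:

```python
import pandas as pd

# Toy frame with hypothetical columns/values, just to exercise the pattern
df = pd.DataFrame({'column1': [1, 2, 1],
                   'column2': [5, 5, 9],
                   'column_to_change': ['a', 'b', 'c']})

some_value, some_other_value, new_value = 1, 5, 'changed'

# .loc with a boolean mask assigns in place on the original frame,
# which is what avoids the SettingWithCopyWarning
df.loc[(df['column1'] == some_value) & (df['column2'] == some_other_value),
       'column_to_change'] = new_value

print(df['column_to_change'].tolist())  # → ['changed', 'b', 'c']
```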

Thank you so much for this!

Owner

bsweger commented Aug 18, 2016

@ward916 Sorry for seeing your comment so late--yes, value_list was a typo (fixed). Thanks so much for letting me know!

rezastd commented Sep 30, 2016

I have a question: for example, I have a CSV file with columns A-Z. What if I want to select column D and then column G through the rest of the columns? Thank you for sharing 👍
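The thread doesn't answer this, but one approach (a sketch; the single-letter column labels and the pd.concat combination are assumptions, not part of the gist) is label-based slicing with .loc, which is inclusive on both ends:

```python
import pandas as pd

# Hypothetical frame with columns labeled 'A' through 'Z'
df = pd.DataFrame([list(range(26))],
                  columns=[chr(ord('A') + k) for k in range(26)])

# Column D plus every column from G to the end; .loc slices by label,
# so 'G': runs through the last column
subset = pd.concat([df[['D']], df.loc[:, 'G':]], axis=1)

print(subset.columns.tolist())  # 'D' followed by 'G' through 'Z'
```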

ndanturt commented Jan 6, 2017

I signed-up just to thank you !!

This is pretty good. Gracias.

Amazing You saved me a lot of time. Thanks!

Thank you very much for sharing!

naripok commented Feb 25, 2017

And yet another thanks!!!
Thank you very much man!

hughamacmullaniv commented Mar 9, 2017 edited

Good stuff! Thanks!

For looping through rows, if you can work with tuples, try df.itertuples(). Super fast!

This is really useful - thanks!

MarkFeder commented Mar 30, 2017 edited

thanks! really useful 👍

Thanks! 👍

saminaji commented May 4, 2017 edited

Thanks for sharing

Thanks for sharing!

Thank you! Great list, and helped me get through a few troubling issues, and did it with good performance.

Thank you for sharing this.

japhigu commented Jun 25, 2017 edited

I don't think any other gist for "pandas snippets" ranks better. I have one I would like to add, and since pull requests for gists don't canonically exist, I'd like to post it here. Keeping with your formatting:

# Check how many rows in DataFrame contain certain substring s in column col
print(len(df[df['col'].str.contains("s")].index))
# Get indices of rows that contain substring s in column col
print(df[df['col'].str.contains("s")].index.values)

The most helpful script ever! :) Thank you so much, it helped me A LOT.

cddesire commented Aug 4, 2017

good stuffs, help me a lot

This makes a nice cheatsheet. Ever think about using some markup and making a document out of it?

Owner

bsweger commented Sep 18, 2017

@japhigu Thanks for your contribution--will add these! Your notes and @evanleeturner's have made me realize that this info, though useful, would better serve people in a different format, where others can weigh in.

Loved it

dblinde commented Sep 20, 2017

really useful if you want to ETL your data!

Thanks, Daan Blinde.
