
@yong27
Last active April 12, 2023 04:35
pandas DataFrame apply multiprocessing
import multiprocessing

import numpy as np
import pandas as pd

def _apply_df(args):
    # Unpack a (chunk, function, kwargs) tuple; Pool.map passes one argument.
    df, func, kwargs = args
    return df.apply(func, **kwargs)

def apply_by_multiprocessing(df, func, **kwargs):
    workers = kwargs.pop('workers')
    pool = multiprocessing.Pool(processes=workers)
    # Split the DataFrame into one chunk per worker, apply the function to
    # each chunk in parallel, then reassemble the partial results in order.
    result = pool.map(_apply_df, [(d, func, kwargs)
                                  for d in np.array_split(df, workers)])
    pool.close()
    pool.join()
    return pd.concat(result)

def square(x):
    return x ** 2

if __name__ == '__main__':
    df = pd.DataFrame({'a': range(10), 'b': range(10)})
    print(apply_by_multiprocessing(df, square, axis=1, workers=4))
    # runs across 4 worker processes
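The correctness of this pattern rests on np.array_split partitioning the rows into contiguous chunks and pd.concat reassembling them in the original order. A quick serial check of that invariant, using the same toy DataFrame as the demo above:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'a': range(10), 'b': range(10)})

    # Partition into 4 row chunks, as apply_by_multiprocessing does per worker.
    chunks = np.array_split(df, 4)

    # Concatenating the chunks reproduces the original DataFrame exactly,
    # index and all, so the parallel result lines up with a plain df.apply.
    reassembled = pd.concat(chunks)
    assert reassembled.equals(df)

Because each chunk keeps its slice of the original index, the concatenated result is row-for-row identical to what a single-process df.apply would return.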
@akhtarshahnawaz
I wrote a package that runs apply methods on Series, DataFrames, and grouped DataFrames across multiple cores. It makes multiprocessing with pandas very easy.

You can check the documentation at https://github.com/akhtarshahnawaz/multiprocesspandas

You can also install the package directly with pip:

pip install multiprocesspandas

Then doing multiprocessing is as simple as importing the package:

from multiprocesspandas import applyparallel

and then calling apply_parallel in place of apply:

def func(x):
    import pandas as pd
    return pd.Series([x['C'].mean()])

df.groupby(["A","B"]).apply_parallel(func, num_processes=30)
