pd vs np for detecting null
In [16]: data_pd = pd.Series(np.random.rand(10000000))

In [17]: data_pd[data_pd < .2] = np.nan

In [18]: %timeit np.isfinite(data_pd)
10 loops, best of 3: 27.3 ms per loop

In [19]: %timeit pd.notnull(data_pd)
100 loops, best of 3: 11.2 ms per loop

In [20]: # check for equality

In [21]: (pd.notnull(data_pd).values == np.isfinite(data_pd)).all()
Out[21]: True

In [22]: data_np = np.random.rand(10000000)

In [23]: data_np[data_np < .2] = np.nan

In [24]: %timeit pd.notnull(data_np)
100 loops, best of 3: 11.2 ms per loop

In [25]: %timeit np.isfinite(data_np)
10 loops, best of 3: 24.6 ms per loop

In [26]: (pd.notnull(data_np) == np.isfinite(data_np)).all()
Out[26]: True
You're running the operations on the pandas object. If you run them on the underlying ndarray instead, the reverse is true, and it's not even close: NumPy is roughly 25x faster.
obj = pd.Series([4, np.nan, 7, np.nan, -3, 2])
%timeit np.isnan(obj).values # Note this one will perform similarly to the Pandas case
%timeit obj.isnull()
%timeit np.isnan(obj.values)
30.5 µs ± 2.87 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
32.9 µs ± 2.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
1.15 µs ± 36.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
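A quick sanity check (my own addition, not part of the original benchmark) that all three variants above produce the same boolean mask, so the speed difference comes purely from where the computation runs, not from what it computes:

```python
import numpy as np
import pandas as pd

obj = pd.Series([4, np.nan, 7, np.nan, -3, 2])

# All three approaches flag the same elements as NaN;
# only the container (Series vs ndarray) differs.
mask_series = np.isnan(obj).values   # ufunc applied through pandas, then unwrapped
mask_pandas = obj.isnull().values    # pandas' own null check
mask_numpy = np.isnan(obj.values)    # ufunc applied directly to the ndarray

assert (mask_series == mask_pandas).all()
assert (mask_pandas == mask_numpy).all()
```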
obj = pd.Series([4, 6.5, 7, 3.25, -3, 2])
%timeit obj.div(obj.iloc[::-1])
%timeit obj.values / obj.iloc[::-1].values # Uses the numpy div ufunc
342 µs ± 15.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
33.6 µs ± 494 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit obj.sum()
%timeit np.sum(obj.values)
49 µs ± 867 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
2.7 µs ± 66.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit obj.mean()
%timeit np.mean(obj.values)
22.8 µs ± 885 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
5.46 µs ± 52.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5], [np.nan, np.nan], [0.75, -1.3]], index=['a','b','c','d'], columns=['one','two'])
%timeit np.nansum(df.values, axis=0) # Note I switched up the order to make it clear it's not a display fluke
%timeit df.sum()
13.3 µs ± 180 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
94.8 µs ± 1.78 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
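One detail worth confirming (again my own check, not from the original thread): the two calls being timed are actually equivalent, because `df.sum()` skips NaN by default (`skipna=True`), which is exactly what `np.nansum` does on the underlying array:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5], [np.nan, np.nan], [0.75, -1.3]],
                  index=['a', 'b', 'c', 'd'], columns=['one', 'two'])

# Column-wise sums agree: pandas skips NaN by default,
# matching np.nansum on the raw ndarray.
assert np.allclose(np.nansum(df.values, axis=0), df.sum().values)
```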
And here's your own test case:
data_np = np.random.rand(10000000)
data_np[data_np < .2] = np.nan
data_pd = pd.Series(data_np)
# On the Pandas object
%timeit np.isfinite(data_pd.values)
%timeit ~np.isnan(data_pd.values)
%timeit pd.notnull(data_pd)
# On the Numpy object
%timeit np.isfinite(data_np)
%timeit ~np.isnan(data_np)
%timeit pd.notnull(data_np)
10.6 ms ± 517 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
14.5 ms ± 256 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
14.5 ms ± 669 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
10.4 ms ± 583 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
14.5 ms ± 179 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
14.8 ms ± 307 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
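The timings suggest, and a quick check confirms, that for a float ndarray `pd.notnull` and `~np.isnan` produce identical masks (hence the matching ~14.5 ms rows), while `np.isfinite` is a subtly different test: it also rejects ±inf, which `notnull`/`isnan` treat as valid values. A minimal sketch:

```python
import numpy as np
import pandas as pd

data_np = np.random.rand(1000)
data_np[data_np < .2] = np.nan

# With only finite values and NaN present, all three masks agree...
assert (pd.notnull(data_np) == ~np.isnan(data_np)).all()
assert (pd.notnull(data_np) == np.isfinite(data_np)).all()

# ...but isfinite additionally rejects infinities,
# which notnull and ~isnan count as non-null.
with_inf = np.array([1.0, np.inf, np.nan])
assert pd.notnull(with_inf).tolist() == [True, True, False]
assert np.isfinite(with_inf).tolist() == [True, False, False]
```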
Running:
- Python 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)] on win32
- numpy=1.13.3=py36h4a99626_2
- pandas=0.22.0=py36h6538335_0
- anaconda-client=1.6.9=py36_0
- anaconda=custom=py36h363777c_0
- anaconda-navigator=1.7.0=py36_0
- anaconda-project=0.8.2=py36hfad2e28_0
Wow, nice experiment, and a good result to know!