{
"metadata": {
"name": "numpy_tricks"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Numpy performance tricks\n",
"========================\n",
"I've been using Numpy for nearly five years, but I'm still learning performance tricks. The reason is that I currently need to deal with very large arrays (hundreds of millions of elements) and the performance of my code started to be disappointing. Through extensive line-by-line profiling, I discovered some subtleties that explain why seemingly harmless lines of code can lead to major bottlenecks. Very often, a small trick significantly improves the performance. Here is what I've learnt. These tips are intended for regular Numpy users rather than pure beginners."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Know how Numpy works\n",
"--------------------\n",
"\n",
"For a beginner, learning Numpy's basics is a matter of days rather than weeks. The notion of a multidimensional array is quite intuitive, as is the notion of computations on vectors and matrices for someone with a mathematics background. Computing an element-wise multiplication of two vectors with `a*b` is actually easier and less error-prone than in a classical imperative programming language, where it would require a `for` loop. Yet, when you need to do complex computations on multidimensional arrays containing millions of elements, it becomes quite valuable to know a bit about Numpy's internals. I don't know much about them, but what I know sometimes allows me to improve the performance of my code.\n",
"\n",
"Here are some simplified facts that are useful to know. Computer memory is basically one-dimensional: bytes are stored consecutively in a one-dimensional memory space and are accessed through memory addresses. A multidimensional Numpy array is stored as a contiguous block of memory, so that two successive elements in the array occupy two successive places in memory. Each element occupies `itemsize` bytes, depending on the **data type** (dtype): 2 for an `int16`, 4 for a `float32` (single-precision floating point number), 8 for a `float64 = double` (double precision), etc. But memory is one-dimensional: how can an nD array be stored in a one-dimensional space? The solution lies in the notions of **shape** and **[stride](http://en.wikipedia.org/wiki/Stride_of_an_array)**. The shape is an n-tuple with the number of elements in each dimension. The stride is an n-tuple with, for each dimension, the number of bytes (the step) that one needs to jump in memory to go from one element to the next in that dimension.\n",
"\n",
"For a one-dimensional vector, the stride is typically `(itemsize,)`, but for higher dimensions there is more than one possible choice. The C-order ([row-major order](http://en.wikipedia.org/wiki/Row-major_order)) and the Fortran-order (column-major order) are two different conventions: elements are stored row after row in C-order, and column after column in Fortran-order. This notion extends to arrays with more than two dimensions. For example, the matrix with [1, 2] on the first row and [3, 4] on the second row is stored internally as [1, 2, 3, 4] in C-order or [1, 3, 2, 4] in Fortran-order. Numpy uses C-order by default, but this can be changed in some Numpy functions with the `order` keyword argument. In the example below, the default integer dtype is `int32` (4 bytes on this system), so the strides of the 2x2 array are `(8, 4)`: stepping to the next row means jumping over one full row of 2 elements (8 bytes), while stepping to the next column means jumping over a single element (4 bytes)."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a = array([[1, 2], [3, 4]])"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 1
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a.size"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 2,
"text": [
"4"
]
}
],
"prompt_number": 2
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a.shape"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 3,
"text": [
"(2, 2)"
]
}
],
"prompt_number": 3
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a.dtype"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 4,
"text": [
"dtype('int32')"
]
}
],
"prompt_number": 4
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a.itemsize"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 5,
"text": [
"4"
]
}
],
"prompt_number": 5
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a.nbytes"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 6,
"text": [
"16"
]
}
],
"prompt_number": 6
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a.strides"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 7,
"text": [
"(8, 4)"
]
}
],
"prompt_number": 7
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"array(a, order='F').strides"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 8,
"text": [
"(4, 8)"
]
}
],
"prompt_number": 8
},
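{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the stride arithmetic described above (a small sketch, not something you would need in everyday code): the element `a[i, j]` lives `i * strides[0] + j * strides[1]` bytes after the start of the array's memory block. We can verify this with the address exposed by the array interface, since the view `a[i:, j:]` starts exactly at `a[i, j]`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# The byte offset of a[i, j] should equal i * strides[0] + j * strides[1]\n",
"# for this C-ordered array.\n",
"base = a.__array_interface__['data'][0]\n",
"i, j = 1, 1\n",
"view = a[i:, j:]  # a view whose data pointer starts at a[i, j]\n",
"offset = view.__array_interface__['data'][0] - base\n",
"offset == i * a.strides[0] + j * a.strides[1]"
],
"language": "python",
"metadata": {},
"outputs": []
},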
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Knowing this is the basis for a few tricks that we'll see below.\n",
"\n",
"\n",
"Beware of array copies\n",
"----------------------\n",
"\n",
"Memory copies happen transparently with Numpy, which is generally quite convenient compared to low-level languages where memory management is mostly up to the developer. But knowing what happens under the hood can sometimes help you fix performance issues. Consider the following example."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a = rand(1000, 1000)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 9
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are two ways to obtain a 1D array from an nD array: `flatten` and `ravel`. The first function always returns a copy, whereas the second one returns a view when possible, which is much faster."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 100 b1 = a.flatten()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"100 loops, best of 3: 9.14 ms per loop\n"
]
}
],
"prompt_number": 10
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 100 b2 = a.ravel()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"100 loops, best of 3: 929 ns per loop\n"
]
}
],
"prompt_number": 11
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"array_equal(a.flatten(), a.ravel())"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 12,
"text": [
"True"
]
}
],
"prompt_number": 12
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you use `flatten` in your code, ask yourself whether you really need a copy; if not, you can use `ravel` instead. The speedup can be significant for large arrays (10000 times faster here!). Be aware, however, that the two results differ slightly: `flatten` gives you a copy, whereas `ravel` gives you a view, so modifying the result of `ravel` also modifies the original array, which is not the case with `flatten`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"b1 = a.flatten()\n",
"b2 = a.ravel()\n",
"# return the address of the memory block\n",
"id = lambda x: x.__array_interface__['data'][0]\n",
"print(id(a), id(b1), id(b2))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"(108199968, 116289568, 108199968)\n"
]
}
],
"prompt_number": 13
},
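{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the view/copy difference concrete, here is a small check using the `a`, `b1` and `b2` defined above: writing into the result of `ravel` modifies `a`, whereas writing into the result of `flatten` does not."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"old = a[0, 0]\n",
"b2[0] = -1.  # b2 is a view: this writes into a\n",
"print(a[0, 0])\n",
"b1[0] = -2.  # b1 is a copy: a is left untouched\n",
"print(a[0, 0])\n",
"a[0, 0] = old  # restore the original value"
],
"language": "python",
"metadata": {},
"outputs": []
},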
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sometimes, even `ravel` needs to do a copy, because the array is not in the specified order (C-order by default). In the following example, `a.T` is in Fortran-order, so that returning a flattened version in C-order implies a memory copy. It is 3-4 times slower than `a.ravel()`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"b = zeros(1000000)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 14
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 100 b[:] = a.ravel()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"100 loops, best of 3: 5.14 ms per loop\n"
]
}
],
"prompt_number": 15
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 100 b[:] = a.T.ravel()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"100 loops, best of 3: 20.4 ms per loop\n"
]
}
],
"prompt_number": 16
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 100 b[:] = a.ravel(order='F')"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"100 loops, best of 3: 19 ms per loop\n"
]
}
],
"prompt_number": 17
},
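{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are unsure whether a given call returned a view or a copy, you can check it explicitly. One way (a quick sketch; `np.may_share_memory` can report false positives, but it is fine for a sanity check) is the following."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# ravel() on the C-ordered array returns a view sharing memory with a,\n",
"# whereas ravel() on the transposed (Fortran-ordered) array must copy.\n",
"print(np.may_share_memory(a, a.ravel()))\n",
"print(np.may_share_memory(a, a.T.ravel()))"
],
"language": "python",
"metadata": {},
"outputs": []
},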
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Know how to use broadcasting\n",
"----------------------------\n",
"\n",
"You know how to use `tile` and `repeat` to do vectorized computations on your arrays. These functions obviously involve array copies. You may use them for some temporary calculations, but you don't always need them: you may be able to use broadcasting instead, for better performance. In the following example, we want to add a copy of `b` to each *column* of `a`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a = rand(10000, 1000)\n",
"b = arange(10000)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 18
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"c = a + b"
],
"language": "python",
"metadata": {},
"outputs": [
{
"ename": "ValueError",
"evalue": "operands could not be broadcast together with shapes (10000,1000) (10000) ",
"output_type": "pyerr",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[1;31mValueError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32m<ipython-input-19-60f555c9e9aa>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0mc\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0ma\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0mb\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[1;31mValueError\u001b[0m: operands could not be broadcast together with shapes (10000,1000) (10000) "
]
}
],
"prompt_number": 19
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Adding `a` and `b` does not work here because they do not have compatible shapes (we'll see what that means in a minute). So a first possibility is to replace the smaller array `b` with an array of the same size as `a` so that we can add them. This involves the creation of a temporary array 1000 times bigger than `b`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 c = a + tile(b.reshape((-1, 1)), (1, 1000))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"10 loops, best of 3: 206 ms per loop\n"
]
}
],
"prompt_number": 20
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can do better thanks to broadcasting: two arrays with different shapes can still be added together, as long as their shapes are compatible. This means that, in each dimension, either the two arrays have the same length, or one of them has length 1, in which case that axis is implicitly\n",
"repeated to match the other array's dimension. This implicit repetition does not involve any copy, so it's about twice as fast here. We can just reshape `b` to make it a column vector, and its shape will then be compatible with `a`'s shape."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 c = a + b.reshape((-1, 1))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"10 loops, best of 3: 107 ms per loop\n"
]
}
],
"prompt_number": 21
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"array_equal(a + tile(b.reshape((-1, 1)), (1, 1000)), a + b.reshape((-1, 1)))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 22,
"text": [
"True"
]
}
],
"prompt_number": 22
},
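{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a side note, reshaping `b` into a column vector can also be written with `newaxis` (or `None`). This is just an equivalent spelling, not a further optimization; some people find it more readable."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# b[:, newaxis] has shape (10000, 1), exactly like b.reshape((-1, 1)),\n",
"# so it broadcasts against a's shape (10000, 1000) in the same way.\n",
"array_equal(a + b.reshape((-1, 1)), a + b[:, newaxis])"
],
"language": "python",
"metadata": {},
"outputs": []
},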
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Faster alternatives to fancy indexing\n",
"-------------------------------------\n",
"\n",
"Fancy indexing lets you extract any portion of an array, even repeated or non-contiguous parts. But it can be slow, and faster alternatives may exist depending on what you're trying to do. Here is an example."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a = rand(10000, 100)\n",
"ind = randint(low=0, high=10000, size=10000)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 23
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 100 b = a[ind,:]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"100 loops, best of 3: 34 ms per loop\n"
]
}
],
"prompt_number": 24
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 100 b = take(a, ind, axis=0)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"100 loops, best of 3: 9.68 ms per loop\n"
]
}
],
"prompt_number": 25
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"array_equal(a[ind,:], take(a, ind, axis=0))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 26,
"text": [
"True"
]
}
],
"prompt_number": 26
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, `take(a, ind, axis=0)` replaces `a[ind,:]` and is 3-4 times faster. Here is another example with boolean masks."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"ind = a[:,0] > .5"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 27
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 b = a[ind,:]"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"10 loops, best of 3: 16.7 ms per loop\n"
]
}
],
"prompt_number": 28
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 b = compress(ind, a, axis=0)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"10 loops, best of 3: 4.92 ms per loop\n"
]
}
],
"prompt_number": 29
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"array_equal(a[ind,:], compress(ind, a, axis=0))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 30,
"text": [
"True"
]
}
],
"prompt_number": 30
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, using `compress` instead of fancy indexing is about 3-4 times faster."
]
},
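{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `take` and `compress`, like fancy indexing, return copies. If the elements you need happen to form a contiguous range, a plain slice such as `a[100:200]` is faster still, since basic slicing returns a view and does not copy any data."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Basic slicing returns a view: the slice shares its memory with a.\n",
"np.may_share_memory(a, a[100:200])"
],
"language": "python",
"metadata": {},
"outputs": []
},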
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Numpy's loadtxt may be slow\n",
"---------------------------\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Numpy's `savetxt` and `loadtxt` functions are quite useful, but they should not be used with large arrays, for which binary files are better suited and much faster. Sometimes, however, you really need to open text files. In these cases, depending on the specific structure of your files, you may be able to write a faster function yourself. In the following example, the custom `loadtxt_fast` function (found [here](http://stackoverflow.com/questions/8956832/python-out-of-memory-on-large-csv-file-numpy)) is about twice as fast as `loadtxt`. It also creates fewer temporary objects and uses less memory. Another solution is to use the great Pandas library, which extends Numpy in different areas, notably with I/O functions that are generally faster and more memory-efficient. This should probably be the subject of a future post."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def loadtxt_fast(filename, dtype=int32, skiprows=0, delimiter=' '):\n",
"    # Generator that yields every value in the file, one item at a time.\n",
"    def iter_func():\n",
"        with open(filename, 'r') as infile:\n",
"            for _ in range(skiprows):\n",
"                next(infile)\n",
"            for line in infile:\n",
"                line = line.rstrip().split(delimiter)\n",
"                for item in line:\n",
"                    yield dtype(item)\n",
"        # Remember the number of columns, taken from the last line read.\n",
"        loadtxt_fast.rowlength = len(line)\n",
"    # Build a flat 1D array from the generator, then reshape it into rows.\n",
"    data = np.fromiter(iter_func(), dtype=dtype)\n",
"    data = data.reshape((-1, loadtxt_fast.rowlength))\n",
"    return data"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 31
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a = randint(low=0, high=1000, size=(100000, 10))\n",
"fn = '_array.txt'\n",
"savetxt(fn, a, fmt='%d')"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 32
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 1 b = loadtxt(fn)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"1 loops, best of 3: 3.23 s per loop\n"
]
}
],
"prompt_number": 33
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 1 b = loadtxt_fast(fn)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"1 loops, best of 3: 1.65 s per loop\n"
]
}
],
"prompt_number": 34
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"array_equal(loadtxt(fn), loadtxt_fast(fn))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 35,
"text": [
"True"
]
}
],
"prompt_number": 35
},
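{
"cell_type": "markdown",
"metadata": {},
"source": [
"For completeness, here is what the binary route mentioned above looks like, as a sketch: we save the same array in Numpy's binary `.npy` format (the `_array.npy` file name is just for this example) and reload it, which for large arrays is typically much faster than any text-based loader."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"fn_npy = '_array.npy'  # example file name\n",
"save(fn_npy, a)"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 1 b = load(fn_npy)"
],
"language": "python",
"metadata": {},
"outputs": []
},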
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Do not use built-in Python functions for lists on arrays\n",
"--------------------------------------------------------\n",
"\n",
"This trick is a fun one. I once had a major performance issue in my code. I had a pretty good idea of where it was; I can't remember the details, but it was quite involved. After a thorough line-by-line profiling session, however, I realized that I was completely wrong and that the bottleneck was actually the computation of the maximum element of an array. I was using `max(a)` instead of `a.max()`, thereby iterating over the array element by element with the built-in Python `max` function instead of using Numpy's vectorized maximum! Using Numpy's `max` function (or the array's `max` method) can be about 100 times faster. This is the kind of mistake you don't make twice."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"a = rand(1000000)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 36
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 max(a)"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"10 loops, best of 3: 273 ms per loop\n"
]
}
],
"prompt_number": 37
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 a.max()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"10 loops, best of 3: 3.2 ms per loop\n"
]
}
],
"prompt_number": 38
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"max(a) == a.max()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "pyout",
"prompt_number": 39,
"text": [
"True"
]
}
],
"prompt_number": 39
},
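{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same pitfall applies to other built-ins such as `sum` and `min`: on Numpy arrays, prefer the array methods (`a.sum()`, `a.min()`) or the corresponding Numpy functions over the Python built-ins."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 sum(a)   # built-in Python sum: iterates element by element"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"timeit -n 10 a.sum()  # vectorized Numpy sum"
],
"language": "python",
"metadata": {},
"outputs": []
},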
{
"cell_type": "markdown",
"metadata": {},
"source": [
"I'm sure there are many more performance tricks out there. But if you want to discover your own: learn a bit more about how Numpy works, and learn how to profile Python code line by line!\n",
"\n",
" > [by Cyrille Rossant](http://cyrille.rossant.net)"
]
}
],
"metadata": {}
}
]
}