{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 引言\n",
"\n",
"刚看完\"Python和HDF5大数据应用\"这本书,该篇文章也是个人笔记,记录了h5py在使用时需要注意的性能问题。文章内容大多数摘自书中,代码部分做了略微调整。\n",
"\n",
"本篇文章中,我们使用\"h5py\"模块来使用HDF5。该模块包含了文件、组、数据集以及特征等HDF对象的上层封装类,同时还包含了HDF5的C语言数据结构和函数的底层封装。\n",
"\n",
"在后面的内容中,我们会着重讨论h5py使用中需要避开的性能问题,不会介绍它的基本用法。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import h5py\n",
"from IPython import display"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"f = h5py.File('learn_h5py.h5', 'w')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 第一弹:高效率切片\n",
"\n",
"首先来看在h5py的Dataset对象上切片操作的例子。创建一个形状为(100, 1000)的数组。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<HDF5 dataset \"data1\": shape (100, 1000), type \"<f8\">"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"f['data1'] = np.random.rand(100, 1000) - 0.5\n",
"dset = f['data1']\n",
"dset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们对它进行切片:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(10, 50)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"out = dset[0:10, 20:70]\n",
"out.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"下面说下切片操作背后的细节。\n",
"\n",
"1. h5py计算出结果数组对象的形状是(10, 50);\n",
"\n",
"2. 分配一个空的NumPy数组,形状为(10, 50);\n",
"\n",
"3. HDF5选出数据集中相应的部分;\n",
"\n",
"4. HDF5将数据集中的数据复制给空的NumPy数组;\n",
"\n",
"5. 返回填好的NumPy数组。\n",
"\n",
"从这里可以看到读取数据之前有不少隐藏的开销。我们不仅需要为每一次切片创建一个新的NumPy数组,还必须计算数组对象的大小,检查切片范围不超出数据集边界,让HDF5执行切片查询。\n",
"\n",
"这引出了我们在使用数据集上的第一个也是最重要的性能建议:**选择合理的切片的大小**。\n",
"\n",
"看看下面的例子,思考run1和run2哪个执行效率最高。"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"def run1():\n",
" for ix in range(100):\n",
" for iy in range(1000):\n",
" val = dset[ix, iy]\n",
" if val < 0: dset[ix, iy] = 0\n",
"\n",
"def run2():\n",
" for ix in range(100):\n",
" val = dset[ix, :]\n",
" val[ val < 0 ] = 0\n",
" dset[ix, :] = val"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们来看看它们的执行时间吧"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 11.7 s, sys: 36.9 ms, total: 11.8 s\n",
"Wall time: 11.7 s\n"
]
}
],
"source": [
"%%time\n",
"run1()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 39.2 ms, sys: 4.15 ms, total: 43.3 ms\n",
"Wall time: 38.5 ms\n"
]
}
],
"source": [
"%%time\n",
"run2()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"执行run1用时11.3秒,执行run2用时27.8毫秒,相差大约400倍!\n",
"\n",
"run1进行了100 000次切片操作,run2则仅有100次。\n",
"\n",
"这个例子看上去简单,却是我们走向真实世界代码的第一步:对内存中的NumPy数组切片速度非常快,但对磁盘上的HDF5数据集进行切片读取时就会陷入性能瓶颈。\n",
"\n",
"写入时的步骤略少一些,但基本道理也是一样:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"dset[0:10, 20:70] = out*2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"会产生下列步骤。\n",
"\n",
"1. h5py计算出切片大小并检查是否跟输入的数组大小匹配;\n",
"\n",
"2. HDF5选出数据集中相应的部分;\n",
"\n",
"3. HDF5从输入数组读取并写入文件。\n",
"\n",
"计算切片大小等所有开销也依然存在。对数据集一次写入一个或几个元素必然导致性能的低下。"
]
},
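{
"cell_type": "markdown",
"metadata": {},
"source": [
"A rough timing sketch of my own (not from the book), reusing the `dset` and `out` objects from above: one bulk write of the 10x50 region versus 500 single-element writes of the same values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"# Bulk write: a single HDF5 call covers the whole 10x50 region.\n",
"t0 = time.time()\n",
"dset[0:10, 20:70] = out * 2\n",
"t1 = time.time()\n",
"\n",
"# Element-wise write: 500 separate calls, each repeating the overhead above.\n",
"for i in range(10):\n",
"    for j in range(20, 70):\n",
"        dset[i, j] = out[i, j - 20] * 2\n",
"t2 = time.time()\n",
"\n",
"print('bulk: %.4f s  element-wise: %.4f s' % (t1 - t0, t2 - t1))"
]
},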
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 第二弹:将数据直接读入一个已存在的数组\n",
"\n",
"这里我们介绍一个Dataset对象中的方法`read_direct`,该方法让HDF5将数据填入一个已经存在的数组,并自动进行类型转换。它是非常接近HDF5的C接口的方法。\n",
"\n",
"观察下面两段,它们用来计算数据集中每条记录前50个数据点的中位数。"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<HDF5 dataset \"data1\": shape (100, 1000), type \"<f8\">"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"使用标准的切片技术:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(100, 50)"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"(100,)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"out = dset[:, 0:50]\n",
"display.display(out.shape)\n",
"means = out.mean(axis=1)\n",
"display.display(means.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"或者使用`read_direct`,如下所示:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"out = np.empty((100, 50), dtype=np.float32)\n",
"dset.read_direct(out, np.s_[:, 0:50])\n",
"means = out.mean(axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这两种方法看上去可能只有很小的差异,其实有一个很重要的区别。\n",
"\n",
"第一个方法,h5py每次都会在内部创建一个out数组(也就是创建一个NumPy数组)用于保存切片数据,用完就丢弃。 \n",
"第二种方法的out数组则是由用户分配,且可以被后续的`read_direct`循环利用。\n",
"\n",
"说的大白话一些,第一种方法会重复创建内存空间,而第二种方法由用户主动创建内存空间,h5py只负责往里填充。如果运行多次,第一种方法会多次创建内存空间,出现性能瓶颈。\n",
"\n",
"让我们实际检查一下这种情况下的性能开销吧。我们将创建一个测试数据集以及两个函数。"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<HDF5 dataset \"data2\": shape (10000, 10000), type \"<f8\">"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"f['data2'] = np.random.rand(10000, 10000)\n",
"dset = f['data2']\n",
"dset"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"def time_simple():\n",
" dset[:, 0:5000].mean(axis=1)\n",
"\n",
"out = np.empty((10000, 5000), dtype=np.float32)\n",
"def time_direct():\n",
" dset.read_direct(out, np.s_[:, 0:5000])\n",
" out.mean(axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们让两个函数各执行100次,输出平均耗时。"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 5.31 s, sys: 17.6 s, total: 22.9 s\n",
"Wall time: 22.9 s\n"
]
}
],
"source": [
"%%time\n",
"for _ in range(100):\n",
" time_simple()"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 10.3 s, sys: 6.5 s, total: 16.8 s\n",
"Wall time: 16.8 s\n"
]
}
],
"source": [
"%%time\n",
"for _ in range(100):\n",
" time_direct()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"单从结果上看,相差6秒,26%的提升。当然,最终还是由你自己决定如何优化。上面那个simple的方法可读性肯定高一些。但在对同样形状进行多次读取,特别是数组较大的情况下,它很难打败`read_direct`。"
]
},
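{
"cell_type": "markdown",
"metadata": {},
"source": [
"One more sketch of my own (not from the book): `read_direct` also accepts a destination selection, so several reads can land in different parts of a single preallocated buffer. The buffer name `buf` is made up for this illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fill the two halves of one float32 buffer from two column ranges of the\n",
"# float64 dataset; h5py performs the type conversion on the fly.\n",
"buf = np.empty((200, 50), dtype=np.float32)\n",
"dset.read_direct(buf, source_sel=np.s_[0:100, 0:50], dest_sel=np.s_[0:100, :])\n",
"dset.read_direct(buf, source_sel=np.s_[0:100, 50:100], dest_sel=np.s_[100:200, :])\n",
"buf.mean(axis=1).shape"
]
},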
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 第三弹:分块存储\n",
"\n",
"HDF5数据集默认使用连续存储,数据集里的数据被扁平地存入磁盘。\n",
"\n",
"下面让我们举个栗子来清晰的认识下,我们有一个包含100张480x640的灰度图像的数据集,其形状为(100, 480, 640)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"dset = f.create_dataset(\"Images\", shape=(100, 480, 640), dtype=np.uint8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"使用连续存储的数据集会将图像以具有640个元素的”扫描线“的形式一条一条地保存在磁盘上。如果我们读取第一张图像,切片的代码将会是:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(480, 640)"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"image = dset[0, :, :]\n",
"image.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"下图显示了其背后的工作原理。注意数据以640字节分块存储,这和数据集最后一维的长度相符。当我们读取第一张图像时,我们从磁盘上读取了480个这样的块,全在一个大块里。从下图可以很容易看到应用程序读取整个图像的效率非常高。连续存储的好处在于磁盘上的布局和数据集的形式直接相关:最后一维上索引向前进一步意味着磁盘上的数据向前进一步。\n",
"\n",
"![磁盘上的连续存储](https://s1.ax2x.com/2018/03/10/EDsRE.png \"磁盘上的连续存储\")\n",
"\n",
"从这里引出了我们处理磁盘数据的第一个原则(事实上也是唯一一个原则),位置原则:**如果数据被存储在一起,通常读起来更快**。将数据组织在一起有很多理由,其中一个就是利用操作系统和HDF5自身的缓存。\n",
"\n",
"如果我们不是要处理整个图像而只是一个图像区域呢?假设我们需要读取并处理第一张图像角落上一个64x64的像素块来添加一个标识。\n",
"\n",
"我们的切片选择会是:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(64, 64)"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tile = dset[0, 0:64, 0:64]\n",
"tile.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"下图显示了这种情况下数据是如何读取的。有些不妙。我们的应用程序不得不从各地收集数据,而不是一次性读取一整块连续数据。如果我们想要一次性得到所有图像的64x64区块(`dset[:, 0:64, 0:64]`),我们要一路读到数据集的末尾!\n",
"\n",
"![磁盘上的连续存储](https://s1.ax2x.com/2018/03/10/EDrMN.png)\n",
"\n",
"这个问题最根本的原因在于默认的连续存储机制跟我们的访问模式不匹配。要解决这个问题,我们在保留数据集形状(这在语义上很重要)的同时又能告诉HDF5对64x64的像素块进行访问优化。这就是HDF5**分块存储**的用途。它允许你指定最适合你访问模式的N维形状。当需要对磁盘写入数据时,HDF5将数据分成指定形状的块,然后将它们扁平地写入磁盘。这些块被存放在文件的各地,其坐标由一个B树索引。\n",
"\n",
"让我们以上面的(100, 480, 640)形状的数据集为例。为了告诉HDF5将其分块存储,我们需要为`create_dataset`方法提供一个新的关键字`chunks`:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"dset = f.create_dataset('chunked', shape=(100, 480, 640), dtype=np.uint8, chunks=(1, 64, 64))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"和数据类型一样,这个分块形状的值在数据集创建时就被固定且无法更改。你可以检查`chunks`属性来查看分块形状。如果它是None,意味着数据集并没有使用分块存储:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(1, 64, 64)"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dset.chunks"
]
},
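{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sketch of my own (not from the book) to make the difference concrete: with `chunks=(1, 64, 64)` the 64x64 corner tile of each image maps to exactly one chunk, whereas the contiguous `Images` dataset has to gather the same bytes from many scattered scan lines. Both datasets here are freshly created and still empty, so absolute timings would not mean much; the point is the access pattern."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Read the same 64x64 corner tile from every image, on both layouts.\n",
"tiles_contig = f['Images'][:, 0:64, 0:64]    # scattered scan-line reads\n",
"tiles_chunked = f['chunked'][:, 0:64, 0:64]  # exactly one chunk per image\n",
"tiles_contig.shape, tiles_chunked.shape"
]
},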
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 性能测试:扩展数据集\n",
"\n",
"在数据集创建时我们为了灵活性会将数据集定义为可变形数据集,当创建了可变形数据集时,分块功能会被自动打开,若没有手动指定分块形状,h5py的自动分块器会帮你选择一个分块形状"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"创建一个不可变形的数据集,`chunks=None`。"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"None"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"dset = f.create_dataset(\"chunck1\", shape=(100,480), dtype=np.uint8)\n",
"display.display(dset.chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"创建一个可变形的数据集,不指定`chunks`,h5py的自动分块器会自动选择一个分块形状。"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(50, 240)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"dset = f.create_dataset(\"chunck2\", shape=(100,480), dtype=np.uint8, maxshape=(None, 480))\n",
"display.display(dset.chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"创建一个可变形的数据集,并指定`chunks=(1, 480)`。"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(1, 480)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"dset = f.create_dataset(\"chunck3\", shape=(100,480), dtype=np.uint8, maxshape=(None, 480), chunks=(1,480))\n",
"display.display(dset.chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"可变数据集在日常工作中是经常使用的,当获取新的数据后,我们会将其添加到已有的数据集中,这是非常普遍的操作。下面我们来看看自定义分块形状对性能上的影响吧。\n",
"\n",
"首先创建两个数据集来存储一批数据,每条数据有1000个元素。这两个数据集的第一个纬度都是可扩展的,只是初始大小不同:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"dset1 = f.create_dataset('mydata1', shape=(1,1000), maxshape=(None, 1000))\n",
"dset2 = f.create_dataset('mydata2', shape=(50000,1000), maxshape=(None, 1000))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们再来定义两种添加数据的方法。第一种是简单添加(`add_mydata_1`),第二种是超额分配并在结束时消减数据集(add_mydata_2和done).第二种理论上会快一些,因为调用`resize`的次数少。"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"def add_mydata_1(arr):\n",
" dset1.resize((dset1.shape[0]+1, 1000))\n",
" dset1[-1, :] = arr\n",
"\n",
"nmydata = 0\n",
"def add_mydata_2(arr):\n",
" global nmydata\n",
" dset2[nmydata, :] = arr\n",
" nmydata += 1\n",
"\n",
"def done():\n",
" dset2.resize((nmydata, 1000))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在来测试一下它们的性能"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"data = np.random.random(1000)\n",
"N = 10000 # 测试写入10000次"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"测试第一种方法消耗时间"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 1.12 s, sys: 140 ms, total: 1.26 s\n",
"Wall time: 1.21 s\n"
]
}
],
"source": [
"%%time\n",
"for _ in range(N):\n",
" add_mydata_1(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"测试第二种方法消耗时间"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 2.34 s, sys: 6.26 s, total: 8.6 s\n",
"Wall time: 8.57 s\n"
]
}
],
"source": [
"%%time\n",
"nmydata = 0\n",
"for _ in range(N):\n",
" add_mydata_2(data)\n",
"done()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"从结果看来与我们期望的不太一样,通过查看每个数据集的分块形状,我们得到线索:"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(1, 1000)"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"(782, 32)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display.display(dset1.chunks)\n",
"display.display(dset2.chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"看来自动分块的形状是由数据集的初始大小和一些其他因素决定的。让我们手动指定分块形状,再重试一下。这次我们将两个数据集的分块形状都设为(1, 1000),分块大小都是4k(1000个元素x4个字节):"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"dset1 = f.create_dataset('mydata3', shape=(1, 1000), maxshape=(None, 1000), chunks=(1, 1000))\n",
"dset2 = f.create_dataset('mydata4', shape=(50000, 1000), maxshape=(None, 1000), chunks=(1, 1000))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"测试第一种方法消耗时间"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 1.12 s, sys: 63.4 ms, total: 1.18 s\n",
"Wall time: 1.17 s\n"
]
}
],
"source": [
"%%time\n",
"for _ in range(N):\n",
" add_mydata_1(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"测试第二种方法消耗时间"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 966 ms, sys: 95.8 ms, total: 1.06 s\n",
"Wall time: 1.03 s\n"
]
}
],
"source": [
"%%time\n",
"nmydata = 0\n",
"for _ in range(N):\n",
" add_mydata_2(data)\n",
"done()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这样就好多了。最后不要忘了关闭文件指针"
]
},
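{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a side note (my own sketch, not part of the book's text): instead of calling `close()` by hand as in the next cell, `h5py.File` can be used as a context manager so the file is closed automatically; the file name `example.h5` below is made up."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The file is closed automatically when the with-block exits.\n",
"with h5py.File('example.h5', 'w') as g:\n",
"    g['x'] = np.arange(10)"
]
},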
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"f.close()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (snap)",
"language": "python",
"name": "snap"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}