Created January 11, 2018 22:13
tl;dr #1:
Write throughput dropped from ~560 MB/s to ~300 MB/s for O_DIRECT and from ~480 MB/s to 8 MB/s for buffered.
tl;dr #2:
* Ubuntu 16.04 HWE kernel changed from 4.8 to 4.13 with the Meltdown fix
* XFS doesn't support the nobarrier mount option in 4.13
* the mount command doesn't complain about that
* a warning is visible in dmesg
* write throughput is much worse on my Samsung 960 EVO SSD with this change
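Because mount accepts the option silently, the only way I know to confirm whether nobarrier took effect is to check /proc/mounts and the kernel log after mounting. A rough sketch (the device path and mount point below are assumptions, and the exact dmesg wording may vary by kernel, so just search for "barrier"):

```shell
# After something like: mount -o nobarrier /dev/nvme0n1p1 /data
# (device and mount point assumed; substitute your XFS filesystem)

# If the kernel honored the option, "nobarrier" shows up in /proc/mounts:
out=$(grep nobarrier /proc/mounts || echo "nobarrier not in effect")
echo "$out"

# On 4.13 XFS ignores the option and logs a warning; dmesg may need root:
dmesg 2>/dev/null | grep -i barrier || true
```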
I use Ubuntu 16.04 on Intel NUC servers. One set of servers has a SATA SSD and uses the regular kernel.
The other set has an NVMe SSD and uses the HWE kernel (needed to make wireless work). Yesterday
I upgraded the servers to understand the impact of the Meltdown fix on MySQL performance and noticed
a problem with IO throughput on the 4.13.0-26 kernel. The problem reproduces with both pti=on and pti=off.
In my case this server has an NVMe SSD. The problem does not reproduce on the 4.4.0-109 kernel, but
those servers use a SATA SSD. The problem occurs for writes but not for reads, and is worse
for buffered writes than for O_DIRECT.
Servers I used are described at:
* http://smalldatum.blogspot.com/2017/05/small-servers-for-database-performance.html
* NUC5i3ryh has a Samsung 850 EVO using SATA and the regular kernel (4.4.0-109 has the pti fix, 4.4.0-38 does not)
* NUC7i5bnh has a Samsung 960 EVO using NVMe and the HWE kernel (4.13.0-26 has the pti fix, 4.8.0-36 does not)
Tests are:
* drr.1 - random read, O_DIRECT, 16kb blocks, 1 thread
* drr.4 - random read, O_DIRECT, 16kb blocks, 4 threads
* drw.1 - random write, O_DIRECT, 16kb blocks, 1 thread
* drw.4 - random write, O_DIRECT, 16kb blocks, 4 threads
* brw.4 - random write, buffered IO, 16kb blocks, 4 threads
* dseq - sequential rewrite, O_DIRECT, 1 thread
Command lines:
sysbench fileio --file-num=1 --file-test-mode=rndrd --file-extra-flags=direct prepare
sysbench fileio --file-num=1 --file-test-mode=rndrd --file-extra-flags=direct --max-requests=0 --num-threads=1 --max-time=60 run > drr.1
sysbench fileio --file-num=1 --file-test-mode=rndrd --file-extra-flags=direct --max-requests=0 --num-threads=4 --max-time=60 run > drr.4
sysbench fileio --file-num=1 --file-test-mode=rndwr --file-extra-flags=direct --max-requests=0 --num-threads=1 --max-time=60 run > drw.1
sysbench fileio --file-num=1 --file-test-mode=rndwr --file-extra-flags=direct --max-requests=0 --num-threads=4 --max-time=60 run > drw.4
sysbench fileio --file-num=1 --file-test-mode=rndwr --file-fsync-all=on --max-requests=0 --num-threads=4 --max-time=60 run > brw.4
sysbench fileio --file-num=1 --file-test-mode=seqrewr --max-requests=0 --num-threads=1 --max-time=60 run > dseq
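The worst regression is brw.4, the buffered random-write test with an fsync after every write. This is not sysbench itself, just a minimal Python sketch of what that workload does, with the file size and duration scaled way down (sysbench defaults to a much larger file and the runs above use --max-time=60):

```python
import os
import random
import time

BLOCK = 16 * 1024            # 16 KB writes, matching the tests above
FILE_SIZE = 8 * 1024 * 1024  # small file for the sketch
DURATION = 1.0               # seconds; the gist uses --max-time=60

path = "test_file.0"
# prepare phase: allocate the file
with open(path, "wb") as f:
    f.write(b"\0" * FILE_SIZE)

fd = os.open(path, os.O_WRONLY)  # buffered IO: no os.O_DIRECT here
buf = os.urandom(BLOCK)
nblocks = FILE_SIZE // BLOCK
writes = 0
start = time.time()
while time.time() - start < DURATION:
    # pick a random 16 KB-aligned offset and overwrite it
    os.lseek(fd, random.randrange(nblocks) * BLOCK, os.SEEK_SET)
    os.write(fd, buf)
    os.fsync(fd)             # --file-fsync-all=on: fsync after every write
    writes += 1
elapsed = time.time() - start
os.close(fd)
os.remove(path)

mb_s = writes * BLOCK / elapsed / 1e6
print(f"{writes} writes, {mb_s:.1f} MB/s")
```

The per-write fsync is why a change in how the filesystem issues cache-flush/barrier requests shows up so dramatically in brw.4.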
These tables list MB/s read from or written to storage for each test. Note the bad
results for drw.1, drw.4, brw.4 and dseq on NUC7i5bnh for kernels with the pti fix.
NUC5i3ryh
drr.1  drr.4  drw.1  drw.4  brw.4  dseq   kernel
119    271    212    233    217    347    4.4.0-109, pti=on
122    272    214    236    218    348    4.4.0-109, pti=off
115    271    214    236    217    324    4.4.0-38, no pti support

NUC7i5bnh
drr.1  drr.4  drw.1  drw.4  brw.4  dseq   kernel
148    459    217    297    8      213    4.13.0-26, pti=on
155    466    238    309    8      229    4.13.0-26, pti=off
151    450    536    568    482    756    4.8.0-36, no pti support
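To put the NUC7i5bnh write regression in perspective, the relative drops from 4.8.0-36 to 4.13.0-26 (pti=on) can be computed directly from the table above:

```python
# Throughput in MB/s from the NUC7i5bnh table above
old = {"drw.1": 536, "drw.4": 568, "brw.4": 482, "dseq": 756}  # 4.8.0-36
new = {"drw.1": 217, "drw.4": 297, "brw.4": 8, "dseq": 213}    # 4.13.0-26, pti=on

for test in old:
    drop = 100 * (1 - new[test] / old[test])
    print(f"{test}: {old[test]} -> {new[test]} MB/s ({drop:.0f}% drop)")
```

Every write test loses roughly half its throughput or more, and brw.4 loses about 98%.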