MongoDB Performance on ZFS and Linux

Here at Clock we love ZFS, and have been running it in production on our Linux file servers for several years. It provides us with numerous excellent features, such as snapshotting, incremental send/receive, and transparent compression. With the recent release of Ubuntu Xenial 16.04, official support for ZFS has arrived, and we are keen to integrate it fully into our next-generation hosting stack.

As a Node.js and MongoDB house, one of our main concerns has been how MongoDB will perform on ZFS on Linux, especially after reading about potential problems other people have faced. There really isn't much data out there to put our minds at rest.

We decided to set up a method of benchmarking MongoDB on the officially supported EXT4 and XFS filesystems, then compare the results against ZFS with different options enabled. The idea is that we can hopefully work out how ZFS compares, and whether there are any options we can set that affect performance in any noticeable way.

There are a few caveats to our testing, so these results need to be taken with a pinch of salt. They are intended as an indicator of the relative performance of the filesystems, not a definitive guide to which is best to use.

Setup

The main variable that may affect the results is the hardware we chose to use. We spun up a 4GB Linode instance with four cores and four virtual disks: one for the latest Ubuntu 15.10 image (which we then upgraded to 16.04), and one for each of the filesystems we intended to test: EXT4, XFS and ZFS.

The issue with this approach is that the system is running on a virtualised machine with shared hardware, so there may be variations in the performance available to the machine. In an ideal world we would run this on a physical machine with identical disks, but that wasn't feasible for this investigation.

We used the latest stable version of MongoDB, 3.2.5; ZFS was at version 0.6.5.6, as provided by the zfsutils-linux package for Xenial.

To benchmark the performance we investigated a few options, such as YCSB, and even considered writing our own benchmark based on examples of our real-world data and queries. However, we settled on a Java-based tool, sysbench-mongodb. This made it easy to configure and run consistent, repeatable tests that would push the database to its limits.
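If you want to try it yourself, getting sysbench-mongodb up and running looks roughly like this (the repository layout and script names are as we remember them, so treat this as indicative rather than exact):

# Requires a JDK and the MongoDB Java driver on the classpath
git clone https://github.com/tmcallaghan/sysbench-mongodb.git
cd sysbench-mongodb
# point config.bash at the mongod instance under test, then:
./run.simple.bash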

Methodology

First, the drives were mounted to directories reflecting their filesystems, which made it easy to switch the filesystem that MongoDB was using.

Filesystem  Type  Size  Used  Avail  Use%  Mounted on
/dev/sda    ext4  7.7G  1.8G  5.5G   25%   /
/dev/sdc    ext4   20G  44M   19G    1%    /ext4
/dev/sdd    xfs    20G  33M   20G    1%    /xfs
tank        zfs    19G  0M    19G    0%    /zfs

The drives were set up and formatted with the default options of mkfs.ext4, mkfs.xfs and zpool create. We then wrote a script, which can be found here, to utilise these disks with the sysbench-mongodb utility and log the results. If you want to see the specific commands that we used, please have a look at the script.
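For illustration, the formatting and mounting amounted to something like the following (the device names for the EXT4 and XFS disks are taken from the table above; /dev/sde for the ZFS disk is an assumption, as it doesn't appear in the df output):

# Format the EXT4 and XFS test disks with default options
mkfs.ext4 /dev/sdc
mkfs.xfs /dev/sdd

# Mount them at the directories shown above
mkdir -p /ext4 /xfs
mount /dev/sdc /ext4
mount /dev/sdd /xfs

# Create the ZFS pool with default options, mounted at /zfs
zpool create -m /zfs tank /dev/sde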

The script works by destroying and recreating the ZFS volume with the option being tested, then starting a mongod instance using the filesystem's mount point as the dbpath, for example mongod --directoryperdb --dbpath /zfs. It then runs sysbench-mongodb and pulls out the results of the run.
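As a rough sketch of what a single ZFS run looks like (the linked script is the authoritative version; the option handling, log paths and shutdown handling here are simplified):

# Recreate the pool with the option under test, e.g. compression=lz4
zpool destroy tank
zpool create -m /zfs -O compression=lz4 tank /dev/sde

# Start mongod against the ZFS mount point
mongod --directoryperdb --dbpath /zfs --fork --logpath /var/log/mongod-zfs.log

# Run the benchmark and keep the output for later analysis
(cd sysbench-mongodb && ./run.simple.bash) | tee ~/results/zfs-lz4.log

# Shut down mongod cleanly before the next run
mongod --dbpath /zfs --shutdown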

The parameters we decided to test for ZFS are listed below:

Defaults (ashift = 0, recordsize = 128K, compression = off)
Defaults & ashift = {9,12}        
Defaults & recordsize = {8KB,64KB}        
Defaults & compression = {lz4,gzip}        

We also edited the sysbench-mongodb config ever so slightly. We opted to use the FSYNC_SAFE write concern to ensure that the data was actually written to disk, not just held in RAM. We also reduced the number of documents per collection to 1,000,000, tenfold less than the default 10,000,000. This was simply to save time on each “Load” step, which we aren't too concerned about as our applications are principally read-heavy.
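In sysbench-mongodb's config.bash the two changes amount to something like the following (the variable names are from memory and may not match the current version exactly, so check the file itself):

# sysbench-mongodb config.bash (variable names approximate)
export WRITE_CONCERN=FSYNC_SAFE              # flush every write to disk
export NUM_DOCUMENTS_PER_COLLECTION=1000000  # down from the default 10000000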

We ran the benchmark ten times for each filesystem, recording the final cumulative average of inserts per second for the “Load” stage and the final cumulative average of transactions per second for the “Execute” stage. Averaging these across the ten runs gave us a representative figure for each filesystem.
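For example, if the ten per-run figures for one filesystem are extracted into a file with one number per line (a hypothetical load-inserts.txt), the representative average is simply their mean:

awk '{ sum += $1 } END { print sum / NR }' load-inserts.txt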

Results


You can download the raw data here if you want to perform your own analysis of the results.

As we suspected, ZFS doesn’t perform quite as well as the other filesystems, but it is worth noting that with the default settings it is only slightly slower. Most importantly, we didn’t uncover any of the show-stopping performance issues hinted at in the discussions linked above. Unless you need the utmost performance from your queries, ZFS certainly looks to be a viable option. Moreover, we feel that the benefits gained by using ZFS are more than worth the minor performance penalty.

This investigation has been far from definitive, but hopefully it has given you a rough overview of how these filesystems perform. If you know of ways to help us improve the results, or the performance of MongoDB on ZFS, please do let us know; we are keen to hear about your experiences!
