@kenn
Last active August 29, 2015 14:01
DigitalOcean Disk Performance (May 2014)

Degradation of DigitalOcean Disk Performance

DigitalOcean's disk performance has gotten an order of magnitude worse. Compare the following test results with the ones I ran last year, when they first started to support Virtio:

DO has probably started throttling I/O on the cheaper droplets, but the results are poor overall.

# hdparm -tf /dev/disk/by-label/DOROOT

/dev/disk/by-label/DOROOT:
 Timing buffered disk reads: 132 MB in  3.00 seconds =  44.00 MB/sec
 Timing buffered disk reads: 634 MB in  3.01 seconds = 210.90 MB/sec
 Timing buffered disk reads: 904 MB in  3.02 seconds = 299.42 MB/sec
 Timing buffered disk reads: 1148 MB in  3.00 seconds = 382.39 MB/sec
 Timing buffered disk reads: 1000 MB in  3.00 seconds = 332.94 MB/sec
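
For what it's worth, hdparm's -t flag times buffered sequential reads, and -f syncs and flushes the buffer cache for the device on exit, so back-to-back runs give reasonably independent samples. A minimal sketch of how the five samples above can be collected, using the same device path:

# for i in $(seq 5); do hdparm -tf /dev/disk/by-label/DOROOT | grep Timing; done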

# bonnie++ -b -u root

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
eme-staging-test 1G   711  96  5753   1 22303   3  3138  98 726778  33 11446 164
Latency             32335us      136s    4698ms   16416us   45819us     435ms
Version  1.96       ------Sequential Create------ --------Random Create--------
eme-staging-test    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   808   4 +++++ +++  1055   6  1192   6 +++++ +++   811   3
Latency              2100ms     627us     901ms     622ms      31us    2970ms
1.96,1.96,eme-staging-test,1,1400509579,1G,,711,96,5753,1,22303,3,3138,98,726778,33,11446,164,16,,,,,808,4,+++++,+++,1055,6,1192,6,+++++,+++,811,3,32335us,136s,4698ms,16416us,45819us,435ms,2100ms,627us,901ms,622ms,31us,2970ms
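
The sequential block output above (5753 K/sec) is the most alarming figure. As a quick sanity check independent of bonnie++, a dd run that flushes to disk gives a comparable sequential-write estimate; a minimal sketch (file path and size are arbitrary):

# dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
# rm /tmp/ddtest

conv=fdatasync forces the data to physical storage before dd reports its rate, so the number isn't inflated by the page cache.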

Compare it with Linode, which just introduced SSD-based instances. Linode is a lot faster, even faster than the original DO results.

# hdparm -tf /dev/root

/dev/root:
 Timing buffered disk reads: 3042 MB in  3.00 seconds = 1013.74 MB/sec
 Timing buffered disk reads: 2960 MB in  3.00 seconds = 986.54 MB/sec
 Timing buffered disk reads: 2946 MB in  3.00 seconds = 981.88 MB/sec
 Timing buffered disk reads: 3028 MB in  3.00 seconds = 1008.84 MB/sec
 Timing buffered disk reads: 2942 MB in  3.00 seconds = 980.42 MB/sec

# bonnie++ -b -u root

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
li632-240        4G   375  99 709327  98 409634  64  1006  98 1026989  83  9222 109
Latency             25129us    2475us   26406us   34816us    3551us   92975us
Version  1.96       ------Sequential Create------ --------Random Create--------
li632-240           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4184  19 +++++ +++  3494  18  4302  19 +++++ +++  3513  18
Latency              2013us     303us    2027us    1219us      77us    1337us
1.96,1.96,li632-240,1,1400517486,4G,,375,99,709327,98,409634,64,1006,98,1026989,83,9222,109,16,,,,,4184,19,+++++,+++,3494,18,4302,19,+++++,+++,3513,18,25129us,2475us,26406us,34816us,3551us,92975us,2013us,303us,2027us,1219us,77us,1337us
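
Interestingly, the random-seeks column is the one figure where the DO droplet still edges out the Linode box (11446/sec vs 9222/sec), though at far worse latency (435ms vs 92975us). For isolating random I/O specifically, fio is a more direct tool than bonnie++; it was not part of the original tests, but a minimal sketch of a 4K random-read run would look something like this (parameters are illustrative):

# fio --name=randread --rw=randread --bs=4k --size=1G --direct=1 --runtime=60 --ioengine=libaio

--direct=1 bypasses the page cache, so the result reflects the underlying device rather than RAM.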

Also, a test from a year ago, when Linode still ran on hard drives:

@raiyu commented May 20, 2014

In a public cloud environment, comparing disk performance is a bit complicated because there are many factors at play. Sometimes it's a noisy-neighbor issue, which requires providers to institute some sort of fair-share policy to ensure that noisy neighbors are weighted down appropriately, based on the size of the plan they purchased.

Also, please keep in mind that SSD drives, and the clouds that operate on them, are relatively new. We were one of the first to go with an all-SSD cloud. Initial performance is always going to look great, simply because the cloud is so new that each customer usually lands on a brand-new server. Over time, as customers spread out across the cloud and move from temporary loads, such as testing and development, to permanent production loads, throughput stabilizes at a certain threshold.

Like most providers, we have a fair-share policy in place, but we don't throttle disk performance when throughput is available, so customers can spike in their utilization when the resources are free. If you feel that your disk performance is below where it should be, please open a support ticket so that our customer support staff can take a look at the hypervisor in question and see whether any changes need to be made.

Thanks,
Moisey
Cofounder DigitalOcean

@kenn (author) commented May 21, 2014

Thanks for taking the time to comment, Moisey. All valid points, and I totally understand.

That said, I ran the test because some of my friends have complained about disk I/O performance on DO recently. It may be a side effect of maturity, but from a user's point of view, nobody likes noisy neighbors, and some of us expect a more stable, or if at all possible, guaranteed outcome.

In this kind of speculative, lottery-like business, where how much you get for the buck depends on your luck, perception is everything: rumors in the dev community are the real driving force behind flocking, even when the rumor isn't true. Right now, it seems, Linode is stealing the momentum from you guys.

But there's no doubt that you guys are the innovators in this field who triggered the massive competition, and we devs are all beneficiaries. I hope DO will continue to win this game!
