
@ktheory
Last active November 10, 2023 23:41
EC2 EBS-SSD vs instance-store performance on an EBS-optimized m3.2xlarge
# /tmp/test = EBS-SSD
# /mnt/test = instance-store
root@ip-10-0-2-6:~# dd bs=1M count=256 if=/dev/zero of=/tmp/test
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 3.26957 s, 82.1 MB/s
root@ip-10-0-2-6:~# dd bs=1M count=256 if=/dev/zero of=/tmp/test
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 3.61336 s, 74.3 MB/s
root@ip-10-0-2-6:~# dd bs=1M count=256 if=/dev/zero of=/tmp/test
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 3.31484 s, 81.0 MB/s
root@ip-10-0-2-6:~# dd bs=1M count=256 if=/dev/zero of=/mnt/test
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.291084 s, 922 MB/s
root@ip-10-0-2-6:~# dd bs=1M count=256 if=/dev/zero of=/mnt/test
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.29238 s, 918 MB/s
root@ip-10-0-2-6:~# dd bs=1M count=256 if=/dev/zero of=/mnt/test
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.291242 s, 922 MB/s
## BENCHMARK
# /dev/xvda1 = EBS-SSD
# /dev/xvdb = instance-store SSD
ubuntu@ip-10-0-2-6:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 777M 18G 5% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 15G 8.0K 15G 1% /dev
tmpfs 3.0G 332K 3.0G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 15G 0 15G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdb 74G 52M 70G 1% /mnt
ubuntu@ip-10-0-2-6:~$ sudo hdparm -t /dev/xvda1
/dev/xvda1:
Timing buffered disk reads: 366 MB in 3.01 seconds = 121.66 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -t /dev/xvda1
/dev/xvda1:
Timing buffered disk reads: 406 MB in 3.01 seconds = 134.77 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -t /dev/xvda1
/dev/xvda1:
Timing buffered disk reads: 406 MB in 3.01 seconds = 134.74 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -T /dev/xvda1
/dev/xvda1:
Timing cached reads: 20350 MB in 1.99 seconds = 10237.13 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -T /dev/xvda1
/dev/xvda1:
Timing cached reads: 20306 MB in 1.99 seconds = 10214.82 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -T /dev/xvda1
/dev/xvda1:
Timing cached reads: 20324 MB in 1.99 seconds = 10223.70 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -t /dev/xvdb
/dev/xvdb:
Timing buffered disk reads: 1920 MB in 3.00 seconds = 639.98 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -t /dev/xvdb
/dev/xvdb:
Timing buffered disk reads: 2230 MB in 3.00 seconds = 742.96 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -t /dev/xvdb
/dev/xvdb:
Timing buffered disk reads: 2212 MB in 3.00 seconds = 736.89 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -T /dev/xvdb
/dev/xvdb:
Timing cached reads: 20326 MB in 1.99 seconds = 10224.69 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -T /dev/xvdb
/dev/xvdb:
Timing cached reads: 20538 MB in 1.99 seconds = 10331.94 MB/sec
ubuntu@ip-10-0-2-6:~$ sudo hdparm -T /dev/xvdb
/dev/xvdb:
Timing cached reads: 20350 MB in 1.99 seconds = 10237.58 MB/sec

Device reads (no cache)

`hdparm -t /dev/xvd*`

| Trial | EBS-SSD (MB/s) | instance-store (MB/s) |
|-------|----------------|-----------------------|
| 1     | 121.66         | 639.98                |
| 2     | 134.77         | 742.96                |
| 3     | 134.74         | 736.89                |

Cache reads

`hdparm -T /dev/xvd*`

| Trial | EBS-SSD (MB/s) | instance-store (MB/s) |
|-------|----------------|-----------------------|
| 1     | 10,237.13      | 10,224.69             |
| 2     | 10,214.82      | 10,331.94             |
| 3     | 10,223.70      | 10,237.58             |

Writes

`dd bs=1M count=256 if=/dev/zero of=/{tmp,mnt}/test`

| Trial | EBS-SSD (MB/s) | instance-store (MB/s) |
|-------|----------------|-----------------------|
| 1     | 82.1           | 922                   |
| 2     | 74.3           | 918                   |
| 3     | 81.0           | 922                   |

Conclusions:

Instance-store is over 5x faster than EBS-SSD for uncached reads.

Instance-store and EBS-SSD are equivalent for cached reads.

Instance-store is over 10x faster than EBS-SSD for writes.

@mafonso commented Jul 22, 2015

There is more to it than just performance. Instance-store SSDs are ephemeral drives: their contents go away with the box, whereas EBS volumes can be detached and remounted on a new box. Instance stores can be used for swap, or for caching an EBS volume using https://github.com/stec-inc/EnhanceIO, for instance.

Multiple EBS volumes can also be striped together in RAID 0 to increase throughput and balance the I/O load; I got very good results with that, along the lines of the sketch below.
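
For illustration, a minimal sketch of striping two EBS volumes with mdadm; the device names /dev/xvdf and /dev/xvdg and the /data mount point are assumptions and will differ per instance:

```sh
# Stripe two attached EBS volumes into a single RAID 0 array.
# /dev/xvdf and /dev/xvdg are hypothetical device names; check lsblk first.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg

# Put a filesystem on the array and mount it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data
```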

Also, the tests above only show sequential-access numbers; random I/O and different block sizes can make a huge difference. AWS uses 128k as the block size for I/O accounting.
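
To get a feel for random I/O, a sketch using fio (not part of the original tests; fio must be installed, and the target path is an assumption):

```sh
# Random 4k reads with the page cache bypassed (direct I/O).
# /mnt/fiotest is a hypothetical target file on the instance store.
sudo fio --name=randread --filename=/mnt/fiotest \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --size=1G --runtime=30 --time_based --group_reporting
```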

@ArbitraryCritter

hdparm -T /dev/xvd* is basically just benchmarking the speed of your memory.
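
For reference, hdparm's --direct flag uses the kernel's O_DIRECT to bypass the page cache, so the timing reflects the device rather than memory:

```sh
# Time buffered reads with O_DIRECT, bypassing the page cache.
sudo hdparm -t --direct /dev/xvdb
```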

@hobinyoon

Should the dd test use oflag=direct? I'm wondering whether the writes were absorbed by the filesystem cache.
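
For comparison, two standard ways to keep the page cache from inflating a dd write test, using the same paths as above:

```sh
# Bypass the page cache entirely with direct I/O.
dd bs=1M count=256 if=/dev/zero of=/mnt/test oflag=direct

# Or allow cached writes but force the data to disk before dd reports a rate.
dd bs=1M count=256 if=/dev/zero of=/mnt/test conv=fdatasync
```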

@rdkls commented Dec 21, 2017

FYI, I re-did these in Dec 2017 and the results still hold: https://gist.github.com/rdkls/823399188d2e9be0c754a93e186b427a

@SOODANS commented Aug 7, 2018

Thanks for the valuable input.

There is more information at the URL below:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

@rpietzsch

Thanks, super helpful data. Can you share some details on which EBS storage type (GP2/GP3/IO2/...) you used?
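
In case it helps others checking their own setup, the volume type of an attached EBS volume can be looked up with the AWS CLI; the volume ID below is a placeholder:

```sh
# Print the volume type (gp2/gp3/io1/...) of a given EBS volume.
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
    --query 'Volumes[0].VolumeType' --output text
```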
