`hdparm -t /dev/xvd*` (buffered reads from the device)

Trial | EBS-SSD (MB/s) | instance-store (MB/s) |
---|---|---|
1 | 121.66 | 639.98 |
2 | 134.77 | 742.96 |
3 | 134.74 | 736.89 |
`hdparm -T /dev/xvd*` (cached reads)

Trial | EBS-SSD (MB/s) | instance-store (MB/s) |
---|---|---|
1 | 10,237.13 | 10,224.69 |
2 | 10,214.82 | 10,331.94 |
3 | 10,223.70 | 10,237.58 |
`dd bs=1M count=256 if=/dev/zero of=/{tmp,mnt}/test` (writes)

Trial | EBS-SSD (MB/s) | instance-store (MB/s) |
---|---|---|
1 | 82.1 | 922 |
2 | 74.3 | 918 |
3 | 81.0 | 922 |
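One caveat on the write test: without a sync flag, `dd` can report the speed of the page cache rather than the device. A variant with `conv=fdatasync` (my suggestion, not part of the runs above) forces the data to disk before `dd` prints its number:

```shell
# Same write test, but flush to the device before reporting.
# /tmp/test is just an example path; point it at the volume under test.
dd if=/dev/zero of=/tmp/test bs=1M count=256 conv=fdatasync
```

Remove the file afterwards with `rm /tmp/test`.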
Instance-store is over 5x faster than EBS-SSD for uncached reads.
Instance-store and EBS-SSD are equivalent for cached reads.
Instance-store is over 10x faster than EBS-SSD for writes.
There is more to it than performance, though. Instance-store SSDs are ephemeral drives: their contents are lost when the instance is stopped or terminated, while EBS volumes can be detached and remounted on a new box. That makes instance-store a good fit for swap, or for caching an EBS volume with something like https://github.com/stec-inc/EnhanceIO.
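Setting up an ephemeral disk as swap takes only a couple of commands (the device name below is an assumption; check `lsblk` for yours):

```shell
# Hypothetical instance-store device -- verify with lsblk first.
DEV=/dev/xvdb

mkswap "$DEV"    # write a swap signature to the ephemeral disk
swapon "$DEV"    # enable it; it disappears on stop/start, so re-run at boot
```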
Multiple EBS volumes can also be striped together (RAID 0) to increase throughput and spread the I/O load; I got very good results with that setup.
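A RAID 0 stripe over EBS volumes is a short `mdadm` sequence (the device names and mount point here are assumptions, not from the tests above):

```shell
# Stripe two hypothetical EBS volumes into a single block device.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg
mkfs.ext4 /dev/md0    # format the stripe
mount /dev/md0 /mnt   # mount it like any other disk
```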
Also, the tests above only measure sequential access. Random I/O and different block sizes can make a huge difference.
AWS uses 128k as the block size for IO accounting.
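That accounting means one large sequential request is billed as several IOs. Illustrative arithmetic (the 1 MiB request size is just an example):

```shell
# IOs billed for one request at a 128 KiB accounting block size.
BLOCK_KIB=128
REQ_KIB=1024   # one 1 MiB sequential read (example size)
echo $(( (REQ_KIB + BLOCK_KIB - 1) / BLOCK_KIB ))   # -> 8
```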