@yorickdowne
Last active January 23, 2025
Great and less great SSDs for Ethereum nodes

Overview

Syncing an Ethereum node relies heavily on the latency and IOPS (I/O operations per second) of the storage. Budget SSDs will struggle to an extent, and some won't be able to sync at all. For simplicity, this page treats IOPS as a proxy for, and predictor of, latency.

This document aims to snapshot some known good and known bad models.

The drive lists are ordered by interface, then capacity, then alphabetically by vendor name; they are not ranked by preference. The lists are not exhaustive at all. @mwpastore linked a filterable spreadsheet in the comments that has a far greater variety of drives and their characteristics. Filter it by DRAM "yes", NAND Type "TLC", Form Factor "M.2", and the desired capacity.

For size, 4TB is a very conservative choice. The smaller 2TB drive should last an Ethereum full node until at least sometime in 2026, with pre-merge history expiry scheduled for May 1st, 2025. The Portal team aim to make 2TB last forever with EIP-4444. Remy wrote a migration guide to 4TB.

At a high level, QLC and DRAMless drives are far slower than "mainstream" SSDs, and QLC has lower endurance as well. Any savings will be gone when the drive fails early and needs to be replaced.

Apart from a slow SSD model, these things can also hurt IOPS:

  • Heat. Check with smartctl -x; the SSD should stay below 50C so it does not throttle. See the example after this list.
  • TRIM not being allowed. This can happen with some hardware RAID controllers, as well as on macOS with non-Apple SSDs.
  • ZFS, BTRFS, or any other CoW file system.
  • RAID5/6: write amplification is no joke.
  • On SATA, the controller in UEFI/BIOS set to anything other than AHCI. Set it to AHCI for good performance.
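
For example, on Linux you can check drive temperature and verify TRIM along these lines (a minimal sketch; the device name and mount point are assumptions, adjust to your system):

sudo smartctl -x /dev/nvme0 | grep -i temperature   # should stay below 50C under load
sudo fstrim -v /                                    # an error here suggests TRIM is not allowed
systemctl status fstrim.timer                       # most distros run periodic TRIM via this timer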

If you haven't already, do turn off atime on your DB volume; it'll increase SSD lifetime and speed things up a little.
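
A minimal sketch of doing that via /etc/fstab, assuming the DB volume is a dedicated ext4 partition mounted at /var/lib/ethereum (the UUID and path are placeholders):

# /etc/fstab: add noatime to the mount options of the DB volume
UUID=xxxx-xxxx  /var/lib/ethereum  ext4  defaults,noatime  0  2

sudo mount -o remount,noatime /var/lib/ethereum   # apply without a reboot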

Some users have reported that NUC instability with certain drives can be cured by adding nvme_core.default_ps_max_latency_us=0 pcie_aspm=off to their GRUB_CMDLINE_LINUX_DEFAULT kernel parameters via sudo nano /etc/default/grub and sudo update-grub. This keeps the drive from entering powersave states by itself.
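
For example, the resulting line in /etc/default/grub might look like this (a sketch; "quiet splash" stands in for whatever parameters your distro already sets, keep those):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"

sudo update-grub   # then reboot for the change to take effect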

The Good

"Mainstream" and "Performance" drive models that can sync mainnet execution layer clients in a reasonable amount of time.

  • Higher endurance (TBW) than most: Seagate Firecuda 530, WD Red SN700
  • Lowest power draw: SK Hynix P31 Gold, a great choice for Rock5 B and other low-power devices, but 2TB only

We've started crowd-sourcing some IOPS numbers. If you want to join the fun, run fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm test and give us the read and write IOPS. A full run is sketched below.
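
A sketch of a full run, assuming the SSD under test is mounted at /mnt/ssd (fio creates the 150G test file in the current directory, so make sure there is room):

cd /mnt/ssd
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
rm test

The numbers to report are on the "read: IOPS=..." and "write: IOPS=..." lines of the output, as in the comment reports below.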

If you have room for it and need an excellent heatsink, consider the "Rocket NVMe Heatsink". It is quite tall, however, and may not fit in some mini PC cases.

Hardware

M.2 NVMe "Mainstream" - TLC, DRAM, PCIe 3, 4TB drives

  • Any data center/enterprise NVMe SSD
  • Teamgroup MP34, between 94k/31k and 118k/39k r/w IOPS
  • WD Red SN700, 141k/47k r/w IOPS

M.2 NVMe "Performance" - TLC, DRAM, PCIe 4 or 5, 4TB drives

  • Any data center/enterprise NVMe SSD
  • Acer GM7000 "Predator", 125k/41k r/w IOPS
  • ADATA XPG Gammix S70, 272k/91k r/w IOPS
  • Corsair Force MP600 Pro and variants (but not "MP600 Core XT"), 138k/46k r/w IOPS
  • Crucial T700, 215k/71k r/w IOPS
  • Kingston KC3000, 377k/126k r/w IOPS
  • Kingston Fury Renegade, 211k/70k r/w IOPS
  • Mushkin Redline Vortex (but not LX)
  • Sabrent Rocket 4 Plus, 149k/49k r/w IOPS. @SnoepNFTs reports the Rocket NVMe Heatsink keeps it very cool.
  • Samsung 990 Pro, 124k/41k r/w IOPS - there are reports of 990 Pro rapidly losing health. A firmware update to 1B2QJXD7 is meant to stop the rapid degradation, but won't reverse any that happened on earlier firmware.
  • Seagate Firecuda 530, 218k/73k r/w IOPS
  • Teamgroup MP44 (but not MP44L or MP44Q), 105k/35k r/w IOPS - caution that this is DRAMless and uses a Host Memory Buffer (HMB), yet appears to perform fine.
  • Transcend 250s, 127k/42k r/w IOPS. @SnoepNFTs reports it gets very hot; you'd want to add a good heatsink to it.
  • WD Black SN850X, 101k/33k r/w IOPS

M.2 NVMe "Mainstream" - TLC, DRAM, PCIe 3, 2TB drives

  • Any data center/enterprise NVMe SSD
  • AData XPG Gammix S11/SX8200 Pro, 68k/22k r/w IOPS. Several hardware revisions; it's slower than some QLC drives.
  • AData XPG Gammix S50 Lite
  • HP EX950
  • Mushkin Pilot-E
  • Samsung 970 EVO Plus 2TB, pre-rework (firmware 2B2QEXM7). 140k/46k r/w IOPS
  • Samsung 970 EVO Plus 2TB, post-rework (firmware 3B2QEXM7 or 4B2QEXM7). In testing this syncs just as quickly as the pre-rework drive
  • SK Hynix P31 Gold
  • WD Black SN750 (but not SN750 SE)

M.2 NVMe "Performance" - TLC, DRAM, PCIe 4 or 5, 2TB drives

  • Any data center/enterprise NVMe SSD
  • Crucial P5 Plus
  • Kingston KC2000
  • Samsung 980 Pro (not 980) - a firmware update to 5B2QGXA7 is necessary to keep drives on firmware 3B2QGXA7 from dying. Samsung's bootable update Linux is a bit broken; you may want to flash from your own Linux.
  • SK Hynix P41 Platinum / Solidigm P44 Pro, 99k/33k r/w IOPS
  • WD Black SN850

Cloud

  • Any baremetal/dedicated server service
  • AWS i3en.(2)xlarge or is4gen.xlarge
  • AWS gp3 w/ >=10k IOPS provisioned and an m7i/a.xlarge (see the sketch after this list)
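
A sketch of provisioning such a gp3 volume with the AWS CLI (the zone, size, and throughput are example values; adjust to your setup):

aws ec2 create-volume --availability-zone us-east-1a --volume-type gp3 \
  --size 4000 --iops 10000 --throughput 250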

The Bad

These "Budget" drive models are reportedly too slow to sync (all) mainnet execution layer clients.

Hardware

  • AData S40G/SX8100 4TB, QLC - the 2TB model is TLC and should be fine; 4TB is reportedly too slow
  • Crucial P1, QLC - users report it can't sync Nethermind
  • Crucial P2 and P3 (Plus), QLC and DRAMless - users report it can't sync Nethermind, 27k/9k r/w IOPS
  • Kingston NV1 - probably QLC and DRAMless and thus too slow on 2TB, but could be "anything" as Kingston do not guarantee specific components.
  • Kingston NV2 - like NV1 no guaranteed components
  • WD Green SN350, QLC and DRAMless
  • Anything both QLC and DRAMless will likely not be able to sync at all or not be able to consistently keep up with "chain head"
  • Crucial BX500 SATA, HP S650 SATA, probably most SATA budget drives
  • Samsung 980, DRAMless - unsure, this may belong in "Ugly". If you have one and can say for sure, please come to ethstaker Discord.
  • Samsung T7 USB, even with current firmware

The Ugly

"Budget" drive models that reportedly can sync mainnet execution layer clients, if slowly.

Note that QLC drives usually have a markedly lower TBW than TLC, and will fail earlier.

Hardware

  • Corsair MP400, QLC
  • Inland Professional 3D NAND, QLC
  • Intel 660p, QLC. It's faster than some "mainstream" drives. 98k/33k r/w IOPS
  • Seagate Barracuda Q5, QLC
  • WD Black SN770, DRAMless
  • Samsung 870 QVO SATA, QLC

2.5" SATA "Mainstream" - TLC, DRAM

  • These have been moved to "The Ugly" because user reports indicate that only Nimbus/Geth will still sync on SATA, and even that takes 3 days. It looks like after Dencun, NVMe is squarely the way to go.
  • Any data center/enterprise SATA SSD
  • Crucial MX500 SATA, 46k/15k r/w IOPS
  • Samsung 860 EVO SATA, 55k/18k r/w IOPS
  • Samsung 870 EVO SATA, 63k/20k r/w IOPS
  • WD Blue 3D NAND SATA

Cloud

  • Netcup RS G11 Servers. Impressively fast, but performance still depends on your neighbors on the shared host.
  • Contabo SSD - reportedly able to sync Geth 1.13.0 and Nethermind, if slowly
  • Netcup VPS Servers - reportedly able to sync Geth 1.13.0 and Nethermind, if slowly
  • Contabo NVMe - fast enough but not enough space. 800 GiB is not sufficient.
@sebastiandanconia

Crucial MX500 2TB 3D NAND SATA, P/N CT2000MX500SSD1:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm test
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.36
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=170MiB/s,w=55.8MiB/s][r=43.5k,w=14.3k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=58013: Tue Aug  6 20:41:22 2024
  read: IOPS=40.1k, BW=157MiB/s (164MB/s)(113GiB/735227msec)
   bw (  KiB/s): min=56120, max=178656, per=100.00%, avg=160549.40, stdev=16317.69, samples=1470
   iops        : min=14030, max=44664, avg=40137.29, stdev=4079.44, samples=1470
  write: IOPS=13.4k, BW=52.2MiB/s (54.8MB/s)(37.5GiB/735227msec); 0 zone resets
   bw (  KiB/s): min=18936, max=59840, per=100.00%, avg=53507.94, stdev=5464.86, samples=1470
   iops        : min= 4734, max=14960, avg=13376.91, stdev=1366.21, samples=1470
  cpu          : usr=11.35%, sys=37.07%, ctx=20912690, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=113GiB (121GB), run=735227-735227msec
  WRITE: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=37.5GiB (40.3GB), run=735227-735227msec

Disk stats (read/write):
    dm-0: ios=29488516/9828823, sectors=235908176/78630008, merge=0/0, ticks=33612141/11641385, in_queue=45253526, util=100.00%, aggrios=29483985/9828644, aggsectors=235938760/78640656, aggrmerge=8354/1510, aggrticks=33412161/11652208, aggrin_queue=45069154, aggrutil=100.00%
  sda: ios=29483985/9828644, sectors=235938760/78640656, merge=8354/1510, ticks=33412161/11652208, in_queue=45069154, util=100.00%

@dbeal-eth

4x SAMSUNG MZQL27T6HBLA-00A07 8TB

on LVM RAID0

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=418MiB/s,w=138MiB/s][r=107k,w=35.2k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1791624: Fri Sep 20 06:50:44 2024
  read: IOPS=106k, BW=415MiB/s (435MB/s)(113GiB/277427msec)
   bw (  KiB/s): min=412601, max=433600, per=100.00%, avg=425912.77, stdev=2952.66, samples=554
   iops        : min=103150, max=108400, avg=106478.01, stdev=738.21, samples=554
  write: IOPS=35.4k, BW=138MiB/s (145MB/s)(37.5GiB/277427msec); 0 zone resets
   bw (  KiB/s): min=137098, max=145432, per=100.00%, avg=141948.63, stdev=1246.79, samples=554
   iops        : min=34274, max=36358, avg=35486.98, stdev=311.72, samples=554
  cpu          : usr=14.71%, sys=73.15%, ctx=8764196, majf=0, minf=10
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=415MiB/s (435MB/s), 415MiB/s-415MiB/s (435MB/s-435MB/s), io=113GiB (121GB), run=277427-277427msec
  WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=37.5GiB (40.3GB), run=277427-277427msec

Disk stats (read/write):
    dm-6: ios=29470675/9822052, merge=0/0, ticks=2172516/114900, in_queue=2287416, util=100.00%, aggrios=7373081/2457323, aggrmerge=0/0, aggrticks=537561/26680, aggrin_queue=564241, aggrutil=100.00%
    dm-4: ios=7374797/2455609, merge=0/0, ticks=536436/26820, in_queue=563256, util=100.00%, aggrios=7375991/2455677, aggrmerge=431/31, aggrticks=538438/28509, aggrin_queue=566947, aggrutil=100.00%
  nvme2n1: ios=7375991/2455677, merge=431/31, ticks=538438/28509, in_queue=566947, util=100.00%
    dm-2: ios=7372459/2457946, merge=0/0, ticks=538468/27240, in_queue=565708, util=100.00%, aggrios=7431982/2460076, aggrmerge=450/55, aggrticks=546460/28955, aggrin_queue=575442, aggrutil=100.00%
  nvme0n1: ios=7431982/2460076, merge=450/55, ticks=546460/28955, in_queue=575442, util=100.00%
    dm-5: ios=7373249/2457155, merge=0/0, ticks=538392/26592, in_queue=564984, util=100.00%, aggrios=7374415/2457222, aggrmerge=421/35, aggrticks=538730/28707, aggrin_queue=567438, aggrutil=100.00%
  nvme3n1: ios=7374415/2457222, merge=421/35, ticks=538730/28707, in_queue=567438, util=100.00%
    dm-3: ios=7371821/2458585, merge=0/0, ticks=536948/26068, in_queue=563016, util=100.00%, aggrios=7372991/2458649, aggrmerge=454/28, aggrticks=538861/28691, aggrin_queue=567552, aggrutil=100.00%
  nvme1n1: ios=7372991/2458649, merge=454/28, ticks=538861/28691, in_queue=567552, util=100.00%

I also have a Samsung 990 Pro w/ Heatsink in my laptop; running the same test gave the same results as SnoepNFTs: https://gist.github.com/yorickdowne/f3a3e79a573bf35767cd002cc977b038?permalink_comment_id=4958391#gistcomment-4958391

@h0m3us3r

Solidigm P44 Pro 2TB

I get quite a bit better results on my P44 Pro 2TB (250k/83k; zero tuning):

$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.36
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=838MiB/s,w=279MiB/s][r=215k,w=71.4k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=12922: Sun Sep 29 08:48:05 2024
  read: IOPS=250k, BW=977MiB/s (1025MB/s)(113GiB/117867msec)
   bw (  KiB/s): min=809536, max=1110872, per=100.00%, avg=1001964.77, stdev=99876.67, samples=235
   iops        : min=202384, max=277718, avg=250491.18, stdev=24969.18, samples=235
  write: IOPS=83.4k, BW=326MiB/s (342MB/s)(37.5GiB/117867msec); 0 zone resets
   bw (  KiB/s): min=271720, max=371000, per=100.00%, avg=333941.07, stdev=33374.23, samples=235
   iops        : min=67930, max=92750, avg=83485.27, stdev=8343.57, samples=235
  cpu          : usr=15.19%, sys=69.35%, ctx=8336094, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=977MiB/s (1025MB/s), 977MiB/s-977MiB/s (1025MB/s-1025MB/s), io=113GiB (121GB), run=117867-117867msec
  WRITE: bw=326MiB/s (342MB/s), 326MiB/s-326MiB/s (342MB/s-342MB/s), io=37.5GiB (40.3GB), run=117867-117867msec

Disk stats (read/write):
  nvme0n1: ios=29461506/9818976, sectors=235692056/78552008, merge=0/25, ticks=3013987/57806, in_queue=3071817, util=71.84%

@Lexazan commented Oct 21, 2024

ADATA XPG GAMMIX S70 BLADE 4TB (AGAMMIXS70B-4T-CS)
301k / 100k

Got it from Amazon USA in October 2024.
Tested on a Framework Laptop 13 (11th-gen Intel), on an empty ext4 drive, booted from USB.

The DRAM chip was a bit hot, so a heatsink and a cooler might be a good idea.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm test
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=1176MiB/s,w=394MiB/s][r=301k,w=101k IOPS][eta 00m:00s] 
test: (groupid=0, jobs=1): err= 0: pid=3884: Mon Oct 21 07:46:53 2024
  read: IOPS=301k, BW=1176MiB/s (1234MB/s)(113GiB/97929msec)
   bw (  MiB/s): min=    1, max= 1307, per=100.00%, avg=1177.29, stdev=110.59, samples=195
   iops        : min=  389, max=334608, avg=301386.99, stdev=28309.82, samples=195
  write: IOPS=100k, BW=392MiB/s (411MB/s)(37.5GiB/97929msec); 0 zone resets
   bw (  KiB/s): min=97520, max=449912, per=100.00%, avg=403870.42, stdev=24508.67, samples=194
   iops        : min=24380, max=112478, avg=100967.57, stdev=6127.17, samples=194
  cpu          : usr=18.97%, sys=65.46%, ctx=8422129, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1176MiB/s (1234MB/s), 1176MiB/s-1176MiB/s (1234MB/s-1234MB/s), io=113GiB (121GB), run=97929-97929msec
  WRITE: bw=392MiB/s (411MB/s), 392MiB/s-392MiB/s (411MB/s-411MB/s), io=37.5GiB (40.3GB), run=97929-97929msec

Disk stats (read/write):
  nvme0n1: ios=29458080/9817976, merge=0/34, ticks=1449679/142800, in_queue=1592483, util=99.51%

@SnoepNFTs

Tested SSD:

  • Emtec X400-10 Power pro

Managed to get the following results:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.36
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][99.8%][r=324MiB/s,w=107MiB/s][r=82.9k,w=27.4k IOPS][eta 00m:01s] 
test: (groupid=0, jobs=1): err= 0: pid=3257: Wed Nov  6 20:46:37 2024
  read: IOPS=60.0k, BW=234MiB/s (246MB/s)(113GiB/491712msec)
   bw (  KiB/s): min=165432, max=366584, per=99.97%, avg=239835.00, stdev=15147.15, samples=982
   iops        : min=41358, max=91646, avg=59958.75, stdev=3786.78, samples=982
  write: IOPS=20.0k, BW=78.1MiB/s (81.9MB/s)(37.5GiB/491712msec); 0 zone resets
   bw (  KiB/s): min=53960, max=120912, per=99.97%, avg=79933.04, stdev=5104.43, samples=982
   iops        : min=13490, max=30228, avg=19983.25, stdev=1276.11, samples=982
  cpu          : usr=18.32%, sys=41.47%, ctx=8479778, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=234MiB/s (246MB/s), 234MiB/s-234MiB/s (246MB/s-246MB/s), io=113GiB (121GB), run=491712-491712msec
  WRITE: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=37.5GiB (40.3GB), run=491712-491712msec

Disk stats (read/write):
  nvme0n1: ios=29465216/9820893, sectors=235723592/78573680, merge=0/595, ticks=27622647/606658, in_queue=28233556, util=63.66%

Additional Notes:

  • A non-mainstream drive and therefore cheap (especially second-hand)
  • Gets burning hot under load

@kewlfft commented Nov 10, 2024

Kingston FURY Renegade 4TB, PCIe 3.0

Heat

The temperature under load, as displayed by nvme smart-log /dev/nvme0, was reaching 80°C without a heatsink. I bought a basic low-profile heatsink for $8 (Thermalright M.2), and now the temperature does not exceed 50°C under heavy load. It's worth it to eliminate the chance of damage or throttling.

btrfs

I am getting 30k read / 10k write IOPS with btrfs. fio seems to be mostly slowed by 100% single-core CPU utilization on my Intel Core i5-9500T. It could be due to my btrfs setup (defaults; I tested with and without zstd:1 compression, which did not make much difference) or the way fio measures performance.

ext4

With ext4, I am now getting 180k read / 60k write (6× compared to btrfs).

Additional Notes

Phoronix also benchmarked btrfs as significantly slower than f2fs, ext4, and xfs in some of their Aug '24 tests. Not necessarily a fundamental btrfs issue; it may simply require more tuning.

Conclusion

Just to say that the filesystem and its configuration, and not only noatime, matter.

@kewlfft commented Dec 21, 2024

Corsair MP600 PRO LPX 4TB

441k read / 147k write

Detailed results: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=120G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.38
Starting 1 process
test: Laying out IO file (1 file / 122880MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=1724MiB/s,w=571MiB/s][r=441k,w=146k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=4704: Sat Dec 21 17:58:50 2024
  read: IOPS=441k, BW=1724MiB/s (1808MB/s)(90.0GiB/53453msec)
   bw (  MiB/s): min= 1294, max= 1798, per=99.91%, avg=1722.66, stdev=91.25, samples=107
   iops        : min=331464, max=460404, avg=440999.76, stdev=23359.11, samples=107
  write: IOPS=147k, BW=575MiB/s (603MB/s)(30.0GiB/53453msec); 0 zone resets
   bw (  KiB/s): min=442288, max=614680, per=99.91%, avg=587946.84, stdev=31570.49, samples=107
   iops        : min=110572, max=153670, avg=146986.75, stdev=7892.65, samples=107
  cpu          : usr=15.26%, sys=68.75%, ctx=3380514, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=23593489,7863791,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1724MiB/s (1808MB/s), 1724MiB/s-1724MiB/s (1808MB/s-1808MB/s), io=90.0GiB (96.6GB), run=53453-53453msec
  WRITE: bw=575MiB/s (603MB/s), 575MiB/s-575MiB/s (603MB/s-603MB/s), io=30.0GiB (32.2GB), run=53453-53453msec

Disk stats (read/write):
  nvme0n1: ios=23505258/7834615, sectors=188042064/62677000, merge=0/10, ticks=2170274/118271, in_queue=2288549, util=34.33%

@oi32rc commented Jan 7, 2025

TeamGroup MP44

Sad to report that I'm getting terrible performance from my new TeamGroup MP44:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm test
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.36
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][99.9%][r=129MiB/s,w=41.7MiB/s][r=33.0k,w=10.7k IOPS][eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=1486: Tue Jan 7 19:33:03 2025
  read: IOPS=28.4k, BW=111MiB/s (116MB/s)(113GiB/1037371msec)
   bw (  KiB/s): min=13720, max=260144, per=100.00%, avg=113825.80, stdev=35686.31, samples=2072
   iops        : min= 3430, max=65036, avg=28456.37, stdev=8921.62, samples=2072
  write: IOPS=9475, BW=37.0MiB/s (38.8MB/s)(37.5GiB/1037371msec); 0 zone resets
   bw (  KiB/s): min= 4808, max=86944, per=100.00%, avg=37936.07, stdev=11880.92, samples=2072
   iops        : min= 1202, max=21736, avg=9483.92, stdev=2970.28, samples=2072
  cpu          : usr=10.00%, sys=35.40%, ctx=7207655, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=113GiB (121GB), run=1037371-1037371msec
  WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=37.5GiB (40.3GB), run=1037371-1037371msec

Disk stats (read/write):
    dm-0: ios=29486007/9828111, sectors=235888120/78623552, merge=0/0, ticks=36581898/8760179, in_queue=45342077, util=98.71%, aggrios=29492340/9830016, aggsectors=235938784/78640920, aggrmerge=0/266, aggrticks=35933654/8770040, aggrin_queue=44704468, aggrutil=72.39%
  nvme0n1: ios=29492340/9830016, sectors=235938784/78640920, merge=0/266, ticks=35933654/8770040, in_queue=44704468, util=72.39%

@duncancmt

TeamGroup MP44

What size drive? Have you set the logical block size to the native block size? What filesystem are you using and have you properly aligned it? Are you using LVM/encryption? Feel free to DM me on your messaging platform of choice (I'm duncancmt everywhere)
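
For reference, a sketch of checking (and, destructively, changing) the logical block size with nvme-cli; the device name is an assumption, and nvme format erases all data on the drive:

sudo nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"   # "(in use)" marks the current format
sudo nvme format /dev/nvme0n1 --lbaf=1                # switch to LBA format index 1, typically 4KiB; ERASES THE DRIVE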

@sterlingcrispin

SDSSDE61-4T00-G25
SanDisk Extreme Portable SSD - 4TB

MacBook Pro, M1 chip, connected via USB-C.

read 14.0k 😰, write 4.6k 😰

➜  ~ fio --randrepeat=1 --ioengine=posixaio --direct=1 --gtod_reduce=1 --name=test --filename=/Volumes/CryptoData/test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm /Volumes/CryptoData/test 

test: (groupid=0, jobs=1): err= 0: pid=72113: Sat Jan 11 15:10:58 2025
  read: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(113GiB/2105296msec)
   bw (  KiB/s): min= 8657, max=69726, per=100.00%, avg=56077.81, stdev=8328.66, samples=4179
   iops        : min= 2164, max=17431, avg=14019.12, stdev=2082.16, samples=4179
  write: IOPS=4668, BW=18.2MiB/s (19.1MB/s)(37.5GiB/2105296msec); 0 zone resets
   bw (  KiB/s): min= 2859, max=23398, per=100.00%, avg=18689.34, stdev=2772.04, samples=4179
   iops        : min=  714, max= 5849, avg=4672.01, stdev=693.01, samples=4179
  cpu          : usr=18.74%, sys=25.11%, ctx=38439956, majf=0, minf=28
  IO depths    : 1=0.1%, 2=0.1%, 4=0.4%, 8=46.3%, 16=53.2%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.0%, 8=0.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

@yorickdowne (Author)

That is not surprising at all, @sterlingcrispin. Anything external will be slower, and anything marketed as a portable SSD will be slower still.

A MacBook is hard mode for an Ethereum node, and expensive. An Odroid H4 can be built for 320 to 520 USD depending on RAM and drive size, and will be faster and easier to maintain than a Mac.

@yorickdowne (Author)

@oi32rc Can you check the exact model number? Is this an MP44, or an MP44L, or an MP44Q?

@oi32rc commented Jan 16, 2025

It was an MP44. I'm not sure what was wrong with that disk, but I have since replaced it with a WD850X and am happy to report a blazing 246k/81.8k performance.

@davidpius95 commented Jan 23, 2025

OWC, 4TB

test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=240MiB/s,w=78.9MiB/s][r=61.5k,w=20.2k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2013: Thu Jan 23 19:41:06 2025
  read: IOPS=52.3k, BW=204MiB/s (214MB/s)(113GiB/564341msec)
   bw (  KiB/s): min=  624, max=270168, per=100.00%, avg=209259.86, stdev=32921.85, samples=1127
   iops        : min=  156, max=67542, avg=52314.92, stdev=8230.47, samples=1127
  write: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(37.5GiB/564341msec); 0 zone resets
   bw (  KiB/s): min=  288, max=90568, per=100.00%, avg=69742.11, stdev=10979.85, samples=1127
   iops        : min=   72, max=22642, avg=17435.48, stdev=2744.98, samples=1127
  cpu          : usr=9.22%, sys=78.40%, ctx=8744718, majf=2, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=204MiB/s (214MB/s), 204MiB/s-204MiB/s (214MB/s-214MB/s), io=113GiB (121GB), run=564341-564341msec
  WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=37.5GiB (40.3GB), run=564341-564341msec

Disk stats (read/write):
  nvme0n1: ios=29485062/10762328, merge=0/13748, ticks=2010688/3967043, in_queue=6018202, util=100.00%
