@yorickdowne · Last active December 21, 2024
Great and less great SSDs for Ethereum nodes

Overview

Syncing an Ethereum node relies heavily on the latency and IOPS (I/O operations per second) of the storage. Budget SSDs will struggle to an extent, and some won't be able to sync at all. For simplicity, this page treats IOPS as a proxy for, and predictor of, latency.

This document aims to snapshot some known good and known bad models.

The drive lists are ordered by interface, then by capacity, and alphabetically by vendor name, not by preference. The lists are by no means exhaustive. @mwpastore linked a filterable spreadsheet in the comments that has a far greater variety of drives and their characteristics. Filter it by DRAM "yes", NAND Type "TLC", Form Factor "M.2", and the desired capacity.

For size, 4TB comes recommended as of mid 2024. The smaller 2TB drive should last an Ethereum full node until early 2025 or thereabouts, with crystal ball uncertainty. The Portal team aim to make 2TB last forever with EIP-4444 by late 2024. Remy wrote a migration guide to 4TB.

At a high level, QLC and DRAMless drives are far slower than "mainstream" SSDs. QLC also has lower endurance. Any savings will be gone when the drive fails early and needs to be replaced.

Other than a slow SSD model, these are things that can slow IOPS down:

  • Heat. Check with smartctl -x; the SSD should stay below 50C so it does not throttle (see the quick checks after this list).
  • TRIM not being allowed. This can happen with some hardware RAID controllers, as well as on macOS with non-Apple SSDs.
  • ZFS
  • RAID5/6 - write amplification is no joke.
  • On SATA, the controller in UEFI/BIOS set to anything other than AHCI. Set it to AHCI for good performance.
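
If you want to sanity-check the first two items from Linux, something like this works (the device name and mount point are just examples; adjust them to your system):

# temperature and wear indicators
sudo smartctl -x /dev/nvme0n1 | grep -i -E 'temperature|percentage used|media'
# TRIM support: DISC-GRAN/DISC-MAX of 0 means discard is not available
lsblk --discard
# one-off manual TRIM of the data volume
sudo fstrim -v /var/lib/ethereum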

If you haven't already, turn off atime on your DB volume; it'll increase SSD lifetime and speed things up a little.
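
For example, a minimal /etc/fstab line with atime disabled might look like this (the UUID and mount point are placeholders for your actual data volume):

UUID=0a1b2c3d-placeholder  /var/lib/ethereum  ext4  defaults,noatime  0  2

Apply it with a remount, e.g. sudo mount -o remount /var/lib/ethereum, or a reboot.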

Some users have reported that NUC instability with certain drives can be cured by adding nvme_core.default_ps_max_latency_us=0 pcie_aspm=off to their GRUB_CMDLINE_LINUX_DEFAULT kernel parameters via sudo nano /etc/default/grub and sudo update-grub. This keeps the drive from entering powersave states by itself.
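
In practice that means the line in /etc/default/grub ends up looking roughly like this (your existing parameters may differ; the "quiet splash" part is just an example):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"

followed by sudo update-grub and a reboot.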

The Good

"Mainstream" and "Performance" drive models that can sync mainnet execution layer clients in a reasonable amount of time.

  • Higher endurance (TBW) than most: Seagate Firecuda 530, WD Red SN700
  • Lowest power draw: SK Hynix P31 Gold - was a great choice for Rock5 B and other low-power devices, but 2TB only

We've started crowd-sourcing some IOPS numbers. If you want to join the fun, run fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm test and give us the read and write IOPS.
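
If you'd rather grab those two numbers programmatically, fio can emit JSON; a small sketch, assuming jq is installed:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test \
  --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75 \
  --output-format=json | jq '{read_iops: .jobs[0].read.iops, write_iops: .jobs[0].write.iops}'
rm test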

If you have room for it and need an excellent heatsink, consider the "Rocket NVMe Heatsink". It is quite tall, however, and may not fit in some mini-PC cases.

Hardware

M.2 NVMe "Mainstream" - TLC, DRAM, PCIe 3, 4TB drives

  • Any data center/enterprise NVMe SSD
  • Teamgroup MP34, between 94k/31k and 118k/39k r/w IOPS
  • WD Red SN700, 141k/47k r/w IOPS

M.2 NVMe "Performance" - TLC, DRAM, PCIe 4 or 5, 4TB drives

  • Any data center/enterprise NVMe SSD
  • Acer GM7000 "Predator", 125k/41k r/w IOPS
  • ADATA XPG Gammix S70, 272k/91k r/w IOPS
  • Corsair Force MP600 Pro and variants (but not "MP600 Core XT"), 138k/46k r/w IOPS
  • Crucial T700, 215k/71k r/w IOPS
  • Kingston KC3000, 377k/126k r/w IOPS
  • Kingston Fury Renegade, 211k/70k r/w IOPS
  • Mushkin Redline Vortex (but not LX)
  • Sabrent Rocket 4 Plus, 149k/49k r/w IOPS. @SnoepNFTs reports the Rocket NVMe Heatsink keeps it very cool.
  • Samsung 990 Pro, 124k/41k r/w IOPS - there are reports of 990 Pro rapidly losing health. A firmware update to 1B2QJXD7 is meant to stop the rapid degradation, but won't reverse any that happened on earlier firmware.
  • Seagate Firecuda 530, 218k/73k r/w IOPS
  • Teamgroup MP44, 105k/35k r/w IOPS - caution that this is DRAMless and uses a Host Memory Buffer (HMB), yet appears to perform fine.
  • Transcend 250s, 127k/42k r/w IOPS. @SnoepNFTs reports it gets very hot, you'd want to add a good heatsink to it.
  • WD Black SN850X, 101k/33k r/w IOPS

M.2 NVMe "Mainstream" - TLC, DRAM, PCIe 3, 2TB drives

  • Any data center/enterprise NVMe SSD
  • AData XPG Gammix S11/SX8200 Pro. Several hardware revisions. It's slower than some QLC drives. 68k/22k r/w IOPS
  • AData XPG Gammix S50 Lite
  • HP EX950
  • Mushkin Pilot-E
  • Samsung 970 EVO Plus 2TB, pre-rework (firmware 2B2QEXM7). 140k/46k r/w IOPS
  • Samsung 970 EVO Plus 2TB, post-rework (firmware 3B2QEXM7 or 4B2QEXM7). In testing this syncs just as quickly as the pre-rework drive
  • SK Hynix P31 Gold
  • WD Black SN750 (but not SN750 SE)

M.2 NVMe "Performance" - TLC, DRAM, PCIe 4 or 5, 2TB drives

  • Any data center/enterprise NVMe SSD
  • Crucial P5 Plus
  • Kingston KC2000
  • Samsung 980 Pro (not 980) - drives on firmware 3B2QGXA7 need a firmware update to 5B2QGXA7 to keep them from dying. Samsung's boot Linux is a bit broken; you may want to flash from your own Linux instead (see the sketch after this list).
  • SK Hynix P41 Platinum / Solidigm P44 Pro, 99k/33k r/w IOPS
  • WD Black SN850
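
For the "flash from your own Linux" route mentioned above: if the drive's firmware happens to be published on LVFS, fwupd can apply it without Samsung's boot image. A hedged sketch; whether a given model is actually covered varies:

sudo fwupdmgr refresh
sudo fwupdmgr get-updates
sudo fwupdmgr update

If the drive isn't on LVFS, Samsung's bootable ISO remains the fallback.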

Cloud

  • Any baremetal/dedicated server service
  • AWS i3en.(2)xlarge or is4gen.xlarge
  • AWS gp3 w/ >=10k IOPS provisioned and an m7i/a.xlarge
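
For the gp3 option, the extra IOPS are provisioned when the volume is created or modified; a minimal sketch with placeholder size, throughput and availability zone:

aws ec2 create-volume --volume-type gp3 --iops 10000 --throughput 250 --size 2000 --availability-zone us-east-1a

An existing volume can be raised in place with aws ec2 modify-volume --volume-id vol-xxxxxxxx --iops 10000.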

The Bad

These "Budget" drive models are reportedly too slow to sync (all) mainnet execution layer clients.

Hardware

  • AData S40G/SX8100 4TB, QLC - the 2TB model is TLC and should be fine; 4TB is reportedly too slow
  • Crucial P1, QLC - users report it can't sync Nethermind
  • Crucial P2 and P3 (Plus), QLC and DRAMless - users report it can't sync Nethermind, 27k/9k r/w IOPS
  • Kingston NV1 - probably QLC and DRAMless and thus too slow on 2TB, but could be "anything" as Kingston do not guarantee specific components.
  • Kingston NV2 - like NV1 no guaranteed components
  • WD Green SN350, QLC and DRAMless
  • Anything both QLC and DRAMless will likely not be able to sync at all or not be able to consistently keep up with "chain head"
  • Crucial BX500 SATA, HP S650 SATA, probably most SATA budget drives
  • Samsung 980, DRAMless - unsure, this may belong in "Ugly". If you have one and can say for sure, please come to ethstaker Discord.
  • Samsung T7 USB, even with current firmware

The Ugly

"Budget" drive models that reportedly can sync mainnet execution layer clients, if slowly.

Note that QLC drives usually have a markedly lower TBW than TLC, and will fail earlier.

Hardware

  • Corsair MP400, QLC
  • Inland Professional 3D NAND, QLC
  • Intel 660p, QLC. It's faster than some "mainstream" drives. 98k/33k r/w IOPS
  • Seagate Barracuda Q5, QLC
  • WD Black SN770, DRAMless
  • Samsung 870 QVO SATA, QLC

2.5" SATA "Mainstream" - TLC, DRAM

  • These have been moved to "ugly" because there are user reports that only Nimbus/Geth will now sync on SATA, and even that takes 3 days. It looks like after Dencun, NVMe is squarely the way to go.
  • Any data center/enterprise SATA SSD
  • Crucial MX500 SATA, 46k/15k r/w IOPS
  • Samsung 860 EVO SATA, 55k/18k r/w IOPS
  • Samsung 870 EVO SATA, 63k/20k r/w IOPS
  • WD Blue 3D NAND SATA

Cloud

  • Netcup RS G11 Servers. Impressively fast, but it still depends on your neighbors in the service.
  • Contabo SSD - reportedly able to sync Geth 1.13.0 and Nethermind, if slowly
  • Netcup VPS Servers - reportedly able to sync Geth 1.13.0 and Nethermind, if slowly
  • Contabo NVMe - fast enough but not enough space. 800 GiB is not sufficient.
@valo commented Sep 10, 2023

@d347h-eth I am using Windows 11 with WSL2 and I attach the NVMe directly to WSL2 using wsl2 --mount --bare. Turned out that having Memory integrity enabled in Device security -> Core isolation causes the IOPS to drop in half. After disabling that I get similar results to yours. Interesting find for anyone using a windows + wsl2 system:

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=843MiB/s,w=281MiB/s][r=216k,w=71.9k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=909: Sun Sep 10 23:05:35 2023
  read: IOPS=218k, BW=853MiB/s (895MB/s)(113GiB/134980msec)
   bw (  KiB/s): min=80937, max=1107312, per=100.00%, avg=875172.81, stdev=131967.58, samples=269
   iops        : min=20234, max=276828, avg=218793.17, stdev=32991.93, samples=269
  write: IOPS=72.8k, BW=284MiB/s (298MB/s)(37.5GiB/134980msec); 0 zone resets
   bw (  KiB/s): min=26981, max=370216, per=100.00%, avg=291685.58, stdev=43874.05, samples=269
   iops        : min= 6745, max=92554, avg=72921.35, stdev=10968.53, samples=269
  cpu          : usr=17.81%, sys=47.06%, ctx=1693386, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=853MiB/s (895MB/s), 853MiB/s-853MiB/s (895MB/s-895MB/s), io=113GiB (121GB), run=134980-134980msec
  WRITE: bw=284MiB/s (298MB/s), 284MiB/s-284MiB/s (298MB/s-298MB/s), io=37.5GiB (40.3GB), run=134980-134980msec

Disk stats (read/write):
  sdf: ios=29465514/9820322, merge=0/26, ticks=6502679/1003158, in_queue=7505932, util=99.66%

Seems like the drive is working in PCIe 4.0 x4 mode. I'll make an additional test from a linux live system and see if there is any difference.
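
For anyone wanting to replicate that setup: the disk is attached bare from an elevated PowerShell prompt on the Windows side. A minimal sketch, assuming the NVMe enumerates as PHYSICALDRIVE2:

wsl --mount \\.\PHYSICALDRIVE2 --bare

Inside WSL2 the drive then shows up as a regular block device (e.g. /dev/sdX) that can be partitioned, formatted and mounted as usual.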

@IchigoGirl commented:

I am using a Sakura Internet VPS server (Japan) and the validator sometimes crashes.
I measured the IOPS and got the following results.
CPU 8 cores / Memory 16GB / SSD 1600GB VPS service.
I may need to change my cloud service. (;;)

<journalctl WARN/error log>
Aug 04 20:14:58 ik1-430-47259 lighthouse[93709]: Aug 04 11:14:58.747 WARN Execution engine call failed error: HttpClient(url: http://localhost:8551/, kind: timeout, detail: operation timed out), service: exec
Aug 04 20:14:58 ik1-430-47259 lighthouse[93709]: Aug 04 11:14:58.747 WARN Error whilst processing payload status error: Api { error: HttpClient(url: http://localhost:8551/, kind: timeout, detail: operation timed out) }, service: exec
Aug 04 20:14:58 ik1-430-47259 lighthouse[93709]: Aug 04 11:14:58.747 CRIT Failed to update execution head error: ExecutionForkChoiceUpdateFailed(EngineError(Api { error: HttpClient(url: http://localhost:8551/, kind: timeout, detail: operation timed out) })), service: beacon

test: (groupid=0, jobs=1): err= 0: pid=663286: Mon Sep 18 18:12:07 2023
  read: IOPS=4391, BW=17.2MiB/s (18.0MB/s)(113GiB/6715029msec)
   bw (  KiB/s): min= 2260, max=23072, per=100.00%, avg=17634.97, stdev=1648.47, samples=13385
   iops        : min=  565, max= 5768, avg=4408.63, stdev=412.13, samples=13385
  write: IOPS=1463, BW=5855KiB/s (5996kB/s)(37.5GiB/6715029msec); 0 zone resets
   bw (  KiB/s): min=   88, max= 7552, per=100.00%, avg=5876.91, stdev=487.16, samples=13386
   iops        : min=   22, max= 1888, avg=1469.15, stdev=121.81, samples=13386
  cpu          : usr=0.82%, sys=6.41%, ctx=4757653, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=113GiB (121GB), run=6715029-6715029msec
WRITE: bw=5855KiB/s (5996kB/s), 5855KiB/s-5855KiB/s (5996kB/s-5996kB/s), io=37.5GiB (40.3GB), run=6715029-6715029msec

Disk stats (read/write):
vda: ios=30713239/9971460, merge=7896/111624, ticks=17920914/413844545, in_queue=431767366, util=77.61%

@ichibrosan commented Sep 24, 2023

As a newbie in this field, I can share an observation: investing in storage hardware manufacturers' stock might be as advantageous as investing in cryptocurrency. Building a properly optimized rig to handle this data is quite a task, intellectually and monetarily. I am very impressed with the work you people are doing :-)

I do have a question. I read that a "full" archive node with all state back to genesis requires more than 12TB. The most capable motherboard I have on hand has four M.2 slots, 1 Gen5 and 3 Gen4. The largest NVMe SSDs I can buy right now are 4TB, and some are Gen5 and expensive. If I put three of those into the Gen4 slots, that would give me 12TB for the full archive, but it's not the fastest. This motherboard also has a ton of SATA ports. If it's not too much to ask: from a cost/performance perspective, would it be better to max out the Gen4 NVMe slots for the archive and use the Gen5 slot for the OS, or to put the archive on the SATA drives? Thanks

@yorickdowne (Author) replied:

It’d be best to run a smaller archive node. Both Reth and Erigon can fit an archive into a single 4TB disk, taking just over 2 TiB. Geth is working on similar capability, early 2024 maybe.

@valo commented Sep 28, 2023

Here is a benchmark of Seagate Firecuda 530 4TB on a native Ubuntu 22.04 install. Same hardware as my WSL2 test from above. As you can see the IOPS are about 50% higher, so if you are using WSL2, migrating to native linux should give you quite a bit of IO speedup. Interestingly I can't replicate the IOPS from the gist for this device.

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=1294MiB/s,w=431MiB/s][r=331k,w=110k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=153177: Thu Sep 28 15:15:18 2023
  read: IOPS=331k, BW=1291MiB/s (1354MB/s)(113GiB/89206msec)
   bw (  MiB/s): min= 1053, max= 1327, per=100.00%, avg=1291.95, stdev=35.50, samples=178
   iops        : min=269642, max=339804, avg=330739.91, stdev=9088.15, samples=178
  write: IOPS=110k, BW=430MiB/s (451MB/s)(37.5GiB/89206msec); 0 zone resets
   bw (  KiB/s): min=362144, max=454496, per=100.00%, avg=440922.97, stdev=11996.22, samples=178
   iops        : min=90536, max=113624, avg=110230.74, stdev=2999.06, samples=178
  cpu          : usr=14.33%, sys=71.05%, ctx=4381204, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1291MiB/s (1354MB/s), 1291MiB/s-1291MiB/s (1354MB/s-1354MB/s), io=113GiB (121GB), run=89206-89206msec
  WRITE: bw=430MiB/s (451MB/s), 430MiB/s-430MiB/s (451MB/s-451MB/s), io=37.5GiB (40.3GB), run=89206-89206msec

Disk stats (read/write):
  nvme1n1: ios=29473358/9822980, merge=0/17, ticks=4254089/128160, in_queue=4382314, util=99.92%

@valamidev commented:

As of October 2023:

The Contabo VPS SSD (non-NVMe) is fully capable of running an Ethereum 2.0 mainnet Execution + Validator Node.

Geth version 1.13.2 or later with PBSS (path-based state scheme) and the Lighthouse validator client.

Geth sync was completed in 12 hours and 30 minutes.

@yorickdowne (Author) replied:

Good to know. I assume Nethermind still cannot sync, or can you report that it now can?

@valamidev commented:

Good to know. I assume Nethermind still cannot sync, or can you report that it now can?

ETH network load is quite low nowadays, but Nethermind can sync and keep up with the chain. I can confirm Nethermind is much more I/O hungry, though, and touches the IOPS limit.
You can ask Contabo support for higher IOPS; you usually get it.

Nethermind logs:

 Processed            18326160     |    819.55 ms  |  slot     11,814 ms | Gas gwei: 7.36 .. 7.36 (9.48) .. 60.93
 Processed            18326161     |  1,154.41 ms  |  slot     14,673 ms | Gas gwei: 7.27 .. 7.27 (11.52) .. 230.44
 Processed            18326162     |    667.72 ms  |  slot      9,832 ms | Gas gwei: 7.64 .. 7.65 (11.12) .. 95.42
 Processed            18326163     |    677.76 ms  |  slot     14,253 ms | Gas gwei: 7.50 .. 7.61 (10.95) .. 257.55
 Processed            18326164     |    642.48 ms  |  slot     11,875 ms | Gas gwei: 7.90 .. 7.98 (11.24) .. 115.91

@kaloyan-raev commented:

Kingston KC3000 2TB on Ubuntu 22.04:

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=1482MiB/s,w=493MiB/s][r=379k,w=126k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1513: Fri Dec  1 15:27:59 2023
  read: IOPS=377k, BW=1473MiB/s (1545MB/s)(113GiB/78201msec)
   bw (  MiB/s): min= 1134, max= 1506, per=100.00%, avg=1474.69, stdev=42.25, samples=156
   iops        : min=290478, max=385538, avg=377520.78, stdev=10814.73, samples=156
  write: IOPS=126k, BW=491MiB/s (515MB/s)(37.5GiB/78201msec); 0 zone resets
   bw (  KiB/s): min=386840, max=514416, per=100.00%, avg=503278.41, stdev=14299.46, samples=156
   iops        : min=96710, max=128604, avg=125819.59, stdev=3574.86, samples=156
  cpu          : usr=14.90%, sys=71.19%, ctx=3805582, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1473MiB/s (1545MB/s), 1473MiB/s-1473MiB/s (1545MB/s-1545MB/s), io=113GiB (121GB), run=78201-78201msec
  WRITE: bw=491MiB/s (515MB/s), 491MiB/s-491MiB/s (515MB/s-515MB/s), io=37.5GiB (40.3GB), run=78201-78201msec

Disk stats (read/write):
    dm-0: ios=29487639/9827779, merge=0/0, ticks=2155548/89524, in_queue=2245072, util=99.97%, aggrios=29492812/9829478, aggrmerge=29/85, aggrticks=2198992/106382, aggrin_queue=2305387, aggrutil=99.90%
  nvme0n1: ios=29492812/9829478, merge=29/85, ticks=2198992/106382, in_queue=2305387, util=99.90%

@aliask commented Dec 18, 2023

TeamGroup MP34 4TB drive on Ubuntu 22.04:

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=804MiB/s,w=266MiB/s][r=206k,w=68.0k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=16333: Mon Dec 18 04:00:20 2023
  read: IOPS=118k, BW=459MiB/s (482MB/s)(113GiB/250798msec)
   bw (  KiB/s): min=413304, max=854560, per=100.00%, avg=470602.83, stdev=36300.77, samples=501
   iops        : min=103326, max=213640, avg=117650.70, stdev=9075.17, samples=501
  write: IOPS=39.2k, BW=153MiB/s (161MB/s)(37.5GiB/250798msec); 0 zone resets
   bw (  KiB/s): min=138672, max=282072, per=100.00%, avg=156842.56, stdev=12061.92, samples=501
   iops        : min=34668, max=70520, avg=39210.63, stdev=3015.52, samples=501
  cpu          : usr=9.01%, sys=31.13%, ctx=3536466, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=459MiB/s (482MB/s), 459MiB/s-459MiB/s (482MB/s-482MB/s), io=113GiB (121GB), run=250798-250798msec
  WRITE: bw=153MiB/s (161MB/s), 153MiB/s-153MiB/s (161MB/s-161MB/s), io=37.5GiB (40.3GB), run=250798-250798msec

Disk stats (read/write):
    dm-0: ios=29469140/9821675, merge=0/0, ticks=13703608/1542052, in_queue=15245660, util=100.00%, aggrios=29492330/9829436, aggrmerge=0/54, aggrticks=13745890/1554511, aggrin_queue=15300436, aggrutil=100.00%
  nvme0n1: ios=29492330/9829436, merge=0/54, ticks=13745890/1554511, in_queue=15300436, util=100.00%

@netzlvl commented Dec 27, 2023

Finally got around to troubleshooting the fairly frequent missed attestations I've been plagued with. It would appear something is amiss with my Seagate Firecuda 520. IOPS are reporting significantly worse than other less performant drives I own. Anything I should look into as a next step? Running Ubuntu 22.04.3 LTS

Seagate Firecuda 520 2TB Gen4 X4 NVMe
size: 1863GiB (2TB)
capabilities: gpt-1.00 partitioned partitioned:gpt
configuration: logicalsectorsize=512 sectorsize=512

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=76.6MiB/s,w=25.7MiB/s][r=19.6k,w=6567 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2400068: Wed Dec 27 01:41:32 2023

read: IOPS=23.6k, BW=92.1MiB/s (96.6MB/s)(113GiB/1250418msec)
bw ( KiB/s): min=40880, max=118128, per=100.00%, avg=94485.13, stdev=12039.63, samples=2500
iops : min=10220, max=29532, avg=23621.04, stdev=3009.87, samples=2500

write: IOPS=7860, BW=30.7MiB/s (32.2MB/s)(37.5GiB/1250418msec); 0 zone resets
bw ( KiB/s): min=13752, max=39503, per=100.00%, avg=31489.84, stdev=4019.81, samples=2500
iops : min= 3438, max= 9875, avg=7872.22, stdev=1004.92, samples=2500

cpu : usr=23.99%, sys=65.19%, ctx=8866372, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=113GiB (121GB), run=1250418-1250418msec
WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=37.5GiB (40.3GB), run=1250418-1250418msec

Disk stats (read/write):
dm-0: ios=30274481/9951689, merge=0/0, ticks=6203536/580080, in_queue=6783616, util=100.00%, aggrios=30056705/9880464, aggrmerge=221778/73883, aggrticks=6170229/418768, aggrin_queue=6594545, aggrutil=100.00%
nvme0n1: ios=30056705/9880464, merge=221778/73883, ticks=6170229/418768, in_queue=6594545, util=100.00%

@yorickdowne (Author) replied:

@netzlvl Run a smartctl -x on it and check temperature. If it's in the 60C+ range it's likely throttling. Anything else "in the path" that could impact IO performance? Virtualization, ZFS?

@netzlvl commented Jan 2, 2024

@yorickdowne Thanks for the suggestions. Temps have been hovering around 50-55C. I just retested with case open and fan blowing on the machine bringing temps down to ~40C and still got the same results. This was otherwise a fresh RPL build, no virtualization or anything fancy. Fairly barebones setup. I'm leaning towards formatting and starting fresh but any other suggestions are welcome!

@snoopmx commented Jan 27, 2024

XPG GAMMIX S70 BLADE 4TB, Ubuntu 22.04 (CJValdez)
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=1038MiB/s,w=346MiB/s][r=266k,w=88.6k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=4869: Sat Jan 27 05:10:48 2024
read: IOPS=272k, BW=1063MiB/s (1115MB/s)(113GiB/108389msec)
bw ( MiB/s): min= 355, max= 1249, per=99.98%, avg=1062.66, stdev=174.61, samples=216
iops : min=90942, max=319828, avg=272039.96, stdev=44698.94, samples=216
write: IOPS=90.7k, BW=354MiB/s (371MB/s)(37.5GiB/108389msec); 0 zone resets
bw ( KiB/s): min=121848, max=427936, per=99.98%, avg=362675.44, stdev=59711.66, samples=216
iops : min=30462, max=106984, avg=90668.86, stdev=14927.91, samples=216
cpu : usr=17.23%, sys=67.11%, ctx=7809250, majf=1, minf=7
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=1063MiB/s (1115MB/s), 1063MiB/s-1063MiB/s (1115MB/s-1115MB/s), io=113GiB (121GB), run=108389-108389msec
WRITE: bw=354MiB/s (371MB/s), 354MiB/s-354MiB/s (371MB/s-371MB/s), io=37.5GiB (40.3GB), run=108389-108389msec

Disk stats (read/write):
nvme0n1: ios=29516035/9832782, merge=179/932, ticks=4976294/179813, in_queue=5156253, util=99.84%

@0xSmit commented Feb 7, 2024

Kingston Fury Renegade 4 TB on Ubuntu 22.04.3 LTS

Jobs: 1 (f=1): [m(1)][100.0%][r=825MiB/s,w=275MiB/s][r=211k,w=70.3k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2391: Wed Feb  7 22:40:00 2024
  read: IOPS=211k, BW=825MiB/s (865MB/s)(113GiB/139692msec)
   bw (  KiB/s): min=640384, max=857392, per=100.00%, avg=844813.59, stdev=19162.15, samples=279
   iops        : min=160096, max=214348, avg=211203.44, stdev=4790.55, samples=279
  write: IOPS=70.4k, BW=275MiB/s (288MB/s)(37.5GiB/139692msec); 0 zone resets
   bw (  KiB/s): min=213584, max=286904, per=100.00%, avg=281559.23, stdev=6428.86, samples=279
   iops        : min=53396, max=71726, avg=70389.82, stdev=1607.22, samples=279
  cpu          : usr=14.78%, sys=70.64%, ctx=4460659, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=825MiB/s (865MB/s), 825MiB/s-825MiB/s (865MB/s-865MB/s), io=113GiB (121GB), run=139692-139692msec
  WRITE: bw=275MiB/s (288MB/s), 275MiB/s-275MiB/s (288MB/s-288MB/s), io=37.5GiB (40.3GB), run=139692-139692msec

Disk stats (read/write):
  nvme0n1: ios=29482363/9825979, merge=0/27, ticks=2949353/147884, in_queue=3097249, util=99.97%

@DellaWhitaker commented:

Thanks for sharing it with us.

@bussyjd commented Feb 19, 2024

Ubuntu 22.04, WD Red 2TB NAS SSD
WDS400T1R0A-68A4W0
Node won't sync

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=87.9MiB/s,w=29.5MiB/s][r=22.5k,w=7547 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1263819: Mon Feb 19 11:44:23 2024
  read: IOPS=25.9k, BW=101MiB/s (106MB/s)(113GiB/1138368msec)
   bw (  KiB/s): min=22104, max=161232, per=100.00%, avg=103709.80, stdev=26209.74, samples=2275
   iops        : min= 5526, max=40308, avg=25927.42, stdev=6552.44, samples=2275
  write: IOPS=8634, BW=33.7MiB/s (35.4MB/s)(37.5GiB/1138368msec); 0 zone resets
   bw (  KiB/s): min= 7600, max=54608, per=100.00%, avg=34564.47, stdev=8736.34, samples=2275
   iops        : min= 1900, max=13652, avg=8641.09, stdev=2184.09, samples=2275
  cpu          : usr=6.15%, sys=23.19%, ctx=22865227, majf=0, minf=17
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=113GiB (121GB), run=1138368-1138368msec
  WRITE: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=37.5GiB (40.3GB), run=1138368-1138368msec

Disk stats (read/write):
  sda: ios=29508728/9828000, merge=2646/272, ticks=60857576/5856658, in_queue=68659770, util=100.00%

@duncancmt commented Feb 20, 2024

Fedora 38 on Qubes R4.1 (virtualized, but none of this multi-layered filesystem on top of filesystem on top of encryption shenanigans)
XFS with reverse mapping
TeamGroup MP44 8TB (HMB) set to 4k sectors

Jobs: 1 (f=1): [m(1)][100.0%][r=415MiB/s,w=137MiB/s][r=106k,w=35.0k IOPS][eta 00m:00s] 
test: (groupid=0, jobs=1): err= 0: pid=1716: Tue Feb 20 09:15:13 2024
  read: IOPS=105k, BW=412MiB/s (432MB/s)(113GiB/279629msec)
   bw (  KiB/s): min=168648, max=757002, per=100.00%, avg=422125.70, stdev=120676.28, samples=559
   iops        : min=42162, max=189250, avg=105531.34, stdev=30169.07, samples=559
  write: IOPS=35.2k, BW=137MiB/s (144MB/s)(37.5GiB/279629msec); 0 zone resets
   bw (  KiB/s): min=55832, max=251358, per=100.00%, avg=140685.69, stdev=40142.70, samples=559
   iops        : min=13958, max=62839, avg=35171.33, stdev=10035.67, samples=559
  cpu          : usr=13.32%, sys=69.85%, ctx=5737782, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=412MiB/s (432MB/s), 412MiB/s-412MiB/s (432MB/s-432MB/s), io=113GiB (121GB), run=279629-279629msec
  WRITE: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=37.5GiB (40.3GB), run=279629-279629msec

Disk stats (read/write):
  xvdi: ios=29480750/9825346, merge=0/1, ticks=3170142/447551, in_queue=3617709, util=100.00%

will edit when I've had a chance to see how the node runs on this
reth+nimbus, both in archive mode, syncs and follows the chain just fine
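
A hedged sketch of a comparable setup, for reference (the device name is an example; the right 4K LBA format index depends on the drive, and nvme format wipes all data):

# list the LBA formats the drive supports and spot the 4096-byte one
sudo nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
# switch to the 4K LBA format (index 1 here is an assumption, check the output above)
sudo nvme format /dev/nvme0n1 --lbaf=1
# create XFS with the reverse-mapping btree enabled
sudo mkfs.xfs -m rmapbt=1 /dev/nvme0n1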

@laurenzberger commented:

Crucial P3 Plus 4TB in ThinkCentre m75q gen2 (SSD running hot, no room for cooler)
Took 4 weeks to sync with Erigon (archive node), 4TB almost full already, pruning not possible post-sync.
Can barely keep up with chain head.

Jobs: 1 (f=1): [m(1)][100.0%][r=71.5MiB/s,w=24.0MiB/s][r=18.3k,w=6134 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1236631: Tue Feb 20 10:29:07 2024
  read: IOPS=27.6k, BW=108MiB/s (113MB/s)(113GiB/1069328msec)
   bw (  KiB/s): min=17440, max=244744, per=100.00%, avg=110395.07, stdev=74976.03, samples=2138
   iops        : min= 4360, max=61186, avg=27598.67, stdev=18744.03, samples=2138
  write: IOPS=9192, BW=35.9MiB/s (37.7MB/s)(37.5GiB/1069328msec); 0 zone resets
   bw (  KiB/s): min= 5400, max=81672, per=100.00%, avg=36792.43, stdev=24997.34, samples=2138
   iops        : min= 1350, max=20418, avg=9198.00, stdev=6249.35, samples=2138
  cpu          : usr=3.62%, sys=25.64%, ctx=13387268, majf=1, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=113GiB (121GB), run=1069328-1069328msec
  WRITE: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=37.5GiB (40.3GB), run=1069328-1069328msec

Disk stats (read/write):
    dm-1: ios=29487924/9833619, merge=0/0, ticks=66472928/1215864, in_queue=67688792, util=100.00%, aggrios=29492470/9835159, aggrmerge=0/0, aggrticks=66456848/1206580, aggrin_queue=67663428, aggrutil=100.00%
    dm-0: ios=29492470/9835159, merge=0/0, ticks=66456848/1206580, in_queue=67663428, util=100.00%, aggrios=29492402/9833761, aggrmerge=68/1401, aggrticks=66122896/1097291, aggrin_queue=67223406, aggrutil=100.00%
  nvme0n1: ios=29492402/9833761, merge=68/1401, ticks=66122896/1097291, in_queue=67223406, util=100.00%

@bussyjd commented Feb 21, 2024

WD Red SN700 2000GB NVME

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=530MiB/s,w=176MiB/s][r=136k,w=45.0k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=22923: Wed Feb 21 13:22:31 2024
  read: IOPS=141k, BW=551MiB/s (578MB/s)(113GiB/208929msec)
   bw (  KiB/s): min=213392, max=846128, per=99.99%, avg=564555.30, stdev=107083.72, samples=417
   iops        : min=53348, max=211532, avg=141138.86, stdev=26770.92, samples=417
  write: IOPS=47.0k, BW=184MiB/s (193MB/s)(37.5GiB/208929msec); 0 zone resets
   bw (  KiB/s): min=72240, max=282960, per=99.99%, avg=188160.02, stdev=35738.54, samples=417
   iops        : min=18060, max=70740, avg=47040.00, stdev=8934.64, samples=417
  cpu          : usr=14.21%, sys=48.19%, ctx=12030604, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=551MiB/s (578MB/s), 551MiB/s-551MiB/s (578MB/s-578MB/s), io=113GiB (121GB), run=208929-208929msec
  WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=37.5GiB (40.3GB), run=208929-208929msec

Disk stats (read/write):
    dm-0: ios=29453038/9816140, merge=0/0, ticks=12182596/107212, in_queue=12289808, util=99.99%, aggrios=29492326/9829312, aggrmerge=0/0, aggrticks=12181730/113359, aggrin_queue=12295224, aggrutil=99.96%
  nvme0n1: ios=29492326/9829312, merge=0/0, ticks=12181730/113359, in_queue=12295224, util=99.96%

@yorickdowne (Author) replied:

@bussyjd Thanks for confirming that SA500 SATA SSD doesn't cut the mustard. Not unexpected. Great results on the SN700 as expected.

@laurenzberger , thanks for taking one for the team and testing a slow model.

And @duncancmt , I've added the MP44. 8TB with decent speed is a new one. Surprising as it's DRAMless.

@bussyjd commented Feb 21, 2024

TEAMGROUP MP34

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=697MiB/s,w=231MiB/s][r=179k,w=59.0k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=4068: Wed Feb 21 20:05:50 2024
  read: IOPS=94.5k, BW=369MiB/s (387MB/s)(113GiB/312117msec)
   bw (  KiB/s): min=21624, max=801584, per=99.96%, avg=377803.39, stdev=169305.16, samples=623
   iops        : min= 5406, max=200396, avg=94450.87, stdev=42326.30, samples=623
  write: IOPS=31.5k, BW=123MiB/s (129MB/s)(37.5GiB/312117msec); 0 zone resets
   bw (  KiB/s): min= 7320, max=264896, per=99.96%, avg=125916.83, stdev=56412.81, samples=623
   iops        : min= 1830, max=66224, avg=31479.20, stdev=14103.20, samples=623
  cpu          : usr=8.57%, sys=23.25%, ctx=3343348, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=369MiB/s (387MB/s), 369MiB/s-369MiB/s (387MB/s-387MB/s), io=113GiB (121GB), run=312117-312117msec
  WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=37.5GiB (40.3GB), run=312117-312117msec

Disk stats (read/write):
  nvme0n1: ios=29457519/9817778, merge=0/62, ticks=16948691/2236682, in_queue=19185556, util=100.00%

@cevatbostancioglu commented:

Can I ask, why

  • Corsair Force MP600 Pro (but not "Core XT"), 2TB/4TB
    Why not the MP600 PRO Core XT 8TB?

@yorickdowne (Author) replied:

For starters because I'm not aware of that model. "MP600 Pro Core XT" may not exist. The spreadsheet has an "MP600 Core XT", that's DRAMless and QLC, and an "MP600 Pro XT", DRAM and TLC. Afaik all "MP600 Pro" models are fine; "MP600 Core XT" is not.

@SnoepNFTs commented:

I recently conducted a performance test on the Samsung SSD 990 Pro 4TB model - with heatsink. To my disappointment, the read/write (R/W) speeds were significantly lower than expected, recording at 125k/41.4k IOPS. Given its high DRAM capacity and fast R/W speeds, I expected much better performance.

test: (groupid=0, jobs=1): err= 0: pid=9866: Fri Mar  1 23:07:07 2024
  read: IOPS=124k, BW=486MiB/s (509MB/s)(113GiB/237104msec)
   bw (  KiB/s): min=473160, max=515672, per=100.00%, avg=497642.84, stdev=3501.77, samples=474
   iops        : min=118290, max=128918, avg=124410.73, stdev=875.44, samples=474
  write: IOPS=41.5k, BW=162MiB/s (170MB/s)(37.5GiB/237104msec); 0 zone resets
   bw (  KiB/s): min=158056, max=171448, per=100.00%, avg=165854.41, stdev=1341.47, samples=474
   iops        : min=39514, max=42862, avg=41463.60, stdev=335.37, samples=474
  cpu          : usr=10.13%, sys=76.00%, ctx=8989513, majf=0, minf=10
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=486MiB/s (509MB/s), 486MiB/s-486MiB/s (509MB/s-509MB/s), io=113GiB (121GB), run=237104-237104msec
  WRITE: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=37.5GiB (40.3GB), run=237104-237104msec

Disk stats (read/write):
  nvme0n1: ios=29487370/9828723, merge=34/827, ticks=1093523/76506, in_queue=1170137, util=99.98%

I have tested this on two systems:

System 1: HP prodesk 600 G3 - Desktop mini
CPU: I5-7500t: 4 cores 4 threads @2.7 GHz - with integrated graphics
RAM: 16GB ddr4 @ 2400MHz
SSD: Samsung SSD 990 Pro 4TB model - with heatsink on a PCIe 3.0 x4 slot

System 2: Custom built pc
CPU: Ryzen 7 5800x
RAM: 32GB ddr4 @ 3200Mhz
GPU: NVIDIA Geforce RTX 3070ti
SSD: Samsung SSD 990 Pro 4TB model - with heatsink on a PCIe 4.0 x4

Things that I made sure to do:

  1. Update to the newest firmware using the official Samsung magician software utility tool - This was very important as it was using old firmware which is detrimental to the drives' health.
  2. Test on both PCIe 3.0 x4 and PCIe 4.0 x4 slots. (didn't make a difference btw)
  3. Make sure atime was turned off
  4. Testing on both Ubuntu server and Ubuntu desktop - 22.04 and 23.10
  5. Went into BIOS settings and made sure everything was completely optimized for IOPS
  6. Also made sure that it did not thermally throttle by actively cooling and benchmarking with the smartctl -x utility which @yorickdowne recommended.
  7. Make sure that the drive was genuine and not some off-market knockoff by registering its serial number with the official Samsung website.
  8. The drive also had 0 power ons and power-on hours, so I confirmed for myself that it was brand new.

I have no clue why these results are so bad; this is supposed to be one of the best drives on the market, if not the best. I went through a bunch of troubleshooting, but I am starting to believe this is simply what the drive is capable of, especially since the performance closely resembles the results shown earlier for the Samsung 970 EVO 2TB.

I'm keen to hear from others who own the same SSD model. If you've conducted similar tests, please share your results. I am very interested to know whether this is an isolated case or a common issue with this SSD model.

If anyone has additional troubleshooting tips please let me know, might as well try some stuff while I am at it

@yorickdowne (Author) replied:

Thanks for the extra-diligent tests! Tbh those results aren’t bad at all, they are right in line with what you expect from a good NVMe drive. As you found, PCIe 3 or 4 makes no difference at all for this use case: It’s all about random read/write, not raw bandwidth.

It’s one of the most stressful things you can ask a drive to do, which is why many budget drives struggle, although they are more than fine for desktop and gaming use.

@Tymn77 commented Mar 2, 2024

More data points for the Samsung 990 Pro 4TB w/ Heatsink.
noatime set in fstab, nvme_core.default_ps_max_latency_us=0 set in grub
My drive is using the new v8 process, so has the 0B2QJXG7 firmware.
Fresh install of Ubuntu 22.04 LTS, Xeon E-2136, 2x32GB Kingston ECC @ 2666

I was monitoring temp with smartctl -x, and it did get up there at ~55C, but IOPS are looking good at 260k/86.7k. Am I reading this right?

$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=999MiB/s,w=332MiB/s][r=256k,w=85.1k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1676: Fri Mar  1 21:13:32 2024
  read: IOPS=260k, BW=1016MiB/s (1065MB/s)(113GiB/113384msec)
   bw (  KiB/s): min=1001280, max=1063096, per=100.00%, avg=1040833.45, stdev=11991.07, samples=226
   iops        : min=250320, max=265774, avg=260208.42, stdev=2997.74, samples=226
  write: IOPS=86.7k, BW=339MiB/s (355MB/s)(37.5GiB/113384msec); 0 zone resets
   bw (  KiB/s): min=332488, max=355800, per=100.00%, avg=346900.35, stdev=4130.15, samples=226
   iops        : min=83122, max=88950, avg=86725.08, stdev=1032.54, samples=226
  cpu          : usr=15.66%, sys=71.73%, ctx=9318105, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1016MiB/s (1065MB/s), 1016MiB/s-1016MiB/s (1065MB/s-1065MB/s), io=113GiB (121GB), run=113384-113384msec
  WRITE: bw=339MiB/s (355MB/s), 339MiB/s-339MiB/s (355MB/s-355MB/s), io=37.5GiB (40.3GB), run=113384-113384msec

Disk stats (read/write):
    dm-0: ios=29428508/9808565, merge=0/0, ticks=1220972/95488, in_queue=1316460, util=99.95%, aggrios=29492333/9829587, aggrmerge=0/72, aggrticks=1226083/100143, aggrin_queue=1326263, aggrutil=99.91%
  nvme0n1: ios=29492333/9829587, merge=0/72, ticks=1226083/100143, in_queue=1326263, util=99.91%

@SnoepNFTs commented Mar 2, 2024

(quoting @Tymn77's Samsung 990 Pro 4TB results above: 260k/86.7k r/w IOPS)

Thanks for sharing your experience! I recently applied the "nvme_core.default_ps_max_latency_us=0" tweak and noticed a modest improvement of about 1k/0.5k IOPS. Not much, but I'll take it.

Initially, I wasn't aware of the distinctions between the V7 and V8 models, particularly how some V7 versions were impacted by firmware issues leading to accelerated wear. It turns out that not all firmware versions starting with "0B2QJX" are problematic. After some digging and checking my own SSD, I discovered I own the V8 model manufactured in December 2023.

Here's a quick breakdown of the firmware differences:

  • V7 Detrimental firmware: 0B2QJXD7 – Known for causing rapid degradation.

  • V8 Safe firmware: 0B2QJXG7 – Does not cause rapid degradation.

Notice the difference between the "D" and "G" in the firmware versions?

Before updating the firmware through Samsung Magician, I took a photo for reference in case any issues arose. My SSD originally had the same firmware version you are currently running.

Regrettably, Samsung's website does not offer the older firmware versions for downgrade, only displaying the newer "4B2QJXD7" version, which my SSD is currently running. Reverting to 0B2QJXG7 would be the ultimate final test in determining whether it is a firmware issue or my device's performance is just worse.


Now obviously I mean "worse" in a relative way, because the results are more than sufficient to sync an Ethereum node.
I was comparing against the earlier impressive results of the KC3000, the Firecuda, and your IOPS results, which are definitely much better at 260k/86.7k. I thought that the Samsung SSD, given its later release date and higher DRAM capacity, would be a bit better than those other SSDs.

@kmarci9 commented Mar 2, 2024

1x SAMSUNG PM963 MZQLW1T9HMJP-00003 U.2
1x SAMSUNG SM963 MZQKW1T9HMJP-00003 U.2
(Similar SSDs; one is MLC, the other TLC.)
Very durable server SSDs that can be had cheap on the second-hand market.
Running as RAID 0 with Btrfs.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
test: Laying out IO file (1 file / 153600MiB)
Jobs: 1 (f=0): [f(1)][100.0%][r=219MiB/s,w=72.5MiB/s][r=56.0k,w=18.6k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=11504: Sat Mar  2 15:53:12 2024
  read: IOPS=53.7k, BW=210MiB/s (220MB/s)(113GiB/548945msec)
   bw (  KiB/s): min=184032, max=238496, per=100.00%, avg=214950.29, stdev=8732.04, samples=1097
   iops        : min=46008, max=59624, avg=53737.59, stdev=2183.02, samples=1097
  write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(37.5GiB/548945msec); 0 zone resets
   bw (  KiB/s): min=61320, max=80000, per=100.00%, avg=71638.72, stdev=2931.50, samples=1097
   iops        : min=15330, max=20000, avg=17909.68, stdev=732.87, samples=1097
  cpu          : usr=7.29%, sys=86.91%, ctx=6151117, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=29492326,9829274,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=210MiB/s (220MB/s), 210MiB/s-210MiB/s (220MB/s-220MB/s), io=113GiB (121GB), run=548945-548945msec
  WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=37.5GiB (40.3GB), run=548945-548945msec

@tlsol commented Mar 25, 2024

Samsung PM863a 3.84TB TLC - SATA
Also very durable and affordable server SSDs
