ZFS / couchdb / zstd compression test

Environment

  • Ubuntu 20.04
  • Kernel 5.4.0-67
  • OpenZFS 2.0.4-0york2~20.04
  • ashift: 12
  • Compression: zstd, default level (an example dataset setup is sketched after this list)
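
The gist does not spell out how the datasets were created; below is a minimal sketch of a setup with these properties, assuming a hypothetical pool named "tank" and hypothetical dataset names. Note that ashift=12 is a pool-level property and has to be set when the pool is created, not per dataset.

```python
# Hedged sketch: creates one test dataset per record size with zstd
# compression. Pool name "tank" and dataset names are assumptions,
# not taken from the gist.
import subprocess

def create_test_dataset(pool: str, name: str, recordsize: str,
                        compression: str = "zstd") -> None:
    """Create a ZFS dataset with the given record size and compression."""
    subprocess.run(
        ["zfs", "create",
         "-o", f"recordsize={recordsize}",
         "-o", f"compression={compression}",
         f"{pool}/{name}"],
        check=True,
    )

if __name__ == "__main__":
    # ashift=12 must already have been set at pool creation time,
    # e.g. `zpool create -o ashift=12 tank <vdevs>`.
    for rs in ("128K", "16K", "32K"):
        create_test_dataset("tank", f"couchdb-test-{rs.lower()}", rs)
```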

Data

  • Multiple couchdb databases, 300M+ documents spread over a few big dbs (1M+ docs) and 100k+ small dbs.
  • Documents are mostly smallish (a few KB to tens of KB).
  • DBs have large indexes (mostly bigger than the data itself).
  • Snappy compression is enabled in couchdb because the live instance has not been migrated to ZFS yet (the test was done on a backup server); the plan is to disable couchdb compression during the migration (see the sketch after this list).
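
As a reference for that planned migration step, here is a minimal sketch of turning off couchdb's built-in file compression through its configuration API. The node name `_local`, the URL, and the credentials are assumptions; existing shards stay snappy-compressed until they are compacted.

```python
# Hedged sketch: sets [couchdb] file_compression = none on a running node.
# Host, port and admin credentials below are placeholders.
import json
import requests

COUCH = "http://localhost:5984"
AUTH = ("admin", "password")  # hypothetical admin credentials

def disable_file_compression() -> str:
    """Disable couchdb's built-in (snappy) compression for newly written shards."""
    resp = requests.put(
        f"{COUCH}/_node/_local/_config/couchdb/file_compression",
        data=json.dumps("none"),  # the config API expects a JSON-encoded string
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()  # previous value, e.g. "snappy"

if __name__ == "__main__":
    print("previous setting:", disable_file_compression())
```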

Result

Raw data size: 371GB (for the 16KB and 128KB record size tests) / 393GB (for the 32KB test). Compressed size on ZFS, as a percentage of raw:

  • 128KB record size: 201GB (54.18%)
  • 16KB record size: 281GB (75.74%)
  • 32KB record size: 231GB (58.78%) (note: a small amount of data was left uncompressed in this test)

(After more data was added and most DBs were decompressed (couchdb compression disabled): 208GB compressed / 608GB raw (34.2%))
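
The percentages above are simply the compressed on-disk size divided by the raw data size; a quick check of the figures quoted in this gist:

```python
# Reproduces the ratios quoted above from the GB figures in this gist.
tests = {
    "128KB recordsize": (201, 371),
    "16KB recordsize": (281, 371),
    "32KB recordsize": (231, 393),
    "after migration (couchdb compression mostly disabled)": (208, 608),
}
for name, (compressed_gb, raw_gb) in tests.items():
    print(f"{name}: {compressed_gb / raw_gb:.2%}")
# On a live dataset, roughly the same information can be read directly with
# `zfs get used,logicalused,compressratio <dataset>`.
```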
