I decided to move a big chunk of the data on my everyday Mac to ZFS. Here are some of my notes.
Compression is the killer feature for casual users. I only have 1 TB of storage in my MacBook Pro, and I was constantly running out of space. Many of the files I store compress very well: my Mail archive, source code, applications, and Parallels virtual machines. The overall compression ratio for my pool is currently 2.06×.
The usually recommended LZ4 compression algorithm is very fast and rarely incurs a performance penalty. The CPU has to do a little work to compress and decompress, but in exchange less data has to be read from or written to the disk, which often makes compressed I/O faster overall.
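Turning it on is a one-liner. A minimal sketch, assuming a dataset named `tank/Users` (the name is made up; substitute your own):

```shell
# Enable LZ4 compression on a dataset.
# Note: only data written after this point is compressed;
# existing files stay uncompressed until rewritten.
zfs set compression=lz4 tank/Users

# Check the setting and the achieved compression ratio.
zfs get compression,compressratio tank/Users
```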
ZFS checksums everything (by default with the fletcher4 algorithm), so I can have confidence that corrupted data will never be silently used.
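You can also ask ZFS to proactively verify every checksum by reading all the data in the pool. A sketch, again with a hypothetical pool name `tank`:

```shell
# Read and verify every block in the pool; repairs are made
# automatically where redundancy allows.
zpool scrub tank

# Check scrub progress and any checksum errors found.
zpool status tank
```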
Atomic snapshots allow me to roll back changes. I can snapshot a dataset, install some weird piece of software, and quickly roll back if there is something wrong. It’s like Git for your filesystem.
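The snapshot workflow described above looks roughly like this (dataset and snapshot names are illustrative):

```shell
# Take an atomic snapshot before installing something dubious.
zfs snapshot tank/Users@before-install

# If things go wrong, roll back. Note: rollback discards
# everything written to the dataset after the snapshot.
zfs rollback tank/Users@before-install

# Or, if all is well, just delete the snapshot.
zfs destroy tank/Users@before-install
```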
The `zfs send` and `zfs receive` commands serialize a snapshot (or, recursively, a whole dataset hierarchy) into a stream that preserves almost every aspect of the filesystem, which you can copy to another system or keep around for later recovery. If you have a NAS or backup server that also runs ZFS, you can receive these snapshots on the NAS, delete them from the original system to save space, and restore them later if necessary.
Individual datasets can be set as case insensitive, which gives me a little confidence that various Mac applications won’t freak out.
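One caveat worth knowing: case sensitivity is a creation-time property, so it has to be chosen when the dataset is made rather than toggled later. A sketch with an illustrative dataset name:

```shell
# casesensitivity can only be set at creation time.
# normalization=formD is also often recommended on macOS to handle
# Unicode filename normalization the way Mac applications expect.
zfs create -o casesensitivity=insensitive -o normalization=formD \
  tank/Applications

# Verify the property.
zfs get casesensitivity tank/Applications
```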
Here is what I read about ZFS before first setting it up on a Linux server and then on my MacBook Pro:
- zvols
- Clones