XFS reflinks vs BTRFS snapshots

reflinks in XFS

Reflinks can be used as a CoW mechanism in XFS since kernel 4.17.0-2.el8.

This feature enables two or more files to share a common set of data blocks. When either of the files sharing common blocks changes, XFS breaks the link to common blocks and creates a new file.

So with cp --reflink we can already create CoW copies of files, similar to btrfs CoW. We can simply do the following to mimic a snapshot:

mkdir -p "/root/backup/$(date +%F)"
find /root -mindepth 1 -maxdepth 1 -not -name "backup" -exec cp -r --reflink=always {} "/root/backup/$(date +%F)/" \;
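
As a quick check that such a copy actually shares extents with the original, filefrag reports a "shared" flag on extents referenced by more than one file. A minimal sketch, assuming a test file /root/testfile on the reflink-enabled XFS mount:

# make a reflink copy of a single file; --reflink=always fails instead of silently falling back to a full copy
cp --reflink=always /root/testfile /root/testfile.reflink
# list extents of both files; shared extents carry the "shared" flag (FIEMAP) on XFS
filefrag -v /root/testfile /root/testfile.reflink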

reflink vs hardlink

Hardlinks do not provide per-block CoW. All hardlinks point to the same inode and data blocks, so if the file is changed through any one of them, every hardlinked copy changes. A reflink copy, in contrast, gets its own inode and only shares data blocks until one side is modified.
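
A minimal sketch of the difference, assuming an XFS mount at /mnt/xfs with reflink support:

cd /mnt/xfs
echo "v1" > original
ln original hardcopy                    # hardlink: same inode, same blocks
cp --reflink=always original reflcopy   # reflink: new inode, shared blocks (CoW)
echo "v2" > original                    # rewrite the original
cat hardcopy   # prints "v2" - the hardlink follows the change
cat reflcopy   # prints "v1" - the reflink copy kept its own data after CoW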

backing up an FS that has reflinks

btrfs can use btrfs send <snapshot> | ssh <user@remote> "btrfs receive /mnt/server_backups" for remote backup, or btrfs send <snapshot> | btrfs receive /mnt/server_backups for local backup.
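
For incremental transfers, btrfs send can take a parent snapshot with -p so that only the differences are streamed. A sketch with hypothetical snapshot paths:

# initial full transfer of a read-only snapshot
btrfs subvolume snapshot -r /data /data/.snapshots/base
btrfs send /data/.snapshots/base | ssh user@remote "btrfs receive /mnt/server_backups"
# later: stream only the delta relative to the base snapshot
btrfs subvolume snapshot -r /data /data/.snapshots/today
btrfs send -p /data/.snapshots/base /data/.snapshots/today | ssh user@remote "btrfs receive /mnt/server_backups"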

But there is no good way to do this with XFS reflinks.

local disk-to-disk

In theory, if I use cp -r --reflink /mnt/sda1/ /mnt/sdb1 it should replicate the data on the other disk. However, reflinks cannot cross filesystems, so cp copies each file in full and the reflink structure of sda1 is not reproduced on sdb1. It is also unclear whether cp is smart enough to reuse reflinks that already exist on sdb1. For example, if there is already a sdb1/a -> backup/a reflink, will copying /mnt/sda1/a create a completely new sdb1/a, or only change the modified blocks?
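
One option for only rewriting the changed blocks of an existing destination file is rsync with --inplace, which updates the target in place instead of recreating it, so extents that are not rewritten keep whatever sharing they already had on sdb1. This is a sketch of the idea, not something I have verified preserves reflinks in all cases:

# update /mnt/sdb1/a in place; --no-whole-file forces the delta algorithm even for local copies,
# so blocks that already match are not rewritten and extents still shared with sdb1/backup/a can stay shared
rsync -a --inplace --no-whole-file /mnt/sda1/a /mnt/sdb1/a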

remote

So far this is not really possible for XFS.

Although rsync claims to support hardlinks, backing up a heavily hardlinked XFS file system can be very difficult. rsync -H does not really help, because rsync has to discover the hardlinked files by itself, and it cannot see shared (reflinked) extents at all, so every reflink copy is transferred and stored as a full file.
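
For reference, a plain hardlink-preserving rsync run would look like the sketch below (paths are hypothetical); it still sends every reflinked file as an independent full copy:

# -a preserve attributes, -H preserve hardlinks, -x stay on one filesystem
rsync -aHx /mnt/sda1/ user@remote:/mnt/server_backups/sda1/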

xfsdump

In theory I can use xfsdump for filesystem-level backup. But it only supports 9 incremental levels (a level-0 full dump plus levels 1-9), and the dump files cannot be mounted for direct access, so I think it is not very useful.

# Perform the initial full dump (level 0)
# -L/-M supply the session and media labels so xfsdump does not prompt interactively when writing to stdout
xfsdump -l 0 -L backup_level0 -M media0 - /mount/point | ssh user@remote_host "cat > /path/to/backup/backup_level0.dump"

# Transfer the dump inventory (it is a directory)
scp -r /var/lib/xfsdump/inventory user@remote_host:/path/to/backup/inventory

If the online inventory is destroyed, one probably has to copy the backed-up inventory back to /var/lib/xfsdump in order to continue the incremental chain. But since xfsdump can only do 9 incremental levels, it is not very useful.
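
A sketch of how the later steps could look, under the same assumptions as the commands above (hypothetical paths and labels); an incremental dump at level N contains everything changed since the last dump at a lower level, and restoring a series uses xfsrestore's cumulative mode:

# Incremental dump (level 1): only changes since the level-0 dump
xfsdump -l 1 -L backup_level1 -M media0 - /mount/point | ssh user@remote_host "cat > /path/to/backup/backup_level1.dump"

# Restore: replay the full dump, then each incremental in order (-r = cumulative mode;
# remove the housekeeping directory xfsrestore leaves behind after the last pass)
ssh user@remote_host "cat /path/to/backup/backup_level0.dump" | xfsrestore -r - /restore/point
ssh user@remote_host "cat /path/to/backup/backup_level1.dump" | xfsrestore -r - /restore/point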

It is also possible to dump directly to a remote tape device.
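
xfsdump can address a remote tape drive via the rmt protocol by giving a host-qualified device as the destination. A sketch with a hypothetical host and device name:

# dump directly to a tape drive on another host (uses rmt(8) under the hood)
xfsdump -l 0 -L backup_level0 -M tape0 -f remote_host:/dev/nst0 /mount/point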
