First, we need recent btrfs tools. Otherwise, we're entering a world of pain.
So:
apt-get build-dep btrfs-tools
apt-get install liblzo2-dev libblkid-dev
git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
make -j -C btrfs-progs
Set the device and mountpoint for your setup (feel free to adjust those):
DEV=/dev/xvdc
MNT=/var/lib/docker
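Before going further, it's worth double-checking that $MNT really is a mounted btrfs filesystem. A quick sanity check, assuming util-linux's findmnt is available:

```shell
# Print the filesystem type backing $MNT; expect "btrfs" here.
# ($MNT falls back to / only so this snippet runs standalone.)
MNT=${MNT:-/}
findmnt -n -o FSTYPE --target "$MNT"
```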
Now let's take a look around.
btrfs-progs/btrfs filesys show $DEV
Label: none uuid: de060d4c-99b6-4da0-90fa-fb47166db38b
Total devices 1 FS bytes used 2.51GiB
devid 1 size 87.50GiB used 87.50GiB path /dev/xvdc
used is close (even equal, here) to size, which means all chunks were allocated and OH CAPTAIN WE'RE OUT OF THEM SPACE!
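If you want to turn that eyeball check into a one-liner, you can compute the allocation ratio from the show output. The here-doc below just replays the sample output above; on a live system, pipe btrfs-progs/btrfs filesys show $DEV into the same awk script instead:

```shell
# Extract "size" (field 4) and "used" (field 6) from the devid line;
# awk's string-to-number conversion drops the trailing "GiB" for us.
awk '/devid/ {
    printf "%.0f%% of the disk is allocated to chunks\n", 100 * $6 / $4
}' <<'EOF'
Label: none  uuid: de060d4c-99b6-4da0-90fa-fb47166db38b
	Total devices 1 FS bytes used 2.51GiB
	devid    1 size 87.50GiB used 87.50GiB path /dev/xvdc
EOF
```

Anything close to 100% means you are in the situation described here.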
Let's see how space was allocated.
btrfs-progs/btrfs filesys df $MNT
Data, single: total=85.48GiB, used=1.81GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=1.00GiB, used=312.79MiB
Metadata, single: total=8.00MiB, used=0.00
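As a side note, the "only using 30%" figure mentioned below can be computed directly from this df output. Again, the here-doc just replays the sample output above; on a live system, pipe btrfs-progs/btrfs filesys df $MNT into the same script:

```shell
# Compute what fraction of the allocated Metadata chunks is in use.
awk -F'[=,]' '/^Metadata, DUP/ {
    total = $3 + 0                # "1.00GiB"   -> 1.00
    used  = $5 + 0                # "312.79MiB" -> 312.79
    if ($5 ~ /MiB/) used /= 1024  # normalize MiB to GiB
    printf "Metadata: %.1f%% of allocated chunks in use\n", 100 * used / total
}' <<'EOF'
Data, single: total=85.48GiB, used=1.81GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=1.00GiB, used=312.79MiB
Metadata, single: total=8.00MiB, used=0.00
EOF
```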
Plenty of Data chunks are not used; but Metadata is a bit tight. (Yes, it's only using 30%, but for metadata, especially below 1G, that can be a lot.)
Let's see if we have at least a tiny bit of available metadata space:
FILE=$MNT/tmp.$$
touch $FILE && rm -f $FILE
If you see nothing, all is good. If you see "no space left on device" ...
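For a slightly stronger check, you can write a few KB of actual data instead of just touching a file: touch only exercises metadata, while a small synced write also needs room in a Data chunk. A sketch of that variant (the /tmp fallback is only there so the snippet runs standalone):

```shell
# touch only exercises metadata; a small fsync'ed write also needs
# a usable Data chunk. Same cleanup as the touch test above.
MNT=${MNT:-/tmp}
FILE=$MNT/tmp.$$
dd if=/dev/zero of="$FILE" bs=4096 count=4 conv=fsync 2>/dev/null \
    && echo "data write OK"
rm -f "$FILE"
```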
FIXME add stuff for the really bad case
Now, let's tell btrfs to re-allocate data chunks:
btrfs-progs/btrfs fi balance start -dusage=10 $MNT
Done, had to relocate 82 out of 90 chunks
Now, look:
btrfs-progs/btrfs filesys show $DEV
Label: none uuid: de060d4c-99b6-4da0-90fa-fb47166db38b
Total devices 1 FS bytes used 2.06GiB
devid 1 size 87.50GiB used 6.03GiB path /dev/xvdc
Disk chunk usage went down to 6 GB. Much better.
And:
btrfs-progs/btrfs filesys df $MNT
Data, single: total=4.00GiB, used=1.75GiB
System, DUP: total=8.00MiB, used=4.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=1.00GiB, used=314.35MiB
Metadata, single: total=8.00MiB, used=0.00
If, for some reason, total and used are pretty close for Data (and your problem comes from wasteful metadata chunks), you might want to rebalance the Metadata part instead; then use -musage=10 instead of -dusage=10.
On a production system with lots of I/O, you can use lower values, e.g. -dusage=5 or even -dusage=1; it will repack only chunks with less data in them, i.e. it will put less I/O pressure on the machine; but since there are probably fewer chunks like that, it will also free up less space.
If you do something like -dusage=90, it will relocate almost all chunks (so it will take forever, basically the time to read and rewrite the whole FS), but it will optimize disk usage a lot (think defrag).
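A common way to apply this tradeoff is to start with a very low threshold and raise it only if the previous pass didn't free enough chunks; each pass is cheap because it skips chunks fuller than the threshold. Sketched below as a dry run (the commands are echoed rather than executed; drop the echo to actually run them):

```shell
# Dry run: print the balance passes we would perform, cheapest first.
# After each real pass, re-check "btrfs filesys show" and stop as
# soon as enough chunks have been freed.
MNT=${MNT:-/var/lib/docker}
for usage in 1 5 10 25 50; do
    echo btrfs-progs/btrfs fi balance start -dusage=$usage "$MNT"
done
```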
Still not enough?
btrfs-progs/btrfs quota enable $MNT
btrfs-progs/btrfs quota rescan $MNT
(The latter command might fail if rescan is automatically triggered...)
(You might have to wait a bit for the quotas to be recomputed.)
PATH=$(pwd)/btrfs-progs:$PATH
python btrfsQuota.py | grep -v '0.00G .* 0.00G' | sort -n -k 3
This will show you the biggest volumes.