I have two Debian 10 bhyve VMs on the same Triton CN, both provisioned with the same package.
Here is the package, which shows a disk quota of 50 GB:
[root@headnode (us-west-agc) ~]# sdc-papi /packages?name=bhyve-flexible-2G-50G-2CPU | json -Ha
{
  "brand": "bhyve",
  "name": "bhyve-flexible-2G-50G-2CPU",
  "version": "1.0.0",
  "active": true,
  "vcpus": 2,
  "cpu_cap": 200,
  "description": "General Purpose Bhyve - 2 GB RAM, 50 GB Disk, 2 vCPUs",
  "max_lwps": 4000,
  "max_physical_memory": 2048,
  "max_swap": 8192,
  "quota": 51200,
  "zfs_io_priority": 64,
  "flexible_disk": true,
  "disks": [
    {
      "size": 51200
    }
  ],
  "uuid": "d8a8bba6-0b2a-6aa3-b45e-d592658bd2eb",
  "created_at": "2022-04-05T15:27:30.953Z",
  "updated_at": "2022-04-05T15:27:30.953Z",
  "group": "General Purpose Bhyve",
  "v": 1
}
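(As a sanity check on the numbers: assuming the `quota` and `disks[0].size` fields are in MiB, 51200 does line up with the advertised 50 GB:)

```shell
# 51200 MiB / 1024 = 50 GiB (assumes PAPI's quota field is in MiB)
awk 'BEGIN { printf "%d MiB = %d GiB\n", 51200, 51200 / 1024 }'
# → 51200 MiB = 50 GiB
```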
Here is the disk usage for the first VM, named control-1, from inside the guest:
root@control-1:~# mdata-get sdc:uuid
84e84279-07dd-41c4-992b-287b98a0caf9
root@control-1:~# df -h /dev/vd*
Filesystem      Size  Used Avail Use% Mounted on
udev            972M     0  972M   0% /dev
udev            972M     0  972M   0% /dev
/dev/vda2        48G  7.1G   39G  16% /
And here is the ZFS info from the CN's global zone for this VM:
[root@dn01 (us-west-agc) ~]# zfs list -t all -r -o name,quota,used,lused,avail,compression,compressratio,volsize,volblocksize,copies,refreservation zones/$vm_uuid
NAME                                              QUOTA  USED  LUSED  AVAIL  COMPRESS  RATIO  VOLSIZE  VOLBLOCK  COPIES  REFRESERV
zones/84e84279-07dd-41c4-992b-287b98a0caf9         117G  117G  44.0G  1023M       lz4  1.89x        -         -       1         1G
zones/84e84279-07dd-41c4-992b-287b98a0caf9/disk0      -  116G  44.0G  62.8G       lz4  1.89x      50G        8K       1       116G
I thought I could rely on logicalused (44.0 GB) / compressratio (1.89) = 23.28 GB to give me an idea of how much of the volume is actually in use by the guest (factoring in compression), but the guest shows only 7.1 GB used rather than 23.28 GB. So for this VM there is a difference of 16.18 GB.
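To spell out that arithmetic (numbers copied from the zfs output above):

```shell
# control-1: estimated physical usage = logicalused / compressratio,
# then compared against what df reports inside the guest (7.1 GB)
awk 'BEGIN {
  est = 44.0 / 1.89                        # lused / ratio
  printf "estimated: %.2f GB\n", est
  printf "diff vs guest: %.2f GB\n", est - 7.1
}'
# → estimated: 23.28 GB
# → diff vs guest: 16.18 GB
```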
I know that ZFS requires additional space for metadata, and that the zpool layout affects this, but I'm wondering why the same calculation for the second VM below comes much closer to the used disk space that its guest shows...
Here is the disk usage for the second VM, named etcd-1, from inside the guest:
root@etcd-1:~# mdata-get sdc:uuid
e0d22fb7-49bd-4e3b-92d0-accad65c8b94
root@etcd-1:~# df -h /dev/vd*
Filesystem      Size  Used Avail Use% Mounted on
udev            972M     0  972M   0% /dev
udev            972M     0  972M   0% /dev
/dev/vda2        48G  6.1G   40G  14% /
And here is the ZFS info from the CN's global zone for this VM:
NAME                                              QUOTA  USED  LUSED  AVAIL  COMPRESS  RATIO  VOLSIZE  VOLBLOCK  COPIES  REFRESERV
zones/e0d22fb7-49bd-4e3b-92d0-accad65c8b94         117G  117G  11.6G  1023M       lz4  1.45x        -         -       1         1G
zones/e0d22fb7-49bd-4e3b-92d0-accad65c8b94/disk0      -  116G  11.6G  97.7G       lz4  1.45x      50G        8K       1       116G
For this VM, logicalused (11.6 GB) / compressratio (1.45) = 8 GB, which is much closer to the 6.1 GB of used disk shown in the guest. For this VM there is only a difference of 1.9 GB, as opposed to 16.18 GB.
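The same calculation for etcd-1, for comparison:

```shell
# etcd-1: logicalused / compressratio, compared against df's 6.1 GB
awk 'BEGIN {
  est = 11.6 / 1.45
  printf "estimated: %.2f GB\n", est
  printf "diff vs guest: %.2f GB\n", est - 6.1
}'
# → estimated: 8.00 GB
# → diff vs guest: 1.90 GB
```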
Here is the zpool layout:
[root@dn01 (us-west-agc) ~]# zpool status
  pool: zones
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c2t0d1  ONLINE       0     0     0
            c2t1d1  ONLINE       0     0     0
            c2t2d1  ONLINE       0     0     0
            c2t3d1  ONLINE       0     0     0
            c2t4d1  ONLINE       0     0     0
            c2t5d1  ONLINE       0     0     0
            c2t6d1  ONLINE       0     0     0
            c2t7d1  ONLINE       0     0     0
            c2t8d1  ONLINE       0     0     0
            c2t9d1  ONLINE       0     0     0
        logs
          c3t0d0    ONLINE       0     0     0
          c3t1d0    ONLINE       0     0     0

errors: No known data errors