==== Ceph tests on my laptop
Node FS: 60.25 MB/s
Ceph-backed CT: 6 MB/s (10x slower!)
Local-FS-backed CT: 30 MB/s
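The tool behind these figures isn't stated in the notes; a minimal sequential-write check with dd would produce numbers in this form (the target path is a placeholder; oflag=direct bypasses the page cache):

    # write 1 GiB sequentially; dd prints the MB/s figure on its last line
    dd if=/dev/zero of=/mnt/target/ddtest bs=1M count=1024 oflag=direct conv=fsync
    rm /mnt/target/ddtest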
Description: full native support of Ceph in Proxmox; Ceph is built on top of each node's 2nd drive.
+ Snapshots: yes
+ HA compatible: yes
- requires a whole disk on each node => can't sit on top of MD_RAID
+ native support in PVE: yes
- speed: ~10x slower than local FS
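For reference, a sketch of the "2nd drive" setup with PVE's native tooling, assuming /dev/sdb is the spare disk and using a made-up pool name (subcommand spelling varies between PVE releases, so treat this as approximate):

    pveceph install                     # install Ceph packages on each node
    pveceph init --network 10.0.0.0/24  # assumed cluster network
    pveceph mon create                  # one monitor per node
    pveceph osd create /dev/sdb         # the whole 2nd disk becomes an OSD
    pveceph pool create vmpool --add_storages  # pool plus a PVE storage entry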
===== GlusterFS tests on OVH
Description: GlusterFS is built manually and mounted at /mnt/gfs01; Proxmox is not aware of it and treats it as just a folder, so Gluster features such as self-healing and tuning are not manageable from Proxmox.
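A sketch of that manual setup, assuming two nodes (node1, node2) and a replica-2 volume with bricks on the existing MD_RAID filesystem; all names are placeholders. Note that replica 2 is split-brain-prone; replica 3 or an arbiter brick avoids that.

    mkdir -p /data/brick1/gfs01            # brick directory, on each node
    gluster peer probe node2               # run once, from node1
    gluster volume create gfs01 replica 2 \
        node1:/data/brick1/gfs01 node2:/data/brick1/gfs01
    gluster volume start gfs01
    mkdir -p /mnt/gfs01                    # mount point used by the tests below
    mount -t glusterfs localhost:/gfs01 /mnt/gfs01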
Local FS: read 1997 MB/s, write 1290 MB/s
Local-FS-backed CT: read 1115 MiB/s, write 752 MiB/s
GlusterFS: read 150 MB/s, write 75 MB/s (10x slower!)
GlusterFS-backed CT: read 1200 MB/s, write 60 MB/s
- Snapshots: no
+ HA compatible: yes
+ doesn't require a whole disk on each node => can sit on top of MD_RAID
- native support in PVE: no (attached as a plain directory; see the sketch after this list)
- speed: ~10x slower
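Because PVE only sees the mount as a folder, it would be attached as ordinary "dir" storage; a sketch with a made-up storage ID (--shared 1 tells PVE the path holds the same data on every node):

    pvesm add dir gfs01 --path /mnt/gfs01 --content images,rootdir --shared 1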
====== ZFS, local FS
+ Snapshots: yes
~ HA compatible: almost (needs periodic sync between nodes; see the sketch after this list)
+ doesn't require a whole disk on each node => has built-in software RAID
+ native support in PVE: yes
+ speed: fast, it is a local FS
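A sketch of such a periodic sync for one CT dataset, using plain incremental zfs send/recv (dataset and node names are assumptions; pve-zsync automates the same pattern):

    # initial full copy to the standby node
    zfs snapshot rpool/data/subvol-100-disk-0@sync0
    zfs send rpool/data/subvol-100-disk-0@sync0 | ssh node2 zfs recv rpool/data/subvol-100-disk-0
    # from cron: snapshot again and send only the delta since the last sync
    zfs snapshot rpool/data/subvol-100-disk-0@sync1
    zfs send -i @sync0 rpool/data/subvol-100-disk-0@sync1 | ssh node2 zfs recv -F rpool/data/subvol-100-disk-0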
===== Conclusion
If we need HA and can live with 10x slower disks, let's go with Ceph. In that case there is no software RAID: the system (ZFS) is installed on the 1st disk, and the 2nd disk is for Ceph.
If we don't need HA, let's go with ZFS, without any network FS. ZFS is as fast as a local FS. We can implement periodic syncs between nodes to simulate HA.