rook and cephadm performance comparison

What is measured

The elapsed time to create a Ceph cluster with 1 MON, 1 MGR, and 1 OSD on a single node. All containers run on the local host.
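One minimal way to capture such an elapsed time is plain date arithmetic around the command under test; a sketch (here `sleep 1` is only a placeholder for the actual cluster-creation command, e.g. `./own-cluster-init.sh`):

```shell
# record wall-clock seconds around the command under test
# (`sleep 1` is a placeholder, not the real provisioning step)
start=$(date +%s)
sleep 1
end=$(date +%s)
echo "elapsed: $((end - start))s"
```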

result

  • rook
    • launching a one-node k8s cluster by kubeadm: 84s
    • launching a rook/ceph cluster on top of this k8s cluster: 64s
    • total: 148s
  • cephadm
    • launching a cephadm cluster: 56s

how to measure

rook

cd <rook top dir>/ && ./own-cluster-init.sh && ./test-init.sh

The above two shell scripts are available at https://github.com/satoru-takeuchi/rook-helper

own-cluster-init.sh creates a k8s cluster; test-init.sh creates a rook/ceph cluster on top of it.
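Once test-init.sh finishes, readiness of the rook/ceph cluster can be confirmed with standard kubectl commands; a sketch, assuming the rook-ceph namespace and the app=rook-ceph-osd label used by rook's OSD pods:

```shell
# wait until the single OSD pod is Ready, then show overall pod status
kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-osd --timeout=300s
kubectl -n rook-ceph get pods
```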

Here are local.yaml (the StorageClass and PersistentVolume) and cluster-on-pvc.yaml (the CephCluster):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-osd
  labels:
    type: local-osd
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Block
  local:
    path: /dev/sdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - neco-dev
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1
    allowMultiplePerNode: false
  cephVersion:
    image: ceph/ceph:v15.2.8
    allowUnsupported: false
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  dashboard:
    enabled: false
    ssl: true
  network:
    hostNetwork: false
  crashCollector:
    disable: true
  storage:
    storageClassDeviceSets:
    - name: set1
      count: 1
      portable: false
      tuneSlowDeviceClass: true
#      encrypted: true
      placement:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd-prepare
              topologyKey: kubernetes.io/hostname
      resources:
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          resources:
            requests:
              storage: 5Gi
          storageClassName: manual
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
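These manifests are applied by test-init.sh; doing it by hand would look like the following (assuming the rook operator is already running and the two files are saved under the names above):

```shell
# create the StorageClass/PV first, then the CephCluster CR
kubectl apply -f local.yaml
kubectl apply -f cluster-on-pvc.yaml
```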

cephadm

./cephadm bootstrap --skip-dashboard --skip-firewalld --skip-monitoring-stack --allow-overwrite --mon-ip 192.168.253.2 && ./cephadm shell -- ceph orch daemon add osd neco-dev:/dev/sdb
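After bootstrap, the state of the daemons and overall cluster health can be inspected from the cephadm shell:

```shell
# list the daemons managed by cephadm, then show cluster health
./cephadm shell -- ceph orch ps
./cephadm shell -- ceph status
```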

environment

  • host
    • hardware
      • OptiPlex 7050
        • CPU: Core i7-7700 (4 cores / 8 threads)
        • memory: 32GB
        • storage: KXG50ZNV512G (NVMe SSD, 512GB)
    • software
      • OS: Windows 10 build 19041.746
  • guest
    • hardware
      • Hyper-V guest
        • CPU: 8core
        • memory: 16GB
        • disk: 256GB for system, 6GB * 3 for OSDs
    • software
      • OS: Ubuntu 18.04
      • kernel: 4.15.0-88-generic
      • Kubernetes: v1.18.6
        • rook: v1.5.5
          • configurations: disable crashcollector, monitoring, and dashboard
      • cephadm: octopus
        • configurations: skip dashboard, firewalld, and monitoring stack