Enable Stateful applications to access Dynamic Local PVs or Replicated PVs.
- Website: https://openebs.io/
- Source-code: https://github.com/openebs/openebs
- CNCF project, originally created by MayaData (Bengaluru, India)
Run any StatefulSet Kubernetes application, with any Storage System, using OpenEBS as provisioner and replication system.
- Lower costs
- Easier management
- More control for your teams
- Kubernetes native; runs in user-space
- Open Source; no vendor lock-in
- Multi-cloud: the same storage layer runs on any cloud or on-premises cluster
- Most active K8s storage project with a large community
- Run on any Kubernetes platform (AKS, EKS, GCP…, Minikube, Vagrant…)
No dependency on kernel modules or other kernel-dependent software: OpenEBS runs entirely in user space as micro-service pods.
- Container Attached Storage (CAS): storage is coupled to the application through a micro-service.
- DAS (Direct-Attached Storage): storage directly linked to one computer or server, usually via a cable; includes internal/external hard drives and SSD flash drives.
- NAS (Network-Attached Storage): network storage devices that let multiple devices on a network share the same storage space at once.
In non-CAS models, Kubernetes Persistent Volumes are still tightly coupled to the Kernel modules, making the storage software on Kubernetes nodes monolithic in nature.
In contrast, CAS lets you leverage the flexibility and scalability of cloud-native applications. The storage software that defines a Kubernetes Persistent Volume (PV) is based on a micro-services architecture: the control plane (storage controller) and the data plane (storage replicas) run as Kubernetes pods. Because both controller and replicas are entirely micro-service based, no kernel components are involved.
Currently, the OpenEBS provisioner supports only one type of binding: iSCSI.
Choose a storage backend to use:
- cStor: recently released; very robust.
- Jiva: built in Go; uses the Longhorn and gotgt stacks internally.
- OpenEBS Local PV: creates PVs out of local disks or host paths; no replication.
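As a sketch of how an engine is selected: a StorageClass carries OpenEBS annotations naming the CAS engine and its configuration. The class name and the StoragePoolClaim `cstor-disk-pool` below are assumed examples, not defaults.

```yaml
# Hypothetical StorageClass selecting the cStor engine.
# "cstor-disk-pool" is an assumed, pre-created StoragePoolClaim name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-example
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```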
OpenEBS performs synchronous replication for high-availability cross-AZ setups. Volume metadata is not centralized but kept local to the volume, so there is no blast-radius effect: in the event of a node failure, the data remains available at the same performance levels.
Backup and restore of cStor volumes works via cStor/ZFS snapshots using Velero; for OpenEBS Local PV and Jiva volumes it is based on Restic using Velero.
- cStor is recommended most of the time.
- Jiva is recommended for low-capacity workloads.
- ReadWriteMany (RWX) is only supported with underlying NFS storage.
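For example, a PVC bound to one of the default classes would normally request ReadWriteOnce. This is a sketch; the claim name and size are arbitrary:

```yaml
# Example PVC against the default Jiva StorageClass;
# name and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce   # RWX would require underlying NFS storage
  resources:
    requests:
      storage: 5Gi
```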
A few differences among the CAS engines:
Feature | Jiva | cStor | LocalPV |
---|---|---|---|
Light weight and completely in user space | Yes | Yes | Yes |
Synchronous replication | Yes | Yes | No |
Suitable for low capacity workloads | Yes | Yes | Yes |
Snapshots and cloning support | Basic | Advanced | No |
Data consistency | Yes | Yes | NA |
Backup and Restore using Velero | Yes | Yes | Yes |
Suitable for high capacity workloads | No | Yes | Yes |
Thin Provisioning | No | Yes | No |
Disk pool or aggregate support | No | Yes | No |
On demand capacity expansion | No | Yes | Yes* |
Data resiliency (RAID support) | No | Yes | No |
Near disk performance | No | No | Yes |
See more: https://docs.openebs.io/docs/next/casengines.html
NDM (Node Disk Manager) treats block devices as resources that need to be monitored and managed just like other resources such as CPU, memory, and network. It runs as a DaemonSet and, currently, in privileged mode.
- Easy-to-access inventory of the block devices available in the Kubernetes cluster.
- Predict disk failures to help with taking preventive actions.
- Allow dynamically attaching/detaching disks to a storage pod, without restarting the corresponding NDM pod running on the node where the disk is attached/detached.
- Kubernetes 1.13+ installed. Latest tested Kubernetes version is 1.17.2.
- Features like Local PV and Backup & Restore require Kubernetes 1.13 or above.
- Provisioning cStor volumes via the CSI driver, and basic operations on such volumes like volume expansion and snapshot & clone, require Kubernetes 1.14 or above.
- Understand the pre-requisites for your Kubernetes platform
Minimum resource requirements:
- The OpenEBS control plane comprises a minimum of two pods, the API server and the Dynamic Provisioner; these can run with 2GB RAM and 2 CPUs.
- Each volume spins up an IO controller pod and replica pods; each of these requires 1GB RAM and 0.5 CPU by default.
- For high availability, OpenEBS recommends a minimum of 3 nodes in the Kubernetes cluster.
- Verify that the iSCSI client is running; see instructions.
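On most Linux distributions the iSCSI initiator can be checked roughly like this. Package and service names are the Ubuntu/Debian ones and are an assumption; other distributions differ (e.g. `iscsi-initiator-utils` on RHEL/CentOS):

```shell
# Verify the iSCSI initiator (open-iscsi) on an Ubuntu/Debian node.
sudo apt-get install -y open-iscsi        # install if missing
cat /etc/iscsi/initiatorname.iscsi        # an initiator name should be configured
sudo systemctl status iscsid              # service should be active (running)
```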
- Select installation method:
- helm chart
(or)
- kubectl yaml spec file
See more: https://docs.openebs.io/docs/next/installation.html
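The two installation methods look roughly like this. The chart repo URL and release/namespace names follow the OpenEBS docs of the time; verify against the current docs, and note the `helm install` form below assumes Helm 3:

```shell
# Option 1: helm chart ("openebs" release and namespace are conventional)
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace

# Option 2: plain kubectl apply of the operator manifest
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```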
OpenEBS introduces new Kubernetes Custom Resource Definitions (CRDs):
castemplates.openebs.io
cstorpools.openebs.io
cstorpoolinstances.openebs.io
cstorvolumeclaims.openebs.io
cstorvolumereplicas.openebs.io
cstorvolumepolicies.openebs.io
cstorvolumes.openebs.io
runtasks.openebs.io
storagepoolclaims.openebs.io
storagepools.openebs.io
volumesnapshotdatas.volumesnapshot.external-storage.k8s.io
volumesnapshots.volumesnapshot.external-storage.k8s.io
disks.openebs.io
blockdevices.openebs.io
blockdeviceclaims.openebs.io
cstorbackups.openebs.io
cstorrestores.openebs.io
cstorcompletedbackups.openebs.io
cstorpoolclusters.openebs.io
upgradetasks.openebs.io
$ kubectl get pods -n openebs
NAME READY STATUS RESTARTS AGE
maya-apiserver-d77867956-mv9ls 1/1 Running 3 99s
openebs-admission-server-7f565bcbb5-lp5sk 1/1 Running 0 95s
openebs-localpv-provisioner-7bb98f549d-ljcc5 1/1 Running 0 94s
openebs-ndm-dn422 1/1 Running 0 96s
openebs-ndm-operator-84849677b7-rhfbk 1/1 Running 1 95s
openebs-ndm-ptxss 1/1 Running 0 96s
openebs-ndm-zpr2l 1/1 Running 0 96s
openebs-provisioner-657486f6ff-pxdbc 1/1 Running 0 98s
openebs-snapshot-operator-5bdcdc9b77-v7n4w 2/2 Running 0 97s
$ kubectl get sc
NAME PROVISIONER AGE
openebs-device openebs.io/local 64s
openebs-hostpath openebs.io/local 64s
openebs-jiva-default openebs.io/provisioner-iscsi 64s
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 64s
standard (default) kubernetes.io/gce-pd 6m41s
$ kubectl get blockdevice -n openebs
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-1c10eb1… gke-ran…-default-pool-da9… 42949672960 Unclaimed Active 14s
blockdevice-77f834e… gke-ran…-default-pool-da9… 42949672960 Unclaimed Active 22s
blockdevice-936911c… gke-ran…-default-pool-da9… 42949672960 Unclaimed Active 30s
To know which block device CR belongs to which node, check the node label set:
$ kubectl describe blockdevice blockdevice-db1254ebd777a99e6b9b5626358c7038 -n openebs
Name: blockdevice-db1254ebd777a99e6b9b5626358c7038
Namespace: openebs
Labels: kubernetes.io/hostname=k8snodv01p
ndm.io/blockdevice-type=blockdevice
ndm.io/managed=true
Annotations: <none>
API Version: openebs.io/v1alpha1
Kind: BlockDevice
Spec:
Capacity:
Logical Sector Size: 512
Physical Sector Size: 0
Storage: 34359738368
Details:
Compliance:
Device Type:
Firmware Revision:
Model: QEMU_HARDDISK
Serial: drive-scsi8
Vendor: QEMU
Devlinks:
Kind: by-id
Links:
/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi8
Kind: by-path
Links:
/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:8
Filesystem:
Partitioned: No
Path: /dev/sdi
Status:
Claim State: Unclaimed
State: Active
And many more…
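The unclaimed block devices discovered by NDM can then be consumed, for example by a cStor pool via a StoragePoolClaim. A sketch, with the pool name and the blockdevice names as placeholders (use the real `blockdevice-…` names from `kubectl get blockdevice -n openebs`):

```yaml
# Hypothetical StoragePoolClaim building a striped cStor pool
# from NDM-discovered block devices; all names are placeholders.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
      - blockdevice-example-node1
      - blockdevice-example-node2
      - blockdevice-example-node3
```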