#!/bin/bash
### Deploying OpenEBS (cStor) on Kubernetes for Debian/Ubuntu-based OSes
## Baseline Guide: https://github.com/openebs/cstor-operators/blob/develop/docs/quick.md
# Type of Deployment: Helm
### Minimum Requirements ###
## A three-worker-node cluster (tested on k0s, k3s, and k8s)
## Each worker node must have a blank drive to consume; if you need to wipe a drive, use: dd if=/dev/zero of=/dev/sdb bs=1M
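## To confirm the spare drive is visible and unmounted on each worker node, you can list block devices there,
## e.g. (the device name below is only an example): lsblk -f /dev/sdb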
#
## The following base packages are required:
# jq, a command-line tool to parse JSON output
apt-get install -y jq && \
# Helm, the Kubernetes package manager
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && \
chmod 700 get_helm.sh && \
./get_helm.sh && \
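# Optional sanity check: confirm Helm installed correctly (prints the client version)
helm version --short && \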
#
### Installation ###
# Install the Helm chart repo
helm repo add openebs-cstor https://openebs.github.io/cstor-operators && helm repo update && \
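# Optional: confirm the cstor chart is visible in the newly added repo
helm search repo openebs-cstor && \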
# Deploy OpenEBS
helm install openebs openebs-cstor/cstor --create-namespace --namespace openebs && \
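# Optional: give the chart a moment to create its pods, then wait for the OpenEBS control-plane and
# NDM pods to be Ready before looking for block devices (the sleep and timeout values are rough guesses)
sleep 30 && \
kubectl wait --namespace openebs --for=condition=Ready pod --all --timeout=300s && \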
# If there are any issues with the following commands, you can check the block devices manually: kubectl get bd -n openebs
# Store the blockdevice name created on each worker node in a variable (adjust the hostname labels to match your nodes)
WORKER1=$(kubectl get bd -n openebs -l kubernetes.io/hostname=worker-node-1 -o json | jq -r ".items[].metadata.name")
WORKER2=$(kubectl get bd -n openebs -l kubernetes.io/hostname=worker-node-2 -o json | jq -r ".items[].metadata.name")
WORKER3=$(kubectl get bd -n openebs -l kubernetes.io/hostname=worker-node-3 -o json | jq -r ".items[].metadata.name")
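# Sanity check (illustrative): each variable should hold exactly one blockdevice name; if any are empty,
# compare the hostname labels above against: kubectl get nodes --show-labels
echo "worker-node-1: $WORKER1" && echo "worker-node-2: $WORKER2" && echo "worker-node-3: $WORKER3"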
# Create a Pool config file with the blockdevice variables
cat <<EOF >CStorPoolCluster.yaml
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-storage
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "worker-node-1"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "$WORKER1"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "worker-node-2"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "$WORKER2"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "worker-node-3"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "$WORKER3"
      poolConfig:
        dataRaidGroupType: "stripe"
EOF
# Apply the pool
kubectl apply -f CStorPoolCluster.yaml && \
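# Optional: check that the pool cluster and its per-node pool instances were created
# (they may take a minute or two to report a healthy status)
kubectl get cspc -n openebs && \
kubectl get cspi -n openebs && \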
# Create the StorageClass to connect apps to the Pool
cat <<EOF >StorageClass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cstor-csi
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-storage
  replicaCount: "3"
EOF
kubectl apply -f StorageClass.yaml && \
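# Optional: confirm the StorageClass was registered
kubectl get sc cstor-csi && \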
# Cleanup
rm StorageClass.yaml CStorPoolCluster.yaml
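# Example usage (illustrative only, not run by this script): a PVC that requests a volume from the
# cstor-csi StorageClass; the claim name and size below are placeholders.
# cat <<PVC | kubectl apply -f -
# kind: PersistentVolumeClaim
# apiVersion: v1
# metadata:
#   name: example-cstor-pvc
# spec:
#   storageClassName: cstor-csi
#   accessModes:
#     - ReadWriteOnce
#   resources:
#     requests:
#       storage: 5Gi
# PVC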
## Wipe everything: kubectl delete ns openebs (note: this does not remove cluster-scoped CRDs or data already written to the disks)