Velero Installation with Helm

  • Velero on Kubernetes

Prerequisites

  • Kubernetes 1.30+
  • Helm 3.15.0+
  • MinIO (S3-compatible object storage used as the backup target)

Helm Chart Reference

minio install

bucketName=velero
accessKey=taking-access-key
secretKey=taking-secret-key


helm repo add minio https://charts.min.io/
helm repo update minio

helm install minio minio/minio \
    --create-namespace \
    --namespace minio-system \
    --set mode=standalone \
    --set replicas=2 \
    --set persistence.size=10Gi \
    --set environment.MINIO_REGION=us-east-1 \
    --set buckets[0].name=${bucketName} \
    --set buckets[0].policy=none \
    --set buckets[0].purge=false \
    --set users[0].accessKey=${accessKey} \
    --set users[0].secretKey=${secretKey} \
    --set users[0].policy=readwrite \
    --set resources.requests.memory=10Gi
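Once the chart is installed, it's worth confirming MinIO is up before configuring Velero. A quick check (the console service name and port below assume the chart defaults for this release):

kubectl get pods,svc -n minio-system

# optional: port-forward the MinIO console and verify the bucket/user exist
kubectl port-forward svc/minio-console 9001:9001 -n minio-system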

velero helm repo update

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts/
helm repo update vmware-tanzu

velero-values.yaml (Local: in-cluster MinIO)

cat <<EOF > velero-values.yaml
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.10.1
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
        
configuration:
  defaultVolumesToFsBackup: true
  backupStorageLocation:
  - name: minio
    provider: aws
    bucket: velero
    accessMode: ReadWrite
    default: true
    config:
      region: us-east-1
      s3ForcePathStyle: true
      s3Url: http://minio.minio-system.svc.cluster.local:9000
      publicUrl: http://minio.minio-system.svc.cluster.local:9000
  volumeSnapshotLocation:
  - name: minio
    provider: aws
    config:
      region: us-east-1
      
credentials:
  useSecret: true
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = {your-minio-access-key}
      aws_secret_access_key = {your-minio-secret-key}
deployNodeAgent: true
EOF
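The aws_access_key_id and aws_secret_access_key must match the MinIO user created above. If you kept the accessKey/secretKey shell variables from the MinIO step, a quick substitution (GNU sed shown here) fills in the placeholders:

sed -i "s/{your-minio-access-key}/${accessKey}/; s/{your-minio-secret-key}/${secretKey}/" velero-values.yaml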

velero-values.yaml (External: MinIO outside the cluster)

cat <<EOF > velero-values.yaml
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.10.1
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
        
configuration:
  defaultVolumesToFsBackup: true
  backupStorageLocation:
  - name: minio
    provider: aws
    bucket: velero
    accessMode: ReadWrite
    default: true
    config:
      region: us-east-1
      s3ForcePathStyle: true
      s3Url: http://192.168.0.101:9000
      publicUrl: http://192.168.0.101:9000
  volumeSnapshotLocation:
  - name: minio
    provider: aws
    config:
      region: us-east-1
      
credentials:
  useSecret: true
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = {your-minio-access-key}
      aws_secret_access_key = {your-minio-secret-key}
deployNodeAgent: true
EOF
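Before installing, it can help to confirm the external MinIO endpoint is reachable from inside the cluster. One way, as a sketch using a throwaway curl pod against MinIO's liveness endpoint:

kubectl run minio-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.101:9000/minio/health/live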

install

helm upgrade my-velero vmware-tanzu/velero \
  --install \
  --create-namespace \
  --namespace velero \
  -f velero-values.yaml
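After the release is installed, check that the Velero server Deployment and the node-agent DaemonSet (enabled by deployNodeAgent: true) are running:

kubectl get deploy,daemonset,pods -n velero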


cli

VERSION=$(basename $(curl -s -o /dev/null -w '%{redirect_url}' https://github.com/vmware-tanzu/velero/releases/latest))
UNAME=$(uname | sed 's/^Darwin$/darwin/; s/^Linux$/linux/')
ARCH=$(uname -m | sed 's/^aarch64$/arm64/; s/^x86_64$/amd64/')

wget https://github.com/vmware-tanzu/velero/releases/download/$VERSION/velero-$VERSION-$UNAME-$ARCH.tar.gz
tar xvzf velero-$VERSION-$UNAME-$ARCH.tar.gz
sudo mv velero-$VERSION-$UNAME-$ARCH/velero /usr/local/bin
rm -rf ./velero-$VERSION-$UNAME-$ARCH
rm -rf ./velero-$VERSION-$UNAME-$ARCH.tar.gz

velero version


backup-location default set

velero backup-location set minio --default

velero backup-location get


Log Check

kubectl logs deploy/my-velero -n velero


backupstoragelocations Check

kubectl get backupstoragelocations -n velero
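The location should report phase Available once Velero can reach the MinIO bucket. To print just the phase:

kubectl get backupstoragelocations -n velero -o jsonpath='{.items[*].status.phase}{"\n"}'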


Backup (without PV/PVC)

velero backup create nfs-server \
  --include-namespaces nfs-server \
  --storage-location minio

Backup (including PV/PVC)

velero backup create nfs-server-backup \
  --include-namespaces nfs-server \
  --default-volumes-to-fs-backup \
  --wait
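Progress, warnings, and per-volume details of a backup can be inspected with the Velero CLI:

velero backup get
velero backup describe nfs-server-backup --details
velero backup logs nfs-server-backup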


Restore

velero restore create \
  --from-backup nfs-server-backup
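Velero derives the restore name from the backup name plus a timestamp; list the restores and inspect the generated one (<restore-name> below is a placeholder):

velero restore get
velero restore describe <restore-name> --details
velero restore logs <restore-name>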

Schedule

velero schedule create nfs-server --include-namespaces nfs-server --schedule="*/10 * * * *"
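To verify the schedule and trigger an immediate backup from it without waiting for the next run:

velero schedule get
velero backup create --from-schedule nfs-server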

taking commented Jul 21, 2023

Clean and all good, but... when PVs/PVCs are restored, a new UUID is generated, so the existing PVC can't be found.
Going to try Kasten K10 instead. :(


taking commented Jul 25, 2023

Additional test

  • nfs-subdir-external-provisioner
  • Result: failure

Before restore: (screenshot)

After restore: (screenshot)

Create static PV/PVC

Prerequisites

  • Create the directory on the NFS share first, then proceed.
  • mkdir -m 777 -p /volume/1TB_NVME/kubernetes/portainer-pvc
NAME=portainer
NFS_PATH=/volume/1TB_NVME/kubernetes/portainer-pvc
NFS_ADDR=192.168.0.100
STORAGE_CLASS=nfs-client
STORAGE_CAPACITY=1Gi

cat << EOF | kubectl apply -f -
---
# namespace
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    component: ${NAME}
    kubernetes.io/metadata.name: ${NAME}
  name: ${NAME}
---
# nfs pv,pvc
---
apiVersion: v1 
kind: PersistentVolume 
metadata:
  name: ${NAME}-pv
spec: 
  accessModes: 
    - ReadWriteMany
  capacity: 
    storage: ${STORAGE_CAPACITY}
  volumeMode: Filesystem 
  persistentVolumeReclaimPolicy: Retain 
  nfs: 
    path:  ${NFS_PATH}
    server: ${NFS_ADDR}
  storageClassName: ${STORAGE_CLASS}
--- 
apiVersion: v1 
kind: PersistentVolumeClaim 
metadata:               
  name: ${NAME}-pvc 
  namespace: portainer
spec: 
  accessModes: 
    - ReadWriteMany
  resources: 
    requests: 
      storage: ${STORAGE_CAPACITY}
  storageClassName: ${STORAGE_CLASS}
  volumeMode: Filesystem
  volumeName: ${NAME}-pv
EOF

helm upgrade portainer portainer/portainer \
    --install \
    --create-namespace \
    --namespace portainer \
    --set service.type=LoadBalancer \
    --set enterpriseEdition.enabled=true \
    --set persistence.existingClaim=portainer-pvc \
    --set tls.force=true
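Before testing a backup, confirm the claim is bound to the static PV (the PV itself is cluster-scoped, so only the PVC lives in the namespace):

kubectl get pv portainer-pv
kubectl get pvc portainer-pvc -n portainer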


taking commented Jul 25, 2023

Additional test

  • NFS was mounted separately (outside of Kubernetes)
  • PV/PVC created with hostPath
  • With local-path-provisioner or nfs-subdir-external-provisioner, a random UID is appended to the directory created at recovery time, so the existing directory can't be found.
  • Rook Ceph reportedly doesn't have this problem.. I'll have to test that.
  • Result: success

Before restore: (screenshot)

After restore: (screenshot)

Create static PV/PVC

Prerequisites

  • Create the directory first, then proceed.
  • mkdir -m 777 -p /volume/1TB_NVME/kubernetes/portainer-pvc
NAME=portainer
HOSTPATH=/volume/1TB_NVME/kubernetes/portainer-pvc
STORAGE_CLASS=manual
STORAGE_CAPACITY=1Gi

cat << EOF | kubectl apply -f -
---
# namespace
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    component: ${NAME}
    kubernetes.io/metadata.name: ${NAME}
  name: ${NAME}
---
# hostpath pv,pvc
---
apiVersion: v1 
kind: PersistentVolume 
metadata:
  name: ${NAME}-pv
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: ${STORAGE_CAPACITY}
  volumeMode: Filesystem 
  persistentVolumeReclaimPolicy: Retain 
  hostPath:
    path: ${HOSTPATH}
  storageClassName: ${STORAGE_CLASS}
--- 
apiVersion: v1 
kind: PersistentVolumeClaim 
metadata:               
  name: ${NAME}-pvc 
  namespace: portainer
spec: 
  accessModes: 
    - ReadWriteOnce
  resources: 
    requests: 
      storage: ${STORAGE_CAPACITY}
  storageClassName: ${STORAGE_CLASS}
  volumeMode: Filesystem
  volumeName: ${NAME}-pv
EOF

helm upgrade portainer portainer/portainer \
    --install \
    --create-namespace \
    --namespace portainer \
    --set service.type=NodePort \
    --set enterpriseEdition.enabled=true \
    --set persistence.existingClaim=portainer-pvc \
    --set tls.force=true
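For reference, the backup/restore cycle used in this test follows the same pattern as the nfs-server example above (the backup name here is illustrative):

velero backup create portainer-backup \
  --include-namespaces portainer \
  --default-volumes-to-fs-backup \
  --wait

velero restore create --from-backup portainer-backup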
