Velero Installation with Helm

  • Velero on Kubernetes

Prerequisites

Helm Chart Reference

helm repo setup

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts/
helm repo update vmware-tanzu
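
A quick sanity check that the chart is now visible (the version column will vary over time):

helm search repo vmware-tanzu/velero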

velero-values.yaml (Local)

cat <<EOF > velero-values.yaml
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.7.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
        
configuration:
  defaultVolumesToFsBackup: true
  backupStorageLocation:
  - name: default
    provider: aws
    bucket: velero
    accessMode: ReadWrite
    default: true
    config:
      region: us-east-1
      s3ForcePathStyle: "true"
      s3Url: http://minio.minio-system.svc.cluster.local:9000
      publicUrl: http://minio.minio-system.svc.cluster.local:9000
  volumeSnapshotLocation:
  - name: aws
    provider: aws
    config:
      region: us-east-1
      
credentials:
  useSecret: true
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = {your-minio-access-key}
      aws_secret_access_key = {your-minio-secret-key}
deployNodeAgent: true
EOF

velero-values.yaml (External)

cat <<EOF > velero-values.yaml
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.7.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
        
configuration:
  defaultVolumesToFsBackup: true
  backupStorageLocation:
  - name: default
    provider: aws
    bucket: velero
    accessMode: ReadWrite
    default: true
    config:
      region: us-east-1
      s3ForcePathStyle: "true"
      s3Url: http://192.168.0.101:9000
      publicUrl: http://192.168.0.101:9000
  volumeSnapshotLocation:
  - name: aws
    provider: aws
    config:
      region: us-east-1
      
credentials:
  useSecret: true
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = {your-minio-access-key}
      aws_secret_access_key = {your-minio-secret-key}
deployNodeAgent: true
EOF
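
The two values files differ only in the S3 endpoint: the Local variant targets the in-cluster MinIO service DNS name, the External variant a MinIO instance reachable at 192.168.0.101. Either file can be sanity-checked before installing with a client-side render (release and namespace names here match the install step below):

helm template my-velero vmware-tanzu/velero \
  --namespace velero \
  -f velero-values.yaml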

install

helm upgrade my-velero vmware-tanzu/velero \
  --install \
  --create-namespace \
  --namespace velero \
  -f velero-values.yaml
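
If the release applied cleanly, the Velero server pod and, since deployNodeAgent is true, the node-agent DaemonSet pods should reach Running:

kubectl get pods -n velero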

cli

wget https://github.com/vmware-tanzu/velero/releases/download/v1.10.3/velero-v1.10.3-linux-amd64.tar.gz
tar xvzf velero-v1.10.3-linux-amd64.tar.gz 
sudo mv velero-v1.10.3-linux-amd64/velero /usr/local/bin
rm -rf ./velero-v1.10.3-linux-amd64
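
A quick client-only check confirms the binary landed on the PATH (this skips contacting the cluster):

velero version --client-only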

Set default backup-location

velero backup-location set aws --default
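
The change can be verified by listing the backup storage locations; the default one is flagged in the output:

velero backup-location get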

Log Check

kubectl logs deploy/my-velero -n velero

backupstoragelocations Check

kubectl get backupstoragelocations -n velero


Backup (without PV/PVC)

velero backup create nfs-server \
  --include-namespaces nfs-server \
  --storage-location aws
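
The new backup and its phase (InProgress / Completed) show up in the backup list:

velero backup get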

Backup (including PV/PVC)

velero backup create nfs-server-backup \
  --include-namespaces nfs-server \
  --default-volumes-to-fs-backup \
  --wait
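
Because of --wait the command blocks until the backup finishes; the file-system (pod volume) backup details and the logs can then be reviewed:

velero backup describe nfs-server-backup --details
velero backup logs nfs-server-backup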


Restore

velero restore create \
  --from-backup nfs-server-backup
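
Since no name is given, the restore gets an auto-generated one; it can be listed and inspected afterwards (substitute the generated name):

velero restore get
velero restore describe <restore-name>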

Schedule

velero schedule create nfs-server --include-namespaces nfs-server --schedule="*/10 * * * *"
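
The cron expression fires every 10 minutes; the schedule and the backups it produces can be checked with:

velero schedule get
velero backup get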

taking commented Jul 21, 2023

Velero CLI

cat <<EOF > credentials-velero
[default]
aws_access_key_id = {your-minio-access-key}
aws_secret_access_key = {your-minio-secret-key}
EOF

providerName="aws"
pluginName="velero/velero-plugin-for-aws:v1.7.0"
bucketName="velero"

velero install \
 --provider ${providerName} \
 --plugins ${pluginName} \
 --bucket ${bucketName}  \
 --secret-file ./credentials-velero \
 --use-volume-snapshots=true \
 --backup-location-config region=us-east-1,s3ForcePathStyle="true",s3Url="http://192.168.0.101:9000",publicUrl="http://192.168.0.101:9000" \
 --image velero/velero:v1.11.0  \
 --snapshot-location-config region="us-east-1" \
 --use-node-agent \
 --default-volumes-to-fs-backup \
 --wait
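
After the CLI-based install, the same checks as the Helm route apply: the server pod should be Running and the backup location should report Available:

kubectl get pods -n velero
velero backup-location get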


taking commented Jul 21, 2023

Clean and all works fine... but on restore the PVs/PVCs are recreated with newly generated UUIDs, so the existing PVC (and its data) can't be found.
Moving on to try Kasten K10. :(


taking commented Jul 25, 2023

Additional test

  • nfs-subdir-external-provisioner
  • Result: failure

Before restore
(screenshot)

After restore
(screenshot)

Create static PV/PVC

Prerequisites

  • Create the directory on the NFS share first, then proceed:
  • mkdir -m 777 -p /volume/1TB_NVME/kubernetes/portainer-pvc
NAME=portainer
NFS_PATH=/volume/1TB_NVME/kubernetes/portainer-pvc
NFS_ADDR=192.168.0.100
STORAGE_CLASS=nfs-client
STORAGE_CAPACITY=1Gi

cat << EOF | kubectl apply -f -
---
# namespace
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    component: ${NAME}
    kubernetes.io/metadata.name: ${NAME}
  name: ${NAME}
---
# nfs pv,pvc
---
apiVersion: v1 
kind: PersistentVolume 
metadata: 
  name: ${NAME}-pv
spec: 
  accessModes: 
    - ReadWriteMany
  capacity: 
    storage: ${STORAGE_CAPACITY}
  volumeMode: Filesystem 
  persistentVolumeReclaimPolicy: Retain 
  nfs: 
    path:  ${NFS_PATH}
    server: ${NFS_ADDR}
  storageClassName: ${STORAGE_CLASS}
--- 
apiVersion: v1 
kind: PersistentVolumeClaim 
metadata:               
  name: ${NAME}-pvc 
  namespace: portainer
spec: 
  accessModes: 
    - ReadWriteMany
  resources: 
    requests: 
      storage: ${STORAGE_CAPACITY}
  storageClassName: ${STORAGE_CLASS}
  volumeMode: Filesystem
  volumeName: ${NAME}-pv
EOF
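
If the manifests applied cleanly, the PVC binds immediately to the pre-created PV (STATUS Bound), which is worth confirming before installing Portainer. The helm command below also assumes the official Portainer chart repo has already been added:

kubectl get pv,pvc -n portainer

# assumed prerequisite for the install step below
helm repo add portainer https://portainer.github.io/k8s/
helm repo update portainer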

helm upgrade portainer portainer/portainer \
    --install \
    --create-namespace \
    --namespace portainer \
    --set service.type=LoadBalancer \
    --set enterpriseEdition.enabled=true \
    --set persistence.existingClaim=portainer-pvc \
    --set tls.force=true
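
With service.type=LoadBalancer, the UI address comes from the Service's external IP once the load balancer is provisioned:

kubectl get svc -n portainer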


taking commented Jul 25, 2023

Additional test

  • NFS was mounted separately (outside Kubernetes)
  • PV/PVC created with hostPath
  • With local-path-provisioner or nfs-subdir-external-provisioner, the volume directory is recreated on recovery with a random UID appended to its name, so the restore can't find the existing folder.
  • rook-ceph reportedly doesn't have this problem.. needs testing.
  • Result: success

Before restore
(screenshot)

After restore
(screenshot)

Create static PV/PVC

Prerequisites

  • Create the directory first, then proceed:
  • mkdir -m 777 -p /volume/1TB_NVME/kubernetes/portainer-pvc
NAME=portainer
HOSTPATH=/volume/1TB_NVME/kubernetes/portainer-pvc
STORAGE_CLASS=manual
STORAGE_CAPACITY=1Gi

cat << EOF | kubectl apply -f -
---
# namespace
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    component: ${NAME}
    kubernetes.io/metadata.name: ${NAME}
  name: ${NAME}
---
# hostpath pv,pvc
---
apiVersion: v1 
kind: PersistentVolume 
metadata: 
  name: ${NAME}-pv
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: ${STORAGE_CAPACITY}
  volumeMode: Filesystem 
  persistentVolumeReclaimPolicy: Retain 
  hostPath:
    path: ${HOSTPATH}
  storageClassName: ${STORAGE_CLASS}
--- 
apiVersion: v1 
kind: PersistentVolumeClaim 
metadata:               
  name: ${NAME}-pvc 
  namespace: portainer
spec: 
  accessModes: 
    - ReadWriteOnce
  resources: 
    requests: 
      storage: ${STORAGE_CAPACITY}
  storageClassName: ${STORAGE_CLASS}
  volumeMode: Filesystem
  volumeName: ${NAME}-pv
EOF
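
As before, the PVC should bind to the static PV right away. Note that a hostPath volume is node-local, so this assumes a single-node cluster, or that the path exists on whichever node the pod lands on:

kubectl get pv,pvc -n portainer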

helm upgrade portainer portainer/portainer \
    --install \
    --create-namespace \
    --namespace portainer \
    --set service.type=NodePort \
    --set enterpriseEdition.enabled=true \
    --set persistence.existingClaim=portainer-pvc \
    --set tls.force=true
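
For reference, the backup/restore cycle exercised in this test looks roughly like the following (names match the resources above; the namespace is deleted first to simulate the loss):

# back up the namespace, including pod volumes via file-system backup
velero backup create portainer-backup \
  --include-namespaces portainer \
  --default-volumes-to-fs-backup \
  --wait

# simulate loss, then restore
kubectl delete namespace portainer
velero restore create --from-backup portainer-backup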
