Created March 19, 2024 02:32
Nightly run: backup to the default BSL (AWS S3) with an IRSA credential and to an additional BSL (MinIO) with a credential file
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups when kibishii is the sample workload
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/velero/workspace/velero-e2e-test/e2e/velero/test/e2e/backup/backup.go:146
Velero install 2024-03-02 12:56:13
For public cloud platforms, object store plugin provider will be set as cloud provider
[create namespace velero]
[delete clusterrolebinding velero-cluster-role]
/usr/local/bin/kubectl delete clusterrolebinding velero-cluster-role
[create clusterrolebinding velero-cluster-role --clusterrole=cluster-admin --serviceaccount=velero:nightly-default-sa-1709374097477]
[delete iamserviceaccount nightly-default-sa-1709374097477 --namespace velero --cluster nightly-test-1709374097476-aws-default-4 --wait]
/usr/local/bin/eksctl delete iamserviceaccount nightly-default-sa-1709374097477 --namespace velero --cluster nightly-test-1709374097476-aws-default-4 --wait
Output: 2024-03-02 12:56:17 [ℹ] 1 iamserviceaccount (velero/nightly-default-sa-1709374097477) was included (based on the include/exclude rules)
2024-03-02 12:56:18 [ℹ] 1 task: {
    2 sequential sub-tasks: {
        delete IAM role for serviceaccount "velero/nightly-default-sa-1709374097477",
        delete serviceaccount "velero/nightly-default-sa-1709374097477",
    } }
2024-03-02 12:56:19 [ℹ] will delete stack "eksctl-nightly-test-1709374097476-aws-default-4-addon-iamserviceaccount-velero-nightly-default-sa-1709374097477"
2024-03-02 12:56:19 [ℹ] waiting for stack "eksctl-nightly-test-1709374097476-aws-default-4-addon-iamserviceaccount-velero-nightly-default-sa-1709374097477" to get deleted
2024-03-02 12:56:19 [ℹ] waiting for CloudFormation stack "eksctl-nightly-test-1709374097476-aws-default-4-addon-iamserviceaccount-velero-nightly-default-sa-1709374097477"
2024-03-02 12:56:49 [ℹ] waiting for CloudFormation stack "eksctl-nightly-test-1709374097476-aws-default-4-addon-iamserviceaccount-velero-nightly-default-sa-1709374097477"
2024-03-02 12:56:49 [ℹ] serviceaccount "velero/nightly-default-sa-1709374097477" was already deleted
/usr/local/bin/eksctl create iamserviceaccount nightly-default-sa-1709374097477 --namespace velero --cluster nightly-test-1709374097476-aws-default-4 --attach-policy-arn arn:aws:iam::278970431578:policy/irsa-nightly-test --approve --override-existing-serviceaccounts
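Before installing Velero, the run rotates the IRSA service account: the old eksctl-managed IAM service account (and its CloudFormation stack) is deleted, then recreated with the backup policy attached. A minimal sketch of that flow, with placeholder cluster, account, and policy names (the real ones above are generated per run); it only composes and prints the eksctl commands, since running them requires eksctl and a live EKS cluster:

```shell
# Sketch only: placeholder names; commands are printed rather than executed.
CLUSTER="my-eks-cluster"                                   # hypothetical
SA_NAME="velero-sa"                                        # hypothetical
POLICY_ARN="arn:aws:iam::123456789012:policy/velero-irsa"  # hypothetical

# Remove any stale eksctl-managed service account, waiting for its stack.
DELETE_CMD="eksctl delete iamserviceaccount ${SA_NAME} --namespace velero --cluster ${CLUSTER} --wait"
# Recreate it with the IAM policy Velero needs attached via IRSA.
CREATE_CMD="eksctl create iamserviceaccount ${SA_NAME} --namespace velero --cluster ${CLUSTER} --attach-policy-arn ${POLICY_ARN} --approve --override-existing-serviceaccounts"

echo "${DELETE_CMD}"
echo "${CREATE_CMD}"
```

With IRSA, the Velero pod obtains AWS credentials from the annotated service account, which is why the install below passes `--service-account-name` together with `--no-secret`.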
Start to install aws VolumeSnapshotClass ...
[apply -f ../testdata/volume-snapshot-class/aws.yaml --force=true]
Running cmd "/velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero install --namespace velero --image gcr.io/velero-gcp/nightly/velero:velero-15798427 --use-node-agent --use-volume-snapshots --provider aws --backup-location-config region=us-east-1 --bucket nightly-normal-account3-test --prefix nightly --service-account-name nightly-default-sa-1709374097477 --no-secret --snapshot-location-config region=us-east-1 --plugins velero/velero-plugin-for-aws:main,velero/velero-plugin-for-csi:main --disable-informer-cache=true --features EnableCSI --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/datadownloads.velero.io created
customresourcedefinition.apiextensions.k8s.io/datauploads.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/datadownloads.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/datauploads.velero.io condition met
Running cmd "/velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero install --namespace velero --image gcr.io/velero-gcp/nightly/velero:velero-15798427 --use-node-agent --use-volume-snapshots --provider aws --backup-location-config region=us-east-1 --bucket nightly-normal-account3-test --prefix nightly --service-account-name nightly-default-sa-1709374097477 --no-secret --snapshot-location-config region=us-east-1 --plugins velero/velero-plugin-for-aws:main,velero/velero-plugin-for-csi:main --disable-informer-cache=true --features EnableCSI --dry-run --output json"
the restic restore helper image is set by the configmap "restic-restore-action-config"
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io configured
customresourcedefinition.apiextensions.k8s.io/backups.velero.io configured
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io configured
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io configured
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io configured
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io configured
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io configured
customresourcedefinition.apiextensions.k8s.io/restores.velero.io configured
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io configured
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io configured
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io configured
customresourcedefinition.apiextensions.k8s.io/datadownloads.velero.io configured
customresourcedefinition.apiextensions.k8s.io/datauploads.velero.io configured
Warning: resource namespaces/velero is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/velero configured
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
daemonset.apps/node-agent created
deployment.apps/velero created
configmap/restic-restore-action-config created
Waiting for Velero deployment to be ready.
Waiting for node-agent daemonset to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Finish velero install 2024-03-02 12:58:38
Get Version Command: /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero version --timeout 60s --client-only
Version: Client: Version: velero-15798427 Git commit: 157984279be733a50a3f8f32c47553654bb33279
provider cmd = aws
plugins cmd = [velero/velero-plugin-for-aws:main]
installPluginCmd cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero plugin add velero/velero-plugin-for-aws:main
An error occurred: Deployment.apps "velero" is invalid: spec.template.spec.initContainers[2].name: Duplicate value: "velero-velero-plugin-for-aws"
velero cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero create backup-location add-bsl --provider aws --bucket velero-e2e-testing --prefix additional --config region=minio,s3ForcePathStyle=true,s3Url=http://minio.minio.svc:9000 --credential bsl-credentials-6d06711d-7442-48b9-a064-cd1c4e2ba3f0=creds-aws
Backup storage location "add-bsl" configured successfully.
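The command above registers a second BackupStorageLocation backed by MinIO, referencing a Secret (created earlier from a credential file) rather than the IRSA identity used by the default BSL. A minimal sketch of the two-step pattern, with placeholder Secret and bucket names; it composes and prints the commands instead of running them, since a live cluster and the velero CLI are required:

```shell
# Sketch only: placeholder names; commands are printed rather than executed.
NS="velero"
SECRET="bsl-credentials"   # hypothetical Secret holding the MinIO credential file
CRED_KEY="creds-aws"       # key inside the Secret

# 1) Store the credential file in a Secret in the velero namespace.
SECRET_CMD="kubectl -n ${NS} create secret generic ${SECRET} --from-file=${CRED_KEY}=./credentials-minio"
# 2) Create a second BSL pointing at MinIO, wired to that Secret key.
BSL_CMD="velero --namespace ${NS} create backup-location add-bsl \
--provider aws --bucket velero-e2e-testing --prefix additional \
--config region=minio,s3ForcePathStyle=true,s3Url=http://minio.minio.svc:9000 \
--credential ${SECRET}=${CRED_KEY}"

echo "${SECRET_CMD}"
echo "${BSL_CMD}"
```

Because MinIO speaks the S3 API, the `aws` provider is reused; `s3ForcePathStyle=true` and an explicit `s3Url` are what distinguish it from real S3.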
KibishiiPrepareBeforeBackup 2024-03-02 12:58:40
installKibishii 2024-03-02 12:58:40
Install Kibishii cmd: /usr/local/bin/kubectl apply -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/aws-csi --timeout=90s
Label namespace with PSA policy: /usr/local/bin/kubectl label namespace k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default pod-security.kubernetes.io/enforce=baseline pod-security.kubernetes.io/enforce-version=latest --overwrite=true
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready 2024-03-02 12:59:12
generateData 2024-03-02 12:59:12
kibishiiGenerateCmd cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
generateData done 2024-03-02 12:59:53
KibishiiPrepareBeforeBackup done 2024-03-02 12:59:53
VeleroBackupNamespace 2024-03-02 12:59:53
velero cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero create backup backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0 --wait --include-namespaces k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location default
Backup request "backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...........
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0` and `velero backup logs backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0`.
get backup cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0
VeleroBackupNamespace done 2024-03-02 13:00:07
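The backup above deliberately bypasses volume snapshots: `--default-volumes-to-fs-backup --snapshot-volumes=false` forces Restic-style pod-volume (file-system) backup, and `--storage-location` pins the target BSL. A sketch of that invocation with placeholder backup and namespace names, printed rather than executed:

```shell
# Sketch only: placeholder names; command is printed rather than executed.
BACKUP_NAME="my-backup"   # hypothetical
APP_NS="my-app"           # hypothetical

BACKUP_CMD="velero --namespace velero create backup ${BACKUP_NAME} --wait \
--include-namespaces ${APP_NS} \
--default-volumes-to-fs-backup --snapshot-volumes=false \
--storage-location default"

echo "${BACKUP_CMD}"
```

With file-system backup, progress shows up as PodVolumeBackup resources in the velero namespace, which is exactly what the log checks next.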
/usr/local/bin/kubectl get podvolumebackup -n velero
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
/usr/bin/awk {print $1}
line: backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0-72c47
line: backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0-bcmj8
apiregistration.k8s.io
apps
events.k8s.io
authentication.k8s.io
authorization.k8s.io
autoscaling
batch
certificates.k8s.io
networking.k8s.io
policy
rbac.authorization.k8s.io
storage.k8s.io
admissionregistration.k8s.io
apiextensions.k8s.io
scheduling.k8s.io
coordination.k8s.io
node.k8s.io
discovery.k8s.io
flowcontrol.apiserver.k8s.io
snapshot.storage.k8s.io
v1
No VolumeSnapshotContent from backup backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0
snapshotContentNameList: []
snapshotCheckPoint: {k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default [] 0 [kibishii-data-kibishii-deployment-0 kibishii-data-kibishii-deployment-1] true}
Re-populate volume 2024-03-02 13:01:08
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default -c kibishii kibishii-deployment-0 -- /bin/sh -c rm -rf /data/*
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default -c kibishii kibishii-deployment-0 -- /bin/sh -c echo ns-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default pod-kibishii-deployment-0 volume-data > /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default -c kibishii kibishii-deployment-1 -- /bin/sh -c rm -rf /data/*
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default -c kibishii kibishii-deployment-1 -- /bin/sh -c echo ns-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default pod-kibishii-deployment-1 volume-data > /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
Re-populate volume done 2024-03-02 13:01:12
Simulating a disaster by removing namespace k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default 2024-03-02 13:01:12
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 2m33s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 2m39s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 2m44s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 2m49s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 2m54s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 2m59s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 3m4s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 3m9s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
VeleroRestore 2024-03-02 13:01:53
velero cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero create restore restore-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0 --from-backup backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0 --wait
Restore request "restore-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
............
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0` and `velero restore logs restore-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0`.
get restore cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0
/usr/local/bin/kubectl get podvolumerestore -n velero
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
/usr/bin/awk {print $1}
line: restore-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0-2m8vt
line: restore-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0-cglj6
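The restore side is symmetric: restore from the named backup and wait, then confirm the expected PodVolumeRestore resources appeared. A sketch with placeholder names, printed rather than executed:

```shell
# Sketch only: placeholder names; commands are printed rather than executed.
RESTORE_NAME="my-restore"   # hypothetical
BACKUP_NAME="my-backup"     # hypothetical

RESTORE_CMD="velero --namespace velero create restore ${RESTORE_NAME} --from-backup ${BACKUP_NAME} --wait"
# File-system restores surface as PodVolumeRestore resources per pod volume.
PVR_CMD="kubectl get podvolumerestore -n velero"

echo "${RESTORE_CMD}"
echo "${PVR_CMD}"
```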
KibishiiVerifyAfterRestore 2024-03-02 13:02:09
Waiting for kibishii pods to be ready
Pod kibishii-deployment-1 is in state Pending waiting for it to be Running
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default -c kibishii kibishii-deployment-0 -- cat /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
stdout:
stderr: cat: /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default: No such file or directory
command terminated with exit code 1
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default -c kibishii kibishii-deployment-1 -- cat /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
stdout:
stderr: cat: /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default: No such file or directory
command terminated with exit code 1
running kibishii verify
kibishiiVerifyCmd cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
Successfully verified kibishii data in namespace k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
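Verification replays the exact argument vector that was passed to generate.sh earlier through verify.sh on the jump-pad pod, so the restored data must match what was generated byte-for-byte. (The `cat` failures above are expected: the re-populated marker files were created after the backup, so a correct restore does not contain them.) A sketch, with a placeholder namespace and the arguments copied verbatim from this log:

```shell
# Sketch only: placeholder namespace; commands are printed rather than executed.
APP_NS="my-app"                # hypothetical
ARGS="2 10 10 1024 1024 0 2"   # same argument vector as the log's generate.sh call

GEN_CMD="kubectl exec -n ${APP_NS} jump-pad -- /usr/local/bin/generate.sh ${ARGS}"
VERIFY_CMD="kubectl exec -n ${APP_NS} jump-pad -- /usr/local/bin/verify.sh ${ARGS}"

echo "${GEN_CMD}"
echo "${VERIFY_CMD}"
```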
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 54s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 60s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 65s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 70s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 75s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 80s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 85s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default Terminating 90s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0default" is still being deleted...
KibishiiPrepareBeforeBackup 2024-03-02 13:03:28
installKibishii 2024-03-02 13:03:28
Install Kibishii cmd: /usr/local/bin/kubectl apply -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/aws-csi --timeout=90s
Label namespace with PSA policy: /usr/local/bin/kubectl label namespace k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl pod-security.kubernetes.io/enforce=baseline pod-security.kubernetes.io/enforce-version=latest --overwrite=true
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready 2024-03-02 13:03:57
generateData 2024-03-02 13:03:57
kibishiiGenerateCmd cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
generateData done 2024-03-02 13:04:39
KibishiiPrepareBeforeBackup done 2024-03-02 13:04:39
VeleroBackupNamespace 2024-03-02 13:04:39
velero cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero create backup backup-add-bsl --wait --include-namespaces k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location add-bsl
Backup request "backup-add-bsl" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
....
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-add-bsl` and `velero backup logs backup-add-bsl`.
get backup cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-add-bsl
VeleroBackupNamespace done 2024-03-02 13:04:45
/usr/local/bin/kubectl get podvolumebackup -n velero
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
/usr/bin/awk {print $1}
line: backup-add-bsl-692zl
line: backup-add-bsl-cdfr9
apiregistration.k8s.io
apps
events.k8s.io
authentication.k8s.io
authorization.k8s.io
autoscaling
batch
certificates.k8s.io
networking.k8s.io
policy
rbac.authorization.k8s.io
storage.k8s.io
admissionregistration.k8s.io
apiextensions.k8s.io
scheduling.k8s.io
coordination.k8s.io
node.k8s.io
discovery.k8s.io
flowcontrol.apiserver.k8s.io
snapshot.storage.k8s.io
v1
No VolumeSnapshotContent from backup backup-add-bsl
snapshotContentNameList: []
snapshotCheckPoint: {k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl [] 0 [kibishii-data-kibishii-deployment-0 kibishii-data-kibishii-deployment-1] true}
Re-populate volume 2024-03-02 13:05:46
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl -c kibishii kibishii-deployment-0 -- /bin/sh -c rm -rf /data/*
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl -c kibishii kibishii-deployment-0 -- /bin/sh -c echo ns-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl pod-kibishii-deployment-0 volume-data > /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl -c kibishii kibishii-deployment-1 -- /bin/sh -c rm -rf /data/*
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl -c kibishii kibishii-deployment-1 -- /bin/sh -c echo ns-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl pod-kibishii-deployment-1 volume-data > /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
Re-populate volume done 2024-03-02 13:05:50
Simulating a disaster by removing namespace k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl 2024-03-02 13:05:50
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m23s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m29s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m34s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m39s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m44s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m49s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m54s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 2m59s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
VeleroRestore 2024-03-02 13:06:31
velero cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero create restore restore-add-bsl --from-backup backup-add-bsl --wait
Restore request "restore-add-bsl" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-add-bsl` and `velero restore logs restore-add-bsl`.
get restore cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-add-bsl
/usr/local/bin/kubectl get podvolumerestore -n velero
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
/usr/bin/awk {print $1}
line: restore-add-bsl-8jcf2
line: restore-add-bsl-gtqcc
KibishiiVerifyAfterRestore 2024-03-02 13:06:51
Waiting for kibishii pods to be ready
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl -c kibishii kibishii-deployment-0 -- cat /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
stdout:
stderr: cat: /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl: No such file or directory
command terminated with exit code 1
Kubectl exec cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl -c kibishii kibishii-deployment-1 -- cat /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
stdout:
stderr: cat: /data/file-k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl: No such file or directory
command terminated with exit code 1
running kibishii verify
kibishiiVerifyCmd cmd = /usr/local/bin/kubectl exec -n k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
Successfully verified kibishii data in namespace k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
kibishii test completed successfully 2024-03-02 13:07:24
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 53s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 59s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 64s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 69s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 74s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 79s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 84s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
/usr/local/bin/kubectl get ns
/bin/grep k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl
line: k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl Terminating 89s
namespace "k-6d06711d-7442-48b9-a064-cd1c4e2ba3f0add-bsl" is still being deleted...
STEP: Clean backups after test
Backup backup-add-bsl is going to be deleted...
velero cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero delete backup backup-add-bsl --confirm
Request to delete backup "backup-add-bsl" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0 is going to be deleted...
velero cmd = /velero/workspace/velero-e2e-test/e2e/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero delete backup backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0 --confirm
Request to delete backup "backup-default-6d06711d-7442-48b9-a064-cd1c4e2ba3f0" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:728.576 seconds]
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups
/velero/workspace/velero-e2e-test/e2e/velero/test/e2e/e2e_suite_test.go:118
when kibishii is the sample workload
/velero/workspace/velero-e2e-test/e2e/velero/test/e2e/backup/backup.go:118
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/velero/workspace/velero-e2e-test/e2e/velero/test/e2e/backup/backup.go:146
------------------------------