Created March 2, 2023 06:53
basic E2E test result
GOOS=linux \
GOARCH=amd64 \
VERSION=main \
REGISTRY=velero \
PKG=github.com/vmware-tanzu/velero \
BIN=velero \
GIT_SHA=1a0e5b471dae447a3f7b7df1b41a59e24e544295 \
GIT_TREE_STATE=dirty \
OUTPUT_DIR=$(pwd)/_output/bin/linux/amd64 \
./hack/build.sh
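The GIT_SHA and GIT_TREE_STATE values passed to ./hack/build.sh above are ordinarily derived from the checkout itself; a minimal sketch of that derivation (an assumption, not taken from Velero's build scripts):

```shell
# Sketch: derive the build metadata the way the values above suggest.
# Falls back to "unknown"/"clean" when run outside a git checkout.
GIT_SHA=$(git rev-parse HEAD 2>/dev/null || echo unknown)
if [ -n "$(git status --porcelain 2>/dev/null)" ]; then
  GIT_TREE_STATE=dirty   # uncommitted changes present, as in this run
else
  GIT_TREE_STATE=clean
fi
echo "GIT_SHA=$GIT_SHA GIT_TREE_STATE=$GIT_TREE_STATE"
```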
make -e VERSION=main -C test/e2e run
make[1]: Entering directory '/root/go/src/github.com/blackpiglet/velero/test/e2e'
go install github.com/onsi/ginkgo/ginkgo@v1.16.5
Using credentials from /root/credentials-velero
Using bucket jxun to store backups from E2E tests
Using cloud provider gcp
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
STEP: Create test client instance
Running Suite: E2e Suite
========================
Random Seed: 1677723216
Will run 8 of 45 specs
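Ginkgo prints the random seed so a run's spec ordering can be reproduced; a sketch of re-running with the same seed (the `-seed` flag exists in ginkgo v1.16.x; whether the e2e Makefile forwards it is an assumption):

```shell
# Reuse the seed reported above to get the same spec shuffle.
seed=1677723216
cmd="ginkgo -seed=$seed -r ."
echo "$cmd"
```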
SSSSSSSSSSSSSSSS
------------------------------
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups when kibishii is the sample workload
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:78
For cloud platforms, object store plugin provider will be set as cloud provider
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --default-volumes-to-fs-backup --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --default-volumes-to-fs-backup --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
daemonset.apps/node-agent created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Waiting for node-agent daemonset to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-113d3921-a9e5-45bd-bcfb-bc16729e311b --wait --include-namespaces kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b
Backup request "backup-113d3921-a9e5-45bd-bcfb-bc16729e311b" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
................
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-113d3921-a9e5-45bd-bcfb-bc16729e311b` and `velero backup logs backup-113d3921-a9e5-45bd-bcfb-bc16729e311b`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-113d3921-a9e5-45bd-bcfb-bc16729e311b
/usr/local/bin/kubectl get podvolumebackup -n velero
/usr/bin/grep kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b
/usr/bin/awk {print $1}
line: backup-113d3921-a9e5-45bd-bcfb-bc16729e311b-5bqpt
line: backup-113d3921-a9e5-45bd-bcfb-bc16729e311b-q7l95
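The three commands above are run by the test as a single pipeline that extracts the PodVolumeBackup names for the workload namespace; a self-contained sketch over sample output shaped like this run's (the column layout is an assumption):

```shell
# Two sample rows standing in for `kubectl get podvolumebackup -n velero`
# output: NAME, STATUS, NAMESPACE (layout assumed; second row is filler).
sample='backup-113d3921-a9e5-45bd-bcfb-bc16729e311b-5bqpt  Completed  kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b
other-backup-abc  Completed  some-other-namespace'

# Keep only rows for the test namespace, then print the NAME column.
names=$(printf '%s\n' "$sample" \
  | grep 'kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b' \
  | awk '{print $1}')
echo "$names"
```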
|| VERIFICATION || - Snapshots should not exist in cloud, backup backup-113d3921-a9e5-45bd-bcfb-bc16729e311b
Snapshot count 0 is as expected 0
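The VERIFICATION step compares the snapshot count found in the cloud against the expected count; a hypothetical sketch of that comparison (the `found` lookup is a placeholder, not Velero's actual cloud query):

```shell
# With --default-volumes-to-fs-backup, no native snapshots should exist.
expected=0
found=0   # placeholder; the real check queries the cloud provider
if [ "$found" -eq "$expected" ]; then
  msg="Snapshot count $found is as expected $expected"
else
  msg="Snapshot count $found is NOT as expected $expected"
fi
echo "$msg"
```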
|| EXPECTED || - Snapshots do not exist in cloud, backup backup-113d3921-a9e5-45bd-bcfb-bc16729e311b
Simulating a disaster by removing namespace kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-113d3921-a9e5-45bd-bcfb-bc16729e311b --from-backup backup-113d3921-a9e5-45bd-bcfb-bc16729e311b --wait
Restore request "restore-113d3921-a9e5-45bd-bcfb-bc16729e311b" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
......................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-113d3921-a9e5-45bd-bcfb-bc16729e311b` and `velero restore logs restore-113d3921-a9e5-45bd-bcfb-bc16729e311b`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-113d3921-a9e5-45bd-bcfb-bc16729e311b
/usr/local/bin/kubectl get podvolumerestore -n velero
/usr/bin/grep kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b
/usr/bin/awk {print $1}
line: restore-113d3921-a9e5-45bd-bcfb-bc16729e311b-n5xqw
line: restore-113d3921-a9e5-45bd-bcfb-bc16729e311b-tmhxf
Waiting for kibishii pods to be ready
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
namespace "kibishii-workload113d3921-a9e5-45bd-bcfb-bc16729e311b" is still being deleted...
STEP: Clean backups after test
Backup backup-113d3921-a9e5-45bd-bcfb-bc16729e311b is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-113d3921-a9e5-45bd-bcfb-bc16729e311b --confirm
Request to delete backup "backup-113d3921-a9e5-45bd-bcfb-bc16729e311b" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:640.705 seconds]
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:87
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:77
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:78
------------------------------
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups when kibishii is the sample workload
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:109
For cloud platforms, object store plugin provider will be set as cloud provider
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
daemonset.apps/node-agent created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Waiting for node-agent daemonset to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Get Version Command:/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero version --timeout 60s --client-only
Version:Client: Version: main Git commit: 1a0e5b471dae447a3f7b7df1b41a59e24e544295-dirty
addPlugins cmd =
provider cmd = aws
plugins cmd = [velero/velero-plugin-for-aws:main]
installPluginCmd cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero plugin add velero/velero-plugin-for-aws:main
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup-location bsl-00709616-67a9-4cb3-ad55-9e6c3e772463 --provider aws --bucket mqiu-bucket --credential bsl-credentials-00709616-67a9-4cb3-ad55-9e6c3e772463=creds-aws
Backup storage location "bsl-00709616-67a9-4cb3-ad55-9e6c3e772463" configured successfully.
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463 -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463 jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463 --wait --include-namespaces kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463 --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location default
Backup request "backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
..............
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463` and `velero backup logs backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463
/usr/local/bin/kubectl get podvolumebackup -n velero
/usr/bin/grep kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463
/usr/bin/awk {print $1}
line: backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463-l46wf
line: backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463-pv9xz
|| VERIFICATION || - Snapshots should not exist in cloud, backup backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463
Snapshot count 0 is as expected 0
|| EXPECTED || - Snapshots do not exist in cloud, backup backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463
Simulating a disaster by removing namespace kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-default-00709616-67a9-4cb3-ad55-9e6c3e772463 --from-backup backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463 --wait
Restore request "restore-default-00709616-67a9-4cb3-ad55-9e6c3e772463" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
..................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-00709616-67a9-4cb3-ad55-9e6c3e772463` and `velero restore logs restore-default-00709616-67a9-4cb3-ad55-9e6c3e772463`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-default-00709616-67a9-4cb3-ad55-9e6c3e772463
/usr/local/bin/kubectl get podvolumerestore -n velero
/usr/bin/grep kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463
/usr/bin/awk {print $1}
line: restore-default-00709616-67a9-4cb3-ad55-9e6c3e772463-b92lb
line: restore-default-00709616-67a9-4cb3-ad55-9e6c3e772463-xwqmc
Waiting for kibishii pods to be ready
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463 jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463 -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463 jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463 --wait --include-namespaces kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463 --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location bsl-00709616-67a9-4cb3-ad55-9e6c3e772463
Backup request "backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
................
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463` and `velero backup logs backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463
/usr/local/bin/kubectl get podvolumebackup -n velero
/usr/bin/grep kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463
/usr/bin/awk {print $1}
line: backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463-9cm9w
line: backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463-hp684
line: backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463-l46wf
line: backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463-pv9xz
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
namespace "kibishii-workload00709616-67a9-4cb3-ad55-9e6c3e772463" is still being deleted...
STEP: Clean backups after test
Backup backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463 --confirm
Request to delete backup "backup-bsl-00709616-67a9-4cb3-ad55-9e6c3e772463" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463 --confirm
Request to delete backup "backup-default-00709616-67a9-4cb3-ad55-9e6c3e772463" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup t0 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup t0 --confirm
Request to delete backup "t0" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:737.794 seconds]
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:87
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:77
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:109
------------------------------
SSSS
------------------------------
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:78
For cloud platforms, object store plugin provider will be set as cloud provider
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --default-volumes-to-fs-backup --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created | |
Waiting velero CRDs ready... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --default-volumes-to-fs-backup --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
daemonset.apps/node-agent created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Waiting for node-agent daemonset to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9 -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9 jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-36fc8456-2ffa-4691-8277-5249a80deaf9 --wait --include-namespaces kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9
Backup request "backup-36fc8456-2ffa-4691-8277-5249a80deaf9" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...............
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-36fc8456-2ffa-4691-8277-5249a80deaf9` and `velero backup logs backup-36fc8456-2ffa-4691-8277-5249a80deaf9`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-36fc8456-2ffa-4691-8277-5249a80deaf9
/usr/local/bin/kubectl get podvolumebackup -n velero
/usr/bin/grep kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9
/usr/bin/awk '{print $1}'
line: backup-36fc8456-2ffa-4691-8277-5249a80deaf9-2wvk2
line: backup-36fc8456-2ffa-4691-8277-5249a80deaf9-g9hwc
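The three commands above are one pipeline: list PodVolumeBackups, filter to the test namespace, and keep only the NAME column. A runnable sketch with canned output standing in for the `kubectl` call (the rows are hypothetical stand-ins with a simplified column layout):

```shell
# Canned stand-in for `kubectl get podvolumebackup -n velero` output
# (hypothetical rows; real output has more columns).
pvb_table='NAME                                                STATUS      NAMESPACE
backup-36fc8456-2ffa-4691-8277-5249a80deaf9-2wvk2   Completed   kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9
backup-36fc8456-2ffa-4691-8277-5249a80deaf9-g9hwc   Completed   kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9
unrelated-backup-xyz                                Completed   other-namespace'

# Same pipeline as the harness: keep rows mentioning the workload
# namespace, then print only the first (NAME) column.
pvb_names=$(printf '%s\n' "$pvb_table" \
  | grep kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9 \
  | awk '{print $1}')
echo "$pvb_names"
```

The result matches the two `line:` entries printed above.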
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
namespace "kibishii-workload36fc8456-2ffa-4691-8277-5249a80deaf9" is still being deleted...
STEP: Clean backups after test
Backup backup-36fc8456-2ffa-4691-8277-5249a80deaf9 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-36fc8456-2ffa-4691-8277-5249a80deaf9 --confirm
Request to delete backup "backup-36fc8456-2ffa-4691-8277-5249a80deaf9" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:151.305 seconds]
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:89
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:77
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:78
------------------------------
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:109
For cloud platforms, object store plugin provider will be set as cloud provider.
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting for velero CRDs to be ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Get Version Command:/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero version --timeout 60s --client-only
Version:Client: Version: main Git commit: 1a0e5b471dae447a3f7b7df1b41a59e24e544295-dirty
addPlugins cmd =
provider cmd = aws
plugins cmd = [velero/velero-plugin-for-aws:main]
installPluginCmd cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero plugin add velero/velero-plugin-for-aws:main
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup-location bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --provider aws --bucket mqiu-bucket --credential bsl-credentials-8458e178-dda7-46c1-b2e3-57cf28f0b1bf=creds-aws
Backup storage location "bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf" configured successfully.
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --wait --include-namespaces kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf --snapshot-volumes --storage-location default
Backup request "backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
....
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf` and `velero backup logs backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
/usr/local/bin/kubectl get podvolumebackup -n velero
/usr/bin/grep kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf
/usr/bin/awk '{print $1}'
{kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf [] 2 [kibishii-deployment-0 kibishii-deployment-1] false}
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
Snapshot count 2 is as expected 0
|| EXPECTED || - Snapshots exist in cloud, backup backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
Simulating a disaster by removing namespace kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
Waiting 5 minutes to make sure the snapshots are ready...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --from-backup backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --wait
Restore request "restore-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
...
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf` and `velero restore logs restore-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
Waiting for kibishii pods to be ready
Pod jump-pad is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --wait --include-namespaces kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf --snapshot-volumes --storage-location bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
Backup request "backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
.....
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf` and `velero backup logs backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
/usr/local/bin/kubectl get podvolumebackup -n velero
/usr/bin/grep kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf
/usr/bin/awk '{print $1}'
{kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf [] 2 [kibishii-deployment-0 kibishii-deployment-1] false}
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
Snapshot count 2 is as expected 0
|| EXPECTED || - Snapshots exist in cloud, backup backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
Simulating a disaster by removing namespace kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
Waiting 5 minutes to make sure the snapshots are ready...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --from-backup backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --wait
Restore request "restore-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
....
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf` and `velero restore logs restore-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf
Waiting for kibishii pods to be ready
Pod etcd0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
namespace "kibishii-workload8458e178-dda7-46c1-b2e3-57cf28f0b1bf" is still being deleted...
STEP: Clean backups after test
Backup backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --confirm
Request to delete backup "backup-bsl-8458e178-dda7-46c1-b2e3-57cf28f0b1bf" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf --confirm
Request to delete backup "backup-default-8458e178-dda7-46c1-b2e3-57cf28f0b1bf" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup t0 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup t0 --confirm
Request to delete backup "t0" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:1214.053 seconds]
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:89
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:77
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:109
------------------------------
SSSSSSSSS
------------------------------
[Basic][Nodeport] Service nodeport reservation during restore is configurable
Nodeport can be preserved or omit during restore
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:106
For cloud platforms, object store plugin provider will be set as cloud provider.
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting for velero CRDs to be ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Running test case Nodeport preservation
STEP: Creating service nginx-service-a89a9ce0-17d1-4685-bd5d-67c6af1c6547 in namespace nodeport-1 ......
Kubectl exec cmd =/usr/local/bin/kubectl get service -A
NAMESPACE     NAME                                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes                                           ClusterIP      10.112.0.1      <none>        443/TCP         126d
kube-system   default-http-backend                                 NodePort       10.112.14.201   <none>        80:32380/TCP    126d
kube-system   kube-dns                                             ClusterIP      10.112.0.10     <none>        53/UDP,53/TCP   126d
kube-system   metrics-server                                       ClusterIP      10.112.11.41    <none>        443/TCP         126d
nodeport-1    nginx-service-a89a9ce0-17d1-4685-bd5d-67c6af1c6547   LoadBalancer   10.112.5.167    <pending>     80:32161/TCP    0s
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547 --include-namespaces nodeport-1 --wait
Backup request "backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
.........
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547` and `velero backup logs backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547
STEP: Start to destroy namespace nodeport-......
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
namespace "nodeport-1" is still being deleted...
Namespace nodeport-1 was deleted
Kubectl exec cmd =/usr/local/bin/kubectl get service -A
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.112.0.1      <none>        443/TCP         126d
kube-system   default-http-backend   NodePort    10.112.14.201   <none>        80:32380/TCP    126d
kube-system   kube-dns               ClusterIP   10.112.0.10     <none>        53/UDP,53/TCP   126d
kube-system   metrics-server         ClusterIP   10.112.11.41    <none>        443/TCP         126d
STEP: Creating a new service with the same nodeport as the backed-up service, in a new namespace, to test nodeport collision ...nodeport-tmp
STEP: Clean namespace with prefix nodeport- after test
STEP: Clean backups after test
Backup backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547 --confirm
Request to delete backup "backup-label-selector-a89a9ce0-17d1-4685-bd5d-67c6af1c6547" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• Failure [95.293 seconds]
[Basic][Nodeport] Service nodeport reservation during restore is configurable
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:135
Nodeport can be preserved or omit during restore [It]
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:106
Failed to create service nginx-service-a89a9ce0-17d1-4685-bd5d-67c6af1c6547
Expected success, but got an error:
    <*errors.StatusError | 0xc000d75400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Service \"nginx-service-a89a9ce0-17d1-4685-bd5d-67c6af1c6547\" is invalid: spec.ports[0].nodePort: Invalid value: 32161: provided port is already allocated",
            Reason: "Invalid",
            Details: {
                Name: "nginx-service-a89a9ce0-17d1-4685-bd5d-67c6af1c6547",
                Group: "",
                Kind: "Service",
                UID: "",
                Causes: [
                    {
                        Type: "FieldValueInvalid",
                        Message: "Invalid value: 32161: provided port is already allocated",
                        Field: "spec.ports[0].nodePort",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 422,
        },
    }
Service "nginx-service-a89a9ce0-17d1-4685-bd5d-67c6af1c6547" is invalid: spec.ports[0].nodePort: Invalid value: 32161: provided port is already allocated
/root/go/src/github.com/blackpiglet/velero/test/e2e/basic/nodeport.go:94
------------------------------ | |
SSSSS | |
------------------------------ | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
Should be successfully backed up and restored including annotations | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:150 | |
For cloud platforms, object store plugin provider will be set as cloud provider
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created | |
Waiting velero CRDs ready... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:11c4a9 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json" | |
image pull secret "image-pull-secret" set for velero serviceaccount | |
Running cmd "/usr/local/bin/kubectl apply -f -" | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged | |
namespace/velero created | |
clusterrolebinding.rbac.authorization.k8s.io/velero created | |
serviceaccount/velero created | |
secret/cloud-credentials created | |
backupstoragelocation.velero.io/default created | |
volumesnapshotlocation.velero.io/default created | |
deployment.apps/velero created | |
daemonset.apps/node-agent created | |
secret/image-pull-secret created | |
Waiting for Velero deployment to be ready. | |
Waiting for node-agent daemonset to be ready. | |
Velero is installed and ready to be tested in the velero namespace! ⛵ | |
Running test case Backup/restore namespace annotation test | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811 --include-namespaces namespace-annotations-23bfefb9-381f-4e6c-b9de-42638d175811-0 --default-volumes-to-fs-backup --wait | |
Backup request "backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.......... | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811` and `velero backup logs backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811 | |
STEP: Start to destroy namespace namespace-annotations-23bfefb9-381f-4e6c-b9de-42638d175811...... | |
namespace "namespace-annotations-23bfefb9-381f-4e6c-b9de-42638d175811-0" is still being deleted... | |
namespace "namespace-annotations-23bfefb9-381f-4e6c-b9de-42638d175811-0" is still being deleted... | |
Namespace namespace-annotations-23bfefb9-381f-4e6c-b9de-42638d175811-0 was deleted | |
STEP: Start to restore ...... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811 --from-backup backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811 --wait | |
Restore request "restore-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811` and `velero restore logs restore-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811 | |
STEP: Clean namespace with prefix namespace-annotations-23bfefb9-381f-4e6c-b9de-42638d175811 after test | |
STEP: Clean backups after test | |
Backup backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811 --confirm | |
Request to delete backup "backup-namespace-annotations23bfefb9-381f-4e6c-b9de-42638d175811" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
• [SLOW TEST:30.014 seconds] | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91 | |
Should be successfully backed up and restored including annotations | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:150 | |
------------------------------ | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
When I create 2 namespaces should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:150 | |
Running test case | |
Creating namespaces ... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-6c001c32-1d04-4e12-a543-76b9d260f2c2 --exclude-namespaces default,kube-node-lease,kube-public,kube-system,namespace-annotations-23bfefb9-381f-4e6c-b9de-42638d175811-0,upgrade,upgrade01,velero,velero-repo-test --default-volumes-to-fs-backup --wait | |
Backup request "backup-6c001c32-1d04-4e12-a543-76b9d260f2c2" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.. | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-6c001c32-1d04-4e12-a543-76b9d260f2c2` and `velero backup logs backup-6c001c32-1d04-4e12-a543-76b9d260f2c2`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-6c001c32-1d04-4e12-a543-76b9d260f2c2 | |
STEP: Start to restore ...... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-6c001c32-1d04-4e12-a543-76b9d260f2c2 --from-backup backup-6c001c32-1d04-4e12-a543-76b9d260f2c2 --wait | |
Restore request "restore-6c001c32-1d04-4e12-a543-76b9d260f2c2" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-6c001c32-1d04-4e12-a543-76b9d260f2c2` and `velero restore logs restore-6c001c32-1d04-4e12-a543-76b9d260f2c2`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-6c001c32-1d04-4e12-a543-76b9d260f2c2 | |
STEP: Clean namespace with prefix nstest-6c001c32-1d04-4e12-a543-76b9d260f2c2 after test | |
STEP: Clean backups after test | |
Backup backup-6c001c32-1d04-4e12-a543-76b9d260f2c2 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-6c001c32-1d04-4e12-a543-76b9d260f2c2 --confirm | |
Request to delete backup "backup-6c001c32-1d04-4e12-a543-76b9d260f2c2" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
• [SLOW TEST:14.695 seconds] | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91 | |
When I create 2 namespaces should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:150 | |
------------------------------ | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:150 | |
Running test case Backup/restore of Namespaced Scoped and Cluster Scoped RBAC | |
Creating namespaces ...rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 | |
Creating service account ...rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c --include-namespaces rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 --default-volumes-to-fs-backup --wait | |
Backup request "backup-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.. | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c` and `velero backup logs backup-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c | |
Cleaning up clusterrole clusterrole-rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 | |
Cleaning up clusterrolebinding clusterrolebinding-rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 | |
namespace "rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0" is still being deleted... | |
namespace "rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0" is still being deleted... | |
Namespace rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 was deleted | |
STEP: Start to restore ...... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c --from-backup backup-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c --wait | |
Restore request "restore-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c` and `velero restore logs restore-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-rbac9e7e4e3f-1d19-40ac-8331-3297ad626f2c | |
Cleaning up clusterrole clusterrole-rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 | |
Cleaning up clusterrolebinding clusterrolebinding-rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 | |
namespace "rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0" is still being deleted... | |
namespace "rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0" is still being deleted... | |
Namespace rabc-9e7e4e3f-1d19-40ac-8331-3297ad626f2c-0 was deleted | |
Velero uninstalled ⛵ | |
• [SLOW TEST:36.251 seconds] | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91 | |
should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:150 | |
------------------------------ | |
SSS | |
JUnit report was created: /root/go/src/github.com/blackpiglet/velero/test/e2e/report.xml | |
Summarizing 1 Failure: | |
[Fail] [Basic][Nodeport] Service nodeport reservation during restore is configurable [It] Nodeport can be preserved or omitted during restore
/root/go/src/github.com/blackpiglet/velero/test/e2e/basic/nodeport.go:94 | |
Ran 8 of 45 Specs in 2920.114 seconds | |
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 37 Skipped | |
--- FAIL: TestE2e (2920.41s) | |
FAIL | |
You're using deprecated Ginkgo functionality: | |
============================================= | |
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. | |
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback! | |
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md | |
- For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta | |
- To comment, chime in at https://github.com/onsi/ginkgo/issues/711 | |
You are using a custom reporter. Support for custom reporters will likely be removed in V2. Most users were using them to generate junit or teamcity reports and this functionality will be merged into the core reporter. In addition, Ginkgo 2.0 will support emitting a JSON-formatted report that users can then manipulate to generate custom reports. | |
If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711 | |
Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters | |
To silence deprecations that can be silenced set the following environment variable: | |
ACK_GINKGO_DEPRECATIONS=1.16.5 | |
Ginkgo ran 1 suite in 52m35.20592527s | |
Test Suite Failed | |
Ginkgo 2.0 is coming soon! | |
========================== | |
To silence this notice, set the environment variable: ACK_GINKGO_RC=true | |
Alternatively you can: touch $HOME/.ack-ginkgo-rc | |
make[1]: *** [Makefile:111: run] Error 1 | |
make[1]: Leaving directory '/root/go/src/github.com/blackpiglet/velero/test/e2e' | |
make: *** [Makefile:364: test-e2e] Error 2 |