@blackpiglet
Created February 10, 2023 02:37
GOOS=linux \
GOARCH=amd64 \
VERSION=main \
REGISTRY=velero \
PKG=github.com/vmware-tanzu/velero \
BIN=velero \
GIT_SHA=7602577306d4a6019944c47c20e7735cb3b5d391 \
GIT_TREE_STATE=dirty \
OUTPUT_DIR=$(pwd)/_output/bin/linux/amd64 \
./hack/build.sh
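The build drops a linux/amd64 client binary under _output/. A quick sanity check of the result, using the same version subcommand the suite itself calls later in this log (this assumes the host matches the GOOS/GOARCH above, otherwise the cross-compiled binary will not run locally):
./_output/bin/linux/amd64/velero version --client-only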
make -e VERSION=main -C test/e2e run
make[1]: Entering directory '/root/go/src/github.com/blackpiglet/velero/test/e2e'
go install github.com/onsi/ginkgo/ginkgo@v1.16.5
Using credentials from /root/credentials-velero
Using bucket jxun to store backups from E2E tests
Using cloud provider gcp
STEP: Create test client instance (repeated 19 times)
Running Suite: E2e Suite
========================
Random Seed: 1675933910
Will run 7 of 44 specs
SSSSSSSSSS
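Only 7 of 44 specs run here; the skipped specs appear as the S markers above. With the upstream test/e2e Makefile the selection is typically narrowed via Ginkgo's focus mechanism, e.g. (GINKGO_FOCUS is assumed from the upstream Makefile; the exact filter used in this run is not shown):
make -e VERSION=main GINKGO_FOCUS=Basic -C test/e2e run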
------------------------------
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
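Note the -k flag in the install command: kubectl builds and applies the kustomize base fetched directly from the remote repository. While the suite polls for readiness, the same state can be checked by hand (a sketch, using the workload namespace from this run):
/usr/local/bin/kubectl get pods -n kibishii-workload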
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-7ccacc18-7df6-407e-a940-48b5bc7616ef --wait --include-namespaces kibishii-workload --snapshot-volumes
Backup request "backup-7ccacc18-7df6-407e-a940-48b5bc7616ef" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
....
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-7ccacc18-7df6-407e-a940-48b5bc7616ef` and `velero backup logs backup-7ccacc18-7df6-407e-a940-48b5bc7616ef`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-7ccacc18-7df6-407e-a940-48b5bc7616ef
{kibishii-workload [] 2 [kibishii-deployment-0 kibishii-deployment-1] false}
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-7ccacc18-7df6-407e-a940-48b5bc7616ef
Snapshot count 2 is as expected 0
|| EXPECTED || - Snapshots exist in cloud, backup backup-7ccacc18-7df6-407e-a940-48b5bc7616ef
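For a manual cross-check of the snapshot verification above, velero itself can report the volume snapshots recorded for a backup (the --details flag is part of the standard CLI):
/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup describe backup-7ccacc18-7df6-407e-a940-48b5bc7616ef --details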
Simulating a disaster by removing namespace kibishii-workload
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
Waiting 5 minutes to make sure the snapshots are ready...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-7ccacc18-7df6-407e-a940-48b5bc7616ef --from-backup backup-7ccacc18-7df6-407e-a940-48b5bc7616ef --wait
Restore request "restore-7ccacc18-7df6-407e-a940-48b5bc7616ef" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
...
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-7ccacc18-7df6-407e-a940-48b5bc7616ef` and `velero restore logs restore-7ccacc18-7df6-407e-a940-48b5bc7616ef`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-7ccacc18-7df6-407e-a940-48b5bc7616ef
Waiting for kibishii pods to be ready
Pod jump-pad is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running (repeated 6 times)
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
STEP: Clean backups after test
Backup backup-7ccacc18-7df6-407e-a940-48b5bc7616ef is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-7ccacc18-7df6-407e-a940-48b5bc7616ef --confirm
Request to delete backup "backup-7ccacc18-7df6-407e-a940-48b5bc7616ef" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup test01 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup test01 --confirm
Request to delete backup "test01" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup test2 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup test2 --confirm
Request to delete backup "test2" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:630.924 seconds]
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:89
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74
------------------------------
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Get Version Command:/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero version --timeout 60s --client-only
Version:Client: Version: main Git commit: 7602577306d4a6019944c47c20e7735cb3b5d391-dirty
addPlugins cmd =
provider cmd = aws
plugins cmd = [velero/velero-plugin-for-aws:main]
installPluginCmd cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero plugin add velero/velero-plugin-for-aws:main
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup-location bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --provider aws --bucket mqiu-bucket --credential bsl-credentials-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3=creds-aws
Backup storage location "bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3" configured successfully.
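The --credential flag points at an existing Kubernetes secret in <secret-name>=<key> form. A sketch of how such a secret is typically created beforehand, following the documented Velero pattern (the credentials file path here is a placeholder):
/usr/local/bin/kubectl create secret generic -n velero bsl-credentials-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --from-file=creds-aws=/path/to/aws-credentials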
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --wait --include-namespaces kibishii-workload --snapshot-volumes --storage-location default
Backup request "backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3` and `velero backup logs backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
{kibishii-workload [] 2 [kibishii-deployment-0 kibishii-deployment-1] false}
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
Snapshot count 2 is as expected 0
|| EXPECTED || - Snapshots exist in cloud, backup backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
Simulating a disaster by removing namespace kibishii-workload
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
Waiting 5 minutes to make sure the snapshots are ready...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --from-backup backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --wait
Restore request "restore-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
..
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3` and `velero restore logs restore-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
Waiting for kibishii pods to be ready
Pod jump-pad is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running (repeated 5 times)
Pod kibishii-deployment-1 is in state Pending waiting for it to be Running (repeated 2 times)
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --wait --include-namespaces kibishii-workload --snapshot-volumes --storage-location bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
Backup request "backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
.....
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3` and `velero backup logs backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
{kibishii-workload [] 2 [kibishii-deployment-0 kibishii-deployment-1] false}
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
Snapshot count 2 is as expected 0
|| EXPECTED || - Snapshots exist in cloud, backup backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
Simulating a disaster by removing namespace kibishii-workload
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
Waiting 5 minutes to make sure the snapshots are ready...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --from-backup backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --wait
Restore request "restore-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
....
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3` and `velero restore logs restore-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3
Waiting for kibishii pods to be ready
Pod etcd0 is in state Pending waiting for it to be Running
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running (repeated 7 times)
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
STEP: Clean backups after test
Backup backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --confirm
Request to delete backup "backup-bsl-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3 --confirm
Request to delete backup "backup-default-c42a6ea4-8d0e-447d-abf0-0541c9bb39f3" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-rbac is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-rbac --confirm
Request to delete backup "backup-rbac" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup test2 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup test2 --confirm
Request to delete backup "test2" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:1211.981 seconds]
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:89
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[Basic][ClusterResource] Backup/restore of cluster resources
Should be successfully backed up and restored including annotations
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
daemonset.apps/node-agent created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Waiting for node-agent daemonset to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Running test case Backup/restore namespace annotation test
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d --include-namespaces namespace-annotations-1e3ca30f-b036-4738-a4dc-4851fe11ae9d-0 --default-volumes-to-fs-backup --wait
Backup request "backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d` and `velero backup logs backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d
namespace "namespace-annotations-1e3ca30f-b036-4738-a4dc-4851fe11ae9d-0" is still being deleted...
namespace "namespace-annotations-1e3ca30f-b036-4738-a4dc-4851fe11ae9d-0" is still being deleted...
Delete namespace namespace-annotations-1e3ca30f-b036-4738-a4dc-4851fe11ae9d-0
STEP: Start to restore ......
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d --from-backup backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d --wait
Restore request "restore-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d` and `velero restore logs restore-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d
STEP: Clean namespace with prefix namespace-annotations-1e3ca30f-b036-4738-a4dc-4851fe11ae9d after test
STEP: Clean backups after test
Backup backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d --confirm
Request to delete backup "backup-namespace-annotations1e3ca30f-b036-4738-a4dc-4851fe11ae9d" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup test2 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup test2 --confirm
Request to delete backup "test2" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
• [SLOW TEST:29.731 seconds]
[Basic][ClusterResource] Backup/restore of cluster resources
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91
Should be successfully backed up and restored including annotations
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135
------------------------------
[Basic][ClusterResource] Backup/restore of cluster resources
When I create 2 namespaces should be successfully backed up and restored
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135
Running test case
Creating namespaces ...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a --exclude-namespaces default,kube-node-lease,kube-public,kube-system,namespace-annotations-1e3ca30f-b036-4738-a4dc-4851fe11ae9d-0,upgrade,upgrade01,velero,velero-repo-test --default-volumes-to-fs-backup --wait
Backup request "backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
..
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a` and `velero backup logs backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a
STEP: Start to restore ......
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a --from-backup backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a --wait
Restore request "restore-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a` and `velero restore logs restore-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a
STEP: Clean namespace with prefix nstest-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a after test
STEP: Clean backups after test
Backup backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a --confirm
Request to delete backup "backup-204c49ee-5e02-41ca-8d0d-df32d0bdfe8a" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
• [SLOW TEST:14.608 seconds]
[Basic][ClusterResource] Backup/restore of cluster resources
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91
When I create 2 namespaces should be successfully backed up and restored
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135
------------------------------
[Basic][ClusterResource] Backup/restore of cluster resources
should be successfully backed up and restored
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135
Running test case Backup/restore of Namespaced Scoped and Cluster Scoped RBAC
Creating namespaces ...rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
Creating service account ...rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3 --include-namespaces rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0 --default-volumes-to-fs-backup --wait
Backup request "backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
..
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3` and `velero backup logs backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3
Cleaning up clusterrole clusterrole-rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
Cleaning up clusterrolebinding clusterrolebinding-rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
namespace "rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0" is still being deleted...
namespace "rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0" is still being deleted...
Delete namespace rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
STEP: Start to restore ......
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3 --from-backup backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3 --wait
Restore request "restore-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3` and `velero restore logs restore-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3
Cleaning up clusterrole clusterrole-rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
Cleaning up clusterrolebinding clusterrolebinding-rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
namespace "rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0" is still being deleted...
namespace "rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0" is still being deleted...
Delete namespace rabc-321d4b70-d62b-4e5f-acf8-1a7b172602f3-0
Velero uninstalled ⛵
• [SLOW TEST:36.348 seconds]
[Basic][ClusterResource] Backup/restore of cluster resources
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91
should be successfully backed up and restored
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135
------------------------------
SSSSSSSSSSS
------------------------------
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups when kibishii is the sample workload
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
daemonset.apps/node-agent created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Waiting for node-agent daemonset to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-003259ea-b23b-4be9-b4e5-abe6862bab4b --wait --include-namespaces kibishii-workload --default-volumes-to-fs-backup --snapshot-volumes=false
Backup request "backup-003259ea-b23b-4be9-b4e5-abe6862bab4b" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
................
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-003259ea-b23b-4be9-b4e5-abe6862bab4b` and `velero backup logs backup-003259ea-b23b-4be9-b4e5-abe6862bab4b`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-003259ea-b23b-4be9-b4e5-abe6862bab4b
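Because this backup runs with --default-volumes-to-fs-backup --snapshot-volumes=false, volume data goes through the node-agent rather than provider snapshots. Per-volume progress is recorded in PodVolumeBackup resources; assuming the standard velero.io/backup-name label, they can be listed with:
/usr/local/bin/kubectl -n velero get podvolumebackups -l velero.io/backup-name=backup-003259ea-b23b-4be9-b4e5-abe6862bab4b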
Simulating a disaster by removing namespace kibishii-workload
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-003259ea-b23b-4be9-b4e5-abe6862bab4b --from-backup backup-003259ea-b23b-4be9-b4e5-abe6862bab4b --wait
Restore request "restore-003259ea-b23b-4be9-b4e5-abe6862bab4b" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
.....................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-003259ea-b23b-4be9-b4e5-abe6862bab4b` and `velero restore logs restore-003259ea-b23b-4be9-b4e5-abe6862bab4b`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-003259ea-b23b-4be9-b4e5-abe6862bab4b
Waiting for kibishii pods to be ready
Pod kibishii-deployment-1 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
STEP: Clean backups after test
Backup backup-003259ea-b23b-4be9-b4e5-abe6862bab4b is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-003259ea-b23b-4be9-b4e5-abe6862bab4b --confirm
Request to delete backup "backup-003259ea-b23b-4be9-b4e5-abe6862bab4b" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3 is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3 --confirm
Request to delete backup "backup-rbac321d4b70-d62b-4e5f-acf8-1a7b172602f3" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Velero uninstalled ⛵
• [SLOW TEST:305.082 seconds]
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:87
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73
should be successfully backed up and restored to the default BackupStorageLocation
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74
------------------------------
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups when kibishii is the sample workload
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:add-new-resource-filters --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
daemonset.apps/node-agent created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Waiting for node-agent daemonset to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Get Version Command:/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero version --timeout 60s --client-only
Version:Client: Version: main Git commit: 7602577306d4a6019944c47c20e7735cb3b5d391-dirty
addPlugins cmd =
provider cmd = aws
plugins cmd = [velero/velero-plugin-for-aws:main]
installPluginCmd cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero plugin add velero/velero-plugin-for-aws:main
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup-location bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --provider aws --bucket mqiu-bucket --credential bsl-credentials-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d=creds-aws
Backup storage location "bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d" configured successfully.
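The additional BackupStorageLocation above uses per-location credentials via `--credential <secret-name>=<key>`, so the referenced secret must already exist in the velero namespace. A sketch of the full setup; the secret-creation command and the credentials file path are assumptions, while the plugin and backup-location commands mirror the log:

# Assumed prerequisite: a secret whose key "creds-aws" holds an
# AWS credentials file (the path below is hypothetical).
kubectl -n velero create secret generic \
  bsl-credentials-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d \
  --from-file=creds-aws=/root/creds-aws

# Register the AWS object-store plugin, then point the new BSL at it.
velero --namespace velero plugin add velero/velero-plugin-for-aws:main
velero --namespace velero create backup-location \
  bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d \
  --provider aws --bucket mqiu-bucket \
  --credential bsl-credentials-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d=creds-aws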
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
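generate.sh writes the sample data set, and verify.sh later re-reads it with the identical seven positional arguments, which is what makes the post-restore check meaningful; the argument semantics are not spelled out in this log, so they are left as the opaque values the harness passes. The pair, run by hand:

# Write the test data into kibishii's volumes...
kubectl exec -n kibishii-workload jump-pad -- \
  /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
# ...and, after the restore, verify it with the same arguments.
kubectl exec -n kibishii-workload jump-pad -- \
  /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2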
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --wait --include-namespaces kibishii-workload --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location default
Backup request "backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
..............
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d` and `velero backup logs backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d
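Two flags on the backup above matter most: `--default-volumes-to-fs-backup` opts every pod volume into file-system backup through the node agent (the reason the install used `--use-node-agent`), while `--snapshot-volumes=false` turns the provider snapshot path off. After `--wait` returns, the harness re-reads the backup as JSON to assert on its phase; a sketch of the same check, with jq (and the output being a single Backup object) as assumptions about tooling:

# Confirm the backup reached the Completed phase.
velero --namespace velero backup get -o json \
  backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d \
  | jq -r '.status.phase'   # expect: Completed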
Simulating a disaster by removing namespace kibishii-workload
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --from-backup backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --wait
Restore request "restore-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
.....................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d` and `velero restore logs restore-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d
Waiting for kibishii pods to be ready
Pod kibishii-deployment-1 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --wait --include-namespaces kibishii-workload --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d
Backup request "backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
.................
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d` and `velero backup logs backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d`.
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d
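This second pass repeats the cycle with one functional change: `--storage-location` routes the backup to the AWS bucket behind the additional BSL instead of the default GCP one. The variant command, with values copied from the log:

# Same backup as before, but stored in the extra BSL.
velero --namespace velero create backup \
  backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d \
  --include-namespaces kibishii-workload \
  --default-volumes-to-fs-backup --snapshot-volumes=false \
  --storage-location bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --wait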
Simulating a disaster by removing namespace kibishii-workload
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --from-backup backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --wait
Restore request "restore-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
.............................................................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d` and `velero restore logs restore-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d`.
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d
Waiting for kibishii pods to be ready
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
namespace "kibishii-workload" is still being deleted...
STEP: Clean backups after test
Backup backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --confirm
Request to delete backup "backup-bsl-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d --confirm
Request to delete backup "backup-default-248ffca4-b2dc-4b54-b2b0-3b66cfe3061d" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Backup backup-rbac is going to be deleted...
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-rbac --confirm
Request to delete backup "backup-rbac" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
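As the messages above note, `velero backup delete --confirm` only submits a DeleteBackupRequest; snapshots and object-store data are removed asynchronously by the server. A sketch of deleting and then waiting for the Backup object itself to vanish; the wait loop is an assumption, not something the harness shows here:

# Submit the deletion, then wait for the Backup resource to disappear.
velero --namespace velero delete backup backup-rbac --confirm
while kubectl -n velero get backups.velero.io backup-rbac >/dev/null 2>&1; do
  sleep 5   # server-side cleanup of snapshots/files is asynchronous
done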
Velero uninstalled ⛵
• [SLOW TEST:632.398 seconds]
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:87
when kibishii is the sample workload
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73
should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83
------------------------------
JUnit report was created: /root/go/src/github.com/blackpiglet/velero/test/e2e/report.xml
Ran 7 of 44 Specs in 2861.077 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 37 Skipped
PASS
You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
- For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
- To comment, chime in at https://github.com/onsi/ginkgo/issues/711
You are using a custom reporter. Support for custom reporters will likely be removed in V2. Most users were using them to generate junit or teamcity reports and this functionality will be merged into the core reporter. In addition, Ginkgo 2.0 will support emitting a JSON-formatted report that users can then manipulate to generate custom reports.
If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=1.16.5
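Per the notice above, these warnings can be silenced by setting that variable in the environment of the test invocation, e.g.:

# Silence Ginkgo 1.16.5 deprecation notices on subsequent runs.
export ACK_GINKGO_DEPRECATIONS=1.16.5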
Ginkgo ran 1 suite in 52m10.080676411s
Test Suite Passed
make[1]: Leaving directory '/root/go/src/github.com/blackpiglet/velero/test/e2e'