The basic E2E test result of the restore controller refactor
GOOS=linux \
GOARCH=amd64 \
VERSION=main \
REGISTRY=velero \
PKG=github.com/vmware-tanzu/velero \
BIN=velero \
GIT_SHA=2a4daf4336a2a1bd921d439debd73093f6fb47f6 \
GIT_TREE_STATE=dirty \
OUTPUT_DIR=$(pwd)/_output/bin/linux/amd64 \
./hack/build.sh

make -e VERSION=main -C test/e2e run
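Only 7 of the 44 specs run below, apparently filtered down to the [Basic] group; a similar subset can be selected through the e2e Makefile, assuming its GINKGO_FOCUS variable (an assumption, not shown in this run):

# Hypothetical invocation: restrict the run to specs whose description matches "Basic"
make -e VERSION=main GINKGO_FOCUS=Basic -C test/e2e run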
make[1]: Entering directory '/root/go/src/github.com/blackpiglet/velero/test/e2e' | |
go install github.com/onsi/ginkgo/ginkgo@v1.16.5 | |
Using credentials from /root/credentials-velero | |
Using bucket jxun to store backups from E2E tests | |
Using cloud provider gcp | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
STEP: Create test client instance | |
Running Suite: E2e Suite | |
======================== | |
Random Seed: 1676547088 | |
Will run 7 of 44 specs | |
SSSSSSSSSSSS | |
------------------------------ | |
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups when kibishii is the sample workload | |
should be successfully backed up and restored to the default BackupStorageLocation | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74 | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only" | |
Applying velero CRDs... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created | |
Waiting velero CRDs ready... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json" | |
image pull secret "image-pull-secret" set for velero serviceaccount | |
Running cmd "/usr/local/bin/kubectl apply -f -" | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged | |
namespace/velero created | |
clusterrolebinding.rbac.authorization.k8s.io/velero created | |
serviceaccount/velero created | |
secret/cloud-credentials created | |
backupstoragelocation.velero.io/default created | |
volumesnapshotlocation.velero.io/default created | |
deployment.apps/velero created | |
daemonset.apps/node-agent created | |
secret/image-pull-secret created | |
Waiting for Velero deployment to be ready. | |
Waiting for node-agent daemonset to be ready. | |
Velero is installed and ready to be tested in the velero namespace! ⛵ | |
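Before the Kibishii workload is installed, the fresh install can be sanity-checked by hand; a minimal sketch, assuming the kubeconfig points at the test cluster and the built velero binary is on PATH (not part of the captured output):

# Confirm the server deployment, the node-agent daemonset, and the default BSL are present
kubectl -n velero get deployment/velero daemonset/node-agent
velero --namespace velero backup-location get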
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s | |
Waiting for kibishii jump-pad pod to be ready | |
Waiting for kibishii pods to be ready | |
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 --wait --include-namespaces kibishii-workload --default-volumes-to-fs-backup --snapshot-volumes=false | |
Backup request "backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.............. | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7` and `velero backup logs backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 | |
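The JSON retrieved above is what the test asserts against; the same phase check can be done manually, assuming jq is available and that a named `backup get -o json` prints a single Backup object:

# Should print "Completed" for this run
velero --namespace velero backup get -o json backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 | jq -r '.status.phase'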
Simulating a disaster by removing namespace kibishii-workload | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 --from-backup backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 --wait | |
Restore request "restore-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
................................................................ | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7` and `velero restore logs restore-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 | |
Waiting for kibishii pods to be ready | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
running kibishii verify | |
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2 | |
kibishii test completed successfully | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
STEP: Clean backups after test | |
Backup backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7 --confirm | |
Request to delete backup "backup-8ebd456a-8cb8-49b5-acfe-2bb2f36611d7" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Backup backup-rbacba64060d-fff6-4fec-937c-29c4885be506 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-rbacba64060d-fff6-4fec-937c-29c4885be506 --confirm | |
Request to delete backup "backup-rbacba64060d-fff6-4fec-937c-29c4885be506" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Velero uninstalled ⛵ | |
• [SLOW TEST:352.107 seconds] | |
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:87 | |
when kibishii is the sample workload | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73 | |
should be successfully backed up and restored to the default BackupStorageLocation | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74 | |
------------------------------ | |
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups when kibishii is the sample workload | |
should successfully back up and restore to an additional BackupStorageLocation with unique credentials | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83 | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only" | |
Applying velero CRDs... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created | |
Waiting velero CRDs ready... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json" | |
image pull secret "image-pull-secret" set for velero serviceaccount | |
Running cmd "/usr/local/bin/kubectl apply -f -" | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged | |
namespace/velero created | |
clusterrolebinding.rbac.authorization.k8s.io/velero created | |
serviceaccount/velero created | |
secret/cloud-credentials created | |
backupstoragelocation.velero.io/default created | |
volumesnapshotlocation.velero.io/default created | |
deployment.apps/velero created | |
daemonset.apps/node-agent created | |
secret/image-pull-secret created | |
Waiting for Velero deployment to be ready. | |
Waiting for node-agent daemonset to be ready. | |
Velero is installed and ready to be tested in the velero namespace! ⛵ | |
Get Version Command:/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero version --timeout 60s --client-only | |
Version:Client: Version: main Git commit: 2a4daf4336a2a1bd921d439debd73093f6fb47f6-dirty | |
addPlugins cmd = | |
provider cmd = aws | |
plugins cmd = [velero/velero-plugin-for-aws:main] | |
installPluginCmd cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero plugin add velero/velero-plugin-for-aws:main | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup-location bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --provider aws --bucket mqiu-bucket --credential bsl-credentials-f40dec76-4b1b-4a6f-9f2e-211a7a900d23=creds-aws | |
Backup storage location "bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23" configured successfully. | |
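The --credential flag ties the new BackupStorageLocation to a key inside an existing Kubernetes secret; the test harness creates that secret before this step, roughly as sketched below (the credentials file path is a placeholder, not taken from this run):

# Create the per-BSL secret whose creds-aws key is referenced by --credential above
kubectl -n velero create secret generic bsl-credentials-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 \
  --from-file=creds-aws=/path/to/aws-credentials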
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s | |
Waiting for kibishii jump-pad pod to be ready | |
Waiting for kibishii pods to be ready | |
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --wait --include-namespaces kibishii-workload --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location default | |
Backup request "backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
............... | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23` and `velero backup logs backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 | |
Simulating a disaster by removing namespace kibishii-workload | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --from-backup backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --wait | |
Restore request "restore-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
.................... | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23` and `velero restore logs restore-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 | |
Waiting for kibishii pods to be ready | |
Pod kibishii-deployment-1 is in state Pending waiting for it to be Running | |
running kibishii verify | |
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2 | |
kibishii test completed successfully | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s | |
Waiting for kibishii jump-pad pod to be ready | |
Waiting for kibishii pods to be ready | |
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --wait --include-namespaces kibishii-workload --default-volumes-to-fs-backup --snapshot-volumes=false --storage-location bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 | |
Backup request "backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
............... | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23` and `velero backup logs backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 | |
Simulating a disaster by removing namespace kibishii-workload | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --from-backup backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --wait | |
Restore request "restore-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
................. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23` and `velero restore logs restore-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 | |
Waiting for kibishii pods to be ready | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
running kibishii verify | |
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2 | |
kibishii test completed successfully | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
STEP: Clean backups after test | |
Backup backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --confirm | |
Request to delete backup "backup-bsl-f40dec76-4b1b-4a6f-9f2e-211a7a900d23" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Backup backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23 --confirm | |
Request to delete backup "backup-default-f40dec76-4b1b-4a6f-9f2e-211a7a900d23" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Backup t0 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup t0 --confirm | |
Request to delete backup "t0" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Velero uninstalled ⛵ | |
• [SLOW TEST:600.871 seconds] | |
[Basic][Restic] Velero tests on cluster using the plugin provider for object storage and Restic for volume backups | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:87 | |
when kibishii is the sample workload | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73 | |
should successfully back up and restore to an additional BackupStorageLocation with unique credentials | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83 | |
------------------------------ | |
SS | |
------------------------------ | |
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload | |
should be successfully backed up and restored to the default BackupStorageLocation | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74 | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only" | |
Applying velero CRDs... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created | |
Waiting velero CRDs ready... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json" | |
image pull secret "image-pull-secret" set for velero serviceaccount | |
Running cmd "/usr/local/bin/kubectl apply -f -" | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged | |
namespace/velero created | |
clusterrolebinding.rbac.authorization.k8s.io/velero created | |
serviceaccount/velero created | |
secret/cloud-credentials created | |
backupstoragelocation.velero.io/default created | |
volumesnapshotlocation.velero.io/default created | |
deployment.apps/velero created | |
secret/image-pull-secret created | |
Waiting for Velero deployment to be ready. | |
Velero is installed and ready to be tested in the velero namespace! ⛵ | |
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s | |
Waiting for kibishii jump-pad pod to be ready | |
Waiting for kibishii pods to be ready | |
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee --wait --include-namespaces kibishii-workload --snapshot-volumes | |
Backup request "backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.... | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee` and `velero backup logs backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee | |
{kibishii-workload [] 2 [kibishii-deployment-0 kibishii-deployment-1] false} | |
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee | |
Snapshot count 2 is as expected 0 | |
|| EXPECTED || - Snapshots exist in cloud, backup backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee | |
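The snapshot count above is verified against the cloud provider; the native snapshots recorded for the backup can also be inspected from the CLI while the backup still exists, for example:

# Lists the volume snapshots taken for this backup, including provider snapshot IDs
velero --namespace velero backup describe backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee --details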
Simulating a disaster by removing namespace kibishii-workload | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
Waiting 5 minutes to make sure the snapshots are ready... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-c41d4067-93b7-4223-a80a-25a11aa8f1ee --from-backup backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee --wait | |
Restore request "restore-c41d4067-93b7-4223-a80a-25a11aa8f1ee" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
.. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-c41d4067-93b7-4223-a80a-25a11aa8f1ee` and `velero restore logs restore-c41d4067-93b7-4223-a80a-25a11aa8f1ee`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-c41d4067-93b7-4223-a80a-25a11aa8f1ee | |
Waiting for kibishii pods to be ready | |
Pod jump-pad is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-1 is in state Pending waiting for it to be Running | |
running kibishii verify | |
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2 | |
kibishii test completed successfully | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
STEP: Clean backups after test | |
Backup backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee --confirm | |
Request to delete backup "backup-c41d4067-93b7-4223-a80a-25a11aa8f1ee" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Velero uninstalled ⛵ | |
• [SLOW TEST:604.377 seconds] | |
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:89 | |
when kibishii is the sample workload | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73 | |
should be successfully backed up and restored to the default BackupStorageLocation | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:74 | |
------------------------------ | |
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload | |
should successfully back up and restore to an additional BackupStorageLocation with unique credentials | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83 | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only" | |
Applying velero CRDs... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created | |
Waiting velero CRDs ready... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-volume-snapshots --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json" | |
image pull secret "image-pull-secret" set for velero serviceaccount | |
Running cmd "/usr/local/bin/kubectl apply -f -" | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged | |
namespace/velero created | |
clusterrolebinding.rbac.authorization.k8s.io/velero created | |
serviceaccount/velero created | |
secret/cloud-credentials created | |
backupstoragelocation.velero.io/default created | |
volumesnapshotlocation.velero.io/default created | |
deployment.apps/velero created | |
secret/image-pull-secret created | |
Waiting for Velero deployment to be ready. | |
Velero is installed and ready to be tested in the velero namespace! ⛵ | |
Get Version Command:/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero version --timeout 60s --client-only | |
Version:Client: Version: main Git commit: 2a4daf4336a2a1bd921d439debd73093f6fb47f6-dirty | |
addPlugins cmd = | |
provider cmd = aws | |
plugins cmd = [velero/velero-plugin-for-aws:main] | |
installPluginCmd cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero plugin add velero/velero-plugin-for-aws:main | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup-location bsl-6a5c383e-60bd-407c-8478-eba9ff166396 --provider aws --bucket mqiu-bucket --credential bsl-credentials-6a5c383e-60bd-407c-8478-eba9ff166396=creds-aws | |
Backup storage location "bsl-6a5c383e-60bd-407c-8478-eba9ff166396" configured successfully. | |
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s | |
Waiting for kibishii jump-pad pod to be ready | |
Waiting for kibishii pods to be ready | |
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-default-6a5c383e-60bd-407c-8478-eba9ff166396 --wait --include-namespaces kibishii-workload --snapshot-volumes --storage-location default | |
Backup request "backup-default-6a5c383e-60bd-407c-8478-eba9ff166396" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.... | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-6a5c383e-60bd-407c-8478-eba9ff166396` and `velero backup logs backup-default-6a5c383e-60bd-407c-8478-eba9ff166396`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-default-6a5c383e-60bd-407c-8478-eba9ff166396 | |
{kibishii-workload [] 2 [kibishii-deployment-0 kibishii-deployment-1] false} | |
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-default-6a5c383e-60bd-407c-8478-eba9ff166396 | |
Snapshot count 2 is as expected 0 | |
|| EXPECTED || - Snapshots exist in cloud, backup backup-default-6a5c383e-60bd-407c-8478-eba9ff166396 | |
Simulating a disaster by removing namespace kibishii-workload | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
Waiting 5 minutes to make sure the snapshots are ready... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-default-6a5c383e-60bd-407c-8478-eba9ff166396 --from-backup backup-default-6a5c383e-60bd-407c-8478-eba9ff166396 --wait | |
Restore request "restore-default-6a5c383e-60bd-407c-8478-eba9ff166396" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
.. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-6a5c383e-60bd-407c-8478-eba9ff166396` and `velero restore logs restore-default-6a5c383e-60bd-407c-8478-eba9ff166396`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-default-6a5c383e-60bd-407c-8478-eba9ff166396 | |
Waiting for kibishii pods to be ready | |
Pod jump-pad is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
running kibishii verify | |
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2 | |
kibishii test completed successfully | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
Install Kibishii cmd: /usr/local/bin/kubectl apply -n kibishii-workload -k github.com/vmware-tanzu-experiments/distributed-data-generator/kubernetes/yaml/gcp --timeout=90s | |
Waiting for kibishii jump-pad pod to be ready | |
Waiting for kibishii pods to be ready | |
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create backup backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 --wait --include-namespaces kibishii-workload --snapshot-volumes --storage-location bsl-6a5c383e-60bd-407c-8478-eba9ff166396 | |
Backup request "backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.... | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396` and `velero backup logs backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 | |
{kibishii-workload [] 2 [kibishii-deployment-0 kibishii-deployment-1] false} | |
|| VERIFICATION || - Snapshots should exist in cloud, backup backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 | |
Snapshot count 2 is as expected 0 | |
|| EXPECTED || - Snapshots exist in cloud, backup backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 | |
Simulating a disaster by removing namespace kibishii-workload | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
Waiting 5 minutes to make sure the snapshots are ready... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero create restore restore-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 --from-backup backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 --wait | |
Restore request "restore-bsl-6a5c383e-60bd-407c-8478-eba9ff166396" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
... | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-bsl-6a5c383e-60bd-407c-8478-eba9ff166396` and `velero restore logs restore-bsl-6a5c383e-60bd-407c-8478-eba9ff166396`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 | |
Waiting for kibishii pods to be ready | |
Pod etcd0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
Pod kibishii-deployment-0 is in state Pending waiting for it to be Running | |
running kibishii verify | |
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2 | |
kibishii test completed successfully | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
namespace "kibishii-workload" is still being deleted... | |
STEP: Clean backups after test | |
Backup backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396 --confirm | |
Request to delete backup "backup-bsl-6a5c383e-60bd-407c-8478-eba9ff166396" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Backup backup-default-6a5c383e-60bd-407c-8478-eba9ff166396 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-default-6a5c383e-60bd-407c-8478-eba9ff166396 --confirm | |
Request to delete backup "backup-default-6a5c383e-60bd-407c-8478-eba9ff166396" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Backup t0 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup t0 --confirm | |
Request to delete backup "t0" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
Velero uninstalled ⛵ | |
• [SLOW TEST:1236.307 seconds] | |
[Basic][Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:89 | |
when kibishii is the sample workload | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:73 | |
should successfully back up and restore to an additional BackupStorageLocation with unique credentials | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/backup/backup.go:83 | |
------------------------------ | |
SSSSSSSSSSSSSSSSS | |
------------------------------ | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
Should be successfully backed up and restored including annotations | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135 | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json --crds-only" | |
Applying velero CRDs... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created | |
Waiting velero CRDs ready... | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met | |
Running cmd "/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero install --namespace velero --image blackpiglet/velero:restore-controller-refactor-3 --use-node-agent --provider gcp --bucket jxun --secret-file /root/credentials-velero --plugins velero/velero-plugin-for-gcp:v1.6.1 --dry-run --output json" | |
image pull secret "image-pull-secret" set for velero serviceaccount | |
Running cmd "/usr/local/bin/kubectl apply -f -" | |
customresourcedefinition.apiextensions.k8s.io/backuprepositories.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged | |
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged | |
namespace/velero created | |
clusterrolebinding.rbac.authorization.k8s.io/velero created | |
serviceaccount/velero created | |
secret/cloud-credentials created | |
backupstoragelocation.velero.io/default created | |
volumesnapshotlocation.velero.io/default created | |
deployment.apps/velero created | |
daemonset.apps/node-agent created | |
secret/image-pull-secret created | |
Waiting for Velero deployment to be ready. | |
Waiting for node-agent daemonset to be ready. | |
Velero is installed and ready to be tested in the velero namespace! ⛵ | |
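The two readiness waits above cover the server Deployment and the node-agent DaemonSet created by the install. A minimal sketch of an equivalent manual check, assuming the velero namespace and the object names printed above (deployment.apps/velero, daemonset.apps/node-agent); the E2E harness does its own polling, so this is only illustrative:
# confirm the server deployment finished rolling out
kubectl -n velero rollout status deployment/velero
# confirm the node-agent daemonset has pods scheduled and ready
kubectl -n velero rollout status daemonset/node-agent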
Running test case Backup/restore namespace annotation test | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02 --include-namespaces namespace-annotations-c1fea887-e7f4-4675-8854-b63040ba4c02-0 --default-volumes-to-fs-backup --wait | |
Backup request "backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
........... | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02` and `velero backup logs backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02 | |
namespace "namespace-annotations-c1fea887-e7f4-4675-8854-b63040ba4c02-0" is still being deleted... | |
namespace "namespace-annotations-c1fea887-e7f4-4675-8854-b63040ba4c02-0" is still being deleted... | |
Delete namespace namespace-annotations-c1fea887-e7f4-4675-8854-b63040ba4c02-0 | |
STEP: Start to restore ...... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02 --from-backup backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02 --wait | |
Restore request "restore-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02` and `velero restore logs restore-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02 | |
STEP: Clean namespace with prefix namespace-annotations-c1fea887-e7f4-4675-8854-b63040ba4c02 after test | |
STEP: Clean backups after test | |
Backup backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02 --confirm | |
Request to delete backup "backup-namespace-annotationsc1fea887-e7f4-4675-8854-b63040ba4c02" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
• [SLOW TEST:43.259 seconds] | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91 | |
Should be successfully backed up and restored including annotations | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135 | |
------------------------------ | |
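The namespace-annotation test above is a plain round trip through the velero CLI: back up one namespace, delete it, restore it, then delete the backup. A hedged sketch of the same cycle, with NS and BACKUP as hypothetical placeholders for the UUID-suffixed names the test generates:
# back up a single namespace, sending its volumes through file-system backup
velero create --namespace velero backup "$BACKUP" \
  --include-namespaces "$NS" \
  --default-volumes-to-fs-backup --wait
# remove the namespace so the restore has something to recreate
kubectl delete namespace "$NS"
# restore from the backup and wait for completion
velero create --namespace velero restore "restore-$BACKUP" \
  --from-backup "$BACKUP" --wait
# clean up: the backup and its data are deleted asynchronously
velero --namespace velero delete backup "$BACKUP" --confirm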
[Basic][ClusterResource] Backup/restore of cluster resources | |
When I create 2 namespaces should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135 | |
Running test case | |
Creating namespaces ... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-d01cfa15-006f-4bc0-8dde-616b6c70d778 --exclude-namespaces default,kube-node-lease,kube-public,kube-system,namespace-annotations-c1fea887-e7f4-4675-8854-b63040ba4c02-0,upgrade,upgrade01,velero,velero-repo-test --default-volumes-to-fs-backup --wait | |
Backup request "backup-d01cfa15-006f-4bc0-8dde-616b6c70d778" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.. | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-d01cfa15-006f-4bc0-8dde-616b6c70d778` and `velero backup logs backup-d01cfa15-006f-4bc0-8dde-616b6c70d778`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-d01cfa15-006f-4bc0-8dde-616b6c70d778 | |
STEP: Start to restore ...... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-d01cfa15-006f-4bc0-8dde-616b6c70d778 --from-backup backup-d01cfa15-006f-4bc0-8dde-616b6c70d778 --wait | |
Restore request "restore-d01cfa15-006f-4bc0-8dde-616b6c70d778" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-d01cfa15-006f-4bc0-8dde-616b6c70d778` and `velero restore logs restore-d01cfa15-006f-4bc0-8dde-616b6c70d778`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-d01cfa15-006f-4bc0-8dde-616b6c70d778 | |
STEP: Clean namespace with prefix nstest-d01cfa15-006f-4bc0-8dde-616b6c70d778 after test | |
STEP: Clean backups after test | |
Backup backup-d01cfa15-006f-4bc0-8dde-616b6c70d778 is going to be deleted... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero delete backup backup-d01cfa15-006f-4bc0-8dde-616b6c70d778 --confirm | |
Request to delete backup "backup-d01cfa15-006f-4bc0-8dde-616b6c70d778" submitted successfully. | |
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. | |
• [SLOW TEST:16.778 seconds] | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91 | |
When I create 2 namespaces should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135 | |
------------------------------ | |
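In this spec the backup excludes a fixed set of namespaces rather than naming the two freshly created ones, and the harness then fetches the Backup and Restore objects as JSON (the get -o json commands above) to assert on their phase. A hedged equivalent check by hand, assuming jq is available; .status.phase is the standard Velero status field, though the exact shape of the get -o json output can differ between Velero versions:
velero --namespace velero backup get -o json "$BACKUP" | jq -r '.status.phase'            # expect Completed
velero --namespace velero restore get -o json "restore-$BACKUP" | jq -r '.status.phase'   # expect Completed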
[Basic][ClusterResource] Backup/restore of cluster resources | |
should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135 | |
Running test case Backup/restore of Namespaced Scoped and Cluster Scoped RBAC | |
Creating namespaces ...rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
Creating service account ...rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero backup backup-rbac65147b5b-7d04-41a7-8547-95a50170ac63 --include-namespaces rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 --default-volumes-to-fs-backup --wait | |
Backup request "backup-rbac65147b5b-7d04-41a7-8547-95a50170ac63" submitted successfully. | |
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. | |
.. | |
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-rbac65147b5b-7d04-41a7-8547-95a50170ac63` and `velero backup logs backup-rbac65147b5b-7d04-41a7-8547-95a50170ac63`. | |
get backup cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-rbac65147b5b-7d04-41a7-8547-95a50170ac63 | |
Cleaning up clusterrole clusterrole-rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
Cleaning up clusterrolebinding clusterrolebinding-rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
namespace "rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0" is still being deleted... | |
namespace "rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0" is still being deleted... | |
Delete namespace rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
STEP: Start to restore ...... | |
velero cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero create --namespace velero restore restore-rbac65147b5b-7d04-41a7-8547-95a50170ac63 --from-backup backup-rbac65147b5b-7d04-41a7-8547-95a50170ac63 --wait | |
Restore request "restore-rbac65147b5b-7d04-41a7-8547-95a50170ac63" submitted successfully. | |
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. | |
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-rbac65147b5b-7d04-41a7-8547-95a50170ac63` and `velero restore logs restore-rbac65147b5b-7d04-41a7-8547-95a50170ac63`. | |
get restore cmd =/root/go/src/github.com/blackpiglet/velero/_output/bin/linux/amd64/velero --namespace velero restore get -o json restore-rbac65147b5b-7d04-41a7-8547-95a50170ac63 | |
Cleaning up clusterrole clusterrole-rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
Cleaning up clusterrolebinding clusterrolebinding-rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
namespace "rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0" is still being deleted... | |
namespace "rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0" is still being deleted... | |
Delete namespace rabc-65147b5b-7d04-41a7-8547-95a50170ac63-0 | |
Velero uninstalled ⛵ | |
• [SLOW TEST:38.716 seconds] | |
[Basic][ClusterResource] Backup/restore of cluster resources | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/e2e_suite_test.go:91 | |
should be successfully backed up and restored | |
/root/go/src/github.com/blackpiglet/velero/test/e2e/test/test.go:135 | |
------------------------------ | |
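The RBAC spec backs up a namespace holding a service account together with the cluster-scoped ClusterRole and ClusterRoleBinding named in the cleanup lines above, then restores them after everything has been deleted. A hedged sketch of how the restored objects could be spot-checked, with <uuid> standing in for the generated suffix; the authoritative assertions live in the Go test code at test/e2e/test/test.go, not in shell:
# namespace-scoped: the service account should be back in the restored namespace
kubectl -n rabc-<uuid>-0 get serviceaccounts
# cluster-scoped: the role and binding should exist again after the restore
kubectl get clusterrole clusterrole-rabc-<uuid>-0
kubectl get clusterrolebinding clusterrolebinding-rabc-<uuid>-0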
SSSSSS | |
JUnit report was created: /root/go/src/github.com/blackpiglet/velero/test/e2e/report.xml | |
Ran 7 of 44 Specs in 2892.424 seconds | |
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 37 Skipped | |
PASS | |
You're using deprecated Ginkgo functionality: | |
============================================= | |
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. | |
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback! | |
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md | |
- For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta | |
- To comment, chime in at https://github.com/onsi/ginkgo/issues/711 | |
You are using a custom reporter. Support for custom reporters will likely be removed in V2. Most users were using them to generate junit or teamcity reports and this functionality will be merged into the core reporter. In addition, Ginkgo 2.0 will support emitting a JSON-formatted report that users can then manipulate to generate custom reports. | |
If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711 | |
Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters | |
To silence deprecations that can be silenced set the following environment variable: | |
ACK_GINKGO_DEPRECATIONS=1.16.5 | |
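The Ginkgo v1.16.5 deprecation notice above is informational and does not affect the PASS result; as the hint says, it can be silenced by exporting the variable before running the suite again:
export ACK_GINKGO_DEPRECATIONS=1.16.5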
Ginkgo ran 1 suite in 48m25.131061313s | |
Test Suite Passed | |
make[1]: Leaving directory '/root/go/src/github.com/blackpiglet/velero/test/e2e' |