@McCoyAle
Created January 19, 2021 18:10
Velero Logs
ubuntu@cli-vm:/usr/local/share/ca-certificates$ velero backup describe tkg-mgmt1-cluster
Name:         tkg-mgmt1-cluster
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.19.3+vmware.1
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=19

Phase:  PartiallyFailed (run `velero backup logs tkg-mgmt1-cluster` for more information)

Errors:    1
Warnings:  0

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2020-12-24 17:02:55 -0700 MST
Completed:  2020-12-24 17:03:11 -0700 MST

Expiration:  2021-01-23 17:02:54 -0700 MST

Total items to be backed up:  901
Items backed up:              901

Velero-Native Snapshots: <none included>
------------------------------------------------------------
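Note: the describe output above reports Phase PartiallyFailed with a single error, while the log stream below is almost entirely informational. One way to surface only the failing item is to filter the same log command; the grep pattern here is an illustration and was not part of the original session:

velero backup logs tkg-mgmt1-cluster | grep -E 'level=(error|warning)'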
ubuntu@cli-vm:/usr/local/share/ca-certificates$ velero backup logs tkg-mgmt1-cluster
go:114" name=volumesnapshotlocations.velero.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:10Z" level=info msg="Found associated CRD certificaterequests.cert-manager.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:10Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=certificaterequests.cert-manager.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:10Z" level=info msg="Found associated CRD clusterresourcesetbindings.addons.cluster.x-k8s.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:10Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=clusterresourcesetbindings.addons.cluster.x-k8s.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:10Z" level=info msg="Found associated CRD vspheremachines.infrastructure.cluster.x-k8s.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:10Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=vspheremachines.infrastructure.cluster.x-k8s.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:10Z" level=info msg="Found associated CRD kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:10Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:10Z" level=info msg="Found associated CRD antreacontrollerinfos.clusterinformation.antrea.tanzu.vmware.com to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:10Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=antreacontrollerinfos.clusterinformation.antrea.tanzu.vmware.com namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Found associated CRD antreaagentinfos.clusterinformation.antrea.tanzu.vmware.com to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:11Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=antreaagentinfos.clusterinformation.antrea.tanzu.vmware.com namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Found associated CRD issuers.cert-manager.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:11Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=issuers.cert-manager.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Found associated CRD vsphereclusters.infrastructure.cluster.x-k8s.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:11Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=vsphereclusters.infrastructure.cluster.x-k8s.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Found associated CRD kubeadmconfigs.bootstrap.cluster.x-k8s.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:11Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=kubeadmconfigs.bootstrap.cluster.x-k8s.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Found associated CRD machinehealthchecks.cluster.x-k8s.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:11Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=machinehealthchecks.cluster.x-k8s.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Found associated CRD backupstoragelocations.velero.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:11Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=backupstoragelocations.velero.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Found associated CRD machinedeployments.cluster.x-k8s.io to add to backup" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:493"
time="2020-12-25T00:03:11Z" level=info msg="Skipping item because it's already been backed up." backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/item_backupper.go:114" name=machinedeployments.cluster.x-k8s.io namespace= resource=customresourcedefinitions.apiextensions.k8s.io
time="2020-12-25T00:03:11Z" level=info msg="Backed up a total of 901 items" backup=velero/tkg-mgmt1-cluster logSource="pkg/backup/backup.go:436" progress=
ubuntu@cli-vm:/usr/local/share/ca-certificates$ ^C
ubuntu@cli-vm:/usr/local/share/ca-certificates$
--------------------------------------------------------------------------------
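Note: the restore below is named mgmt1635. Assuming it was created from the tkg-mgmt1-cluster backup shown above, the invocation would have looked roughly like the following; the actual command and any extra flags are not captured in this session:

velero restore create mgmt1635 --from-backup tkg-mgmt1-cluster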
ubuntu@cli-vm:/usr/local/share/ca-certificates$ velero restore logs mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Attempting to restore ClusterRole: cert-manager-controller-orders" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Restore of ClusterRole, cert-manager-controller-orders skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Attempting to restore ClusterRole: cert-manager-edit" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Restore of ClusterRole, cert-manager-edit skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Attempting to restore ClusterRole: cert-manager-view" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Restore of ClusterRole, cert-manager-view skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Attempting to restore ClusterRole: cluster-admin" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Restore of ClusterRole, cluster-admin skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:22Z" level=info msg="Attempting to restore ClusterRole: edit" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:24Z" level=info msg="Restore of ClusterRole, edit skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:24Z" level=info msg="Attempting to restore ClusterRole: kubeadm:get-nodes" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:24Z" level=info msg="Restore of ClusterRole, kubeadm:get-nodes skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:24Z" level=info msg="Attempting to restore ClusterRole: system:aggregate-to-admin" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:aggregate-to-admin skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:aggregate-to-edit" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:aggregate-to-edit skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:aggregate-to-view" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:aggregate-to-view skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:auth-delegator" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:auth-delegator skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:basic-user" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:basic-user skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:certificates.k8s.io:certificatesigningrequests:nodeclient" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:certificates.k8s.io:certificatesigningrequests:nodeclient skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:certificates.k8s.io:certificatesigningrequests:selfnodeclient skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:certificates.k8s.io:kube-apiserver-client-approver" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:certificates.k8s.io:kube-apiserver-client-approver skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:certificates.k8s.io:kube-apiserver-client-kubelet-approver skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:certificates.k8s.io:kubelet-serving-approver" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:certificates.k8s.io:kubelet-serving-approver skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:certificates.k8s.io:legacy-unknown-approver" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:certificates.k8s.io:legacy-unknown-approver skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:cloud-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:cloud-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:attachdetach-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:attachdetach-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:certificate-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:certificate-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:clusterrole-aggregation-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:clusterrole-aggregation-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:cronjob-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:cronjob-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:daemon-set-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:daemon-set-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:deployment-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:deployment-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:disruption-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:disruption-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:endpoint-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:endpoint-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:endpointslice-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:endpointslice-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:endpointslicemirroring-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:endpointslicemirroring-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:expand-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:expand-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:generic-garbage-collector" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Restore of ClusterRole, system:controller:generic-garbage-collector skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:25Z" level=info msg="Attempting to restore ClusterRole: system:controller:horizontal-pod-autoscaler" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:horizontal-pod-autoscaler skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:job-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:job-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:namespace-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:namespace-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:node-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:node-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:persistent-volume-binder" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:persistent-volume-binder skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:pod-garbage-collector" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:pod-garbage-collector skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:pv-protection-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:pv-protection-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:pvc-protection-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:pvc-protection-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:replicaset-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Restore of ClusterRole, system:controller:replicaset-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:26Z" level=info msg="Attempting to restore ClusterRole: system:controller:replication-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:controller:replication-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:controller:resourcequota-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:controller:resourcequota-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:controller:route-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:controller:route-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:controller:service-account-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:controller:service-account-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:controller:service-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:controller:service-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:controller:statefulset-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:controller:statefulset-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:controller:ttl-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:controller:ttl-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:coredns" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:coredns skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:discovery" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:discovery skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:heapster" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:heapster skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:kube-aggregator" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Restore of ClusterRole, system:kube-aggregator skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:27Z" level=info msg="Attempting to restore ClusterRole: system:kube-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:kube-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:kube-dns" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:kube-dns skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:kube-scheduler" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:kube-scheduler skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:kubelet-api-admin" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:kubelet-api-admin skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:node-bootstrapper" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:node-bootstrapper skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:node-problem-detector" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:node-problem-detector skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:node-proxier" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:node-proxier skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:node" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:node skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:persistent-volume-provisioner" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Restore of ClusterRole, system:persistent-volume-provisioner skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:28Z" level=info msg="Attempting to restore ClusterRole: system:public-info-viewer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Restore of ClusterRole, system:public-info-viewer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Attempting to restore ClusterRole: system:volume-scheduler" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Restore of ClusterRole, system:volume-scheduler skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Attempting to restore ClusterRole: tkg-telemetry-cluster-role" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Restore of ClusterRole, tkg-telemetry-cluster-role skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Attempting to restore ClusterRole: view" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Restore of ClusterRole, view skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Attempting to restore ClusterRole: vsphere-csi-controller-role" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Restore of ClusterRole, vsphere-csi-controller-role skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:29Z" level=info msg="Restoring resource 'clusters.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:30Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=Cluster" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:30Z" level=info msg="Attempting to restore Cluster: my-cluster" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:30Z" level=info msg="Restoring resource 'clusters.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:31Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=Cluster" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:31Z" level=info msg="Attempting to restore Cluster: tkg-mgmt1-cluster" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:31Z" level=info msg="Restoring resource 'controllerrevisions.apps' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:32Z" level=info msg="Getting client for apps/v1, Kind=ControllerRevision" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:32Z" level=info msg="Attempting to restore ControllerRevision: antrea-agent-f66d49f78" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:33Z" level=info msg="Restore of ControllerRevision, antrea-agent-f66d49f78 skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:33Z" level=info msg="Attempting to restore ControllerRevision: kube-proxy-f68798d8f" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restore of ControllerRevision, kube-proxy-f68798d8f skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore ControllerRevision: vsphere-cloud-controller-manager-7444d4cdcd" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restore of ControllerRevision, vsphere-cloud-controller-manager-7444d4cdcd skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore ControllerRevision: vsphere-csi-node-5c567574c5" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restore of ControllerRevision, vsphere-csi-node-5c567574c5 skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restoring resource 'controllerrevisions.apps' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Getting client for apps/v1, Kind=ControllerRevision" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore ControllerRevision: datamgr-for-vsphere-plugin-6d9477d795" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore ControllerRevision: restic-54cf7747f6" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restore of ControllerRevision, restic-54cf7747f6 skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restoring resource 'cronjobs.batch' into namespace 'tkg-system-telemetry'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Getting client for batch/v1beta1, Kind=CronJob" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore CronJob: tkg-telemetry" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restoring cluster level resource 'csidrivers.storage.k8s.io'" logSource="pkg/restore/restore.go:704" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Getting client for storage.k8s.io/v1, Kind=CSIDriver" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore CSIDriver: csi.vsphere.vmware.com" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restore of CSIDriver, csi.vsphere.vmware.com skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restoring cluster level resource 'csinodes.storage.k8s.io'" logSource="pkg/restore/restore.go:704" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Getting client for storage.k8s.io/v1, Kind=CSINode" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore CSINode: tkg-mgmt1-cluster-control-plane-wrrhs" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore CSINode: tkg-mgmt1-cluster-md-0-7757f89b6-4w4vt" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Restoring resource 'daemonsets.apps' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Getting client for apps/v1, Kind=DaemonSet" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:35Z" level=info msg="Attempting to restore DaemonSet: antrea-agent" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of DaemonSet, antrea-agent skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore DaemonSet: kube-proxy" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of DaemonSet, kube-proxy skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore DaemonSet: vsphere-cloud-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of DaemonSet, vsphere-cloud-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore DaemonSet: vsphere-csi-node" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of DaemonSet, vsphere-csi-node skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restoring resource 'daemonsets.apps' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Getting client for apps/v1, Kind=DaemonSet" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore DaemonSet: datamgr-for-vsphere-plugin" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore DaemonSet: restic" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of DaemonSet, restic skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'capi-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: capi-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, capi-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'capi-webhook-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: capi-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, capi-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: capi-kubeadm-bootstrap-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, capi-kubeadm-bootstrap-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: capi-kubeadm-control-plane-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, capi-kubeadm-control-plane-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: capv-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, capv-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'capv-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: capv-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, capv-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'cert-manager'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: cert-manager-cainjector" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, cert-manager-cainjector skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: cert-manager-webhook" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, cert-manager-webhook skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: cert-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, cert-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: antrea-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, antrea-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Attempting to restore Deployment: coredns" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:36Z" level=info msg="Restore of Deployment, coredns skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:37Z" level=info msg="Attempting to restore Deployment: vsphere-csi-controller" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:37Z" level=info msg="Restore of Deployment, vsphere-csi-controller skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:37Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:37Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:37Z" level=info msg="Attempting to restore Deployment: backup-driver" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:37Z" level=info msg="Attempting to restore Deployment: minio" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restore of Deployment, minio skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Deployment: velero" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'capi-kubeadm-bootstrap-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Deployment: capi-kubeadm-bootstrap-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restore of Deployment, capi-kubeadm-bootstrap-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'deployments.apps' into namespace 'capi-kubeadm-control-plane-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for apps/v1, Kind=Deployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Deployment: capi-kubeadm-control-plane-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restore of Deployment, capi-kubeadm-control-plane-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'endpoints' into namespace 'capi-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: capi-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'endpoints' into namespace 'cert-manager'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: cert-manager-webhook" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: cert-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'endpoints' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: antrea" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: cloud-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: kube-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: kube-dns" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: kube-scheduler" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'endpoints' into namespace 'capi-kubeadm-bootstrap-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: capi-kubeadm-bootstrap-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'endpoints' into namespace 'capi-webhook-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: capi-kubeadm-bootstrap-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: capi-kubeadm-control-plane-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: capi-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: capv-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'endpoints' into namespace 'capv-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Attempting to restore Endpoints: capv-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:38Z" level=info msg="Restoring resource 'endpoints' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Attempting to restore Endpoints: kubernetes" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Restoring resource 'endpoints' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Attempting to restore Endpoints: minio" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Restoring resource 'endpoints' into namespace 'capi-kubeadm-control-plane-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Attempting to restore Endpoints: capi-kubeadm-control-plane-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:39Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'capi-kubeadm-bootstrap-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capi-kubeadm-bootstrap-controller-manager-metrics-service-nz7fc" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'capi-kubeadm-control-plane-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capi-kubeadm-control-plane-controller-manager-metrics-servb2fzp" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'capi-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capi-controller-manager-metrics-service-c2md8" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'capi-webhook-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capi-kubeadm-bootstrap-webhook-service-46cdx" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capi-kubeadm-control-plane-webhook-service-m7n55" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capi-webhook-service-m52gs" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capv-webhook-service-cswjs" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'capv-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: capv-controller-manager-metrics-service-hc96w" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: antrea-t7db4" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: cloud-controller-manager-s8gjx" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: kube-dns-kmq5m" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: minio-9xcjw" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'cert-manager'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: cert-manager-webhook-jrgch" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: cert-manager-z8vbk" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'endpointslices.discovery.k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for discovery.k8s.io/v1beta1, Kind=EndpointSlice" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore EndpointSlice: kubernetes" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Skipping restore of resource because the restore spec excludes it" logSource="pkg/restore/restore.go:398" resource=events restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restoring resource 'issuers.cert-manager.io' into namespace 'capi-webhook-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Getting client for cert-manager.io/v1beta1, Kind=Issuer" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore Issuer: capi-kubeadm-bootstrap-selfsigned-issuer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restore of Issuer, capi-kubeadm-bootstrap-selfsigned-issuer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore Issuer: capi-kubeadm-control-plane-selfsigned-issuer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restore of Issuer, capi-kubeadm-control-plane-selfsigned-issuer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Attempting to restore Issuer: capi-selfsigned-issuer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:40Z" level=info msg="Restore of Issuer, capi-selfsigned-issuer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Attempting to restore Issuer: capv-selfsigned-issuer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Restore of Issuer, capv-selfsigned-issuer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Restoring resource 'jobs.batch' into namespace 'tkg-system-telemetry'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="tkg-system-telemetry/tkg-telemetry-1608854400 is complete - skipping" logSource="pkg/restore/restore.go:831" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Restoring resource 'jobs.batch' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="velero/minio-setup is complete - skipping" logSource="pkg/restore/restore.go:831" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Restoring resource 'kubeadmconfigs.bootstrap.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Getting client for bootstrap.cluster.x-k8s.io/v1alpha3, Kind=KubeadmConfig" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Attempting to restore KubeadmConfig: my-cluster-control-plane-6sdsc" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="error restoring my-cluster-control-plane-6sdsc: Internal error occurred: failed calling webhook \"validation.kubeadmconfig.bootstrap.cluster.x-k8s.io\": Post \"https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s\": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc" logSource="pkg/restore/restore.go:1133" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Attempting to restore KubeadmConfig: my-cluster-md-0-86v7z" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="error restoring my-cluster-md-0-86v7z: Internal error occurred: failed calling webhook \"validation.kubeadmconfig.bootstrap.cluster.x-k8s.io\": Post \"https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s\": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc" logSource="pkg/restore/restore.go:1133" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Attempting to restore KubeadmConfig: my-cluster-md-0-gzpl9" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="error restoring my-cluster-md-0-gzpl9: Internal error occurred: failed calling webhook \"validation.kubeadmconfig.bootstrap.cluster.x-k8s.io\": Post \"https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s\": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc" logSource="pkg/restore/restore.go:1133" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Restoring resource 'kubeadmconfigs.bootstrap.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Getting client for bootstrap.cluster.x-k8s.io/v1alpha3, Kind=KubeadmConfig" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Attempting to restore KubeadmConfig: tkg-mgmt1-cluster-control-plane-q2krk" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="error restoring tkg-mgmt1-cluster-control-plane-q2krk: Internal error occurred: failed calling webhook \"validation.kubeadmconfig.bootstrap.cluster.x-k8s.io\": Post \"https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s\": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc" logSource="pkg/restore/restore.go:1133" restore=velero/mgmt1635
time="2020-12-25T01:09:41Z" level=info msg="Attempting to restore KubeadmConfig: tkg-mgmt1-cluster-md-0-kj4n8" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Restoring resource 'kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Getting client for bootstrap.cluster.x-k8s.io/v1alpha3, Kind=KubeadmConfigTemplate" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Attempting to restore KubeadmConfigTemplate: my-cluster-md-0" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Restoring resource 'kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Getting client for bootstrap.cluster.x-k8s.io/v1alpha3, Kind=KubeadmConfigTemplate" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Attempting to restore KubeadmConfigTemplate: tkg-mgmt1-cluster-md-0" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Restoring resource 'kubeadmcontrolplanes.controlplane.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Getting client for controlplane.cluster.x-k8s.io/v1alpha3, Kind=KubeadmControlPlane" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:42Z" level=info msg="Attempting to restore KubeadmControlPlane: my-cluster-control-plane" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Restoring resource 'kubeadmcontrolplanes.controlplane.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Getting client for controlplane.cluster.x-k8s.io/v1alpha3, Kind=KubeadmControlPlane" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Attempting to restore KubeadmControlPlane: tkg-mgmt1-cluster-control-plane" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Restoring resource 'leases.coordination.k8s.io' into namespace 'kube-node-lease'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Getting client for coordination.k8s.io/v1, Kind=Lease" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Attempting to restore Lease: tkg-mgmt1-cluster-control-plane-wrrhs" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Attempting to restore Lease: tkg-mgmt1-cluster-md-0-7757f89b6-4w4vt" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Restoring resource 'leases.coordination.k8s.io' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Getting client for coordination.k8s.io/v1, Kind=Lease" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Attempting to restore Lease: csi-vsphere-vmware-com" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:43Z" level=info msg="Attempting to restore Lease: external-attacher-leader-csi-vsphere-vmware-com" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:44Z" level=info msg="Attempting to restore Lease: kube-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:44Z" level=info msg="Attempting to restore Lease: kube-scheduler" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:44Z" level=info msg="Attempting to restore Lease: plunder-lock" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:44Z" level=info msg="Attempting to restore Lease: vsphere-syncer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:44Z" level=info msg="Restoring resource 'machinedeployments.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:44Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=MachineDeployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:44Z" level=info msg="Attempting to restore MachineDeployment: tkg-mgmt1-cluster-md-0" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:45Z" level=info msg="Restoring resource 'machinedeployments.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:45Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=MachineDeployment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:45Z" level=info msg="Attempting to restore MachineDeployment: my-cluster-md-0" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:46Z" level=info msg="Restoring resource 'machinehealthchecks.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:46Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:46Z" level=info msg="Attempting to restore MachineHealthCheck: my-cluster" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'machinehealthchecks.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore MachineHealthCheck: tkg-mgmt1-cluster" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'machines.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=Machine" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Machine: my-cluster-control-plane-q8vjw" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Machine: my-cluster-md-0-8667bdf668-5fqgp" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Machine: my-cluster-md-0-8667bdf668-9tqnk" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'machines.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=Machine" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Machine: tkg-mgmt1-cluster-control-plane-wrrhs" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Machine: tkg-mgmt1-cluster-md-0-7757f89b6-4w4vt" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'machinesets.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=MachineSet" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore MachineSet: tkg-mgmt1-cluster-md-0-7757f89b6" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'machinesets.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for cluster.x-k8s.io/v1alpha3, Kind=MachineSet" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore MachineSet: my-cluster-md-0-8667bdf668" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring cluster level resource 'mutatingwebhookconfigurations.admissionregistration.k8s.io'" logSource="pkg/restore/restore.go:704" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for admissionregistration.k8s.io/v1, Kind=MutatingWebhookConfiguration" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore MutatingWebhookConfiguration: capi-kubeadm-control-plane-mutating-webhook-configuration" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore MutatingWebhookConfiguration: capi-mutating-webhook-configuration" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore MutatingWebhookConfiguration: cert-manager-webhook" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Skipping restore of resource because the restore spec excludes it" logSource="pkg/restore/restore.go:398" resource=nodes restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring cluster level resource 'priorityclasses.scheduling.k8s.io'" logSource="pkg/restore/restore.go:704" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for scheduling.k8s.io/v1, Kind=PriorityClass" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore PriorityClass: system-cluster-critical" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of PriorityClass, system-cluster-critical skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore PriorityClass: system-node-critical" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of PriorityClass, system-node-critical skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'providers.clusterctl.cluster.x-k8s.io' into namespace 'capv-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for clusterctl.cluster.x-k8s.io/v1alpha3, Kind=Provider" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Provider: infrastructure-vsphere" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of Provider, infrastructure-vsphere skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'providers.clusterctl.cluster.x-k8s.io' into namespace 'capi-kubeadm-bootstrap-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for clusterctl.cluster.x-k8s.io/v1alpha3, Kind=Provider" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Provider: bootstrap-kubeadm" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of Provider, bootstrap-kubeadm skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'providers.clusterctl.cluster.x-k8s.io' into namespace 'capi-kubeadm-control-plane-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for clusterctl.cluster.x-k8s.io/v1alpha3, Kind=Provider" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Provider: control-plane-kubeadm" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of Provider, control-plane-kubeadm skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'providers.clusterctl.cluster.x-k8s.io' into namespace 'capi-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for clusterctl.cluster.x-k8s.io/v1alpha3, Kind=Provider" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore Provider: cluster-api" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of Provider, cluster-api skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore RoleBinding: cert-manager-cainjector:leaderelection" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of RoleBinding, cert-manager-cainjector:leaderelection skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore RoleBinding: cert-manager:leaderelection" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of RoleBinding, cert-manager:leaderelection skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore RoleBinding: kube-proxy" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of RoleBinding, kube-proxy skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore RoleBinding: kubeadm:kubelet-config-1.19" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of RoleBinding, kubeadm:kubelet-config-1.19 skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore RoleBinding: kubeadm:nodes-kubeadm-config" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of RoleBinding, kubeadm:nodes-kubeadm-config skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Attempting to restore RoleBinding: servicecatalog.k8s.io:apiserver-authentication-reader" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Restore of RoleBinding, servicecatalog.k8s.io:apiserver-authentication-reader skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:47Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: system::extension-apiserver-authentication-reader" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, system::extension-apiserver-authentication-reader skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: system::leader-locking-kube-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, system::leader-locking-kube-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: system::leader-locking-kube-scheduler" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, system::leader-locking-kube-scheduler skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: system:controller:bootstrap-signer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, system:controller:bootstrap-signer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: system:controller:cloud-provider" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, system:controller:cloud-provider skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: system:controller:token-cleaner" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, system:controller:token-cleaner skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'tkg-system-public'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: tkg-metadata-reader" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, tkg-metadata-reader skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'capi-kubeadm-bootstrap-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: capi-kubeadm-bootstrap-leader-election-rolebinding" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, capi-kubeadm-bootstrap-leader-election-rolebinding skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'capi-kubeadm-control-plane-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: capi-kubeadm-control-plane-leader-election-rolebinding" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, capi-kubeadm-control-plane-leader-election-rolebinding skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'capi-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: capi-leader-election-rolebinding" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, capi-leader-election-rolebinding skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'capv-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: capv-leader-election-rolebinding" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, capv-leader-election-rolebinding skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'cert-manager'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: cert-manager-webhook:dynamic-serving" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, cert-manager-webhook:dynamic-serving skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'rolebindings.rbac.authorization.k8s.io' into namespace 'kube-public'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=RoleBinding" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: kubeadm:bootstrap-signer-clusterinfo" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, kubeadm:bootstrap-signer-clusterinfo skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Executing item action for rolebindings.rbac.authorization.k8s.io" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore RoleBinding: system:controller:bootstrap-signer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restore of RoleBinding, system:controller:bootstrap-signer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:48Z" level=info msg="Attempting to restore Role: cert-manager-cainjector:leaderelection" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, cert-manager-cainjector:leaderelection skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: cert-manager:leaderelection" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, cert-manager:leaderelection skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: extension-apiserver-authentication-reader" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, extension-apiserver-authentication-reader skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: kube-proxy" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, kube-proxy skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: kubeadm:kubelet-config-1.19" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, kubeadm:kubelet-config-1.19 skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: kubeadm:nodes-kubeadm-config" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, kubeadm:nodes-kubeadm-config skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: system::leader-locking-kube-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, system::leader-locking-kube-controller-manager skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: system::leader-locking-kube-scheduler" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, system::leader-locking-kube-scheduler skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: system:controller:bootstrap-signer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, system:controller:bootstrap-signer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: system:controller:cloud-provider" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Restore of Role, system:controller:cloud-provider skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:49Z" level=info msg="Attempting to restore Role: system:controller:token-cleaner" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restore of Role, system:controller:token-cleaner skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'tkg-system-public'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Attempting to restore Role: tkg-metadata-reader" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restore of Role, tkg-metadata-reader skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'capi-kubeadm-bootstrap-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Attempting to restore Role: capi-kubeadm-bootstrap-leader-election-role" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restore of Role, capi-kubeadm-bootstrap-leader-election-role skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'capi-kubeadm-control-plane-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Attempting to restore Role: capi-kubeadm-control-plane-leader-election-role" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restore of Role, capi-kubeadm-control-plane-leader-election-role skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'capi-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Attempting to restore Role: capi-leader-election-role" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restore of Role, capi-leader-election-role skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'capv-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Attempting to restore Role: capv-leader-election-role" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restore of Role, capv-leader-election-role skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'cert-manager'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:50Z" level=info msg="Attempting to restore Role: cert-manager-webhook:dynamic-serving" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:51Z" level=info msg="Restore of Role, cert-manager-webhook:dynamic-serving skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:51Z" level=info msg="Restoring resource 'roles.rbac.authorization.k8s.io' into namespace 'kube-public'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:51Z" level=info msg="Getting client for rbac.authorization.k8s.io/v1, Kind=Role" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:51Z" level=info msg="Attempting to restore Role: kubeadm:bootstrap-signer-clusterinfo" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:51Z" level=info msg="Restore of Role, kubeadm:bootstrap-signer-clusterinfo skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:51Z" level=info msg="Attempting to restore Role: system:controller:bootstrap-signer" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:51Z" level=info msg="Restore of Role, system:controller:bootstrap-signer skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Restoring resource 'services' into namespace 'capi-kubeadm-bootstrap-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Attempting to restore Service: capi-kubeadm-bootstrap-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Restoring resource 'services' into namespace 'capi-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:55Z" level=info msg="Attempting to restore Service: capi-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Restoring resource 'services' into namespace 'capi-webhook-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Attempting to restore Service: capi-kubeadm-bootstrap-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Attempting to restore Service: capi-kubeadm-control-plane-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:56Z" level=info msg="Attempting to restore Service: capi-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:57Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:57Z" level=info msg="Attempting to restore Service: capv-webhook-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Restoring resource 'services' into namespace 'capv-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Attempting to restore Service: capv-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Restoring resource 'services' into namespace 'cert-manager'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Attempting to restore Service: cert-manager-webhook" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Attempting to restore Service: cert-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Restoring resource 'services' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Attempting to restore Service: minio" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Restoring resource 'services' into namespace 'capi-kubeadm-control-plane-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Attempting to restore Service: capi-kubeadm-control-plane-controller-manager-metrics-service" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:58Z" level=info msg="Restoring resource 'services' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Attempting to restore Service: kubernetes" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Restoring resource 'services' into namespace 'kube-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Attempting to restore Service: antrea" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Attempting to restore Service: cloud-controller-manager" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:964" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Attempting to restore Service: kube-dns" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Restoring cluster level resource 'validatingwebhookconfigurations.admissionregistration.k8s.io'" logSource="pkg/restore/restore.go:704" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Getting client for admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Attempting to restore ValidatingWebhookConfiguration: capi-kubeadm-bootstrap-validating-webhook-configuration" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:09:59Z" level=info msg="Attempting to restore ValidatingWebhookConfiguration: capi-kubeadm-control-plane-validating-webhook-configuration" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Attempting to restore ValidatingWebhookConfiguration: capi-validating-webhook-configuration" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Attempting to restore ValidatingWebhookConfiguration: capv-validating-webhook-configuration" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Attempting to restore ValidatingWebhookConfiguration: cert-manager-webhook" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Restoring cluster level resource 'volumeattachments.storage.k8s.io'" logSource="pkg/restore/restore.go:704" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Getting client for storage.k8s.io/v1, Kind=VolumeAttachment" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Attempting to restore VolumeAttachment: csi-e69617e220981982ec0cb5a96c7514d829ba206a709364267354e6c61062fcf5" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Restoring resource 'volumesnapshotlocations.velero.io' into namespace 'velero'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Getting client for velero.io/v1, Kind=VolumeSnapshotLocation" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Attempting to restore VolumeSnapshotLocation: vsl-vsphere" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Restore of VolumeSnapshotLocation, vsl-vsphere skipped: it already exists in the cluster and is the same as the backed up version" logSource="pkg/restore/restore.go:1127" restore=velero/mgmt1635
time="2020-12-25T01:10:00Z" level=info msg="Restoring resource 'vsphereclusters.infrastructure.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:01Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereCluster" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:01Z" level=info msg="Attempting to restore VSphereCluster: my-cluster" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:01Z" level=info msg="Restoring resource 'vsphereclusters.infrastructure.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:02Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereCluster" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:02Z" level=info msg="Attempting to restore VSphereCluster: tkg-mgmt1-cluster" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:02Z" level=info msg="Restoring resource 'vspheremachines.infrastructure.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:03Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereMachine" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:03Z" level=info msg="Attempting to restore VSphereMachine: my-cluster-control-plane-h87j8" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachine: my-cluster-worker-49gqs" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachine: my-cluster-worker-hp8kl" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Restoring resource 'vspheremachines.infrastructure.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereMachine" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachine: tkg-mgmt1-cluster-control-plane-rrpzc" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachine: tkg-mgmt1-cluster-worker-bjrrb" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Restoring resource 'vspheremachinetemplates.infrastructure.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereMachineTemplate" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachineTemplate: my-cluster-control-plane" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachineTemplate: my-cluster-worker" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Restoring resource 'vspheremachinetemplates.infrastructure.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereMachineTemplate" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachineTemplate: tkg-mgmt1-cluster-control-plane" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereMachineTemplate: tkg-mgmt1-cluster-worker" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Restoring resource 'vspherevms.infrastructure.cluster.x-k8s.io' into namespace 'tkg-system'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereVM" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereVM: tkg-mgmt1-cluster-control-plane-wrrhs" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereVM: tkg-mgmt1-cluster-md-0-7757f89b6-4w4vt" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Restoring resource 'vspherevms.infrastructure.cluster.x-k8s.io' into namespace 'default'" logSource="pkg/restore/restore.go:702" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Getting client for infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereVM" logSource="pkg/restore/restore.go:746" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereVM: my-cluster-control-plane-q8vjw" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereVM: my-cluster-md-0-8667bdf668-5fqgp" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Attempting to restore VSphereVM: my-cluster-md-0-8667bdf668-9tqnk" logSource="pkg/restore/restore.go:1070" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Waiting for all restic restores to complete" logSource="pkg/restore/restore.go:470" restore=velero/mgmt1635
time="2020-12-25T01:10:05Z" level=info msg="Done waiting for all restic restores to complete" logSource="pkg/restore/restore.go:486" restore=velero/mgmt1635
time="2020-12-25T01:10:06Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:468" restore=velero/mgmt1635
ubuntu@cli-vm:/usr/local/share/ca-certificates$
------------------------------------------------------------------------------
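Based on the describe output below (restore name mgmt1635, source backup tkg-mgmt1-cluster, and the default Velero resource exclusions), the restore was presumably created with a plain restore command along these lines; this is an assumption, since the actual creation command was not captured:

velero restore create mgmt1635 --from-backup tkg-mgmt1-cluster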
ubuntu@cli-vm:/usr/local/share/ca-certificates$ velero restore describe mgmt1635
Name: mgmt1635
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: PartiallyFailed (run 'velero restore logs mgmt1635' for more information)
Warnings:
Velero: <none>
Cluster: could not restore, customresourcedefinitions.apiextensions.k8s.io "backups.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "backupstoragelocations.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "certificaterequests.cert-manager.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "certificates.cert-manager.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "challenges.acme.cert-manager.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "clusterissuers.cert-manager.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "clusternetworkpolicies.security.antrea.tanzu.vmware.com" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "clusterresourcesetbindings.addons.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "clusterresourcesets.addons.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "clusters.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "deletebackuprequests.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "downloadrequests.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "haproxyloadbalancers.infrastructure.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "issuers.cert-manager.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "kubeadmconfigs.bootstrap.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "kubeadmcontrolplanes.controlplane.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "machinedeployments.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "machinehealthchecks.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "machinepools.exp.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "machines.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "machinesets.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "orders.acme.cert-manager.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "podvolumebackups.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "podvolumerestores.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "providers.clusterctl.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "resticrepositories.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "restores.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "schedules.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "serverstatusrequests.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "traceflows.ops.antrea.tanzu.vmware.com" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "volumesnapshotlocations.velero.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "vsphereclusters.infrastructure.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "vspheremachines.infrastructure.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "vspheremachinetemplates.infrastructure.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, customresourcedefinitions.apiextensions.k8s.io "vspherevms.infrastructure.cluster.x-k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, antreacontrollerinfos.clusterinformation.antrea.tanzu.vmware.com "antrea-controller" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, apiservices.apiregistration.k8s.io "v1beta1.controlplane.antrea.tanzu.vmware.com" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, apiservices.apiregistration.k8s.io "v1beta1.system.antrea.tanzu.vmware.com" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, mutatingwebhookconfigurations.admissionregistration.k8s.io "capi-kubeadm-control-plane-mutating-webhook-configuration" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, mutatingwebhookconfigurations.admissionregistration.k8s.io "capi-mutating-webhook-configuration" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, mutatingwebhookconfigurations.admissionregistration.k8s.io "cert-manager-webhook" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, validatingwebhookconfigurations.admissionregistration.k8s.io "capi-kubeadm-bootstrap-validating-webhook-configuration" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, validatingwebhookconfigurations.admissionregistration.k8s.io "capi-kubeadm-control-plane-validating-webhook-configuration" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, validatingwebhookconfigurations.admissionregistration.k8s.io "capi-validating-webhook-configuration" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, validatingwebhookconfigurations.admissionregistration.k8s.io "capv-validating-webhook-configuration" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, validatingwebhookconfigurations.admissionregistration.k8s.io "cert-manager-webhook" already exists. Warning: the in-cluster version is different than the backed-up version.
Namespaces:
capi-webhook-system: could not restore, secrets "capi-kubeadm-bootstrap-webhook-service-cert" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, secrets "capi-kubeadm-control-plane-webhook-service-cert" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, secrets "capi-webhook-service-cert" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, secrets "capv-webhook-service-cert" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capi-kubeadm-bootstrap-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capi-kubeadm-control-plane-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capi-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capv-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capi-kubeadm-bootstrap-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capi-kubeadm-control-plane-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capi-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capv-webhook-service" already exists. Warning: the in-cluster version is different than the backed-up version.
kube-public: could not restore, configmaps "cluster-info" already exists. Warning: the in-cluster version is different than the backed-up version.
kube-system: could not restore, secrets "csi-vsphere-config" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, configmaps "antrea-ca" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, configmaps "cert-manager-cainjector-leader-election-core" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, configmaps "cert-manager-cainjector-leader-election" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, configmaps "cert-manager-controller" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, configmaps "extension-apiserver-authentication" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, configmaps "kube-proxy" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, configmaps "kubeadm-config" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "antrea" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "cloud-controller-manager" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "kube-controller-manager" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "kube-dns" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "kube-scheduler" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, leases.coordination.k8s.io "csi-vsphere-vmware-com" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, leases.coordination.k8s.io "external-attacher-leader-csi-vsphere-vmware-com" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, leases.coordination.k8s.io "kube-controller-manager" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, leases.coordination.k8s.io "kube-scheduler" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, leases.coordination.k8s.io "plunder-lock" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, leases.coordination.k8s.io "vsphere-syncer" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "antrea" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "cloud-controller-manager" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "kube-dns" already exists. Warning: the in-cluster version is different than the backed-up version.
tkg-system-public: could not restore, configmaps "tkg-metadata" already exists. Warning: the in-cluster version is different than the backed-up version.
tkg-system-telemetry: could not restore, cronjobs.batch "tkg-telemetry" already exists. Warning: the in-cluster version is different than the backed-up version.
capi-kubeadm-bootstrap-system: could not restore, configmaps "kubeadm-bootstrap-manager-leader-election-capi" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capi-kubeadm-bootstrap-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capi-kubeadm-bootstrap-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
capi-system: could not restore, configmaps "controller-leader-election-capi" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capi-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capi-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
capv-system: could not restore, configmaps "capv-controller-manager-runtime" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capv-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capv-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
cert-manager: could not restore, secrets "cert-manager-webhook-ca" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "cert-manager-webhook" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "cert-manager" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "cert-manager-webhook" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "cert-manager" already exists. Warning: the in-cluster version is different than the backed-up version.
default: could not restore, endpoints "kubernetes" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpointslices.discovery.k8s.io "kubernetes" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "kubernetes" already exists. Warning: the in-cluster version is different than the backed-up version.
velero: could not restore, persistentvolumeclaims "minio-pv" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, replicasets.apps "velero-674c86948f" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, replicasets.apps "velero-76d79cd7c9" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, backupstoragelocations.velero.io "default" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, deployments.apps "velero" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "minio" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "minio" already exists. Warning: the in-cluster version is different than the backed-up version.
capi-kubeadm-control-plane-system: could not restore, configmaps "kubeadm-control-plane-manager-leader-election-capi" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, endpoints "capi-kubeadm-control-plane-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
could not restore, services "capi-kubeadm-control-plane-controller-manager-metrics-service" already exists. Warning: the in-cluster version is different than the backed-up version.
Errors:
Velero: <none>
Cluster: <none>
Namespaces:
default: error restoring kubeadmconfigs.bootstrap.cluster.x-k8s.io/default/my-cluster-control-plane-6sdsc: Internal error occurred: failed calling webhook "validation.kubeadmconfig.bootstrap.cluster.x-k8s.io": Post "https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc
error restoring kubeadmconfigs.bootstrap.cluster.x-k8s.io/default/my-cluster-md-0-86v7z: Internal error occurred: failed calling webhook "validation.kubeadmconfig.bootstrap.cluster.x-k8s.io": Post "https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc
error restoring kubeadmconfigs.bootstrap.cluster.x-k8s.io/default/my-cluster-md-0-gzpl9: Internal error occurred: failed calling webhook "validation.kubeadmconfig.bootstrap.cluster.x-k8s.io": Post "https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc
tkg-system: error restoring kubeadmconfigs.bootstrap.cluster.x-k8s.io/tkg-system/tkg-mgmt1-cluster-control-plane-q2krk: Internal error occurred: failed calling webhook "validation.kubeadmconfig.bootstrap.cluster.x-k8s.io": Post "https://capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc:443/validate-bootstrap-cluster-x-k8s-io-v1alpha3-kubeadmconfig?timeout=30s": x509: certificate is valid for capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc, capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc.cluster.local, not capi-kubeadm-bootstrap-webhook-service.capi-webhook-system.svc
Backup: tkg-mgmt1-cluster
Namespaces:
Included: all namespaces found in the backup
Excluded: <none>
Resources:
Included: *
Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
ubuntu@cli-vm:/usr/local/share/ca-certificates$
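------------------------------------------------------------------------------
The warnings above are expected when restoring onto a cluster where Velero, cert-manager, and the Cluster API providers are already installed: Velero warns on objects that already exist rather than overwriting them. The four errors are the real problem: the kubeadmconfigs objects were rejected because the capi-kubeadm-bootstrap validating webhook in capi-webhook-system is serving a certificate whose SANs only cover the capi-kubeadm-control-plane service names. One way to confirm the SAN mismatch, assuming the serving certificate is the capi-kubeadm-bootstrap-webhook-service-cert secret referenced in the warnings above, is:

kubectl -n capi-webhook-system get secret capi-kubeadm-bootstrap-webhook-service-cert \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If those KubeadmConfig objects do not need to come from this backup, one way to avoid the failing webhook on a retry is to exclude that resource, for example:

velero restore create mgmt1635-retry --from-backup tkg-mgmt1-cluster \
  --exclude-resources kubeadmconfigs.bootstrap.cluster.x-k8s.io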