@hansbogert
Created August 13, 2021 21:13
2021-08-13 20:24:14.719798 I | rookcmd: starting Rook v1.6.8 with arguments '/usr/local/bin/rook ceph operator'
2021-08-13 20:24:14.719875 I | rookcmd: flag values: --add_dir_header=false, --alsologtostderr=false, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-dep-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-dep.yaml, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-dep-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-dep.yaml, --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-flush-frequency=5s, --log-level=DEBUG, --log_backtrace_at=:0, --log_dir=, --log_file=, --log_file_max_size=1800, --logtostderr=true, --mon-healthcheck-interval=45s, --mon-out-timeout=5m0s, --one_output=false, --operator-image=, --service-account=, --skip_headers=false, --skip_log_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2021-08-13 20:24:14.719879 I | cephcmd: starting Rook-Ceph operator
2021-08-13 20:24:14.749442 D | exec: Running command: ceph --version
2021-08-13 20:24:14.863784 I | cephcmd: base ceph version inside the rook operator image is "ceph version 16.2.2 (e8f22dde28889481f4dda2beb8a07788204821d3) pacific (stable)"
2021-08-13 20:24:14.873124 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (env var)
2021-08-13 20:24:14.886240 D | operator: checking for admission controller secrets
2021-08-13 20:24:14.886272 I | operator: looking for secret "rook-ceph-admission-controller"
2021-08-13 20:24:14.889853 I | operator: secret "rook-ceph-admission-controller" not found. proceeding without the admission controller
2021-08-13 20:24:14.894661 I | op-k8sutil: ROOK_ENABLE_FLEX_DRIVER="false" (env var)
2021-08-13 20:24:14.894678 I | operator: watching all namespaces for ceph cluster CRs
2021-08-13 20:24:14.894761 I | operator: setting up the controller-runtime manager
2021-08-13 20:24:14.897096 I | ceph-cluster-controller: ConfigMap "rook-ceph-operator-config" changes detected. Updating configurations
2021-08-13 20:24:14.900984 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (env var)
2021-08-13 20:24:15.903130 I | ceph-cluster-controller: successfully started
2021-08-13 20:24:15.903215 I | ceph-cluster-controller: enabling hotplug orchestration
2021-08-13 20:24:15.903234 I | ceph-crashcollector-controller: successfully started
2021-08-13 20:24:15.903242 D | ceph-crashcollector-controller: watch for changes to the nodes
2021-08-13 20:24:15.903249 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2021-08-13 20:24:15.903256 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2021-08-13 20:24:15.903312 I | ceph-block-pool-controller: successfully started
2021-08-13 20:24:15.903379 I | ceph-object-store-user-controller: successfully started
2021-08-13 20:24:15.903444 I | ceph-object-realm-controller: successfully started
2021-08-13 20:24:15.903515 I | ceph-object-zonegroup-controller: successfully started
2021-08-13 20:24:15.903569 I | ceph-object-zone-controller: successfully started
2021-08-13 20:24:15.903688 I | ceph-object-controller: successfully started
2021-08-13 20:24:15.903767 I | ceph-file-controller: successfully started
2021-08-13 20:24:15.903835 I | ceph-nfs-controller: successfully started
2021-08-13 20:24:15.903907 I | ceph-rbd-mirror-controller: successfully started
2021-08-13 20:24:15.903975 I | ceph-client-controller: successfully started
2021-08-13 20:24:15.904032 I | ceph-filesystem-mirror-controller: successfully started
2021-08-13 20:24:15.905687 D | op-k8sutil: kubernetes version fetched 1.21.4
2021-08-13 20:24:15.905734 I | operator: starting the controller-runtime manager
2021-08-13 20:24:16.006733 D | ceph-spec: create event from a CR
2021-08-13 20:24:16.006784 D | ceph-cluster-controller: create event from a CR
2021-08-13 20:24:16.006924 D | ceph-spec: create event from a CR
2021-08-13 20:24:16.006938 D | ceph-spec: create event from a CR
2021-08-13 20:24:16.007075 D | ceph-spec: create event from a CR
2021-08-13 20:24:16.007433 I | clusterdisruption-controller: create event from ceph cluster CR
2021-08-13 20:24:16.008134 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:16.008161 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:16.008494 D | ceph-spec: create event from a CR
2021-08-13 20:24:16.008559 D | ceph-cluster-controller: node watcher: node "nldw1-6-26-1" is not tolerable for cluster "rook-ceph", skipping
2021-08-13 20:24:16.008593 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2021-08-13 20:24:16.008625 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2021-08-13 20:24:16.008671 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2021-08-13 20:24:16.008876 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2021-08-13 20:24:16.009204 D | ceph-spec: create event from a CR
2021-08-13 20:24:16.009671 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:16.009705 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:16.009775 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-node4-vf4rm" is a ceph pod!
2021-08-13 20:24:16.009833 D | ceph-crashcollector-controller: "rook-ceph-mon-p-69fc5844f8-7r5bs" is a ceph pod!
2021-08-13 20:24:16.009852 D | ceph-crashcollector-controller: "rook-ceph-rgw-objects-a-5dc4f79795-ccdvn" is a ceph pod!
2021-08-13 20:24:16.009885 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-fd7bbd985-tv8s8" is a ceph pod!
2021-08-13 20:24:16.009899 D | ceph-crashcollector-controller: "rook-ceph-mon-u-75669bb6c4-rkb92" is a ceph pod!
2021-08-13 20:24:16.009930 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-node1-mkhtk" is a ceph pod!
2021-08-13 20:24:16.009958 D | ceph-crashcollector-controller: "rook-ceph-mon-t-6b6ffc9467-lbbpc" is a ceph pod!
2021-08-13 20:24:16.009969 D | ceph-crashcollector-controller: reconciling node: "node4"
2021-08-13 20:24:16.009990 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-node1-74b646946f-9n7qb" is a ceph pod!
2021-08-13 20:24:16.010009 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-node2-vmhmp" is a ceph pod!
2021-08-13 20:24:16.010021 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:16.010032 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:16.010118 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-node2-644f4d59b6-6lsbn" is a ceph pod!
2021-08-13 20:24:16.010154 D | ceph-crashcollector-controller: "rook-ceph-osd-4-f8fccc68d-pllrm" is a ceph pod!
2021-08-13 20:24:16.010174 D | ceph-crashcollector-controller: "rook-ceph-osd-2-6df9b986cd-2b2g7" is a ceph pod!
2021-08-13 20:24:16.010222 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-node4-66795f7f4d-z75gz" is a ceph pod!
2021-08-13 20:24:16.010243 D | ceph-crashcollector-controller: "rook-ceph-osd-3-c8c6576bd-j6m78" is a ceph pod!
2021-08-13 20:24:16.010816 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:16.010935 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2021-08-13 20:24:16.011030 D | clusterdisruption-controller: reconciling "rook-ceph/"
2021-08-13 20:24:16.011640 D | ceph-spec: create event from a CR
2021-08-13 20:24:16.013417 I | op-k8sutil: ROOK_ENABLE_FLEX_DRIVER="false" (env var)
2021-08-13 20:24:16.014124 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:16.014204 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:16.014578 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:16.017209 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:16.017281 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:16.017344 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000dd0f40 t:0xc000dd0f00 u:0xc000dd0f80], assignment=&{Schedule:map[p:0xc00080c200 t:0xc00080c240 u:0xc00080c280]}
2021-08-13 20:24:16.017379 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000eb77a0 t:0xc000eb7760 u:0xc000eb77e0], assignment=&{Schedule:map[p:0xc000b1c300 t:0xc000b1c340 u:0xc000b1c380]}
2021-08-13 20:24:16.017409 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:16.017435 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:24:16.017489 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:16.017530 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000f3bd40 t:0xc000f3bd00 u:0xc000f3bda0], assignment=&{Schedule:map[p:0xc00093ec80 t:0xc00093ecc0 u:0xc00093ed00]}
2021-08-13 20:24:16.017545 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/628891127
2021-08-13 20:24:16.017575 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:16.017581 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:24:16.017607 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/436488938
2021-08-13 20:24:16.017652 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:24:16.017671 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:16.017686 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:24:16.017692 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:24:16.017708 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:24:16.018676 D | ceph-crashcollector-controller: deployment successfully reconciled for node "node4". operation: "updated"
2021-08-13 20:24:16.019375 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2021-08-13 20:24:16.027835 D | ceph-crashcollector-controller: cronJob resource not found. Ignoring since object must be deleted.
2021-08-13 20:24:16.027888 D | ceph-crashcollector-controller: reconciling node: "node1"
2021-08-13 20:24:16.028382 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:16.032665 D | ceph-crashcollector-controller: deployment successfully reconciled for node "node1". operation: "updated"
2021-08-13 20:24:16.033679 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2021-08-13 20:24:16.039419 D | ceph-crashcollector-controller: cronJob resource not found. Ignoring since object must be deleted.
2021-08-13 20:24:16.039564 D | ceph-crashcollector-controller: reconciling node: "node2"
2021-08-13 20:24:16.040125 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:16.045566 D | ceph-crashcollector-controller: deployment successfully reconciled for node "node2". operation: "updated"
2021-08-13 20:24:16.046794 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2021-08-13 20:24:16.052428 D | ceph-crashcollector-controller: cronJob resource not found. Ignoring since object must be deleted.
2021-08-13 20:24:16.052485 D | ceph-crashcollector-controller: reconciling node: "nldw1-6-26-1"
2021-08-13 20:24:16.053639 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:16.059297 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2021-08-13 20:24:16.075062 D | ceph-crashcollector-controller: cronJob resource not found. Ignoring since object must be deleted.
2021-08-13 20:24:16.075273 D | ceph-crashcollector-controller: reconciling node: "node4"
2021-08-13 20:24:16.075380 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:24:16.075957 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:16.080591 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:24:16.082278 D | ceph-crashcollector-controller: deployment successfully reconciled for node "node4". operation: "updated"
2021-08-13 20:24:16.083689 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2021-08-13 20:24:16.090314 D | ceph-crashcollector-controller: cronJob resource not found. Ignoring since object must be deleted.
2021-08-13 20:24:16.126677 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:16.126795 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:16.126957 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:16.127003 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:16.127098 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:16.127108 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:16.137693 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2021-08-13 20:24:16.327692 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:16.527308 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:16.683276 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:16.691969 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:16.726942 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="true" (configmap)
2021-08-13 20:24:16.926085 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:16.926147 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000902420 t:0xc0009023e0 u:0xc0009024c0], assignment=&{Schedule:map[p:0xc000a2ef80 t:0xc000a2efc0 u:0xc000a2f000]}
2021-08-13 20:24:16.926249 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/714237249
2021-08-13 20:24:17.053638 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:17.053726 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:17.126734 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:17.126798 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000902760 t:0xc000902720 u:0xc0009027a0], assignment=&{Schedule:map[p:0xc000a2f040 t:0xc000a2f080 u:0xc000a2f0c0]}
2021-08-13 20:24:17.137760 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2021-08-13 20:24:17.137803 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2021-08-13 20:24:17.137818 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2021-08-13 20:24:17.137854 D | ceph-cluster-controller: checking health of cluster
2021-08-13 20:24:17.137866 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:24:17.244983 I | ceph-cluster-controller: skipping ceph status since operator is still initializing
2021-08-13 20:24:17.328055 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="true" (configmap)
2021-08-13 20:24:17.528003 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2021-08-13 20:24:17.528039 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph.ceph.rook.io/bucket"
I0813 20:24:17.529986 7 manager.go:118] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="rook-ceph.ceph.rook.io/bucket"
2021-08-13 20:24:17.727227 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (configmap)
2021-08-13 20:24:17.932359 D | ceph-cluster-controller: cluster spec successfully validated
2021-08-13 20:24:17.942157 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2021-08-13 20:24:17.957794 I | ceph-cluster-controller: detecting the ceph image version for image ceph/ceph:v15.2.13-20210526...
2021-08-13 20:24:17.959657 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:17.959671 D | ceph-cluster-controller: update event on CephCluster CR
2021-08-13 20:24:17.959742 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:17.959797 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:17.959872 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:18.127390 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="true" (configmap)
2021-08-13 20:24:18.170472 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:24:18.527285 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2021-08-13 20:24:18.702658 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:18.714597 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:18.726157 I | op-k8sutil: Retrying 20 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:18.926614 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="quay.io/cephcsi/cephcsi:v3.3.1" (default)
2021-08-13 20:24:19.127902 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0" (default)
2021-08-13 20:24:19.326133 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2" (default)
2021-08-13 20:24:19.527058 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="k8s.gcr.io/sig-storage/csi-attacher:v3.2.1" (default)
2021-08-13 20:24:19.726494 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1" (default)
2021-08-13 20:24:19.926985 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/lib/kubelet" (default)
2021-08-13 20:24:20.127532 I | op-k8sutil: CSI_VOLUME_REPLICATION_IMAGE="quay.io/csiaddons/volumereplication-operator:v0.1.0" (default)
2021-08-13 20:24:20.327079 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2021-08-13 20:24:20.526442 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2021-08-13 20:24:20.526478 I | ceph-csi: detecting the ceph csi image version for image "quay.io/cephcsi/cephcsi:v3.3.1"
2021-08-13 20:24:20.717641 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:20.726127 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="- effect: NoSchedule\n key: offsite\n operator: Exists\n" (configmap)
2021-08-13 20:24:20.742435 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:21.127135 I | op-k8sutil: Retrying 19 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:21.327271 I | op-k8sutil: Retrying 20 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:22.737109 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:22.754585 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:23.132186 I | op-k8sutil: Retrying 18 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:23.334324 I | op-k8sutil: Retrying 19 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:24.749652 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:24.767103 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:25.137348 I | op-k8sutil: Retrying 17 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:25.300950 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:24:25.339024 I | op-k8sutil: Retrying 18 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:25.687475 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:24:26.081240 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:26.081286 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:26.087236 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:26.090830 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:26.090919 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00172d4c0 t:0xc00172d480 u:0xc00172d500], assignment=&{Schedule:map[p:0xc00093ea80 t:0xc00093eac0 u:0xc00093eb00]}
2021-08-13 20:24:26.090968 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:26.090983 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:24:26.091104 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:24:26.091141 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:26.091155 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:24:26.091164 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:24:26.091187 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:24:26.114734 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:24:26.120978 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:24:26.127103 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:26.127131 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:26.127201 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:26.127219 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:26.130693 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:26.131518 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:26.134412 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:26.134496 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00130c240 t:0xc00130c200 u:0xc00130c280], assignment=&{Schedule:map[p:0xc000c803c0 t:0xc000c80400 u:0xc000c80440]}
2021-08-13 20:24:26.134632 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/851928492
2021-08-13 20:24:26.135170 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:26.135221 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000c338a0 t:0xc000c33860 u:0xc000c338e0], assignment=&{Schedule:map[p:0xc000a60580 t:0xc000a605c0 u:0xc000a60600]}
2021-08-13 20:24:26.135237 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:26.135256 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:24:26.135361 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/028366619
2021-08-13 20:24:26.241993 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:26.242117 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:26.243008 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:26.243083 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:26.767872 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:26.777113 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:27.143524 I | op-k8sutil: Retrying 16 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:27.346629 I | op-k8sutil: Retrying 17 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:28.780936 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:28.788557 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:29.150599 I | op-k8sutil: Retrying 15 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:29.350885 I | op-k8sutil: Retrying 16 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:30.807732 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:30.808851 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:31.156429 I | op-k8sutil: Retrying 14 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:31.357441 I | op-k8sutil: Retrying 15 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:32.828282 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:32.830541 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:33.161901 I | op-k8sutil: Retrying 13 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:33.205282 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:24:33.363533 I | op-k8sutil: Retrying 14 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:34.844507 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:34.850551 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:35.167768 I | op-k8sutil: Retrying 12 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:35.369354 I | op-k8sutil: Retrying 13 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:36.121715 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:36.121754 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:36.126447 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:36.130327 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:36.130415 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00130c800 t:0xc00130c7c0 u:0xc00130c840], assignment=&{Schedule:map[p:0xc000c80700 t:0xc000c80740 u:0xc000c80780]}
2021-08-13 20:24:36.130470 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:36.130479 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:24:36.130575 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:24:36.130601 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:36.130613 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:24:36.130622 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:24:36.130646 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:24:36.152024 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:24:36.158180 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:24:36.242380 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:36.242420 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:36.243464 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:36.243483 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:36.246387 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:36.246509 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:36.249207 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:36.249300 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000c33fe0 t:0xc000c33fa0 u:0xc0012ea020], assignment=&{Schedule:map[p:0xc000a60c80 t:0xc000a60cc0 u:0xc000a60d00]}
2021-08-13 20:24:36.249457 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/758944190
2021-08-13 20:24:36.249918 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:36.249983 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00172dc00 t:0xc00172dbc0 u:0xc00172dc40], assignment=&{Schedule:map[p:0xc00093f040 t:0xc00093f080 u:0xc00093f0c0]}
2021-08-13 20:24:36.250002 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:36.250025 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:24:36.250115 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/235025413
2021-08-13 20:24:36.355960 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:36.356094 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:36.356138 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:36.356175 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:36.862493 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:36.869664 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:37.174672 I | op-k8sutil: Retrying 11 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:37.374098 I | op-k8sutil: Retrying 12 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:38.882714 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:38.892057 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:39.181728 I | op-k8sutil: Retrying 10 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:39.380073 I | op-k8sutil: Retrying 11 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:40.320469 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:24:40.720786 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:24:40.905623 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:40.909080 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:41.186349 I | op-k8sutil: Retrying 9 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:41.385422 I | op-k8sutil: Retrying 10 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:42.936764 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:42.940122 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:43.192792 I | op-k8sutil: Retrying 8 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:43.391907 I | op-k8sutil: Retrying 9 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:44.953535 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:44.956916 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:45.199814 I | op-k8sutil: Retrying 7 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:45.397830 I | op-k8sutil: Retrying 8 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:46.158721 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:46.158751 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:46.164519 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:46.168260 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:46.168321 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001097360 t:0xc001097320 u:0xc0010973a0], assignment=&{Schedule:map[p:0xc000cd6000 t:0xc000cd6040 u:0xc000cd6080]}
2021-08-13 20:24:46.168356 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:46.168364 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:24:46.168437 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:24:46.168450 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:46.168456 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:24:46.168460 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:24:46.168475 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:24:46.188507 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:24:46.194706 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:24:46.356994 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:46.357046 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:46.357096 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:46.357132 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:46.362155 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:46.362253 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:46.365511 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:46.365581 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0012ea520 t:0xc0012ea4e0 u:0xc0012ea560], assignment=&{Schedule:map[p:0xc000a61200 t:0xc000a61240 u:0xc000a61280]}
2021-08-13 20:24:46.365718 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/801856416
2021-08-13 20:24:46.366095 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:46.366157 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001097980 t:0xc001097940 u:0xc0010979c0], assignment=&{Schedule:map[p:0xc000cd6540 t:0xc000cd6580 u:0xc000cd65c0]}
2021-08-13 20:24:46.366171 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:46.366206 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:24:46.366297 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/732874367
2021-08-13 20:24:46.474875 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:46.474994 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:46.475158 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:46.475231 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:46.972671 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:46.981529 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:47.206785 I | op-k8sutil: Retrying 6 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:47.402994 I | op-k8sutil: Retrying 7 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:48.257228 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:24:48.990587 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:49.001247 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:49.213411 I | op-k8sutil: Retrying 5 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:49.409208 I | op-k8sutil: Retrying 6 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:51.007671 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:51.015263 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:51.220280 I | op-k8sutil: Retrying 4 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:51.415356 I | op-k8sutil: Retrying 5 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:53.029195 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:53.032763 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:53.226043 I | op-k8sutil: Retrying 3 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:53.422389 I | op-k8sutil: Retrying 4 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:55.046011 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:55.098067 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:55.232044 I | op-k8sutil: Retrying 2 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:55.336778 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:24:55.428374 I | op-k8sutil: Retrying 3 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:55.775398 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:24:56.195167 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:56.195206 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:56.201124 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:56.205402 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:56.205486 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0016460e0 t:0xc0016460a0 u:0xc001646120], assignment=&{Schedule:map[p:0xc000cd6a80 t:0xc000cd6ac0 u:0xc000cd6b00]}
2021-08-13 20:24:56.205530 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:56.205547 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:24:56.205650 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:24:56.205687 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:24:56.205700 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:24:56.205709 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:24:56.205735 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:24:56.226963 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:24:56.233412 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:24:56.475997 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:56.476027 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:56.476234 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:24:56.476268 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:24:56.486102 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:56.486330 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:56.488887 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:56.488963 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0012eac00 t:0xc0012eabc0 u:0xc0012eac40], assignment=&{Schedule:map[p:0xc000a61800 t:0xc000a61840 u:0xc000a61880]}
2021-08-13 20:24:56.489069 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/417487826
2021-08-13 20:24:56.489397 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:56.489457 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0014361e0 t:0xc0014361a0 u:0xc001436220], assignment=&{Schedule:map[p:0xc00093f780 t:0xc00093f7c0 u:0xc00093f800]}
2021-08-13 20:24:56.489471 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:24:56.489490 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:24:56.489604 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/358451977
2021-08-13 20:24:56.599690 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:56.599809 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:56.603748 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:24:56.603828 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:24:57.064875 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:57.109594 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:57.237156 I | op-k8sutil: Retrying 1 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:57.433803 I | op-k8sutil: Retrying 2 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:24:59.081535 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:24:59.122184 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:24:59.241779 I | op-k8sutil: Retrying 0 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:59.251599 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>"
2021-08-13 20:24:59.261825 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.261884 D | ceph-cluster-controller: update event on CephCluster CR
2021-08-13 20:24:59.262007 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.262138 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.262248 E | ceph-cluster-controller: failed to reconcile. failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>
2021-08-13 20:24:59.262263 I | op-k8sutil: Reporting Event rook-ceph:rook-ceph Warning:ReconcileFailed:failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>
2021-08-13 20:24:59.262341 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.267994 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2021-08-13 20:24:59.276384 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:24:59.279838 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:24:59.279920 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001885220 t:0xc0018851e0 u:0xc001885260], assignment=&{Schedule:map[p:0xc000cd7680 t:0xc000cd76c0 u:0xc000cd7700]}
2021-08-13 20:24:59.288760 D | ceph-cluster-controller: ceph mon health go routine is already running for cluster "rook-ceph"
2021-08-13 20:24:59.288786 D | ceph-cluster-controller: ceph osd health go routine is already running for cluster "rook-ceph"
2021-08-13 20:24:59.288795 D | ceph-cluster-controller: ceph status health go routine is already running for cluster "rook-ceph"
2021-08-13 20:24:59.288803 D | ceph-cluster-controller: cluster is already being watched by bucket and client provisioner for cluster "rook-ceph"
2021-08-13 20:24:59.295081 D | ceph-cluster-controller: cluster spec successfully validated
2021-08-13 20:24:59.299366 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2021-08-13 20:24:59.310055 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.310078 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.310227 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.310326 D | ceph-cluster-controller: update event on CephCluster CR
2021-08-13 20:24:59.310475 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:24:59.314798 I | ceph-cluster-controller: detecting the ceph image version for image ceph/ceph:v15.2.13-20210526...
2021-08-13 20:24:59.324487 I | op-k8sutil: Retrying 20 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:24:59.439417 I | op-k8sutil: Retrying 1 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:25:01.098253 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:01.132549 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:01.329676 I | op-k8sutil: Retrying 19 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:01.443429 I | op-k8sutil: Retrying 0 more times every 2 seconds for ConfigMap rook-ceph-csi-detect-version to be deleted
2021-08-13 20:25:01.443539 E | ceph-csi: invalid csi version. failed to run CmdReporter rook-ceph-csi-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-csi-detect-version. failed to delete ConfigMap rook-ceph-csi-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>
failed to complete ceph CSI version job
github.com/rook/rook/pkg/operator/ceph/csi.validateCSIVersion
/home/rook/go/src/github.com/rook/rook/pkg/operator/ceph/csi/spec.go:725
github.com/rook/rook/pkg/operator/ceph/csi.ValidateAndConfigureDrivers
/home/rook/go/src/github.com/rook/rook/pkg/operator/ceph/csi/csi.go:49
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
2021-08-13 20:25:02.138081 D | op-mon: checking health of mons
2021-08-13 20:25:02.138141 D | op-mon: Acquiring lock for mon orchestration
2021-08-13 20:25:02.138171 D | op-mon: Acquired lock for mon orchestration
2021-08-13 20:25:02.138179 E | cephclient: clusterInfo is nil
2021-08-13 20:25:02.138197 D | op-mon: Released lock for mon orchestration
2021-08-13 20:25:02.138212 W | op-mon: failed to check mon health. skipping mon health check since cluster details are not initialized
2021-08-13 20:25:03.133418 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:03.154135 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:03.287621 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:03.344879 I | op-k8sutil: Retrying 18 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:05.146468 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:05.164792 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:05.350174 I | op-k8sutil: Retrying 17 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:06.234612 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:06.234649 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:06.238168 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:06.241590 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:06.241683 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00186d160 t:0xc00186d120 u:0xc00186d1a0], assignment=&{Schedule:map[p:0xc000a69140 t:0xc000a69180 u:0xc000a691c0]}
2021-08-13 20:25:06.241729 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:06.241739 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:25:06.241843 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:25:06.241881 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:06.241909 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:25:06.241916 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:25:06.241938 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:25:06.263776 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:25:06.267283 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:25:06.600725 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:06.600763 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:06.604754 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:06.604775 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:06.604930 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:06.608048 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:06.608095 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001885780 t:0xc001885740 u:0xc0018857c0], assignment=&{Schedule:map[p:0xc000cd7bc0 t:0xc000cd7c00 u:0xc000cd7c40]}
2021-08-13 20:25:06.608107 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:25:06.608123 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:25:06.608214 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/729127124
2021-08-13 20:25:06.608627 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:06.611038 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:06.611090 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001437040 t:0xc001437000 u:0xc001437080], assignment=&{Schedule:map[p:0xc000cf8100 t:0xc000cf8140 u:0xc000cf8180]}
2021-08-13 20:25:06.611175 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/956313123
2021-08-13 20:25:06.715489 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:06.715771 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:06.716438 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:06.716560 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:07.159414 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:07.174615 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:07.355126 I | op-k8sutil: Retrying 16 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:09.176404 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:09.196210 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:09.359573 I | op-k8sutil: Retrying 15 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:10.352371 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:25:10.806146 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:11.192689 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:11.207975 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:11.365618 I | op-k8sutil: Retrying 14 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:13.211756 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:13.251130 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:13.369540 I | op-k8sutil: Retrying 13 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:15.225543 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:15.272901 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:15.373865 I | op-k8sutil: Retrying 12 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:16.268133 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:16.268173 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:16.272765 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:16.276646 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:16.276738 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001885ba0 t:0xc001885b60 u:0xc001885be0], assignment=&{Schedule:map[p:0xc000cd7e80 t:0xc000cd7ec0 u:0xc000cd7f00]}
2021-08-13 20:25:16.276791 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:16.276806 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:25:16.276918 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:25:16.276954 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:16.276970 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:25:16.276979 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:25:16.277000 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:25:16.299232 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:25:16.305314 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:25:16.716533 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:16.716573 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:16.716895 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:16.716917 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:16.720171 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:16.721731 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:16.723323 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:16.723410 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0018be280 t:0xc0018be240 u:0xc0018be2c0], assignment=&{Schedule:map[p:0xc000e02480 t:0xc000e024c0 u:0xc000e02500]}
2021-08-13 20:25:16.723433 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:25:16.723480 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:25:16.723664 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/040754982
2021-08-13 20:25:16.723986 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:16.724057 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0014ffb20 t:0xc0014ffae0 u:0xc0014ffb60], assignment=&{Schedule:map[p:0xc000cdc940 t:0xc000cdc980 u:0xc000cdc9c0]}
2021-08-13 20:25:16.724161 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/900411981
2021-08-13 20:25:16.829259 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:16.829405 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:16.832687 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:16.832772 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:17.138305 D | op-osd: checking osd processes status.
2021-08-13 20:25:17.138435 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/363867976
2021-08-13 20:25:17.245693 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:17.248642 D | ceph-cluster-controller: checking health of cluster
2021-08-13 20:25:17.248663 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:25:17.248795 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:17.248873 D | op-osd: failed to check OSD Dump. failed to get osd dump: failed to get osd dump: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:17.248952 D | exec: Running command: ceph osd crush class ls --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/658508295
2021-08-13 20:25:17.282369 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:17.354062 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:17.354160 D | op-osd: failed to check device classes. failed to get osd device classes: failed to get deviceclasses. Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',). : Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:17.354709 I | ceph-cluster-controller: skipping ceph status since operator is still initializing
2021-08-13 20:25:17.379064 I | op-k8sutil: Retrying 11 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:18.316561 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:19.269299 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:19.294100 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:19.384908 I | op-k8sutil: Retrying 10 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:21.287953 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:21.305962 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:21.389589 I | op-k8sutil: Retrying 9 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:23.309339 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:23.316644 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:23.394089 I | op-k8sutil: Retrying 8 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:25.336385 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:25.344422 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:25.364301 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:25:25.398952 I | op-k8sutil: Retrying 7 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:25.840039 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:26.305830 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:26.305869 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:26.321241 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:26.324985 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:26.325074 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0018be680 t:0xc0018be640 u:0xc0018be6c0], assignment=&{Schedule:map[p:0xc000e02780 t:0xc000e027c0 u:0xc000e02800]}
2021-08-13 20:25:26.325117 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:26.325128 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:25:26.325211 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:25:26.325236 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:26.325249 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:25:26.325255 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:25:26.325275 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:25:26.348377 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:25:26.354219 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:25:26.829955 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:26.829994 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:26.833121 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:26.833150 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:26.835550 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:26.837290 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:26.839940 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:26.840025 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0018bed60 t:0xc0018bed20 u:0xc0018beda0], assignment=&{Schedule:map[p:0xc000e02d80 t:0xc000e02e00 u:0xc000e02e40]}
2021-08-13 20:25:26.840045 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:25:26.840071 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:25:26.840255 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/109896122
2021-08-13 20:25:26.840577 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:26.840653 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001a28200 t:0xc001a281c0 u:0xc001a28240], assignment=&{Schedule:map[p:0xc000cdcd80 t:0xc000cdcdc0 u:0xc000cdce00]}
2021-08-13 20:25:26.840831 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/112806865
2021-08-13 20:25:26.950797 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:26.950890 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:26.951108 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:26.951188 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:27.353241 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:27.359674 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:27.403795 I | op-k8sutil: Retrying 6 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:29.371407 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:29.382939 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:29.408843 I | op-k8sutil: Retrying 5 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:31.393124 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:31.397387 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:31.413238 I | op-k8sutil: Retrying 4 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:33.353864 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:33.409925 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:33.418075 I | op-k8sutil: Retrying 3 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:33.420579 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:35.423156 I | op-k8sutil: Retrying 2 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:35.427447 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:35.444201 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:36.354598 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:36.354637 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:36.361933 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:36.366215 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:36.366337 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00186dbe0 t:0xc00186dba0 u:0xc00186dc20], assignment=&{Schedule:map[p:0xc000a69980 t:0xc000a699c0 u:0xc000a69a00]}
2021-08-13 20:25:36.366408 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:36.366419 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:25:36.372440 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:25:36.372527 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:36.372562 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:25:36.372568 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:25:36.372585 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:25:36.395510 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:25:36.403524 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:25:36.951753 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:36.951795 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:36.951804 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:36.951826 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:36.958234 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:36.959119 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:36.961515 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:36.961626 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001b1e2c0 t:0xc001b1e280 u:0xc001b1e300], assignment=&{Schedule:map[p:0xc000cf8100 t:0xc000cf8140 u:0xc000cf8180]}
2021-08-13 20:25:36.961650 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:25:36.961684 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:25:36.961831 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/480621820
2021-08-13 20:25:36.968631 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:36.968722 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0018be2c0 t:0xc0018be280 u:0xc0018be300], assignment=&{Schedule:map[p:0xc000e02240 t:0xc000e02280 u:0xc000e022c0]}
2021-08-13 20:25:36.968830 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/879017515
2021-08-13 20:25:37.072352 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:37.072469 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:37.075555 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:37.075612 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:37.431337 I | op-k8sutil: Retrying 1 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:37.452048 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:37.459639 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:39.440899 I | op-k8sutil: Retrying 0 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:39.445906 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>"
2021-08-13 20:25:39.456976 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.457008 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.457042 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.457208 D | ceph-cluster-controller: update event on CephCluster CR
2021-08-13 20:25:39.457315 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.458262 E | ceph-cluster-controller: failed to reconcile. failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>
2021-08-13 20:25:39.458284 D | op-k8sutil: Not Reporting Event because event is same as the old one:rook-ceph:rook-ceph Warning:ReconcileFailed:failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>
2021-08-13 20:25:39.468622 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2021-08-13 20:25:39.477256 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:39.478812 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:39.486996 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:39.487089 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001b1e980 t:0xc001b1e940 u:0xc001b1e9c0], assignment=&{Schedule:map[p:0xc000cf8580 t:0xc000cf85c0 u:0xc000cf8600]}
2021-08-13 20:25:39.489972 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:39.512233 D | ceph-cluster-controller: ceph mon health go routine is already running for cluster "rook-ceph"
2021-08-13 20:25:39.512268 D | ceph-cluster-controller: ceph osd health go routine is already running for cluster "rook-ceph"
2021-08-13 20:25:39.512278 D | ceph-cluster-controller: ceph status health go routine is already running for cluster "rook-ceph"
2021-08-13 20:25:39.512286 D | ceph-cluster-controller: cluster is already being watched by bucket and client provisioner for cluster "rook-ceph"
2021-08-13 20:25:39.519439 D | ceph-cluster-controller: cluster spec successfully validated
2021-08-13 20:25:39.524903 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2021-08-13 20:25:39.535522 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.535671 I | ceph-cluster-controller: detecting the ceph image version for image ceph/ceph:v15.2.13-20210526...
2021-08-13 20:25:39.535714 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.535790 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.535944 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:25:39.536040 D | ceph-cluster-controller: update event on CephCluster CR
2021-08-13 20:25:39.544485 I | op-k8sutil: Retrying 20 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:40.380748 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:25:40.872282 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:41.497696 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:41.507007 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:41.550206 I | op-k8sutil: Retrying 19 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:43.513553 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:43.520714 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:43.555992 I | op-k8sutil: Retrying 18 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:45.532817 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:45.536445 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:45.564979 I | op-k8sutil: Retrying 17 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:46.404713 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:46.404751 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:46.409640 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:46.413461 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:46.413546 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0018bf280 t:0xc0018bf240 u:0xc0018bf2c0], assignment=&{Schedule:map[p:0xc000e02c80 t:0xc000e02cc0 u:0xc000e02d00]}
2021-08-13 20:25:46.413592 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:46.413600 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:25:46.413877 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:25:46.413903 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:46.413912 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:25:46.413917 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:25:46.413939 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:25:46.436034 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:25:46.444165 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:25:47.073135 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:47.073163 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:47.076268 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:47.076292 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:47.079605 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:47.079803 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:47.083045 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:47.083093 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001a29e60 t:0xc001a29e20 u:0xc001a29ea0], assignment=&{Schedule:map[p:0xc000cdd4c0 t:0xc000cdd500 u:0xc000cdd540]}
2021-08-13 20:25:47.083103 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:25:47.083118 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:47.083159 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00186c4c0 t:0xc00186c480 u:0xc00186c500], assignment=&{Schedule:map[p:0xc000a68640 t:0xc000a68680 u:0xc000a686c0]}
2021-08-13 20:25:47.083248 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/983201678
2021-08-13 20:25:47.083373 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:25:47.083434 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/693075349
2021-08-13 20:25:47.138724 D | op-mon: checking health of mons
2021-08-13 20:25:47.139610 D | op-mon: Acquiring lock for mon orchestration
2021-08-13 20:25:47.140046 D | op-mon: Acquired lock for mon orchestration
2021-08-13 20:25:47.140526 E | cephclient: clusterInfo is nil
2021-08-13 20:25:47.140932 D | op-mon: Released lock for mon orchestration
2021-08-13 20:25:47.141333 W | op-mon: failed to check mon health. skipping mon health check since cluster details are not initialized
2021-08-13 20:25:47.193378 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:47.193507 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:47.198461 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:47.198558 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:47.552602 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:47.556312 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:47.569919 I | op-k8sutil: Retrying 16 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:48.383547 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:49.571054 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:49.575664 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:49.581264 I | op-k8sutil: Retrying 15 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:51.587008 I | op-k8sutil: Retrying 14 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:51.592510 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:51.601276 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:53.592893 I | op-k8sutil: Retrying 13 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:53.608335 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:53.620341 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:55.401674 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:25:55.599139 I | op-k8sutil: Retrying 12 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:55.623879 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:55.632444 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:55.898114 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:25:56.445359 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:56.445397 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:56.449321 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:56.453764 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:56.453856 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00186c6c0 t:0xc00186c680 u:0xc00186c700], assignment=&{Schedule:map[p:0xc000a68740 t:0xc000a68780 u:0xc000a687c0]}
2021-08-13 20:25:56.453906 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:56.453918 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:25:56.454034 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:25:56.454075 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:25:56.454091 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:25:56.454100 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:25:56.454130 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:25:56.479551 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:25:56.487839 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:25:57.193763 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:57.193798 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:57.198966 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:25:57.198989 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:25:57.202225 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:57.202889 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:25:57.205924 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:57.206006 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001b1f140 t:0xc001b1f100 u:0xc001b1f180], assignment=&{Schedule:map[p:0xc000cf8ec0 t:0xc000cf8f00 u:0xc000cf8f40]}
2021-08-13 20:25:57.206032 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:25:57.206056 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:25:57.206173 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:25:57.206198 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/367125488
2021-08-13 20:25:57.206245 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0009aabe0 t:0xc0009aaba0 u:0xc0009aac40], assignment=&{Schedule:map[p:0xc000cdd980 t:0xc000cdd9c0 u:0xc000cdda00]}
2021-08-13 20:25:57.206358 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/993976463
2021-08-13 20:25:57.314222 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:57.314348 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',).
2021-08-13 20:25:57.314384 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:57.314437 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:25:57.606219 I | op-k8sutil: Retrying 11 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:57.636680 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:57.665486 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:25:59.614293 I | op-k8sutil: Retrying 10 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:25:59.658301 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:25:59.672286 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:01.619154 I | op-k8sutil: Retrying 9 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:01.677360 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:01.686677 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:03.408555 D | ceph-cluster-controller: node watcher: node "nldw1-6-26-1" is not tolerable for cluster "rook-ceph", skipping
2021-08-13 20:26:03.415158 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:26:03.625580 I | op-k8sutil: Retrying 8 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:03.693791 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:03.722446 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:05.634568 I | op-k8sutil: Retrying 7 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:05.710384 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:05.739049 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:06.488125 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:06.488164 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:06.495542 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:06.499082 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:06.499169 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0018bfe40 t:0xc0018bfe00 u:0xc0018bfe80], assignment=&{Schedule:map[p:0xc000e03780 t:0xc000e037c0 u:0xc000e03800]}
2021-08-13 20:26:06.499213 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:26:06.499231 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:26:06.499336 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:26:06.499385 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:26:06.499404 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:26:06.499415 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:26:06.499437 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:26:06.526255 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:26:06.533080 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:26:07.314974 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:07.315017 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:07.315252 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:07.315270 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:07.321253 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:07.321715 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:07.324495 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:07.324550 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001b1f660 t:0xc001b1f620 u:0xc001b1f6a0], assignment=&{Schedule:map[p:0xc000cf9280 t:0xc000cf92c0 u:0xc000cf9300]}
2021-08-13 20:26:07.324673 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/808649378
2021-08-13 20:26:07.324835 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:07.324870 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000dd0980 t:0xc000dd0940 u:0xc000dd09e0], assignment=&{Schedule:map[p:0xc000b30e40 t:0xc000b30e80 u:0xc000b30ec0]}
2021-08-13 20:26:07.324880 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:26:07.324896 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:26:07.324955 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/513090969
2021-08-13 20:26:07.431094 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:07.431161 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:07.431218 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:26:07.431258 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:26:07.641037 I | op-k8sutil: Retrying 6 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:07.725553 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:07.768344 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:09.646091 I | op-k8sutil: Retrying 5 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:09.737778 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:09.788734 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:10.435639 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:26:10.941413 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:26:11.655525 I | op-k8sutil: Retrying 4 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:11.756515 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:11.801526 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:13.662202 I | op-k8sutil: Retrying 3 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:13.777367 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:13.813332 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:15.668212 I | op-k8sutil: Retrying 2 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:15.804828 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:15.831891 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:16.533330 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:16.533371 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:16.537265 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:16.540558 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:16.540648 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001b1faa0 t:0xc001b1fa60 u:0xc001b1fae0], assignment=&{Schedule:map[p:0xc000cf97c0 t:0xc000cf9800 u:0xc000cf9840]}
2021-08-13 20:26:16.540695 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:26:16.540708 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:26:16.540815 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:26:16.540851 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:26:16.540873 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:26:16.540882 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:26:16.540906 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:26:16.563709 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:26:16.570509 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:26:17.355244 D | ceph-cluster-controller: checking health of cluster
2021-08-13 20:26:17.355341 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:26:17.355545 D | op-osd: checking osd processes status.
2021-08-13 20:26:17.355666 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/807236644
2021-08-13 20:26:17.432240 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:17.432267 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:17.432443 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:17.432454 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:17.438845 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:17.438972 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:17.442429 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:17.442489 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc000dd1420 t:0xc000dd13a0 u:0xc000dd14a0], assignment=&{Schedule:map[p:0xc000b31cc0 t:0xc000b31d00 u:0xc000b31d40]}
2021-08-13 20:26:17.442502 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:17.442532 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00186d1e0 t:0xc00186d1a0 u:0xc00186d220], assignment=&{Schedule:map[p:0xc000a693c0 t:0xc000a69400 u:0xc000a69440]}
2021-08-13 20:26:17.442543 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:26:17.442558 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:26:17.442580 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/163641139
2021-08-13 20:26:17.442614 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/249853174
2021-08-13 20:26:17.468512 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:17.468614 D | op-osd: failed to check OSD Dump. failed to get osd dump: failed to get osd dump: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:17.468685 D | exec: Running command: ceph osd crush class ls --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/098652125
2021-08-13 20:26:17.485105 I | ceph-cluster-controller: skipping ceph status since operator is still initializing
2021-08-13 20:26:17.557603 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:17.557730 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:26:17.571350 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:17.571483 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:26:17.574989 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:17.575063 D | op-osd: failed to check device classes. failed to get osd device classes: failed to get deviceclasses. Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
. : Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:17.673877 I | op-k8sutil: Retrying 1 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:17.819688 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:17.847061 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:18.460955 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:26:19.679839 I | op-k8sutil: Retrying 0 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:19.683206 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>"
2021-08-13 20:26:19.698595 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.698844 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.699034 D | ceph-cluster-controller: update event on CephCluster CR
2021-08-13 20:26:19.699209 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.699382 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.700894 E | ceph-cluster-controller: failed to reconcile. failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>
2021-08-13 20:26:19.700935 D | op-k8sutil: Not Reporting Event because event is same as the old one:rook-ceph:rook-ceph Warning:ReconcileFailed:failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. gave up waiting after 20 retries every 2ns seconds. <nil>
2021-08-13 20:26:19.722067 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2021-08-13 20:26:19.726039 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:19.729549 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:19.729635 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc00186d820 t:0xc00186d7e0 u:0xc00186d860], assignment=&{Schedule:map[p:0xc000a69900 t:0xc000a69a80 u:0xc000a69ac0]}
2021-08-13 20:26:19.738430 D | ceph-cluster-controller: ceph mon health go routine is already running for cluster "rook-ceph"
2021-08-13 20:26:19.738455 D | ceph-cluster-controller: ceph osd health go routine is already running for cluster "rook-ceph"
2021-08-13 20:26:19.738465 D | ceph-cluster-controller: ceph status health go routine is already running for cluster "rook-ceph"
2021-08-13 20:26:19.738473 D | ceph-cluster-controller: cluster is already being watched by bucket and client provisioner for cluster "rook-ceph"
2021-08-13 20:26:19.745722 D | ceph-cluster-controller: cluster spec successfully validated
2021-08-13 20:26:19.750446 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2021-08-13 20:26:19.760766 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.760922 I | ceph-cluster-controller: detecting the ceph image version for image ceph/ceph:v15.2.13-20210526...
2021-08-13 20:26:19.761010 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.761096 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.761262 D | ceph-spec: update event on CephCluster CR
2021-08-13 20:26:19.761377 D | ceph-cluster-controller: update event on CephCluster CR
2021-08-13 20:26:19.776549 I | op-k8sutil: Retrying 20 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:19.833132 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:19.866177 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:21.788854 I | op-k8sutil: Retrying 19 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:21.850433 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:21.878304 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:23.794066 I | op-k8sutil: Retrying 18 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:23.866286 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:23.891275 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:25.448985 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-controller" cm is handled by another watcher
2021-08-13 20:26:25.799079 I | op-k8sutil: Retrying 17 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:25.879140 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:25.913903 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:25.974290 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:26:26.571281 D | ceph-spec: "ceph-object-store-user-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:26.571321 D | ceph-spec: "ceph-object-store-user-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:26.578013 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:26.581735 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:26.581820 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc001789ae0 t:0xc001789aa0 u:0xc001789b40], assignment=&{Schedule:map[p:0xc000eb1680 t:0xc000eb16c0 u:0xc000eb1700]}
2021-08-13 20:26:26.581865 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:26:26.581878 D | ceph-object-store-user-controller: CephObjectStore exists
2021-08-13 20:26:26.581991 D | ceph-object-store-user-controller: CephObjectStore "nextcloud" is running with 1 pods
2021-08-13 20:26:26.582025 I | ceph-object-store-user-controller: CephObjectStore "objects" found
2021-08-13 20:26:26.582038 D | ceph-object-controller: creating s3 user object "rgw-admin-ops-user" for object store "rook-ceph"
2021-08-13 20:26:26.582046 D | ceph-object-controller: creating s3 user "rgw-admin-ops-user"
2021-08-13 20:26:26.582067 D | exec: Running command: radosgw-admin user create --uid rgw-admin-ops-user --display-name RGW Admin Ops User --caps buckets=*;users=*;usage=read;metadata=read;zone=read --rgw-realm=objects --rgw-zonegroup=objects --rgw-zone=objects --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2021-08-13 20:26:26.606253 D | ceph-object-store-user-controller: ObjectStore resource not ready in namespace "rook-ceph", retrying in "10s". failed to fetch rgw admin ops api user credentials: failed to create object user "rgw-admin-ops-user". error code 1 for object store "objects": skipping reconcile since operator is still initializing
2021-08-13 20:26:26.612313 D | ceph-object-store-user-controller: object store user "rook-ceph/nextcloud" status updated to "ReconcileFailed"
2021-08-13 20:26:27.558122 D | ceph-spec: "ceph-object-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:27.558155 D | ceph-spec: "ceph-object-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:27.562170 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:27.565562 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:27.565632 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0002e90c0 t:0xc0002e8e20 u:0xc0002e9560], assignment=&{Schedule:map[p:0xc000b1c340 t:0xc000b1c380 u:0xc000b1c3c0]}
2021-08-13 20:26:27.565765 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/666176408
2021-08-13 20:26:27.572097 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2021-08-13 20:26:27.572132 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2021-08-13 20:26:27.575791 D | op-mon: found existing monitor secrets for cluster rook-ceph
2021-08-13 20:26:27.578836 I | op-mon: parsing mon endpoints: t=10.152.183.117:6789,p=10.152.183.74:6789,u=10.152.183.156:6789
2021-08-13 20:26:27.578883 D | op-mon: loaded: maxMonID=20, mons=map[p:0xc0004f9880 t:0xc0004f9840 u:0xc0004f98c0], assignment=&{Schedule:map[p:0xc000ae3640 t:0xc000ae3680 u:0xc000ae36c0]}
2021-08-13 20:26:27.578894 D | ceph-spec: ceph version found "15.2.13-0"
2021-08-13 20:26:27.578909 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2021-08-13 20:26:27.578989 D | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/067384343
2021-08-13 20:26:27.673620 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:27.673724 I | ceph-object-controller: skipping reconcile since operator is still initializing
2021-08-13 20:26:27.681838 D | exec: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
.
2021-08-13 20:26:27.681936 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2021-08-13 20:26:27.803744 I | op-k8sutil: Retrying 16 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:27.908579 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:27.924851 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:29.813715 I | op-k8sutil: Retrying 15 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:29.924877 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:29.935375 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:31.820593 I | op-k8sutil: Retrying 14 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:31.942712 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:31.958281 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:32.141430 D | op-mon: checking health of mons
2021-08-13 20:26:32.141471 D | op-mon: Acquiring lock for mon orchestration
2021-08-13 20:26:32.141480 D | op-mon: Acquired lock for mon orchestration
2021-08-13 20:26:32.141487 E | cephclient: clusterInfo is nil
2021-08-13 20:26:32.141511 D | op-mon: Released lock for mon orchestration
2021-08-13 20:26:32.141524 W | op-mon: failed to check mon health. skipping mon health check since cluster details are not initialized
2021-08-13 20:26:33.529800 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "ingress-controller-leader-nginx" cm is handled by another watcher
2021-08-13 20:26:33.827350 I | op-k8sutil: Retrying 13 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:33.962589 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:33.970969 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher
2021-08-13 20:26:35.844383 I | op-k8sutil: Retrying 12 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2021-08-13 20:26:35.977905 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election-core" cm is handled by another watcher
2021-08-13 20:26:35.986918 D | ceph-cluster-controller: hot-plug cm watcher: only reconcile on hot plug cm changes, this "cert-manager-cainjector-leader-election" cm is handled by another watcher