Created July 18, 2023 22:34
Gist: ti-ka/bbff90948cacdf70f62e520184500d36
sa@r0:~$ cat op.log
2023-07-18 22:32:03.463522 I | rookcmd: starting Rook v1.11.6 with arguments '/usr/local/bin/rook ceph operator'
2023-07-18 22:32:03.463598 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO
2023-07-18 22:32:03.463602 I | cephcmd: starting Rook-Ceph operator
2023-07-18 22:32:03.610172 I | cephcmd: base ceph version inside the rook operator image is "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)"
2023-07-18 22:32:03.620652 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2023-07-18 22:32:03.620675 I | operator: watching all namespaces for Ceph CRs
2023-07-18 22:32:03.620734 I | operator: setting up schemes
2023-07-18 22:32:03.624566 I | operator: setting up the controller-runtime manager
2023-07-18 22:32:03.635888 I | op-k8sutil: ROOK_DISABLE_ADMISSION_CONTROLLER="true" (configmap)
2023-07-18 22:32:03.635899 I | operator: delete webhook resources since webhook is disabled
2023-07-18 22:32:03.635903 I | operator: deleting validating webhook rook-ceph-webhook
2023-07-18 22:32:03.639247 I | operator: deleting webhook cert manager Certificate rook-admission-controller-cert
2023-07-18 22:32:03.642999 I | operator: deleting webhook cert manager Issuer "selfsigned-issuer"
2023-07-18 22:32:03.648319 I | operator: deleting validating webhook service "rook-ceph-admission-controller"
2023-07-18 22:32:03.698466 I | ceph-cluster-controller: successfully started
2023-07-18 22:32:03.698603 I | ceph-cluster-controller: enabling hotplug orchestration
2023-07-18 22:32:03.698640 I | ceph-nodedaemon-controller: successfully started
2023-07-18 22:32:03.698686 I | ceph-block-pool-controller: successfully started
2023-07-18 22:32:03.698720 I | ceph-object-store-user-controller: successfully started
2023-07-18 22:32:03.698751 I | ceph-object-realm-controller: successfully started
2023-07-18 22:32:03.698774 I | ceph-object-zonegroup-controller: successfully started
2023-07-18 22:32:03.698796 I | ceph-object-zone-controller: successfully started
2023-07-18 22:32:03.699062 I | ceph-object-controller: successfully started
2023-07-18 22:32:03.699117 I | ceph-file-controller: successfully started
2023-07-18 22:32:03.699167 I | ceph-nfs-controller: successfully started
2023-07-18 22:32:03.699224 I | ceph-rbd-mirror-controller: successfully started
2023-07-18 22:32:03.699274 I | ceph-client-controller: successfully started
2023-07-18 22:32:03.699307 I | ceph-filesystem-mirror-controller: successfully started
2023-07-18 22:32:03.699358 I | operator: rook-ceph-operator-config-controller successfully started
2023-07-18 22:32:03.707440 I | op-k8sutil: ROOK_DISABLE_ADMISSION_CONTROLLER="true" (configmap)
2023-07-18 22:32:03.707499 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2023-07-18 22:32:03.707539 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2023-07-18 22:32:03.707567 I | ceph-bucket-topic: successfully started
2023-07-18 22:32:03.707595 I | ceph-bucket-notification: successfully started
2023-07-18 22:32:03.707620 I | ceph-bucket-notification: successfully started
2023-07-18 22:32:03.707640 I | ceph-fs-subvolumegroup-controller: successfully started
2023-07-18 22:32:03.707659 I | blockpool-rados-namespace-controller: successfully started
2023-07-18 22:32:03.708883 I | operator: starting the controller-runtime manager
2023-07-18 22:32:03.811354 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2023-07-18 22:32:03.811378 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2023-07-18 22:32:03.811393 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2023-07-18 22:32:03.811628 D | clusterdisruption-controller: create event from ceph cluster CR
2023-07-18 22:32:03.814084 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2023-07-18 22:32:03.814101 D | ceph-cluster-controller: create event from a CR
2023-07-18 22:32:03.814159 D | ceph-spec: create event from a CR: "ceph-erasure-default-data"
2023-07-18 22:32:03.814186 D | ceph-spec: create event from a CR: "ceph-erasure-default-md"
2023-07-18 22:32:03.814200 D | ceph-spec: create event from a CR: "ceph-nvme-replica-default"
2023-07-18 22:32:03.814211 D | ceph-spec: create event from a CR: "ceph-ssd-erasure-default-data"
2023-07-18 22:32:03.814222 D | ceph-spec: create event from a CR: "ceph-nvme-erasure-default-data"
2023-07-18 22:32:03.814232 D | ceph-spec: create event from a CR: "ceph-nvme-erasure-default-md"
2023-07-18 22:32:03.814246 D | ceph-spec: create event from a CR: "ceph-replica-default"
2023-07-18 22:32:03.814256 D | ceph-spec: create event from a CR: "ceph-ssd-erasure-default-md"
2023-07-18 22:32:03.814267 D | ceph-spec: create event from a CR: "ceph-ssd-replica-default"
2023-07-18 22:32:03.814303 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.814354 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.814396 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.814507 D | ceph-cluster-controller: node watcher: skipping cluster update. added node "r0.z1.sea.pahadi.net" is unschedulable
2023-07-18 22:32:03.814529 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2023-07-18 22:32:03.814601 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.814768 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2023-07-18 22:32:03.814858 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.814887 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:03.814924 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:03.814962 I | op-k8sutil: ROOK_CEPH_ALLOW_LOOP_DEVICES="false" (configmap)
2023-07-18 22:32:03.814978 I | operator: rook-ceph-operator-config-controller done reconciling
2023-07-18 22:32:03.815006 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2023-07-18 22:32:03.815053 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815138 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815185 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815236 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815284 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815321 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815367 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815406 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.815582 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r4.z1.sea.pahadi.net-cxgk8" is a ceph pod!
2023-07-18 22:32:03.815619 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c13.z1.sea.pahadi.net-g4wgs" is a ceph pod!
2023-07-18 22:32:03.815641 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r5.z1.sea.pahadi.net-5dlfk" is a ceph pod!
2023-07-18 22:32:03.815695 D | ceph-nodedaemon-controller: "rook-ceph-mon-j-6558c6d7b4-hfx87" is a ceph pod!
2023-07-18 22:32:03.815708 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c17.z1.sea.pahadi.net-c75lt" is a ceph pod!
2023-07-18 22:32:03.815764 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2023-07-18 22:32:03.815790 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2023-07-18 22:32:03.815852 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c12.z1.sea.pahadi.net-q79zv" is a ceph pod!
2023-07-18 22:32:03.815904 D | ceph-nodedaemon-controller: "rook-ceph-mgr-b-865cdf8f9c-4dlpx" is a ceph pod!
2023-07-18 22:32:03.815925 D | ceph-nodedaemon-controller: "rook-ceph-osd-56-745cd45cdc-qnwtg" is a ceph pod!
2023-07-18 22:32:03.815937 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c16.z1.sea.pahadi.net-2sgbf" is a ceph pod!
2023-07-18 22:32:03.815959 D | ceph-nodedaemon-controller: "rook-ceph-mgr-a-79dfb6d9cd-2sbr2" is a ceph pod!
2023-07-18 22:32:03.815983 D | ceph-nodedaemon-controller: "rook-ceph-mon-l-85c897c947-6zzrv" is a ceph pod!
2023-07-18 22:32:03.816001 D | ceph-nodedaemon-controller: reconciling node: "c12.z1.sea.pahadi.net"
2023-07-18 22:32:03.816134 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r8.z1.sea.pahadi.net-j9x4d" is a ceph pod!
2023-07-18 22:32:03.816164 D | ceph-nodedaemon-controller: "rook-ceph-osd-54-bb8964d4b-z9n52" is a ceph pod!
2023-07-18 22:32:03.816190 D | ceph-nodedaemon-controller: "rook-ceph-osd-55-7f576cf46d-rhqwm" is a ceph pod!
2023-07-18 22:32:03.816212 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c18.z1.sea.pahadi.net-wch4f" is a ceph pod!
2023-07-18 22:32:03.816236 D | ceph-nodedaemon-controller: "rook-ceph-osd-52-687dc6c79c-t7hvr" is a ceph pod!
2023-07-18 22:32:03.816245 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c10.z1.sea.pahadi.net-vzng8" is a ceph pod!
2023-07-18 22:32:03.816252 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r7.z1.sea.pahadi.net-n9gbm" is a ceph pod!
2023-07-18 22:32:03.816261 D | ceph-nodedaemon-controller: "rook-ceph-mon-k-89d675d6b-qcfjh" is a ceph pod!
2023-07-18 22:32:03.816268 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c11.z1.sea.pahadi.net-n98d8" is a ceph pod!
2023-07-18 22:32:03.816274 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r3.z1.sea.pahadi.net-krjl8" is a ceph pod!
2023-07-18 22:32:03.816284 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c14.z1.sea.pahadi.net-rjkv9" is a ceph pod!
2023-07-18 22:32:03.817087 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:03.817718 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2023-07-18 22:32:03.818916 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:03.819270 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:03.850777 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c12.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:03.850803 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:03.852010 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:03.873024 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:03.873078 D | ceph-nodedaemon-controller: reconciling node: "c17.z1.sea.pahadi.net"
2023-07-18 22:32:03.873907 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:03.874004 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:03.874074 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0007293b0 k:0xc0007293e0 l:0xc000729500], assignment=&{Schedule:map[c:0xc000e635c0 e:0xc000e63600 j:0xc000e63640 k:0xc000e63680 l:0xc000e636c0]}
2023-07-18 22:32:03.874097 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:03.874119 I | ceph-block-pool-controller: creating pool "ceph-erasure-default-data" in namespace "rook-ceph"
2023-07-18 22:32:03.874155 D | exec: Running command: ceph osd erasure-code-profile get default --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:03.887848 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c17.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:03.887874 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:03.888802 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:03.894708 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:03.894754 D | ceph-nodedaemon-controller: reconciling node: "r3.z1.sea.pahadi.net"
2023-07-18 22:32:03.895608 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:03.897199 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:03.897371 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:03.917649 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:03.917700 D | ceph-nodedaemon-controller: reconciling node: "r0.z1.sea.pahadi.net"
2023-07-18 22:32:03.918550 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:03.919784 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:03.919844 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:03.952288 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:03.952331 D | ceph-nodedaemon-controller: reconciling node: "r8.z1.sea.pahadi.net"
2023-07-18 22:32:03.953191 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:03.960596 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2023-07-18 22:32:03.963581 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.963599 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected
2023-07-18 22:32:03.963646 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.963656 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected
2023-07-18 22:32:03.963693 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.963702 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected
2023-07-18 22:32:03.963737 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.963745 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected
2023-07-18 22:32:03.963779 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.963788 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected
2023-07-18 22:32:03.963951 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected
2023-07-18 22:32:03.963998 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964007 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected
2023-07-18 22:32:03.964042 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964051 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected
2023-07-18 22:32:03.964085 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964093 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected
2023-07-18 22:32:03.964130 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964139 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected
2023-07-18 22:32:03.964175 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964184 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected
2023-07-18 22:32:03.964217 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964226 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected
2023-07-18 22:32:03.964260 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964268 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected
2023-07-18 22:32:03.964302 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964311 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected
2023-07-18 22:32:03.964344 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964373 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected
2023-07-18 22:32:03.964409 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964418 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected
2023-07-18 22:32:03.964453 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964462 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected
2023-07-18 22:32:03.964502 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964510 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected
2023-07-18 22:32:03.964548 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964557 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected
2023-07-18 22:32:03.964589 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964597 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected
2023-07-18 22:32:03.964631 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964640 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected
2023-07-18 22:32:03.964676 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964685 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected
2023-07-18 22:32:03.964717 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964726 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected
2023-07-18 22:32:03.964758 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964766 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected
2023-07-18 22:32:03.964801 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964809 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected
2023-07-18 22:32:03.964843 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964851 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected
2023-07-18 22:32:03.964883 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964892 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected
2023-07-18 22:32:03.964925 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964933 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected
2023-07-18 22:32:03.964965 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.964973 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected
2023-07-18 22:32:03.965006 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965014 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected
2023-07-18 22:32:03.965046 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965055 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected
2023-07-18 22:32:03.965087 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965095 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected
2023-07-18 22:32:03.965127 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965135 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected
2023-07-18 22:32:03.965170 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965179 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected
2023-07-18 22:32:03.965212 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965220 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected
2023-07-18 22:32:03.965253 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965261 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected
2023-07-18 22:32:03.965295 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965309 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected
2023-07-18 22:32:03.965341 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965350 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected
2023-07-18 22:32:03.965381 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965390 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected
2023-07-18 22:32:03.965421 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965429 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected
2023-07-18 22:32:03.965460 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965469 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected
2023-07-18 22:32:03.965502 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965511 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected
2023-07-18 22:32:03.965543 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965552 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected
2023-07-18 22:32:03.965584 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965592 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected
2023-07-18 22:32:03.965627 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965636 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected
2023-07-18 22:32:03.965667 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965676 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected
2023-07-18 22:32:03.965709 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965718 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected
2023-07-18 22:32:03.965750 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965758 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected
2023-07-18 22:32:03.965791 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965800 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected
2023-07-18 22:32:03.965832 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965841 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected
2023-07-18 22:32:03.965873 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965881 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected
2023-07-18 22:32:03.965910 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965919 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected
2023-07-18 22:32:03.965951 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:03.965960 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected
2023-07-18 22:32:03.966007 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:03.967650 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "r8.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:03.967673 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:03.968664 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:03.972005 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:03.972075 D | ceph-nodedaemon-controller: reconciling node: "r7.z1.sea.pahadi.net"
2023-07-18 22:32:03.973143 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:04.015340 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "r7.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:04.015367 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:04.016325 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:04.016880 D | ceph-block-pool-controller: pool "rook-ceph/ceph-erasure-default-data" status updated to "Failure"
2023-07-18 22:32:04.016944 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/ceph-erasure-default-data". failed to create pool "ceph-erasure-default-data".: failed to create pool "ceph-erasure-default-data".: failed to create pool "ceph-erasure-default-data": failed to create erasure code profile for pool "ceph-erasure-default-data": failed to look up default erasure code profile: failed to get erasure-code-profile for "default": exit status 1
2023-07-18 22:32:04.017197 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:04.017217 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:04.019219 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:04.019266 D | ceph-nodedaemon-controller: reconciling node: "c11.z1.sea.pahadi.net"
2023-07-18 22:32:04.020149 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:04.021200 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:04.024506 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:04.024548 D | ceph-nodedaemon-controller: reconciling node: "c13.z1.sea.pahadi.net"
2023-07-18 22:32:04.025393 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:04.026378 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:04.028977 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:04.029031 D | ceph-nodedaemon-controller: reconciling node: "c14.z1.sea.pahadi.net"
2023-07-18 22:32:04.029960 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:04.082974 D | clusterdisruption-controller: ceph "rook-ceph" cluster not ready, cannot check status yet.
2023-07-18 22:32:04.083046 D | clusterdisruption-controller: reconciling "rook-ceph/"
2023-07-18 22:32:04.086005 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:04.086023 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected
2023-07-18 22:32:04.086063 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:04.086072 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected
2023-07-18 22:32:04.086110 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:04.086119 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected
2023-07-18 22:32:04.086155 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:04.086164 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected
2023-07-18 22:32:04.086199 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:04.086208 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected
2023-07-18 22:32:04.086241 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:04.086250 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected
2023-07-18 22:32:04.086282 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:04.086291 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected
2023-07-18 22:32:04.086456 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected
2023-07-18 22:32:04.086499 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086508 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086540 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086548 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086581 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086589 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086625 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086641 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086676 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086685 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086718 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086727 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086762 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086771 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086805 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086813 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086846 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086854 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086888 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086897 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086931 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086939 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected | |
2023-07-18 22:32:04.086972 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.086981 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087011 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087019 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087050 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087058 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087092 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087100 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087132 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087140 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087171 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087180 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087209 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087218 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087251 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087260 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087291 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087300 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087333 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087341 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087374 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087383 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087417 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087426 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087457 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087465 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087497 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087505 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087538 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087546 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087580 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087589 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087619 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087627 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087658 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087666 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087698 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087706 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087740 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087749 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087781 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087790 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087822 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087831 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087863 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087872 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087904 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087915 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087946 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087956 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected | |
2023-07-18 22:32:04.087988 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.087997 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088029 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088037 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088073 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088082 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088114 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088123 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088154 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088162 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088195 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088204 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088237 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088246 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088278 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088287 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088317 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:04.088325 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected | |
2023-07-18 22:32:04.088385 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:32:04.124699 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c14.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:32:04.124725 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:32:04.125656 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:32:04.154599 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789 | |
2023-07-18 22:32:04.154683 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0009c0b40 k:0xc0009c0b70 l:0xc0009c0ba0], assignment=&{Schedule:map[c:0xc00086f540 e:0xc00086f580 j:0xc00086f5c0 k:0xc00086f600 l:0xc00086f640]} | |
2023-07-18 22:32:04.198350 D | clusterdisruption-controller: ceph "rook-ceph" cluster not ready, cannot check status yet. | |
2023-07-18 22:32:04.201683 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:32:04.201743 D | ceph-nodedaemon-controller: reconciling node: "c16.z1.sea.pahadi.net" | |
2023-07-18 22:32:04.202734 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:04.220369 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c16.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:32:04.220398 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:32:04.221321 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:32:04.224545 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:32:04.224594 D | ceph-nodedaemon-controller: reconciling node: "c18.z1.sea.pahadi.net" | |
2023-07-18 22:32:04.225529 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:04.271107 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789 | |
2023-07-18 22:32:04.271206 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc001381f20 k:0xc001381f50 l:0xc001381f80], assignment=&{Schedule:map[c:0xc0003f2c40 e:0xc0003f2c80 j:0xc0003f2cc0 k:0xc0003f2d00 l:0xc0003f2d40]} | |
2023-07-18 22:32:04.271222 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap) | |
2023-07-18 22:32:04.271233 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph.ceph.rook.io/bucket" | |
2023-07-18 22:32:04.271721 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c18.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:32:04.271744 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:32:04.271755 I | op-bucket-prov: successfully reconciled bucket provisioner | |
I0718 22:32:04.271851 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="rook-ceph.ceph.rook.io/bucket" | |
2023-07-18 22:32:04.272712 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:32:04.275498 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:32:04.275552 D | ceph-nodedaemon-controller: reconciling node: "r4.z1.sea.pahadi.net" | |
2023-07-18 22:32:04.276696 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:04.277817 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:32:04.280746 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:32:04.280794 D | ceph-nodedaemon-controller: reconciling node: "c10.z1.sea.pahadi.net" | |
2023-07-18 22:32:04.281931 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:04.296488 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c10.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:32:04.296523 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:32:04.297459 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:32:04.307732 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:32:04.307789 D | ceph-nodedaemon-controller: reconciling node: "r5.z1.sea.pahadi.net" | |
2023-07-18 22:32:04.307858 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph" | |
2023-07-18 22:32:04.307909 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s" | |
2023-07-18 22:32:04.307919 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph" | |
2023-07-18 22:32:04.307930 I | ceph-cluster-controller: ceph status check interval is 1m0s | |
2023-07-18 22:32:04.307939 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph" | |
2023-07-18 22:32:04.307992 D | ceph-cluster-controller: checking health of cluster | |
2023-07-18 22:32:04.308016 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring | |
2023-07-18 22:32:04.308070 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s" | |
2023-07-18 22:32:04.309001 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:04.310059 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:32:04.313250 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:32:04.313297 D | ceph-nodedaemon-controller: reconciling node: "c12.z1.sea.pahadi.net" | |
2023-07-18 22:32:04.314540 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:04.329209 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c12.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:32:04.329238 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:32:04.330232 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:32:04.333238 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:32:04.422413 I | ceph-cluster-controller: skipping ceph status since operator is still initializing | |
2023-07-18 22:32:04.475273 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config" | |
2023-07-18 22:32:04.668152 D | ceph-spec: found existing monitor secrets for cluster rook-ceph | |
2023-07-18 22:32:04.934424 D | ceph-cluster-controller: cluster spec successfully validated | |
2023-07-18 22:32:04.934511 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version" | |
2023-07-18 22:32:04.969886 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v17.2.6... | |
2023-07-18 22:32:04.973525 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" | |
2023-07-18 22:32:04.973546 D | ceph-cluster-controller: update event on CephCluster CR | |
2023-07-18 22:32:05.075573 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="true" (configmap) | |
2023-07-18 22:32:05.075587 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="true" (configmap) | |
2023-07-18 22:32:05.075592 I | op-k8sutil: ROOK_CSI_ENABLE_NFS="false" (configmap) | |
2023-07-18 22:32:05.075596 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (configmap) | |
2023-07-18 22:32:05.075600 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="false" (configmap) | |
2023-07-18 22:32:05.075605 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default) | |
2023-07-18 22:32:05.075609 I | op-k8sutil: CSI_ENABLE_READ_AFFINITY="false" (configmap) | |
2023-07-18 22:32:05.075624 I | op-k8sutil: CSI_CRUSH_LOCATION_LABELS="kubernetes.io/hostname,topology.kubernetes.io/region,topology.kubernetes.io/zone,topology.rook.io/chassis,topology.rook.io/rack,topology.rook.io/row,topology.rook.io/pdu,topology.rook.io/pod,topology.rook.io/room,topology.rook.io/datacenter" (default) | |
2023-07-18 22:32:05.075630 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap) | |
2023-07-18 22:32:05.075634 I | op-k8sutil: CSI_GRPC_TIMEOUT_SECONDS="150" (configmap) | |
2023-07-18 22:32:05.075640 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default) | |
2023-07-18 22:32:05.075646 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default) | |
2023-07-18 22:32:05.075653 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default) | |
2023-07-18 22:32:05.075659 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default) | |
2023-07-18 22:32:05.075669 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default) | |
2023-07-18 22:32:05.075676 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default) | |
2023-07-18 22:32:05.075683 I | op-k8sutil: CSIADDONS_PORT="9070" (default) | |
2023-07-18 22:32:05.075689 I | op-k8sutil: CSIADDONS_PORT="9070" (default) | |
2023-07-18 22:32:05.075697 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default) | |
2023-07-18 22:32:05.075704 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default) | |
2023-07-18 22:32:05.075712 I | op-k8sutil: CSI_ENABLE_LIVENESS="false" (configmap) | |
2023-07-18 22:32:05.075720 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="system-node-critical" (configmap) | |
2023-07-18 22:32:05.075729 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="system-cluster-critical" (configmap) | |
2023-07-18 22:32:05.075736 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (default) | |
2023-07-18 22:32:05.075744 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap) | |
2023-07-18 22:32:05.075749 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap) | |
2023-07-18 22:32:05.075753 I | op-k8sutil: CSI_ENABLE_NFS_SNAPSHOTTER="true" (configmap) | |
2023-07-18 22:32:05.075757 I | op-k8sutil: CSI_ENABLE_CSIADDONS="false" (configmap) | |
2023-07-18 22:32:05.075761 I | op-k8sutil: CSI_ENABLE_TOPOLOGY="false" (configmap) | |
2023-07-18 22:32:05.075765 I | op-k8sutil: CSI_ENABLE_ENCRYPTION="false" (configmap) | |
2023-07-18 22:32:05.075769 I | op-k8sutil: CSI_ENABLE_METADATA="false" (default) | |
2023-07-18 22:32:05.075774 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default) | |
2023-07-18 22:32:05.075778 I | op-k8sutil: CSI_NFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default) | |
2023-07-18 22:32:05.075782 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default) | |
2023-07-18 22:32:05.075786 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE="1" (default) | |
2023-07-18 22:32:05.075790 I | op-k8sutil: CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT="false" (configmap) | |
2023-07-18 22:32:05.075796 I | ceph-csi: Kubernetes version is 1.27 | |
2023-07-18 22:32:05.075803 I | op-k8sutil: CSI_LOG_LEVEL="" (default) | |
2023-07-18 22:32:05.075810 I | op-k8sutil: CSI_SIDECAR_LOG_LEVEL="" (default) | |
2023-07-18 22:32:05.273133 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789 | |
2023-07-18 22:32:05.273227 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc00046af00 k:0xc00046af30 l:0xc00046af60], assignment=&{Schedule:map[c:0xc0005bca00 e:0xc0005bca40 j:0xc0005bca80 k:0xc0005bcac0 l:0xc0005bcb00]} | |
2023-07-18 22:32:05.273248 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:05.273272 I | ceph-block-pool-controller: creating pool "ceph-erasure-default-md" in namespace "rook-ceph" | |
2023-07-18 22:32:05.273294 D | exec: Running command: ceph osd crush rule create-replicated ceph-erasure-default-md default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:32:05.394422 D | ceph-block-pool-controller: pool "rook-ceph/ceph-erasure-default-md" status updated to "Failure" | |
2023-07-18 22:32:05.394475 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/ceph-erasure-default-md". failed to create pool "ceph-erasure-default-md".: failed to create pool "ceph-erasure-default-md".: failed to create pool "ceph-erasure-default-md": failed to create replicated crush rule "ceph-erasure-default-md": failed to create crush rule ceph-erasure-default-md: exit status 1 | |
2023-07-18 22:32:05.394616 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph" | |
2023-07-18 22:32:05.394631 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling | |
2023-07-18 22:32:05.468341 D | op-k8sutil: ConfigMap rook-ceph-detect-version is already deleted | |
2023-07-18 22:32:05.798075 I | op-k8sutil: CSI_PROVISIONER_REPLICAS="2" (configmap) | |
2023-07-18 22:32:05.798101 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="quay.io/cephcsi/cephcsi:v3.8.0" (default) | |
2023-07-18 22:32:05.798111 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0" (default) | |
2023-07-18 22:32:05.798120 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="registry.k8s.io/sig-storage/csi-provisioner:v3.4.0" (default) | |
2023-07-18 22:32:05.798128 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="registry.k8s.io/sig-storage/csi-attacher:v4.1.0" (default) | |
2023-07-18 22:32:05.798137 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1" (default) | |
2023-07-18 22:32:05.798146 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="registry.k8s.io/sig-storage/csi-resizer:v1.7.0" (default) | |
2023-07-18 22:32:05.798155 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/snap/microk8s/common/var/lib/kubelet" (configmap) | |
2023-07-18 22:32:05.798164 I | op-k8sutil: ROOK_CSIADDONS_IMAGE="quay.io/csiaddons/k8s-sidecar:v0.5.0" (default) | |
2023-07-18 22:32:05.798170 I | op-k8sutil: CSI_TOPOLOGY_DOMAIN_LABELS="" (default) | |
2023-07-18 22:32:05.798177 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default) | |
2023-07-18 22:32:05.798184 I | op-k8sutil: ROOK_CSI_NFS_POD_LABELS="" (default) | |
2023-07-18 22:32:05.798191 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default) | |
2023-07-18 22:32:05.798198 I | op-k8sutil: CSI_CLUSTER_NAME="" (default) | |
2023-07-18 22:32:05.798205 I | op-k8sutil: ROOK_CSI_IMAGE_PULL_POLICY="IfNotPresent" (default) | |
2023-07-18 22:32:05.798212 I | op-k8sutil: CSI_CEPHFS_KERNEL_MOUNT_OPTIONS="" (default) | |
2023-07-18 22:32:05.798218 I | op-k8sutil: CSI_CEPHFS_ATTACH_REQUIRED="true" (configmap) | |
2023-07-18 22:32:05.798225 I | op-k8sutil: CSI_RBD_ATTACH_REQUIRED="true" (configmap) | |
2023-07-18 22:32:05.798232 I | op-k8sutil: CSI_NFS_ATTACH_REQUIRED="true" (configmap) | |
2023-07-18 22:32:05.798240 I | ceph-csi: detecting the ceph csi image version for image "quay.io/cephcsi/cephcsi:v3.8.0" | |
2023-07-18 22:32:05.798296 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default) | |
2023-07-18 22:32:05.798306 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="" (default) | |
2023-07-18 22:32:05.867844 D | ceph-spec: found existing monitor secrets for cluster rook-ceph | |
2023-07-18 22:32:06.267234 D | op-k8sutil: ConfigMap rook-ceph-csi-detect-version is already deleted | |
2023-07-18 22:32:06.468179 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789 | |
2023-07-18 22:32:06.468258 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc000f13680 k:0xc000f13710 l:0xc000f13740], assignment=&{Schedule:map[c:0xc000aca9c0 e:0xc000acaa40 j:0xc000acaac0 k:0xc000acab00 l:0xc000acabc0]} | |
2023-07-18 22:32:06.468286 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:06.584869 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2023-07-18 22:32:06.584910 D | ceph-block-pool-controller: successfully configured CephBlockPool "rook-ceph/ceph-nvme-replica-default"
2023-07-18 22:32:06.585099 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:06.585117 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:06.869028 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:07.068222 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:07.068308 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0007cf560 k:0xc0007cf590 l:0xc0007cf5c0], assignment=&{Schedule:map[c:0xc0010c3f40 e:0xc0010c3f80 j:0xc0010c3fc0 k:0xc000aa6000 l:0xc000aa6040]}
2023-07-18 22:32:07.068340 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:07.182220 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2023-07-18 22:32:07.182258 D | ceph-block-pool-controller: successfully configured CephBlockPool "rook-ceph/ceph-ssd-erasure-default-data"
2023-07-18 22:32:07.182409 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:07.182423 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:07.269729 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:07.468530 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:07.468619 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc000021920 k:0xc0000219b0 l:0xc0003cc6c0], assignment=&{Schedule:map[c:0xc000aa6f00 e:0xc000aa6f80 j:0xc000aa7000 k:0xc000aa7080 l:0xc000aa7100]}
2023-07-18 22:32:07.468649 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:07.583968 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2023-07-18 22:32:07.584008 D | ceph-block-pool-controller: successfully configured CephBlockPool "rook-ceph/ceph-nvme-erasure-default-data"
2023-07-18 22:32:07.584158 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:07.584172 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:07.667640 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:07.926311 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:07.926406 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0001e0360 k:0xc0001e0510 l:0xc000b97110], assignment=&{Schedule:map[c:0xc0009bc000 e:0xc0009bc040 j:0xc0009bc2c0 k:0xc0009bc380 l:0xc0009bc480]}
2023-07-18 22:32:07.926439 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:08.043055 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2023-07-18 22:32:08.043105 D | ceph-block-pool-controller: successfully configured CephBlockPool "rook-ceph/ceph-nvme-erasure-default-md"
2023-07-18 22:32:08.043281 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:08.043297 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:08.067744 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:08.268035 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:08.268124 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0013c6570 k:0xc0013c65a0 l:0xc0013c65d0], assignment=&{Schedule:map[c:0xc0009bd0c0 e:0xc0009bd200 j:0xc0009bd340 k:0xc0009bd480 l:0xc0009bd540]}
2023-07-18 22:32:08.268154 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:08.383647 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2023-07-18 22:32:08.383681 D | ceph-block-pool-controller: successfully configured CephBlockPool "rook-ceph/ceph-replica-default"
2023-07-18 22:32:08.383827 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:08.383841 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:08.502538 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:08.536554 D | CmdReporter: job rook-ceph-detect-version has returned results
2023-07-18 22:32:08.668588 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:08.668696 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0022d0960 k:0xc0022d0990 l:0xc0022d09c0], assignment=&{Schedule:map[c:0xc000d48440 e:0xc000d48640 j:0xc000d48680 k:0xc000d486c0 l:0xc000d487c0]}
2023-07-18 22:32:08.668748 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:08.783166 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2023-07-18 22:32:08.783223 D | ceph-block-pool-controller: successfully configured CephBlockPool "rook-ceph/ceph-ssd-erasure-default-md"
2023-07-18 22:32:08.783415 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:08.783432 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:09.011622 D | CmdReporter: job rook-ceph-csi-detect-version has returned results
2023-07-18 22:32:09.067761 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:09.274397 I | ceph-spec: detected ceph image version: "17.2.6-0 quincy"
2023-07-18 22:32:09.274423 I | ceph-cluster-controller: validating ceph version from provided image
2023-07-18 22:32:09.441462 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2023-07-18 22:32:09.441484 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2023-07-18 22:32:09.441513 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2023-07-18 22:32:09.441526 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2023-07-18 22:32:09.667671 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:09.667725 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0007cd2c0 k:0xc0007cd2f0 l:0xc0007cd320], assignment=&{Schedule:map[c:0xc0010a6440 e:0xc0010a64c0 j:0xc0010a6500 k:0xc0010a65c0 l:0xc0010a6640]}
2023-07-18 22:32:09.667747 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:09.781032 I | ceph-block-pool-controller: skipping reconcile since operator is still initializing
2023-07-18 22:32:09.781064 D | ceph-block-pool-controller: successfully configured CephBlockPool "rook-ceph/ceph-ssd-replica-default"
2023-07-18 22:32:09.781229 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:09.781247 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:09.867858 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:10.077514 I | ceph-csi: Detected ceph CSI image version: "v3.8.0"
2023-07-18 22:32:10.084411 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2023-07-18 22:32:10.084424 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="" (default)
2023-07-18 22:32:10.084429 I | op-k8sutil: CSI_PLUGIN_TOLERATIONS="" (default)
2023-07-18 22:32:10.084433 I | op-k8sutil: CSI_PLUGIN_NODE_AFFINITY="" (default)
2023-07-18 22:32:10.084437 I | op-k8sutil: CSI_RBD_PLUGIN_TOLERATIONS="" (default)
2023-07-18 22:32:10.084441 I | op-k8sutil: CSI_RBD_PLUGIN_NODE_AFFINITY="" (default)
2023-07-18 22:32:10.084445 I | op-k8sutil: CSI_RBD_PLUGIN_RESOURCE="" (default)
2023-07-18 22:32:10.084449 I | op-k8sutil: CSI_RBD_PLUGIN_VOLUME="" (default)
2023-07-18 22:32:10.084454 I | op-k8sutil: CSI_RBD_PLUGIN_VOLUME_MOUNT="" (default)
2023-07-18 22:32:10.153042 D | ceph-spec: object "rook-ceph-csi-detect-version" did not match on delete
2023-07-18 22:32:10.153066 D | ceph-spec: object "rook-ceph-csi-detect-version" did not match on delete
2023-07-18 22:32:10.153085 D | ceph-spec: object "rook-ceph-csi-detect-version" did not match on delete
2023-07-18 22:32:10.153107 D | ceph-spec: object "rook-ceph-csi-detect-version" did not match on delete
2023-07-18 22:32:10.165480 I | op-k8sutil: CSI_RBD_PROVISIONER_TOLERATIONS="" (default)
2023-07-18 22:32:10.165495 I | op-k8sutil: CSI_RBD_PROVISIONER_NODE_AFFINITY="" (default)
2023-07-18 22:32:10.165503 I | op-k8sutil: CSI_RBD_PROVISIONER_RESOURCE="" (default)
2023-07-18 22:32:10.222054 I | ceph-csi: successfully started CSI Ceph RBD driver
2023-07-18 22:32:10.222077 I | op-k8sutil: CSI_CEPHFS_PLUGIN_TOLERATIONS="" (default)
2023-07-18 22:32:10.222085 I | op-k8sutil: CSI_CEPHFS_PLUGIN_NODE_AFFINITY="" (default)
2023-07-18 22:32:10.222093 I | op-k8sutil: CSI_CEPHFS_PLUGIN_RESOURCE="" (default)
2023-07-18 22:32:10.222100 I | op-k8sutil: CSI_CEPHFS_PLUGIN_VOLUME="" (default)
2023-07-18 22:32:10.222106 I | op-k8sutil: CSI_CEPHFS_PLUGIN_VOLUME_MOUNT="" (default)
2023-07-18 22:32:10.264160 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_TOLERATIONS="" (default)
2023-07-18 22:32:10.264180 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_NODE_AFFINITY="" (default)
2023-07-18 22:32:10.264203 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_RESOURCE="" (default)
2023-07-18 22:32:10.268051 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:10.304912 I | ceph-csi: successfully started CSI CephFS driver
2023-07-18 22:32:10.305121 I | op-k8sutil: CSI_RBD_FSGROUPPOLICY="File" (configmap)
2023-07-18 22:32:10.313933 I | ceph-csi: CSIDriver object updated for driver "rook-ceph.rbd.csi.ceph.com"
2023-07-18 22:32:10.313954 I | op-k8sutil: CSI_CEPHFS_FSGROUPPOLICY="File" (configmap)
2023-07-18 22:32:10.321472 I | ceph-csi: CSIDriver object updated for driver "rook-ceph.cephfs.csi.ceph.com"
2023-07-18 22:32:10.321493 I | ceph-csi: CSI NFS driver disabled
2023-07-18 22:32:10.321502 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2023-07-18 22:32:10.324637 D | op-k8sutil: removing csi-nfsplugin-provisioner deployment if it exists
2023-07-18 22:32:10.324654 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2023-07-18 22:32:10.469778 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:10.469857 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0012dc5a0 k:0xc0012dc5d0 l:0xc0012dc600], assignment=&{Schedule:map[c:0xc0010d0380 e:0xc0010d03c0 j:0xc0010d0400 k:0xc0010d04c0 l:0xc0010d0500]}
2023-07-18 22:32:10.668559 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:10.668646 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0012dccc0 k:0xc0012dccf0 l:0xc0012dcd20], assignment=&{Schedule:map[c:0xc0010d0700 e:0xc0010d0740 j:0xc0010d0780 k:0xc0010d07c0 l:0xc0010d0800]}
2023-07-18 22:32:10.668664 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:10.668714 I | ceph-block-pool-controller: creating pool "ceph-erasure-default-data" in namespace "rook-ceph"
2023-07-18 22:32:10.668737 D | exec: Running command: ceph osd erasure-code-profile get default --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:10.823166 D | ceph-block-pool-controller: pool "rook-ceph/ceph-erasure-default-data" status updated to "Failure"
2023-07-18 22:32:10.823246 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/ceph-erasure-default-data". failed to create pool "ceph-erasure-default-data".: failed to create pool "ceph-erasure-default-data".: failed to create pool "ceph-erasure-default-data": failed to create erasure code profile for pool "ceph-erasure-default-data": failed to look up default erasure code profile: failed to get erasure-code-profile for "default": exit status 1
2023-07-18 22:32:10.823472 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:10.823491 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:10.871492 D | ceph-csi: rook-ceph.nfs.csi.ceph.com CSIDriver not found; skipping deletion.
2023-07-18 22:32:10.871512 I | ceph-csi: successfully removed CSI NFS driver
2023-07-18 22:32:11.067956 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2023-07-18 22:32:11.068062 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2023-07-18 22:32:11.068079 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:11.315312 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:11.482873 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:11.482956 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0027be480 k:0xc0027be4b0 l:0xc0027be4e0], assignment=&{Schedule:map[c:0xc000312d40 e:0xc000312d80 j:0xc000312dc0 k:0xc000312e00 l:0xc000312e40]}
2023-07-18 22:32:11.482975 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:11.483014 I | ceph-block-pool-controller: creating pool "ceph-erasure-default-md" in namespace "rook-ceph"
2023-07-18 22:32:11.483039 D | exec: Running command: ceph osd crush rule create-replicated ceph-erasure-default-md default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:11.553059 D | cephclient: {"mon":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mgr":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":2},"osd":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mds":{},"overall":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":8}}
2023-07-18 22:32:11.553081 D | cephclient: {"mon":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mgr":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":2},"osd":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mds":{},"overall":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":8}}
2023-07-18 22:32:11.553138 D | ceph-cluster-controller: both cluster and image spec versions are identical, doing nothing 17.2.6-0 quincy
2023-07-18 22:32:11.553149 I | ceph-cluster-controller: cluster "rook-ceph": version "17.2.6-0 quincy" detected for image "quay.io/ceph/ceph:v17.2.6"
2023-07-18 22:32:11.599978 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Configuring the Ceph cluster"
2023-07-18 22:32:11.628885 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2023-07-18 22:32:11.628908 D | ceph-cluster-controller: update event on CephCluster CR
2023-07-18 22:32:11.667217 D | ceph-cluster-controller: cluster helm chart is not configured, not adding helm annotations to configmap
2023-07-18 22:32:11.667227 D | ceph-cluster-controller: monitors are about to reconcile, executing pre actions
2023-07-18 22:32:11.667256 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Configuring Ceph Mons"
2023-07-18 22:32:11.696665 D | op-mon: Acquiring lock for mon orchestration
2023-07-18 22:32:11.696674 D | op-mon: Acquired lock for mon orchestration
2023-07-18 22:32:11.696678 I | op-mon: start running mons
2023-07-18 22:32:11.696681 D | op-mon: establishing ceph cluster info
2023-07-18 22:32:11.700202 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2023-07-18 22:32:11.700219 D | ceph-cluster-controller: update event on CephCluster CR
2023-07-18 22:32:11.868682 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:11.887752 D | exec: Running command: ceph osd pool get ceph-erasure-default-md all --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:12.114413 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:12.114502 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc002a83020 k:0xc002a83050 l:0xc002a83080], assignment=&{Schedule:map[c:0xc000d56580 e:0xc000d565c0 j:0xc000d56600 k:0xc000d56640 l:0xc000d56680]}
2023-07-18 22:32:12.300897 D | exec: Running command: ceph osd pool application get ceph-erasure-default-md --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:12.702919 I | cephclient: application "rbd" is already set on pool "ceph-erasure-default-md"
2023-07-18 22:32:12.702945 I | cephclient: reconciling replicated pool ceph-erasure-default-md succeeded
2023-07-18 22:32:12.702957 D | cephclient: skipping check for failure domain on pool "ceph-erasure-default-md" as it is not specified
2023-07-18 22:32:12.702965 I | ceph-block-pool-controller: initializing pool "ceph-erasure-default-md" for RBD use
2023-07-18 22:32:12.702993 D | exec: Running command: rbd pool init ceph-erasure-default-md --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2023-07-18 22:32:12.871518 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2023-07-18 22:32:13.099048 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.152.183.247:6789","10.152.183.192:6789","10.152.183.95:6789"],"namespace":""}] data:j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789 mapping:{"node":{"c":{"Name":"r4.z1.sea.pahadi.net","Hostname":"r4.z1.sea.pahadi.net","Address":"192.168.0.104"},"e":{"Name":"r5.z1.sea.pahadi.net","Hostname":"r5.z1.sea.pahadi.net","Address":"192.168.0.105"},"j":{"Name":"r7.z1.sea.pahadi.net","Hostname":"r7.z1.sea.pahadi.net","Address":"192.168.0.107"},"k":{"Name":"c14.z1.sea.pahadi.net","Hostname":"c14.z1.sea.pahadi.net","Address":"10.10.14.1"},"l":{"Name":"c10.z1.sea.pahadi.net","Hostname":"c10.z1.sea.pahadi.net","Address":"10.10.10.1"}}} maxMonId:11 outOfQuorum:]
2023-07-18 22:32:13.268165 D | op-config: updating config secret "rook-ceph-config"
2023-07-18 22:32:13.668778 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2023-07-18 22:32:13.668950 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2023-07-18 22:32:13.668972 D | ceph-csi: using "rook-ceph" for csi configmap namespace
2023-07-18 22:32:14.083929 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2023-07-18 22:32:14.087168 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087187 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected
2023-07-18 22:32:14.087230 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087239 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected
2023-07-18 22:32:14.087273 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087282 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected
2023-07-18 22:32:14.087315 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087323 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected
2023-07-18 22:32:14.087355 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087363 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected
2023-07-18 22:32:14.087395 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087403 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected
2023-07-18 22:32:14.087443 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087452 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected
2023-07-18 22:32:14.087488 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087496 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected
2023-07-18 22:32:14.087529 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087537 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected
2023-07-18 22:32:14.087567 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087575 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected
2023-07-18 22:32:14.087606 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087614 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected
2023-07-18 22:32:14.087647 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087655 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected
2023-07-18 22:32:14.087688 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087697 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected
2023-07-18 22:32:14.087731 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087739 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected
2023-07-18 22:32:14.087772 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087781 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected
2023-07-18 22:32:14.087813 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087822 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected
2023-07-18 22:32:14.087857 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087866 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected
2023-07-18 22:32:14.087903 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087911 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected
2023-07-18 22:32:14.087944 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087952 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected
2023-07-18 22:32:14.087982 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.087990 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected
2023-07-18 22:32:14.088023 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088031 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected
2023-07-18 22:32:14.088067 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088075 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected
2023-07-18 22:32:14.088109 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088118 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected
2023-07-18 22:32:14.088149 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088158 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected
2023-07-18 22:32:14.088191 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088199 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected
2023-07-18 22:32:14.088231 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088239 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected
2023-07-18 22:32:14.088271 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088280 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected
2023-07-18 22:32:14.088311 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088320 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected
2023-07-18 22:32:14.088367 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088380 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected
2023-07-18 22:32:14.088417 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088426 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected
2023-07-18 22:32:14.088461 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088469 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected
2023-07-18 22:32:14.088651 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected
2023-07-18 22:32:14.088695 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088704 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected
2023-07-18 22:32:14.088737 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088746 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected
2023-07-18 22:32:14.088779 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088787 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected
2023-07-18 22:32:14.088819 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088827 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected
2023-07-18 22:32:14.088861 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088869 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected
2023-07-18 22:32:14.088901 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088909 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected
2023-07-18 22:32:14.088941 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088949 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected
2023-07-18 22:32:14.088982 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.088990 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected
2023-07-18 22:32:14.089021 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089029 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected
2023-07-18 22:32:14.089064 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089072 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected
2023-07-18 22:32:14.089105 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089113 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected
2023-07-18 22:32:14.089144 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089152 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected
2023-07-18 22:32:14.089185 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089193 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected
2023-07-18 22:32:14.089225 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089233 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected
2023-07-18 22:32:14.089264 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089272 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected
2023-07-18 22:32:14.089302 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089310 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected
2023-07-18 22:32:14.089345 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089353 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected
2023-07-18 22:32:14.089384 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089392 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected
2023-07-18 22:32:14.089423 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089432 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected
2023-07-18 22:32:14.089464 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089472 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected
2023-07-18 22:32:14.089504 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:14.089512 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected
2023-07-18 22:32:14.089558 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:14.268389 D | op-cfg-keyring: updating secret for rook-ceph-mons-keyring
2023-07-18 22:32:14.579023 I | clusterdisruption-controller: osd is down in failure domain "c10-z1-sea-pahadi-net" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}]"
2023-07-18 22:32:14.579064 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:14.667927 D | op-cfg-keyring: updating secret for rook-ceph-admin-keyring | |
2023-07-18 22:32:14.984208 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.985110 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.985963 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.986832 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.987759 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.988579 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.989377 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.990218 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.991069 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.991917 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.992709 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.993496 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.993536 D | clusterdisruption-controller: deleting default pdb with maxUnavailable=1 for all osd | |
2023-07-18 22:32:14.994322 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:14.998633 D | clusterdisruption-controller: reconciling "rook-ceph/"
2023-07-18 22:32:15.001687 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.001709 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected
2023-07-18 22:32:15.001779 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.001793 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected
2023-07-18 22:32:15.001850 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.001862 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected
2023-07-18 22:32:15.001916 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.001929 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected
2023-07-18 22:32:15.001987 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002000 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected
2023-07-18 22:32:15.002052 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002064 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected
2023-07-18 22:32:15.002115 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002127 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected
2023-07-18 22:32:15.002180 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002192 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected
2023-07-18 22:32:15.002245 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002257 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected
2023-07-18 22:32:15.002309 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002321 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected
2023-07-18 22:32:15.002371 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002383 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected
2023-07-18 22:32:15.002437 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002449 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected
2023-07-18 22:32:15.002655 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected
2023-07-18 22:32:15.002723 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002735 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected
2023-07-18 22:32:15.002792 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002804 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected
2023-07-18 22:32:15.002860 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002875 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected
2023-07-18 22:32:15.002930 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.002942 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected
2023-07-18 22:32:15.002992 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003004 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected
2023-07-18 22:32:15.003055 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003067 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected
2023-07-18 22:32:15.003119 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003131 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected
2023-07-18 22:32:15.003182 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003193 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected
2023-07-18 22:32:15.003243 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003254 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected
2023-07-18 22:32:15.003304 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003315 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected
2023-07-18 22:32:15.003367 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003379 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected
2023-07-18 22:32:15.003430 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003441 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected
2023-07-18 22:32:15.003490 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003502 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected
2023-07-18 22:32:15.003552 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003563 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected
2023-07-18 22:32:15.003614 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003626 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected
2023-07-18 22:32:15.003676 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003688 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected
2023-07-18 22:32:15.003736 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003748 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected
2023-07-18 22:32:15.003798 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003810 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected
2023-07-18 22:32:15.003860 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003872 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected
2023-07-18 22:32:15.003923 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003935 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected
2023-07-18 22:32:15.003984 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.003996 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected
2023-07-18 22:32:15.004048 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004060 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected
2023-07-18 22:32:15.004111 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004123 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected
2023-07-18 22:32:15.004174 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004188 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected
2023-07-18 22:32:15.004239 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004251 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected
2023-07-18 22:32:15.004300 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004312 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected
2023-07-18 22:32:15.004388 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004404 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected
2023-07-18 22:32:15.004464 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004477 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected
2023-07-18 22:32:15.004528 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004539 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected
2023-07-18 22:32:15.004587 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004600 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected
2023-07-18 22:32:15.004654 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004666 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected
2023-07-18 22:32:15.004716 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004728 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected
2023-07-18 22:32:15.004777 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004788 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected
2023-07-18 22:32:15.004838 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004850 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected
2023-07-18 22:32:15.004899 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004911 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected
2023-07-18 22:32:15.004962 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.004973 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected
2023-07-18 22:32:15.005022 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.005034 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected
2023-07-18 22:32:15.005083 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.005095 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected
2023-07-18 22:32:15.005144 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.005156 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected
2023-07-18 22:32:15.005207 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:15.005219 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected
2023-07-18 22:32:15.005280 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:15.308872 I | op-mon: targeting the mon count 3
2023-07-18 22:32:15.413474 D | op-mon: Host network for mon "k" is false
2023-07-18 22:32:15.413499 D | op-mon: Host network for mon "l" is false
2023-07-18 22:32:15.413518 D | op-mon: Host network for mon "j" is false
2023-07-18 22:32:15.413528 D | op-mon: mon k already scheduled
2023-07-18 22:32:15.413533 D | op-mon: mon l already scheduled
2023-07-18 22:32:15.413538 D | op-mon: mon j already scheduled
2023-07-18 22:32:15.413542 D | op-mon: mons have been scheduled
2023-07-18 22:32:15.526295 I | clusterdisruption-controller: osd is down in failure domain "c10-z1-sea-pahadi-net" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}]"
2023-07-18 22:32:15.526346 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:15.532682 I | op-config: applying ceph settings:
[global]
mon allow pool delete = true
mon cluster log file =
mon allow pool size one = true
2023-07-18 22:32:15.532709 D | exec: Running command: ceph config assimilate-conf -i /tmp/3827392888 -o /tmp/3827392888.out --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:15.928180 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.929151 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.930116 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.931005 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.931918 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.932907 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.933844 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.934586 I | op-config: successfully applied settings to the mon configuration database
2023-07-18 22:32:15.934868 I | op-config: applying ceph settings:
[global]
log to file = true
2023-07-18 22:32:15.934898 D | exec: Running command: ceph config assimilate-conf -i /tmp/2133716589 -o /tmp/2133716589.out --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:15.934915 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.935773 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.936624 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.937428 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.938240 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:15.938267 D | clusterdisruption-controller: deleting default pdb with maxUnavailable=1 for all osd
2023-07-18 22:32:15.939001 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:16.326304 I | op-config: successfully applied settings to the mon configuration database
2023-07-18 22:32:16.326372 I | op-config: deleting "log file" option from the mon configuration database
2023-07-18 22:32:16.326421 D | exec: Running command: ceph config rm global log_file --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:16.720539 I | op-config: successfully deleted "log file" option from the mon configuration database
2023-07-18 22:32:16.720572 I | op-mon: checking for basic quorum with existing mons
2023-07-18 22:32:16.725259 D | op-k8sutil: creating service rook-ceph-mon-k
2023-07-18 22:32:16.762610 D | op-k8sutil: updating service rook-ceph-mon-k
2023-07-18 22:32:16.771956 I | op-mon: mon "k" cluster IP is 10.152.183.192
2023-07-18 22:32:16.775871 D | op-k8sutil: creating service rook-ceph-mon-l
2023-07-18 22:32:16.837089 D | op-k8sutil: updating service rook-ceph-mon-l
2023-07-18 22:32:16.870622 I | op-mon: mon "l" cluster IP is 10.152.183.95
2023-07-18 22:32:17.068699 D | op-k8sutil: creating service rook-ceph-mon-j
2023-07-18 22:32:17.318071 D | op-k8sutil: updating service rook-ceph-mon-j
2023-07-18 22:32:17.669468 I | op-mon: mon "j" cluster IP is 10.152.183.247
2023-07-18 22:32:18.071533 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2023-07-18 22:32:18.280252 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.152.183.95:6789","10.152.183.247:6789","10.152.183.192:6789"],"namespace":""}] data:j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789 mapping:{"node":{"c":{"Name":"r4.z1.sea.pahadi.net","Hostname":"r4.z1.sea.pahadi.net","Address":"192.168.0.104"},"e":{"Name":"r5.z1.sea.pahadi.net","Hostname":"r5.z1.sea.pahadi.net","Address":"192.168.0.105"},"j":{"Name":"r7.z1.sea.pahadi.net","Hostname":"r7.z1.sea.pahadi.net","Address":"192.168.0.107"},"k":{"Name":"c14.z1.sea.pahadi.net","Hostname":"c14.z1.sea.pahadi.net","Address":"10.10.14.1"},"l":{"Name":"c10.z1.sea.pahadi.net","Hostname":"c10.z1.sea.pahadi.net","Address":"10.10.10.1"}}} maxMonId:11 outOfQuorum:]
2023-07-18 22:32:18.280953 D | ceph-spec: object "rook-ceph-mon-endpoints" matched on update
2023-07-18 22:32:18.280971 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:18.479017 D | op-config: updating config secret "rook-ceph-config"
2023-07-18 22:32:18.680280 D | ceph-spec: object "rook-ceph-config" matched on update
2023-07-18 22:32:18.680303 D | ceph-spec: do not reconcile on "rook-ceph-config" secret changes
2023-07-18 22:32:18.869208 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2023-07-18 22:32:18.869423 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2023-07-18 22:32:18.869448 D | ceph-csi: using "rook-ceph" for csi configmap namespace
2023-07-18 22:32:19.416343 D | op-mon: monConfig: &{ResourceName:rook-ceph-mon-k DaemonName:k PublicIP:10.152.183.192 Port:6789 Zone: NodeName:c14.z1.sea.pahadi.net DataPathMap:0xc0022d1800 UseHostNetwork:false}
2023-07-18 22:32:19.416496 D | ceph-spec: setting periodicity to "daily". Supported periodicity are hourly, daily, weekly and monthly
2023-07-18 22:32:19.473391 D | op-mon: adding host path volume source to mon deployment rook-ceph-mon-k
2023-07-18 22:32:19.473414 I | op-mon: deployment for mon rook-ceph-mon-k already exists. updating if needed
2023-07-18 22:32:19.488714 I | op-k8sutil: deployment "rook-ceph-mon-k" did not change, nothing to update
2023-07-18 22:32:19.488740 I | op-mon: waiting for mon quorum with [k l j]
2023-07-18 22:32:19.960375 I | op-mon: mons running: [k l j]
2023-07-18 22:32:19.960407 D | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:20.451188 I | op-mon: Monitors in quorum: [j k l]
2023-07-18 22:32:20.451256 D | op-mon: monConfig: &{ResourceName:rook-ceph-mon-l DaemonName:l PublicIP:10.152.183.95 Port:6789 Zone: NodeName:c10.z1.sea.pahadi.net DataPathMap:0xc0022d1830 UseHostNetwork:false}
2023-07-18 22:32:20.451355 D | ceph-spec: setting periodicity to "daily". Supported periodicity are hourly, daily, weekly and monthly
2023-07-18 22:32:20.516866 D | op-mon: adding host path volume source to mon deployment rook-ceph-mon-l
2023-07-18 22:32:20.516891 I | op-mon: deployment for mon rook-ceph-mon-l already exists. updating if needed
2023-07-18 22:32:20.532368 I | op-k8sutil: deployment "rook-ceph-mon-l" did not change, nothing to update
2023-07-18 22:32:20.532399 I | op-mon: waiting for mon quorum with [k l j]
2023-07-18 22:32:20.697338 I | op-mon: mons running: [k l j]
2023-07-18 22:32:20.697372 D | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:21.179341 I | op-mon: Monitors in quorum: [j k l]
2023-07-18 22:32:21.179420 D | op-mon: monConfig: &{ResourceName:rook-ceph-mon-j DaemonName:j PublicIP:10.152.183.247 Port:6789 Zone: NodeName:r7.z1.sea.pahadi.net DataPathMap:0xc0022d1860 UseHostNetwork:false}
2023-07-18 22:32:21.179538 D | ceph-spec: setting periodicity to "daily". Supported periodicity are hourly, daily, weekly and monthly
2023-07-18 22:32:21.188520 D | op-mon: adding host path volume source to mon deployment rook-ceph-mon-j
2023-07-18 22:32:21.188537 I | op-mon: deployment for mon rook-ceph-mon-j already exists. updating if needed
2023-07-18 22:32:21.201487 I | op-k8sutil: deployment "rook-ceph-mon-j" did not change, nothing to update
2023-07-18 22:32:21.201508 I | op-mon: waiting for mon quorum with [k l j]
2023-07-18 22:32:21.404623 I | op-mon: mons running: [k l j]
2023-07-18 22:32:21.404656 D | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:21.887801 I | op-mon: Monitors in quorum: [j k l]
2023-07-18 22:32:21.887822 I | op-mon: mons created: 3
2023-07-18 22:32:21.887839 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:22.370159 D | cephclient: {"mon":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mgr":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":2},"osd":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mds":{},"overall":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":8}}
2023-07-18 22:32:22.370183 D | cephclient: {"mon":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mgr":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":2},"osd":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mds":{},"overall":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":8}}
2023-07-18 22:32:22.370237 I | op-mon: waiting for mon quorum with [k l j]
2023-07-18 22:32:22.540755 I | op-mon: mons running: [k l j]
2023-07-18 22:32:22.540793 D | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:23.031557 I | op-mon: Monitors in quorum: [j k l]
2023-07-18 22:32:23.031602 D | op-mon: mon endpoints used are: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:23.032567 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:23.032812 D | op-mon: skipping check for orphaned mon pvcs since using the host path
2023-07-18 22:32:23.032826 D | op-mon: Released lock for mon orchestration
2023-07-18 22:32:23.032834 D | ceph-cluster-controller: monitors are up and running, executing post actions
2023-07-18 22:32:23.032846 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2023-07-18 22:32:23.032868 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:23.523996 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2023-07-18 22:32:23.524032 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:24.024103 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2023-07-18 22:32:24.024154 D | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-provisioner mon allow r mgr allow rw osd allow rw tag cephfs metadata=* --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:24.524512 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2023-07-18 22:32:24.524561 D | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-node mon allow r mgr allow rw osd allow rw tag cephfs *=* mds allow rw --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:25.053651 D | op-cfg-keyring: updating secret for rook-csi-rbd-provisioner
2023-07-18 22:32:25.088767 D | op-cfg-keyring: updating secret for rook-csi-rbd-node
2023-07-18 22:32:25.095806 D | op-cfg-keyring: updating secret for rook-csi-cephfs-provisioner
2023-07-18 22:32:25.102770 D | op-cfg-keyring: updating secret for rook-csi-cephfs-node
2023-07-18 22:32:25.107013 I | ceph-csi: created kubernetes csi secrets for cluster "rook-ceph"
2023-07-18 22:32:25.107030 I | cephclient: getting or creating ceph auth key "client.crash"
2023-07-18 22:32:25.107044 D | exec: Running command: ceph auth get-or-create-key client.crash mon allow profile crash mgr allow rw --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:25.595337 D | op-cfg-keyring: updating secret for rook-ceph-crash-collector-keyring
2023-07-18 22:32:25.599410 I | ceph-nodedaemon-controller: created kubernetes crash collector secret for cluster "rook-ceph"
2023-07-18 22:32:25.599427 I | op-config: deleting "ms_cluster_mode" option from the mon configuration database
2023-07-18 22:32:25.599441 D | exec: Running command: ceph config rm global ms_cluster_mode --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:25.986852 I | op-config: successfully deleted "ms_cluster_mode" option from the mon configuration database
2023-07-18 22:32:25.986872 I | op-config: deleting "ms_service_mode" option from the mon configuration database
2023-07-18 22:32:25.986889 D | exec: Running command: ceph config rm global ms_service_mode --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:26.375542 I | op-config: successfully deleted "ms_service_mode" option from the mon configuration database
2023-07-18 22:32:26.375564 I | op-config: deleting "ms_client_mode" option from the mon configuration database
2023-07-18 22:32:26.375590 D | exec: Running command: ceph config rm global ms_client_mode --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:26.761536 I | op-config: successfully deleted "ms_client_mode" option from the mon configuration database
2023-07-18 22:32:26.761553 I | op-config: deleting "rbd_default_map_options" option from the mon configuration database
2023-07-18 22:32:26.761568 D | exec: Running command: ceph config rm global rbd_default_map_options --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:27.146183 I | op-config: successfully deleted "rbd_default_map_options" option from the mon configuration database | |
2023-07-18 22:32:27.146223 I | op-config: deleting "ms_osd_compress_mode" option from the mon configuration database | |
2023-07-18 22:32:27.146249 D | exec: Running command: ceph config rm global ms_osd_compress_mode --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:32:27.537665 I | op-config: successfully deleted "ms_osd_compress_mode" option from the mon configuration database | |
2023-07-18 22:32:27.537695 I | cephclient: create rbd-mirror bootstrap peer token "client.rbd-mirror-peer" | |
2023-07-18 22:32:27.537704 I | cephclient: getting or creating ceph auth key "client.rbd-mirror-peer" | |
2023-07-18 22:32:27.537726 D | exec: Running command: ceph auth get-or-create-key client.rbd-mirror-peer mon profile rbd-mirror-peer osd profile rbd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:32:27.704139 I | exec: exec timeout waiting for process rbd to return. Sending interrupt signal to the process
2023-07-18 22:32:27.714625 D | ceph-block-pool-controller: pool "rook-ceph/ceph-erasure-default-md" status updated to "Failure"
2023-07-18 22:32:27.714668 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/ceph-erasure-default-md". failed to create pool "ceph-erasure-default-md".: failed to create pool "ceph-erasure-default-md".: failed to initialize pool "ceph-erasure-default-md" for RBD use. : signal: interrupt
2023-07-18 22:32:27.714799 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:32:27.714809 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:32:27.717700 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:32:27.721357 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:32:27.721414 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0028fec60 k:0xc0028fec90 l:0xc0028fecc0], assignment=&{Schedule:map[c:0xc000d569c0 e:0xc000d56a00 j:0xc000d56a40 k:0xc000d56a80 l:0xc000d56ac0]}
2023-07-18 22:32:27.721425 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:27.721443 I | ceph-block-pool-controller: creating pool "ceph-erasure-default-data" in namespace "rook-ceph"
2023-07-18 22:32:27.721458 D | exec: Running command: ceph osd erasure-code-profile get default --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:28.020994 I | cephclient: successfully created rbd-mirror bootstrap peer token for cluster "rook-ceph"
2023-07-18 22:32:28.021146 D | ceph-spec: store cluster-rbd-mirror bootstrap token in a Kubernetes Secret "cluster-peer-token-rook-ceph" in namespace "rook-ceph"
2023-07-18 22:32:28.021156 D | op-k8sutil: creating secret cluster-peer-token-rook-ceph
2023-07-18 22:32:28.069520 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Configuring Ceph Mgr(s)"
2023-07-18 22:32:28.124949 D | exec: Running command: ceph osd erasure-code-profile set ceph-erasure-default-data_ecprofile --force k=5 m=2 plugin=jerasure technique=reed_sol_van --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:28.143601 I | op-mgr: start running mgr
2023-07-18 22:32:28.143636 I | cephclient: getting or creating ceph auth key "mgr.a"
2023-07-18 22:32:28.143656 D | exec: Running command: ceph auth get-or-create-key mgr.a mon allow profile mgr mds allow * osd allow * --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:28.146141 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2023-07-18 22:32:28.146161 D | ceph-cluster-controller: update event on CephCluster CR
2023-07-18 22:32:28.541767 D | exec: Running command: ceph osd pool create ceph-erasure-default-data 0 erasure ceph-erasure-default-data_ecprofile --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:28.651333 D | op-mgr: legacy mgr key "rook-ceph-mgr-a" is already removed
2023-07-18 22:32:28.654402 D | op-cfg-keyring: updating secret for rook-ceph-mgr-a-keyring
2023-07-18 22:32:28.658538 D | op-mgr: mgrConfig: &{ResourceName:rook-ceph-mgr-a DaemonID:a DataPathMap:0xc00291a660}
2023-07-18 22:32:28.658647 D | ceph-spec: setting periodicity to "daily". Supported periodicity are hourly, daily, weekly and monthly
2023-07-18 22:32:28.721720 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2023-07-18 22:32:28.755078 I | op-k8sutil: deployment "rook-ceph-mgr-a" did not change, nothing to update
2023-07-18 22:32:28.755106 I | cephclient: getting or creating ceph auth key "mgr.b"
2023-07-18 22:32:28.755123 D | exec: Running command: ceph auth get-or-create-key mgr.b mon allow profile mgr mds allow * osd allow * --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:28.934682 I | cephclient: setting pool property "allow_ec_overwrites" to "true" on pool "ceph-erasure-default-data"
2023-07-18 22:32:28.934716 D | exec: Running command: ceph osd pool set ceph-erasure-default-data allow_ec_overwrites true --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:29.253542 D | op-mgr: legacy mgr key "rook-ceph-mgr-b" is already removed
2023-07-18 22:32:29.257005 D | op-cfg-keyring: updating secret for rook-ceph-mgr-b-keyring
2023-07-18 22:32:29.261341 D | op-mgr: mgrConfig: &{ResourceName:rook-ceph-mgr-b DaemonID:b DataPathMap:0xc002569c20}
2023-07-18 22:32:29.261489 D | ceph-spec: setting periodicity to "daily". Supported periodicity are hourly, daily, weekly and monthly
2023-07-18 22:32:29.278970 I | op-mgr: deployment for mgr rook-ceph-mgr-b already exists. updating if needed
2023-07-18 22:32:29.294833 I | op-k8sutil: deployment "rook-ceph-mgr-b" did not change, nothing to update
2023-07-18 22:32:29.295828 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:29.296061 D | op-k8sutil: creating service rook-ceph-mgr-dashboard
2023-07-18 22:32:29.364743 D | op-k8sutil: updating service rook-ceph-mgr-dashboard
2023-07-18 22:32:29.427720 D | op-k8sutil: creating service rook-ceph-mgr
2023-07-18 22:32:29.428281 D | ceph-spec: object "rook-ceph-mgr-dashboard" matched on update
2023-07-18 22:32:29.430945 D | ceph-spec: object "rook-ceph-mgr-dashboard" diff is
Patch: {"metadata":{"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:rook_cluster":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af212b67-e789-4da2-a983-40594c8a934b\"}":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":8443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"rook","operation":"Update","time":"2023-07-18T22:32:29Z"}]},"spec":{"$setElementOrder/ports":[{"port":8443},{"port":7000}],"ports":[{"name":"https-dashboard","port":8443,"protocol":"TCP","targetPort":8443}]}}
Current: {"metadata":{"creationTimestamp":"2023-07-18T05:41:12Z","labels":{"app":"rook-ceph-mgr","rook_cluster":"rook-ceph"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:rook_cluster":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af212b67-e789-4da2-a983-40594c8a934b\"}":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":7000,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"rook","operation":"Update","time":"2023-07-18T05:41:38Z"}],"name":"rook-ceph-mgr-dashboard","namespace":"rook-ceph","ownerReferences":[{"apiVersion":"ceph.rook.io/v1","blockOwnerDeletion":true,"controller":true,"kind":"CephCluster","name":"rook-ceph","uid":"af212b67-e789-4da2-a983-40594c8a934b"}],"resourceVersion":"51828269","uid":"14cddb7d-eeb9-44bc-8468-cc14094c731f"},"spec":{"clusterIP":"10.152.183.58","clusterIPs":["10.152.183.58"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-dashboard","port":7000,"protocol":"TCP","targetPort":7000}],"selector":{"app":"rook-ceph-mgr","mgr_role":"active","rook_cluster":"rook-ceph"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}}
Modified: {"metadata":{"creationTimestamp":"2023-07-18T05:41:12Z","labels":{"app":"rook-ceph-mgr","rook_cluster":"rook-ceph"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:rook_cluster":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af212b67-e789-4da2-a983-40594c8a934b\"}":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":8443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"rook","operation":"Update","time":"2023-07-18T22:32:29Z"}],"name":"rook-ceph-mgr-dashboard","namespace":"rook-ceph","ownerReferences":[{"apiVersion":"ceph.rook.io/v1","blockOwnerDeletion":true,"controller":true,"kind":"CephCluster","name":"rook-ceph","uid":"af212b67-e789-4da2-a983-40594c8a934b"}],"resourceVersion":"51828269","uid":"14cddb7d-eeb9-44bc-8468-cc14094c731f"},"spec":{"clusterIP":"10.152.183.58","clusterIPs":["10.152.183.58"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"https-dashboard","port":8443,"protocol":"TCP","targetPort":8443}],"selector":{"app":"rook-ceph-mgr","mgr_role":"active","rook_cluster":"rook-ceph"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}}
Original:
2023-07-18 22:32:29.431018 D | ceph-spec: patch before trimming is {"metadata":{"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:rook_cluster":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af212b67-e789-4da2-a983-40594c8a934b\"}":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":8443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"rook","operation":"Update","time":"2023-07-18T22:32:29Z"}]},"spec":{"$setElementOrder/ports":[{"port":8443},{"port":7000}],"ports":[{"name":"https-dashboard","port":8443,"protocol":"TCP","targetPort":8443}]}}
2023-07-18 22:32:29.431026 D | ceph-spec: trimming 'status' field from patch
2023-07-18 22:32:29.431032 D | ceph-spec: trimming 'metadata' field from patch
2023-07-18 22:32:29.431055 I | ceph-spec: controller will reconcile resource "rook-ceph-mgr-dashboard" based on patch: {"spec":{"$setElementOrder/ports":[{"port":8443},{"port":7000}],"ports":[{"name":"https-dashboard","port":8443,"protocol":"TCP","targetPort":8443}]}}
2023-07-18 22:32:29.495983 D | op-k8sutil: updating service rook-ceph-mgr
2023-07-18 22:32:29.509715 D | cephclient: balancer module is already 'on' on pacific, doing nothingbalancer
2023-07-18 22:32:29.509742 I | op-mgr: successful modules: balancer
2023-07-18 22:32:29.509748 D | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:29.509757 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Configuring Ceph OSDs"
2023-07-18 22:32:29.509816 D | exec: Running command: ceph mgr module enable pg_autoscaler --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:29.509871 D | exec: Running command: ceph mgr module enable prometheus --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:29.556317 I | op-osd: start running osds in namespace "rook-ceph"
2023-07-18 22:32:29.556346 I | op-osd: wait timeout for healthy OSDs during upgrade or restart is "10m0s"
2023-07-18 22:32:29.558854 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2023-07-18 22:32:29.558873 D | ceph-cluster-controller: update event on CephCluster CR
2023-07-18 22:32:29.918815 D | op-osd: 56 of 56 OSD Deployments need updated
2023-07-18 22:32:29.918844 I | op-osd: start provisioning the OSDs on PVCs, if needed
2023-07-18 22:32:29.934810 I | op-osd: no storageClassDeviceSets defined to configure OSDs on PVCs
2023-07-18 22:32:29.934829 I | op-osd: start provisioning the OSDs on nodes, if needed
2023-07-18 22:32:30.015343 D | op-osd: storage nodes: [{Name:r3.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c17.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c10.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:r0.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c16.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c13.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:r4.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c11.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c18.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:r5.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c14.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:c12.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:r7.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}} {Name:r8.z1.sea.pahadi.net Resources:{Limits:map[] Requests:map[] Claims:[]} Config:map[] Selection:{UseAllDevices:<nil> DeviceFilter: DevicePathFilter: Devices:[] VolumeClaimTemplates:[]}}]
2023-07-18 22:32:30.070369 I | op-k8sutil: skipping creation of OSDs on nodes [r0.z1.sea.pahadi.net]: node is unschedulable
2023-07-18 22:32:30.070458 I | op-osd: 13 of the 14 storage nodes are valid
2023-07-18 22:32:30.286192 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-r5.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:30.312969 D | exec: Running command: ceph osd pool application get ceph-erasure-default-data --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:30.332647 I | op-k8sutil: batch job rook-ceph-osd-prepare-r5.z1.sea.pahadi.net still exists
2023-07-18 22:32:30.523618 I | op-mgr: successful modules: prometheus
2023-07-18 22:32:30.524073 I | op-config: setting "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2023-07-18 22:32:30.524117 D | exec: Running command: ceph config set global mon_pg_warn_min_per_osd 0 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:30.730255 I | cephclient: application "rbd" is already set on pool "ceph-erasure-default-data"
2023-07-18 22:32:30.730276 I | cephclient: creating EC pool ceph-erasure-default-data succeeded
2023-07-18 22:32:30.730285 I | ceph-block-pool-controller: initializing pool "ceph-erasure-default-data" for RBD use
2023-07-18 22:32:30.730307 D | exec: Running command: rbd pool init ceph-erasure-default-data --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2023-07-18 22:32:30.940656 I | op-config: successfully set "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2023-07-18 22:32:30.940689 I | op-mgr: successful modules: mgr module(s) from the spec
2023-07-18 22:32:30.969801 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r5.z1.sea.pahadi.net-5dlfk" is a ceph pod!
2023-07-18 22:32:30.969897 D | ceph-nodedaemon-controller: reconciling node: "r5.z1.sea.pahadi.net"
2023-07-18 22:32:30.970940 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:30.972169 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:30.987123 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:33.361283 I | op-k8sutil: batch job rook-ceph-osd-prepare-r5.z1.sea.pahadi.net deleted
W0718 22:32:33.433586 1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:33.434536 I | op-osd: started OSD provisioning job for node "r5.z1.sea.pahadi.net"
2023-07-18 22:32:33.458896 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-r3.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:33.482205 I | op-k8sutil: batch job rook-ceph-osd-prepare-r3.z1.sea.pahadi.net still exists
2023-07-18 22:32:34.014202 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r3.z1.sea.pahadi.net-krjl8" is a ceph pod!
2023-07-18 22:32:34.014283 D | ceph-nodedaemon-controller: reconciling node: "r3.z1.sea.pahadi.net"
2023-07-18 22:32:34.014356 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r5.z1.sea.pahadi.net-98xfv" is a ceph pod!
2023-07-18 22:32:34.015327 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:34.016590 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:34.020339 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:34.020404 D | ceph-nodedaemon-controller: reconciling node: "r5.z1.sea.pahadi.net"
2023-07-18 22:32:34.021277 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:34.022272 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:34.025648 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:35.570933 I | op-mgr: the dashboard secret was already generated
2023-07-18 22:32:35.570977 D | exec: Running command: ceph dashboard create-self-signed-cert --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:36.247570 D | ceph-spec: object "rook-ceph-osd-r5.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:36.247598 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:36.497006 I | op-k8sutil: batch job rook-ceph-osd-prepare-r3.z1.sea.pahadi.net deleted
W0718 22:32:36.519325 1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:36.520274 I | op-osd: started OSD provisioning job for node "r3.z1.sea.pahadi.net"
2023-07-18 22:32:36.541500 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c16.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:36.565409 I | op-k8sutil: batch job rook-ceph-osd-prepare-c16.z1.sea.pahadi.net still exists
2023-07-18 22:32:36.724146 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c16.z1.sea.pahadi.net-2sgbf" is a ceph pod!
2023-07-18 22:32:36.724206 D | ceph-nodedaemon-controller: reconciling node: "c16.z1.sea.pahadi.net"
2023-07-18 22:32:36.724301 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r3.z1.sea.pahadi.net-6gbgx" is a ceph pod!
2023-07-18 22:32:36.725244 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:36.739419 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c16.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:36.739445 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:36.740428 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:36.743600 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:36.743651 D | ceph-nodedaemon-controller: reconciling node: "r3.z1.sea.pahadi.net"
2023-07-18 22:32:36.744611 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:36.745639 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:36.748779 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:38.591162 D | ceph-spec: object "rook-ceph-osd-r5.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:38.591190 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:39.569851 I | op-k8sutil: batch job rook-ceph-osd-prepare-c16.z1.sea.pahadi.net deleted
2023-07-18 22:32:39.572386 D | ceph-spec: object "rook-ceph-osd-r3.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:39.572407 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
W0718 22:32:39.586517 1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:39.587475 I | op-osd: started OSD provisioning job for node "c16.z1.sea.pahadi.net"
2023-07-18 22:32:39.606947 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c13.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:39.630523 I | op-k8sutil: batch job rook-ceph-osd-prepare-c13.z1.sea.pahadi.net still exists
2023-07-18 22:32:40.015428 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c13.z1.sea.pahadi.net-g4wgs" is a ceph pod!
2023-07-18 22:32:40.015490 D | ceph-nodedaemon-controller: reconciling node: "c13.z1.sea.pahadi.net"
2023-07-18 22:32:40.016515 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:40.017601 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:40.020984 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:41.017664 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c16.z1.sea.pahadi.net-lmmkt" is a ceph pod!
2023-07-18 22:32:41.017735 D | ceph-nodedaemon-controller: reconciling node: "c16.z1.sea.pahadi.net"
2023-07-18 22:32:41.018430 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2023-07-18 22:32:41.018781 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:41.033863 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c16.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:41.033889 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:41.034865 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:41.052841 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:42.613705 D | ceph-spec: object "rook-ceph-osd-c16.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:42.613734 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:42.633607 I | op-k8sutil: batch job rook-ceph-osd-prepare-c13.z1.sea.pahadi.net deleted
W0718 22:32:42.655259 1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:42.656212 I | op-osd: started OSD provisioning job for node "c13.z1.sea.pahadi.net"
2023-07-18 22:32:42.673273 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c17.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:42.699521 I | op-k8sutil: batch job rook-ceph-osd-prepare-c17.z1.sea.pahadi.net still exists
2023-07-18 22:32:42.740686 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c13.z1.sea.pahadi.net-zdxfz" is a ceph pod!
2023-07-18 22:32:42.740749 D | ceph-nodedaemon-controller: reconciling node: "c13.z1.sea.pahadi.net"
2023-07-18 22:32:42.741754 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:42.742984 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:42.746060 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:43.013384 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c17.z1.sea.pahadi.net-c75lt" is a ceph pod!
2023-07-18 22:32:43.013430 D | ceph-nodedaemon-controller: reconciling node: "c17.z1.sea.pahadi.net"
2023-07-18 22:32:43.014277 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:43.026234 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c17.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:43.026257 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:43.027259 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:43.030964 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:44.011333 D | ceph-spec: object "rook-ceph-osd-c16.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:44.011358 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:44.473879 D | ceph-spec: object "rook-ceph-osd-c13.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:44.473907 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:44.903552 D | ceph-spec: object "rook-ceph-osd-r3.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:44.903582 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:44.999658 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2023-07-18 22:32:45.002877 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.002898 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected
2023-07-18 22:32:45.002944 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.002953 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected
2023-07-18 22:32:45.002991 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.002999 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected
2023-07-18 22:32:45.003035 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.003044 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected
2023-07-18 22:32:45.003081 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.003091 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected
2023-07-18 22:32:45.003124 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.003133 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected
2023-07-18 22:32:45.003168 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.003177 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected
2023-07-18 22:32:45.003209 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.003218 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected
2023-07-18 22:32:45.003250 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.003258 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected
2023-07-18 22:32:45.003290 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.003299 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected
2023-07-18 22:32:45.003335 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003344 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003379 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003388 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003420 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003429 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003464 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003475 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003510 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003519 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003552 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003561 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003596 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003605 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003638 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003647 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003680 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003689 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003724 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003733 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003766 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003775 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003808 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003816 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003848 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003857 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003890 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003899 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003932 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003941 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected | |
2023-07-18 22:32:45.003974 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.003983 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004016 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004025 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004057 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004068 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004103 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004112 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004146 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004155 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004187 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004196 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004226 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004235 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004269 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004278 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004309 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004318 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004350 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004372 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004406 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004415 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004451 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004460 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004495 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004504 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004535 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004544 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004576 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004585 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004619 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004628 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004659 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004668 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004700 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004708 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004740 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004749 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004783 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004792 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004825 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004834 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004865 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004874 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004905 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004914 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004947 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004957 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected | |
2023-07-18 22:32:45.004990 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.004998 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected | |
2023-07-18 22:32:45.005032 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.005041 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected | |
2023-07-18 22:32:45.005217 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected | |
2023-07-18 22:32:45.005260 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.005269 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected | |
2023-07-18 22:32:45.005322 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:32:45.515029 I | clusterdisruption-controller: osd is down in failure domain "c10-z1-sea-pahadi-net" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}]" | |
2023-07-18 22:32:45.515078 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:32:45.718789 I | op-k8sutil: batch job rook-ceph-osd-prepare-c17.z1.sea.pahadi.net deleted | |
2023-07-18 22:32:45.730853 I | exec: exec timeout waiting for process rbd to return. Sending interrupt signal to the process | |
W0718 22:32:45.736228 1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] | |
2023-07-18 22:32:45.737220 I | op-osd: started OSD provisioning job for node "c17.z1.sea.pahadi.net" | |
2023-07-18 22:32:45.742142 D | ceph-block-pool-controller: pool "rook-ceph/ceph-erasure-default-data" status updated to "Failure" | |
2023-07-18 22:32:45.742192 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/ceph-erasure-default-data". failed to create pool "ceph-erasure-default-data".: failed to create pool "ceph-erasure-default-data".: failed to initialize pool "ceph-erasure-default-data" for RBD use. : signal: interrupt | |
2023-07-18 22:32:45.742374 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph" | |
2023-07-18 22:32:45.742391 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling | |
2023-07-18 22:32:45.747006 D | ceph-spec: found existing monitor secrets for cluster rook-ceph | |
2023-07-18 22:32:45.750396 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789 | |
2023-07-18 22:32:45.750483 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc0022d1830 k:0xc0022d1860 l:0xc0022d1890], assignment=&{Schedule:map[c:0xc000548780 e:0xc0005487c0 j:0xc000548800 k:0xc000548840 l:0xc000548880]} | |
2023-07-18 22:32:45.750512 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:32:45.774258 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c10.z1.sea.pahadi.net to start a new one | |
2023-07-18 22:32:45.810004 I | op-k8sutil: batch job rook-ceph-osd-prepare-c10.z1.sea.pahadi.net still exists | |
2023-07-18 22:32:45.923333 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.924346 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.925258 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.926170 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.927061 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.928056 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.928894 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.929739 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.930569 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.931519 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.932450 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.933318 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.933363 D | clusterdisruption-controller: deleting default pdb with maxUnavailable=1 for all osd | |
2023-07-18 22:32:45.934312 D | op-k8sutil: kubernetes version fetched 1.27.2 | |
2023-07-18 22:32:45.956166 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c17.z1.sea.pahadi.net-hq9nt" is a ceph pod! | |
2023-07-18 22:32:45.956222 D | ceph-nodedaemon-controller: reconciling node: "c17.z1.sea.pahadi.net" | |
2023-07-18 22:32:45.957258 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:32:45.973269 D | clusterdisruption-controller: reconciling "rook-ceph/" | |
2023-07-18 22:32:45.980706 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.980734 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected | |
2023-07-18 22:32:45.980778 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.980789 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected | |
2023-07-18 22:32:45.980827 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.980836 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected | |
2023-07-18 22:32:45.980875 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.980885 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected | |
2023-07-18 22:32:45.980923 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.980932 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected | |
2023-07-18 22:32:45.980966 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.980975 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981017 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981026 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981064 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981073 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981107 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981116 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981147 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981156 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981189 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981198 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981233 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981242 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981275 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981284 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981319 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981328 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981364 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981373 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981407 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981415 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981450 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981458 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981491 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981500 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981536 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981545 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981581 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981590 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981622 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981630 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981664 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981675 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981709 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981717 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981750 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981758 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981791 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981800 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981835 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981844 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981876 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981885 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981917 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981926 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected | |
2023-07-18 22:32:45.981964 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.981974 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982010 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982019 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982052 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982061 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982092 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982101 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982136 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982145 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982178 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982187 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982221 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982230 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982275 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982285 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982318 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982327 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982362 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982371 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982404 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982412 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c17.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:32:45.982473 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:32:45.982531 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982578 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982587 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982622 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982633 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982667 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982677 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982707 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982716 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982749 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982758 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982792 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982801 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982835 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:32:45.982844 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected | |
2023-07-18 22:32:45.982877 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.982886 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected
2023-07-18 22:32:45.982921 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.982930 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected
2023-07-18 22:32:45.983118 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected
2023-07-18 22:32:45.983162 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.983172 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected
2023-07-18 22:32:45.983205 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.983214 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected
2023-07-18 22:32:45.983250 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.983259 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected
2023-07-18 22:32:45.983291 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain
2023-07-18 22:32:45.983300 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected
2023-07-18 22:32:45.983354 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:45.983499 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c10.z1.sea.pahadi.net-vzng8" is a ceph pod!
2023-07-18 22:32:45.983534 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:45.986613 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:45.986676 D | ceph-nodedaemon-controller: reconciling node: "c10.z1.sea.pahadi.net"
2023-07-18 22:32:45.987677 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:46.012001 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c10.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:46.012024 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:46.012893 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:46.017490 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:46.154805 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:46.154852 I | ceph-block-pool-controller: creating pool "ceph-nvme-replica-default" in namespace "rook-ceph"
2023-07-18 22:32:46.154877 D | exec: Running command: ceph osd crush rule create-replicated ceph-nvme-replica-default default osd nvme --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:46.462036 I | clusterdisruption-controller: osd is down in failure domain "c10-z1-sea-pahadi-net" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}]"
2023-07-18 22:32:46.462089 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:46.560702 D | exec: Running command: ceph osd pool get ceph-nvme-replica-default all --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:46.861065 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.862098 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.863087 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.863964 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.864906 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.865788 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.866668 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.867529 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.868390 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.869304 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.870156 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.870967 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.871001 D | clusterdisruption-controller: deleting default pdb with maxUnavailable=1 for all osd
2023-07-18 22:32:46.871842 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:32:46.885076 D | ceph-spec: object "rook-ceph-osd-c13.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:46.885100 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:46.969632 D | exec: Running command: ceph osd pool application get ceph-nvme-replica-default --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:47.382576 I | cephclient: application "rbd" is already set on pool "ceph-nvme-replica-default"
2023-07-18 22:32:47.382606 I | cephclient: reconciling replicated pool ceph-nvme-replica-default succeeded
2023-07-18 22:32:47.382617 D | cephclient: checking that pool "ceph-nvme-replica-default" has the failure domain "osd"
2023-07-18 22:32:47.382642 D | exec: Running command: ceph osd pool get ceph-nvme-replica-default all --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:47.793015 D | exec: Running command: ceph osd crush rule dump ceph-nvme-replica-default --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:48.202640 D | cephclient: pool "ceph-nvme-replica-default" has the expected failure domain "osd"
2023-07-18 22:32:48.202668 I | ceph-block-pool-controller: initializing pool "ceph-nvme-replica-default" for RBD use
2023-07-18 22:32:48.202688 D | exec: Running command: rbd pool init ceph-nvme-replica-default --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2023-07-18 22:32:48.371680 D | ceph-spec: object "rook-ceph-osd-c17.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:48.371718 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:48.813740 I | op-k8sutil: batch job rook-ceph-osd-prepare-c10.z1.sea.pahadi.net deleted
W0718 22:32:48.830086       1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:48.831052 I | op-osd: started OSD provisioning job for node "c10.z1.sea.pahadi.net"
2023-07-18 22:32:48.851544 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-r4.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:48.897748 I | op-k8sutil: batch job rook-ceph-osd-prepare-r4.z1.sea.pahadi.net still exists
2023-07-18 22:32:49.308611 D | op-mon: checking health of mons
2023-07-18 22:32:49.308632 D | op-mon: Acquiring lock for mon orchestration
2023-07-18 22:32:49.308669 D | op-mon: Acquired lock for mon orchestration
2023-07-18 22:32:49.483434 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r4.z1.sea.pahadi.net-cxgk8" is a ceph pod!
2023-07-18 22:32:49.483484 D | ceph-nodedaemon-controller: reconciling node: "r4.z1.sea.pahadi.net"
2023-07-18 22:32:49.483767 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c10.z1.sea.pahadi.net-8bzrf" is a ceph pod!
2023-07-18 22:32:49.484262 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:49.485482 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:49.488857 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:49.488895 D | ceph-nodedaemon-controller: reconciling node: "c10.z1.sea.pahadi.net"
2023-07-18 22:32:49.489557 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:49.492378 D | op-mon: Checking health for mons in cluster "rook-ceph"
2023-07-18 22:32:49.492401 D | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:32:49.516168 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c10.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:49.516198 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:49.517098 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:49.520318 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:49.604431 D | ceph-spec: object "rook-ceph-osd-c17.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:49.604457 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:49.980713 D | op-mon: Mon quorum status: {Quorum:[0 1 2] MonMap:{Mons:[{Name:j Rank:0 Address:10.152.183.247:6789/0 PublicAddr:10.152.183.247:6789/0 PublicAddrs:{Addrvec:[{Type:v2 Addr:10.152.183.247:3300 Nonce:0} {Type:v1 Addr:10.152.183.247:6789 Nonce:0}]}} {Name:k Rank:1 Address:10.152.183.192:6789/0 PublicAddr:10.152.183.192:6789/0 PublicAddrs:{Addrvec:[{Type:v2 Addr:10.152.183.192:3300 Nonce:0} {Type:v1 Addr:10.152.183.192:6789 Nonce:0}]}} {Name:l Rank:2 Address:10.152.183.95:6789/0 PublicAddr:10.152.183.95:6789/0 PublicAddrs:{Addrvec:[{Type:v2 Addr:10.152.183.95:3300 Nonce:0} {Type:v1 Addr:10.152.183.95:6789 Nonce:0}]}}]}}
2023-07-18 22:32:49.980739 D | op-mon: targeting the mon count 3
2023-07-18 22:32:49.980754 D | op-mon: mon "j" found in quorum
2023-07-18 22:32:49.980762 D | op-mon: mon "k" found in quorum
2023-07-18 22:32:49.980770 D | op-mon: mon "l" found in quorum
2023-07-18 22:32:49.980777 D | op-mon: mon cluster is healthy, removing any existing canary deployment
2023-07-18 22:32:50.182075 I | op-mon: checking if multiple mons are on the same node
2023-07-18 22:32:50.238137 D | op-mon: analyzing mon pod "rook-ceph-mon-l-85c897c947-6zzrv" on node "c10.z1.sea.pahadi.net"
2023-07-18 22:32:50.238164 D | op-mon: analyzing mon pod "rook-ceph-mon-k-89d675d6b-qcfjh" on node "c14.z1.sea.pahadi.net"
2023-07-18 22:32:50.238174 D | op-mon: analyzing mon pod "rook-ceph-mon-j-6558c6d7b4-hfx87" on node "r7.z1.sea.pahadi.net"
2023-07-18 22:32:50.238184 D | op-mon: Released lock for mon orchestration
2023-07-18 22:32:50.238195 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s"
2023-07-18 22:32:50.572474 I | exec: exec timeout waiting for process ceph to return. Sending interrupt signal to the process
2023-07-18 22:32:50.589440 E | op-mgr: failed modules: "dashboard". failed to initialize dashboard: failed to create a self signed cert for the ceph dashboard: failed to create self signed cert on mgr: exec timeout waiting for the command ceph to return
2023-07-18 22:32:51.302060 D | ceph-spec: object "rook-ceph-osd-c10.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:51.302085 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:51.901980 I | op-k8sutil: batch job rook-ceph-osd-prepare-r4.z1.sea.pahadi.net deleted
W0718 22:32:51.970474       1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:51.971161 I | op-osd: started OSD provisioning job for node "r4.z1.sea.pahadi.net"
2023-07-18 22:32:52.183501 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c14.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:52.338168 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r4.z1.sea.pahadi.net-gsf7f" is a ceph pod!
2023-07-18 22:32:52.338232 D | ceph-nodedaemon-controller: reconciling node: "r4.z1.sea.pahadi.net"
2023-07-18 22:32:52.339249 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:52.339829 I | op-k8sutil: batch job rook-ceph-osd-prepare-c14.z1.sea.pahadi.net still exists
2023-07-18 22:32:52.340330 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:52.343369 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:52.493470 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c14.z1.sea.pahadi.net-rjkv9" is a ceph pod!
2023-07-18 22:32:52.493526 D | ceph-nodedaemon-controller: reconciling node: "c14.z1.sea.pahadi.net"
2023-07-18 22:32:52.494498 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:52.517910 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c14.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:52.517939 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:52.518862 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:52.553312 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:53.256508 D | ceph-spec: object "rook-ceph-osd-c10.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:53.256527 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:54.468776 D | ceph-spec: object "rook-ceph-osd-r4.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:54.468804 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:55.344666 I | op-k8sutil: batch job rook-ceph-osd-prepare-c14.z1.sea.pahadi.net deleted
W0718 22:32:55.360381       1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:55.360990 I | op-osd: started OSD provisioning job for node "c14.z1.sea.pahadi.net"
2023-07-18 22:32:55.394529 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c11.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:55.415751 I | op-k8sutil: batch job rook-ceph-osd-prepare-c11.z1.sea.pahadi.net still exists
2023-07-18 22:32:56.032781 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c14.z1.sea.pahadi.net-4nspf" is a ceph pod!
2023-07-18 22:32:56.032855 D | ceph-nodedaemon-controller: reconciling node: "c14.z1.sea.pahadi.net"
2023-07-18 22:32:56.033030 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c11.z1.sea.pahadi.net-n98d8" is a ceph pod!
2023-07-18 22:32:56.033921 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:56.049947 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c14.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:56.049975 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:56.050895 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:56.053995 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:56.054047 D | ceph-nodedaemon-controller: reconciling node: "c11.z1.sea.pahadi.net"
2023-07-18 22:32:56.054958 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:56.055974 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:56.058700 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:57.615353 D | ceph-spec: object "rook-ceph-osd-c14.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:57.615382 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:58.420143 I | op-k8sutil: batch job rook-ceph-osd-prepare-c11.z1.sea.pahadi.net deleted
W0718 22:32:58.438192       1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:32:58.439125 I | op-osd: started OSD provisioning job for node "c11.z1.sea.pahadi.net"
2023-07-18 22:32:58.460820 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c12.z1.sea.pahadi.net to start a new one
2023-07-18 22:32:58.486353 I | op-k8sutil: batch job rook-ceph-osd-prepare-c12.z1.sea.pahadi.net still exists
2023-07-18 22:32:59.015400 D | ceph-spec: object "rook-ceph-osd-c14.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:59.015430 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:32:59.016739 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c12.z1.sea.pahadi.net-q79zv" is a ceph pod!
2023-07-18 22:32:59.016814 D | ceph-nodedaemon-controller: reconciling node: "c12.z1.sea.pahadi.net"
2023-07-18 22:32:59.016883 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c11.z1.sea.pahadi.net-g6mzw" is a ceph pod!
2023-07-18 22:32:59.017835 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:59.032037 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c12.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:32:59.032065 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:32:59.032984 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:59.036393 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:59.036462 D | ceph-nodedaemon-controller: reconciling node: "c11.z1.sea.pahadi.net"
2023-07-18 22:32:59.037540 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:32:59.038655 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:32:59.041956 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:32:59.252410 D | ceph-spec: object "rook-ceph-osd-r4.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:32:59.252433 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:33:01.018340 D | ceph-spec: object "rook-ceph-osd-c11.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:33:01.018367 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:33:01.494023 I | op-k8sutil: batch job rook-ceph-osd-prepare-c12.z1.sea.pahadi.net deleted
W0718 22:33:01.549672       1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:33:01.550636 I | op-osd: started OSD provisioning job for node "c12.z1.sea.pahadi.net"
2023-07-18 22:33:01.606846 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c18.z1.sea.pahadi.net to start a new one
2023-07-18 22:33:01.644426 I | op-k8sutil: batch job rook-ceph-osd-prepare-c18.z1.sea.pahadi.net still exists
2023-07-18 22:33:01.743734 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c18.z1.sea.pahadi.net-wch4f" is a ceph pod!
2023-07-18 22:33:01.743808 D | ceph-nodedaemon-controller: reconciling node: "c18.z1.sea.pahadi.net"
2023-07-18 22:33:01.743922 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c12.z1.sea.pahadi.net-v59h4" is a ceph pod!
2023-07-18 22:33:01.744856 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:33:01.826799 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c18.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:33:01.826826 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:33:01.827785 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:33:01.835984 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:33:01.836045 D | ceph-nodedaemon-controller: reconciling node: "c12.z1.sea.pahadi.net"
2023-07-18 22:33:01.836931 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:33:01.851105 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c12.z1.sea.pahadi.net". operation: "updated"
2023-07-18 22:33:01.851134 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy"
2023-07-18 22:33:01.852068 D | ceph-nodedaemon-controller: deleting cronjob if it exists...
2023-07-18 22:33:01.855455 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted.
2023-07-18 22:33:03.046689 D | ceph-spec: object "rook-ceph-osd-c11.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:33:03.046703 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:33:03.203849 I | exec: exec timeout waiting for process rbd to return. Sending interrupt signal to the process
2023-07-18 22:33:03.216214 D | ceph-block-pool-controller: pool "rook-ceph/ceph-nvme-replica-default" status updated to "Failure"
2023-07-18 22:33:03.216274 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/ceph-nvme-replica-default". failed to create pool "ceph-nvme-replica-default".: failed to create pool "ceph-nvme-replica-default".: failed to initialize pool "ceph-nvme-replica-default" for RBD use. : signal: interrupt
2023-07-18 22:33:03.216467 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2023-07-18 22:33:03.216484 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2023-07-18 22:33:03.228466 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2023-07-18 22:33:03.241918 I | ceph-spec: parsing mon endpoints: j=10.152.183.247:6789,k=10.152.183.192:6789,l=10.152.183.95:6789
2023-07-18 22:33:03.241996 D | ceph-spec: loaded: maxMonID=11, mons=map[j:0xc002085080 k:0xc0020850b0 l:0xc0020850e0], assignment=&{Schedule:map[c:0xc0010c3280 e:0xc0010c32c0 j:0xc0010c3340 k:0xc0010c3380 l:0xc0010c33c0]}
2023-07-18 22:33:03.242022 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:03.621289 D | operator: number of goroutines 428
2023-07-18 22:33:03.645845 D | ceph-spec: ceph version found "17.2.6-0"
2023-07-18 22:33:03.645890 I | ceph-block-pool-controller: creating pool "ceph-ssd-erasure-default-data" in namespace "rook-ceph"
2023-07-18 22:33:03.645912 D | exec: Running command: ceph osd erasure-code-profile get default --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:04.054105 D | exec: Running command: ceph osd erasure-code-profile set ceph-ssd-erasure-default-data_ecprofile --force k=5 m=2 plugin=jerasure technique=reed_sol_van crush-failure-domain=osd crush-device-class=ssd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:04.124748 D | ceph-spec: object "rook-ceph-osd-c12.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:33:04.124760 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:33:04.308940 D | op-osd: checking osd processes status.
2023-07-18 22:33:04.309028 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:04.423077 D | ceph-cluster-controller: checking health of cluster
2023-07-18 22:33:04.423126 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2023-07-18 22:33:04.469179 D | exec: Running command: ceph osd pool create ceph-ssd-erasure-default-data 0 erasure ceph-ssd-erasure-default-data_ecprofile --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:04.647592 I | op-k8sutil: batch job rook-ceph-osd-prepare-c18.z1.sea.pahadi.net deleted
2023-07-18 22:33:04.712385 D | op-osd: validating status of osd.0
2023-07-18 22:33:04.712403 D | op-osd: osd.0 is marked 'DOWN'
2023-07-18 22:33:04.712407 D | op-osd: validating status of osd.1
2023-07-18 22:33:04.712411 D | op-osd: osd.1 is marked 'DOWN'
2023-07-18 22:33:04.712414 D | op-osd: validating status of osd.2
2023-07-18 22:33:04.712418 D | op-osd: osd.2 is marked 'DOWN'
2023-07-18 22:33:04.712421 D | op-osd: osd.2 is marked 'OUT'
2023-07-18 22:33:04.712424 D | op-osd: validating status of osd.3
2023-07-18 22:33:04.712428 D | op-osd: osd.3 is marked 'DOWN'
2023-07-18 22:33:04.712431 D | op-osd: validating status of osd.4
2023-07-18 22:33:04.712434 D | op-osd: osd.4 is marked 'DOWN'
2023-07-18 22:33:04.712437 D | op-osd: osd.4 is marked 'OUT'
2023-07-18 22:33:04.712441 D | op-osd: validating status of osd.5
2023-07-18 22:33:04.712444 D | op-osd: osd.5 is marked 'DOWN'
2023-07-18 22:33:04.712447 D | op-osd: osd.5 is marked 'OUT'
2023-07-18 22:33:04.712451 D | op-osd: validating status of osd.6
2023-07-18 22:33:04.712454 D | op-osd: osd.6 is marked 'DOWN'
2023-07-18 22:33:04.712457 D | op-osd: osd.6 is marked 'OUT'
2023-07-18 22:33:04.712460 D | op-osd: validating status of osd.7
2023-07-18 22:33:04.712463 D | op-osd: osd.7 is marked 'DOWN'
2023-07-18 22:33:04.712467 D | op-osd: osd.7 is marked 'OUT'
2023-07-18 22:33:04.712470 D | op-osd: validating status of osd.8
2023-07-18 22:33:04.712474 D | op-osd: osd.8 is marked 'DOWN'
2023-07-18 22:33:04.712478 D | op-osd: osd.8 is marked 'OUT'
2023-07-18 22:33:04.712481 D | op-osd: validating status of osd.9
2023-07-18 22:33:04.712484 D | op-osd: osd.9 is marked 'DOWN'
2023-07-18 22:33:04.712487 D | op-osd: osd.9 is marked 'OUT'
2023-07-18 22:33:04.712491 D | op-osd: validating status of osd.10
2023-07-18 22:33:04.712494 D | op-osd: osd.10 is marked 'DOWN'
2023-07-18 22:33:04.712497 D | op-osd: osd.10 is marked 'OUT'
2023-07-18 22:33:04.712500 D | op-osd: validating status of osd.11
2023-07-18 22:33:04.712504 D | op-osd: osd.11 is marked 'DOWN'
2023-07-18 22:33:04.712507 D | op-osd: osd.11 is marked 'OUT'
2023-07-18 22:33:04.712510 D | op-osd: validating status of osd.12
2023-07-18 22:33:04.712513 D | op-osd: osd.12 is marked 'DOWN'
2023-07-18 22:33:04.712516 D | op-osd: osd.12 is marked 'OUT'
2023-07-18 22:33:04.712519 D | op-osd: validating status of osd.13
2023-07-18 22:33:04.712523 D | op-osd: osd.13 is marked 'DOWN'
2023-07-18 22:33:04.712526 D | op-osd: validating status of osd.14
2023-07-18 22:33:04.712530 D | op-osd: osd.14 is marked 'DOWN'
2023-07-18 22:33:04.712533 D | op-osd: osd.14 is marked 'OUT'
2023-07-18 22:33:04.712536 D | op-osd: validating status of osd.15
2023-07-18 22:33:04.712540 D | op-osd: osd.15 is marked 'DOWN'
2023-07-18 22:33:04.712543 D | op-osd: validating status of osd.16
2023-07-18 22:33:04.712546 D | op-osd: osd.16 is marked 'DOWN'
2023-07-18 22:33:04.712550 D | op-osd: validating status of osd.17
2023-07-18 22:33:04.712553 D | op-osd: osd.17 is marked 'DOWN'
2023-07-18 22:33:04.712556 D | op-osd: validating status of osd.18
2023-07-18 22:33:04.712560 D | op-osd: osd.18 is marked 'DOWN'
2023-07-18 22:33:04.712563 D | op-osd: osd.18 is marked 'OUT'
2023-07-18 22:33:04.712566 D | op-osd: validating status of osd.19
2023-07-18 22:33:04.712570 D | op-osd: osd.19 is marked 'DOWN'
2023-07-18 22:33:04.712573 D | op-osd: osd.19 is marked 'OUT'
2023-07-18 22:33:04.712576 D | op-osd: validating status of osd.20
2023-07-18 22:33:04.712579 D | op-osd: osd.20 is marked 'DOWN'
2023-07-18 22:33:04.712583 D | op-osd: osd.20 is marked 'OUT'
2023-07-18 22:33:04.712585 D | op-osd: validating status of osd.21
2023-07-18 22:33:04.712589 D | op-osd: osd.21 is marked 'DOWN'
2023-07-18 22:33:04.712592 D | op-osd: validating status of osd.22
2023-07-18 22:33:04.712596 D | op-osd: osd.22 is marked 'DOWN'
2023-07-18 22:33:04.712600 D | op-osd: validating status of osd.23
2023-07-18 22:33:04.712604 D | op-osd: osd.23 is marked 'DOWN'
2023-07-18 22:33:04.712607 D | op-osd: validating status of osd.24
2023-07-18 22:33:04.712610 D | op-osd: osd.24 is marked 'DOWN'
2023-07-18 22:33:04.712613 D | op-osd: validating status of osd.25
2023-07-18 22:33:04.712617 D | op-osd: osd.25 is marked 'DOWN'
2023-07-18 22:33:04.712620 D | op-osd: validating status of osd.26
2023-07-18 22:33:04.712624 D | op-osd: osd.26 is marked 'DOWN'
2023-07-18 22:33:04.712627 D | op-osd: osd.26 is marked 'OUT'
2023-07-18 22:33:04.712632 D | op-osd: validating status of osd.27
2023-07-18 22:33:04.712636 D | op-osd: osd.27 is marked 'DOWN'
2023-07-18 22:33:04.712640 D | op-osd: validating status of osd.28
2023-07-18 22:33:04.712643 D | op-osd: osd.28 is marked 'DOWN'
2023-07-18 22:33:04.712646 D | op-osd: validating status of osd.29
2023-07-18 22:33:04.712650 D | op-osd: osd.29 is marked 'DOWN'
2023-07-18 22:33:04.712653 D | op-osd: validating status of osd.30
2023-07-18 22:33:04.712657 D | op-osd: osd.30 is marked 'DOWN'
2023-07-18 22:33:04.712660 D | op-osd: validating status of osd.31
2023-07-18 22:33:04.712664 D | op-osd: osd.31 is marked 'DOWN'
2023-07-18 22:33:04.712669 D | op-osd: validating status of osd.32
2023-07-18 22:33:04.712673 D | op-osd: osd.32 is marked 'DOWN'
2023-07-18 22:33:04.712677 D | op-osd: validating status of osd.33
2023-07-18 22:33:04.712681 D | op-osd: osd.33 is marked 'DOWN'
2023-07-18 22:33:04.712684 D | op-osd: validating status of osd.34
2023-07-18 22:33:04.712688 D | op-osd: osd.34 is marked 'DOWN'
2023-07-18 22:33:04.712691 D | op-osd: validating status of osd.35
2023-07-18 22:33:04.712695 D | op-osd: osd.35 is marked 'DOWN'
2023-07-18 22:33:04.712698 D | op-osd: validating status of osd.36
2023-07-18 22:33:04.712702 D | op-osd: osd.36 is marked 'DOWN'
2023-07-18 22:33:04.712706 D | op-osd: validating status of osd.37
2023-07-18 22:33:04.712711 D | op-osd: osd.37 is marked 'DOWN'
2023-07-18 22:33:04.712714 D | op-osd: validating status of osd.38
2023-07-18 22:33:04.712718 D | op-osd: osd.38 is marked 'DOWN'
2023-07-18 22:33:04.712721 D | op-osd: validating status of osd.39
2023-07-18 22:33:04.712725 D | op-osd: osd.39 is marked 'DOWN'
2023-07-18 22:33:04.712729 D | op-osd: validating status of osd.40
2023-07-18 22:33:04.712733 D | op-osd: osd.40 is marked 'DOWN'
2023-07-18 22:33:04.712736 D | op-osd: validating status of osd.41
2023-07-18 22:33:04.712740 D | op-osd: osd.41 is marked 'DOWN'
2023-07-18 22:33:04.712743 D | op-osd: validating status of osd.42
2023-07-18 22:33:04.712748 D | op-osd: osd.42 is marked 'DOWN'
2023-07-18 22:33:04.712752 D | op-osd: validating status of osd.43
2023-07-18 22:33:04.712756 D | op-osd: osd.43 is marked 'DOWN'
2023-07-18 22:33:04.712759 D | op-osd: validating status of osd.44
2023-07-18 22:33:04.712764 D | op-osd: osd.44 is marked 'DOWN'
2023-07-18 22:33:04.712767 D | op-osd: validating status of osd.45
2023-07-18 22:33:04.712771 D | op-osd: osd.45 is marked 'DOWN'
2023-07-18 22:33:04.712774 D | op-osd: validating status of osd.46
2023-07-18 22:33:04.712779 D | op-osd: osd.46 is marked 'DOWN'
2023-07-18 22:33:04.712782 D | op-osd: validating status of osd.47
2023-07-18 22:33:04.712787 D | op-osd: osd.47 is marked 'DOWN'
2023-07-18 22:33:04.712790 D | op-osd: validating status of osd.48
2023-07-18 22:33:04.712794 D | op-osd: osd.48 is marked 'DOWN'
2023-07-18 22:33:04.712798 D | op-osd: validating status of osd.49
2023-07-18 22:33:04.712802 D | op-osd: osd.49 is marked 'DOWN'
2023-07-18 22:33:04.712805 D | op-osd: validating status of osd.50
2023-07-18 22:33:04.712810 D | op-osd: osd.50 is marked 'DOWN'
2023-07-18 22:33:04.712813 D | op-osd: validating status of osd.51
2023-07-18 22:33:04.712817 D | op-osd: osd.51 is marked 'DOWN'
2023-07-18 22:33:04.712820 D | op-osd: validating status of osd.52
2023-07-18 22:33:04.712825 D | op-osd: osd.52 is marked 'DOWN'
2023-07-18 22:33:04.712828 D | op-osd: validating status of osd.53
2023-07-18 22:33:04.712834 D | op-osd: osd.53 is marked 'DOWN'
2023-07-18 22:33:04.712837 D | op-osd: osd.53 is marked 'OUT'
2023-07-18 22:33:04.712841 D | op-osd: validating status of osd.54
2023-07-18 22:33:04.712845 D | op-osd: osd.54 is healthy.
2023-07-18 22:33:04.712848 D | op-osd: validating status of osd.55
2023-07-18 22:33:04.712853 D | op-osd: osd.55 is healthy.
2023-07-18 22:33:04.712856 D | op-osd: validating status of osd.56
2023-07-18 22:33:04.712861 D | op-osd: osd.56 is healthy.
2023-07-18 22:33:04.712875 D | exec: Running command: ceph osd crush class ls --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
W0718 22:33:04.752418       1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
2023-07-18 22:33:04.753452 I | op-osd: started OSD provisioning job for node "c18.z1.sea.pahadi.net"
2023-07-18 22:33:04.778214 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-r7.z1.sea.pahadi.net to start a new one
2023-07-18 22:33:04.800966 I | op-k8sutil: batch job rook-ceph-osd-prepare-r7.z1.sea.pahadi.net still exists | |
2023-07-18 22:33:04.875046 I | cephclient: setting pool property "allow_ec_overwrites" to "true" on pool "ceph-ssd-erasure-default-data" | |
2023-07-18 22:33:04.875083 D | exec: Running command: ceph osd pool set ceph-ssd-erasure-default-data allow_ec_overwrites true --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:33:04.910538 D | ceph-cluster-controller: cluster status: {Health:{Status:HEALTH_WARN Checks:map[OSD_DOWN:{Severity:HEALTH_WARN Summary:{Message:38 osds down}} OSD_HOST_DOWN:{Severity:HEALTH_WARN Summary:{Message:12 hosts (54 osds) down}} PG_AVAILABILITY:{Severity:HEALTH_WARN Summary:{Message:Reduced data availability: 196 pgs inactive, 9 pgs down, 1 pg incomplete}} PG_NOT_DEEP_SCRUBBED:{Severity:HEALTH_WARN Summary:{Message:2 pgs not deep-scrubbed in time}} RECENT_CRASH:{Severity:HEALTH_WARN Summary:{Message:69 daemons have recently crashed}} SLOW_OPS:{Severity:HEALTH_WARN Summary:{Message:3 slow ops, oldest one blocked for 390 sec, osd.54 has slow ops}}]} FSID:8291aa3f-a3c4-4b08-bade-54ef289cff38 ElectionEpoch:390 Quorum:[0 1 2] QuorumNames:[j k l] MonMap:{Epoch:20 NumMons:0 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{Epoch:20843 NumOsd:57 NumUpOsd:3 NumInOsd:41 Full:false NearFull:false NumRemappedPgs:0} PgMap:{PgsByState:[{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}] Version:0 NumPgs:196 DataBytes:19 UsedBytes:1308446720 AvailableBytes:2491756429312 TotalBytes:2493064876032 ReadBps:0 WriteBps:0 ReadOps:0 WriteOps:0 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:1 ID:0 Up:0 In:0 Max:0 ByRank:[] UpStandby:0}} | |
2023-07-18 22:33:04.919424 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:33:05.195620 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r7.z1.sea.pahadi.net-n9gbm" is a ceph pod! | |
2023-07-18 22:33:05.195685 D | ceph-nodedaemon-controller: reconciling node: "r7.z1.sea.pahadi.net" | |
2023-07-18 22:33:05.195710 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-c18.z1.sea.pahadi.net-pcpl8" is a ceph pod! | |
2023-07-18 22:33:05.196698 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:33:05.211365 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "r7.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:33:05.211383 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:33:05.212261 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:33:05.215205 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:33:05.215240 D | ceph-nodedaemon-controller: reconciling node: "c18.z1.sea.pahadi.net" | |
2023-07-18 22:33:05.215861 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:33:05.236866 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "c18.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:33:05.236892 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:33:05.237750 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:33:05.240786 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:33:05.408206 D | cephclient: {"mon":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mgr":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":2},"osd":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mds":{},"overall":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":8}} | |
2023-07-18 22:33:05.408232 D | cephclient: {"mon":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mgr":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":2},"osd":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":3},"mds":{},"overall":{"ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)":8}} | |
2023-07-18 22:33:05.408381 D | ceph-cluster-controller: updating ceph cluster "rook-ceph" status and condition to &{Health:{Status:HEALTH_WARN Checks:map[OSD_DOWN:{Severity:HEALTH_WARN Summary:{Message:38 osds down}} OSD_HOST_DOWN:{Severity:HEALTH_WARN Summary:{Message:12 hosts (54 osds) down}} PG_AVAILABILITY:{Severity:HEALTH_WARN Summary:{Message:Reduced data availability: 196 pgs inactive, 9 pgs down, 1 pg incomplete}} PG_NOT_DEEP_SCRUBBED:{Severity:HEALTH_WARN Summary:{Message:2 pgs not deep-scrubbed in time}} RECENT_CRASH:{Severity:HEALTH_WARN Summary:{Message:69 daemons have recently crashed}} SLOW_OPS:{Severity:HEALTH_WARN Summary:{Message:3 slow ops, oldest one blocked for 390 sec, osd.54 has slow ops}}]} FSID:8291aa3f-a3c4-4b08-bade-54ef289cff38 ElectionEpoch:390 Quorum:[0 1 2] QuorumNames:[j k l] MonMap:{Epoch:20 NumMons:0 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{Epoch:20843 NumOsd:57 NumUpOsd:3 NumInOsd:41 Full:false NearFull:false NumRemappedPgs:0} PgMap:{PgsByState:[{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}] Version:0 NumPgs:196 DataBytes:19 UsedBytes:1308446720 AvailableBytes:2491756429312 TotalBytes:2493064876032 ReadBps:0 WriteBps:0 ReadOps:0 WriteOps:0 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:1 ID:0 Up:0 In:0 Max:0 ByRank:[] UpStandby:0}}, True, ClusterCreated, Cluster created successfully | |
2023-07-18 22:33:05.408406 D | ceph-spec: CephCluster "rook-ceph" status: "Ready". "Cluster created successfully" | |
2023-07-18 22:33:05.440644 D | ceph-cluster-controller: checking for stuck pods on not ready nodes | |
2023-07-18 22:33:05.441496 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" | |
2023-07-18 22:33:05.441508 D | ceph-cluster-controller: update event on CephCluster CR | |
2023-07-18 22:33:05.485289 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "RECENT_CRASH", message: "69 daemons have recently crashed" | |
2023-07-18 22:33:05.485305 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "SLOW_OPS", message: "3 slow ops, oldest one blocked for 390 sec, osd.54 has slow ops" | |
2023-07-18 22:33:05.485309 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "OSD_DOWN", message: "38 osds down" | |
2023-07-18 22:33:05.485313 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "OSD_HOST_DOWN", message: "12 hosts (54 osds) down" | |
2023-07-18 22:33:05.485318 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "PG_AVAILABILITY", message: "Reduced data availability: 196 pgs inactive, 9 pgs down, 1 pg incomplete" | |
2023-07-18 22:33:05.485322 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "PG_NOT_DEEP_SCRUBBED", message: "2 pgs not deep-scrubbed in time" | |
2023-07-18 22:33:05.513869 D | exec: Running command: ceph osd pool application get ceph-ssd-erasure-default-data --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:33:05.924158 I | cephclient: application "rbd" is already set on pool "ceph-ssd-erasure-default-data" | |
2023-07-18 22:33:05.924179 I | cephclient: creating EC pool ceph-ssd-erasure-default-data succeeded | |
2023-07-18 22:33:05.924186 I | ceph-block-pool-controller: initializing pool "ceph-ssd-erasure-default-data" for RBD use | |
2023-07-18 22:33:05.924202 D | exec: Running command: rbd pool init ceph-ssd-erasure-default-data --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring | |
2023-07-18 22:33:06.305155 D | ceph-spec: object "rook-ceph-osd-c12.z1.sea.pahadi.net-status" matched on update | |
2023-07-18 22:33:06.305180 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override" | |
2023-07-18 22:33:07.806237 I | op-k8sutil: batch job rook-ceph-osd-prepare-r7.z1.sea.pahadi.net deleted | |
W0718 22:33:07.826969 1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] | |
2023-07-18 22:33:07.827940 I | op-osd: started OSD provisioning job for node "r7.z1.sea.pahadi.net" | |
2023-07-18 22:33:07.849138 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-r8.z1.sea.pahadi.net to start a new one | |
2023-07-18 22:33:07.930386 I | op-k8sutil: batch job rook-ceph-osd-prepare-r8.z1.sea.pahadi.net still exists | |
2023-07-18 22:33:08.175320 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r8.z1.sea.pahadi.net-j9x4d" is a ceph pod! | |
2023-07-18 22:33:08.175391 D | ceph-nodedaemon-controller: reconciling node: "r8.z1.sea.pahadi.net" | |
2023-07-18 22:33:08.176442 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:33:08.184654 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r7.z1.sea.pahadi.net-gdt76" is a ceph pod! | |
2023-07-18 22:33:08.190658 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "r8.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:33:08.190685 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:33:08.191620 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:33:08.194267 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:33:08.194319 D | ceph-nodedaemon-controller: reconciling node: "r7.z1.sea.pahadi.net" | |
2023-07-18 22:33:08.195251 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:33:08.243524 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "r7.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:33:08.243552 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:33:08.244494 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:33:08.247542 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:33:09.897149 D | ceph-spec: object "rook-ceph-osd-c18.z1.sea.pahadi.net-status" matched on update | |
2023-07-18 22:33:09.897180 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override" | |
2023-07-18 22:33:10.528638 D | ceph-spec: object "rook-ceph-osd-r7.z1.sea.pahadi.net-status" matched on update | |
2023-07-18 22:33:10.528666 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override" | |
2023-07-18 22:33:10.935831 I | op-k8sutil: batch job rook-ceph-osd-prepare-r8.z1.sea.pahadi.net deleted | |
W0718 22:33:10.954221 1 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] | |
2023-07-18 22:33:10.955169 I | op-osd: started OSD provisioning job for node "r8.z1.sea.pahadi.net" | |
2023-07-18 22:33:10.964106 I | op-osd: OSD orchestration status for node r5.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:10.964132 D | op-osd: not creating deployment for OSD 16 which already exists | |
2023-07-18 22:33:10.964140 D | op-osd: not creating deployment for OSD 28 which already exists | |
2023-07-18 22:33:10.964146 D | op-osd: not creating deployment for OSD 27 which already exists | |
2023-07-18 22:33:10.979754 I | op-osd: OSD orchestration status for node c16.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:10.979775 D | op-osd: not creating deployment for OSD 54 which already exists | |
2023-07-18 22:33:10.980575 D | ceph-spec: object "rook-ceph-osd-r5.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:10.980598 D | ceph-spec: object "rook-ceph-osd-r5.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:10.980606 D | ceph-spec: do not reconcile on "rook-ceph-osd-r5.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:10.980614 D | ceph-spec: object "rook-ceph-osd-r5.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:10.993456 I | op-osd: OSD orchestration status for node r3.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:10.993482 D | op-osd: not creating deployment for OSD 29 which already exists | |
2023-07-18 22:33:10.993491 D | op-osd: not creating deployment for OSD 21 which already exists | |
2023-07-18 22:33:10.993497 D | op-osd: not creating deployment for OSD 17 which already exists | |
2023-07-18 22:33:10.993504 D | op-osd: not creating deployment for OSD 19 which already exists | |
2023-07-18 22:33:10.993510 D | op-osd: not creating deployment for OSD 23 which already exists | |
2023-07-18 22:33:10.993516 D | op-osd: not creating deployment for OSD 22 which already exists | |
2023-07-18 22:33:10.993521 D | op-osd: not creating deployment for OSD 24 which already exists | |
2023-07-18 22:33:10.993528 D | op-osd: not creating deployment for OSD 25 which already exists | |
2023-07-18 22:33:10.994288 D | ceph-spec: object "rook-ceph-osd-c16.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:10.994306 D | ceph-spec: object "rook-ceph-osd-c16.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:10.994316 D | ceph-spec: do not reconcile on "rook-ceph-osd-c16.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:10.994332 D | ceph-spec: object "rook-ceph-osd-c16.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.008387 I | op-osd: OSD orchestration status for node c13.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:11.008409 D | op-osd: not creating deployment for OSD 20 which already exists | |
2023-07-18 22:33:11.008415 D | op-osd: not creating deployment for OSD 51 which already exists | |
2023-07-18 22:33:11.008423 D | op-osd: not creating deployment for OSD 11 which already exists | |
2023-07-18 22:33:11.009578 D | ceph-spec: object "rook-ceph-osd-r3.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.009599 D | ceph-spec: do not reconcile on "rook-ceph-osd-r3.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.009627 D | ceph-spec: object "rook-ceph-osd-r3.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.009638 D | ceph-spec: object "rook-ceph-osd-r3.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.026134 I | op-osd: OSD orchestration status for node c17.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:11.026155 D | op-osd: not creating deployment for OSD 55 which already exists | |
2023-07-18 22:33:11.026850 D | ceph-spec: object "rook-ceph-osd-c13.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.026873 D | ceph-spec: do not reconcile on "rook-ceph-osd-c13.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.026887 D | ceph-spec: object "rook-ceph-osd-c13.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.026904 D | ceph-spec: object "rook-ceph-osd-c13.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.051991 I | op-osd: OSD orchestration status for node c10.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:11.052011 D | op-osd: not creating deployment for OSD 2 which already exists | |
2023-07-18 22:33:11.052017 D | op-osd: not creating deployment for OSD 18 which already exists | |
2023-07-18 22:33:11.052023 D | op-osd: not creating deployment for OSD 44 which already exists | |
2023-07-18 22:33:11.052786 D | ceph-spec: object "rook-ceph-osd-c17.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.052818 D | ceph-spec: object "rook-ceph-osd-c17.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.052839 D | ceph-spec: do not reconcile on "rook-ceph-osd-c17.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.052855 D | ceph-spec: object "rook-ceph-osd-c17.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.066001 I | op-osd: OSD orchestration status for node c14.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:11.066023 D | op-osd: not creating deployment for OSD 49 which already exists | |
2023-07-18 22:33:11.066718 D | ceph-spec: object "rook-ceph-osd-c10.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.066731 D | ceph-spec: do not reconcile on "rook-ceph-osd-c10.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.066746 D | ceph-spec: object "rook-ceph-osd-c10.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.066754 D | ceph-spec: object "rook-ceph-osd-c10.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.079108 I | op-osd: OSD orchestration status for node r4.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:11.079130 D | op-osd: not creating deployment for OSD 12 which already exists | |
2023-07-18 22:33:11.079137 D | op-osd: not creating deployment for OSD 0 which already exists | |
2023-07-18 22:33:11.079144 D | op-osd: not creating deployment for OSD 9 which already exists | |
2023-07-18 22:33:11.079150 D | op-osd: not creating deployment for OSD 4 which already exists | |
2023-07-18 22:33:11.079156 D | op-osd: not creating deployment for OSD 15 which already exists | |
2023-07-18 22:33:11.079162 D | op-osd: not creating deployment for OSD 1 which already exists | |
2023-07-18 22:33:11.079168 D | op-osd: not creating deployment for OSD 7 which already exists | |
2023-07-18 22:33:11.079818 D | ceph-spec: object "rook-ceph-osd-c14.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.079834 D | ceph-spec: do not reconcile on "rook-ceph-osd-c14.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.079842 D | ceph-spec: object "rook-ceph-osd-c14.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.079854 D | ceph-spec: object "rook-ceph-osd-c14.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.092779 I | op-osd: OSD orchestration status for node c11.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:11.092801 D | op-osd: not creating deployment for OSD 50 which already exists | |
2023-07-18 22:33:11.092808 D | op-osd: not creating deployment for OSD 37 which already exists | |
2023-07-18 22:33:11.092814 D | op-osd: not creating deployment for OSD 5 which already exists | |
2023-07-18 22:33:11.093765 D | ceph-spec: do not reconcile on "rook-ceph-osd-r4.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.093791 D | ceph-spec: object "rook-ceph-osd-r4.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.093804 D | ceph-spec: object "rook-ceph-osd-r4.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.093821 D | ceph-spec: object "rook-ceph-osd-r4.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.106149 I | op-osd: OSD orchestration status for node c12.z1.sea.pahadi.net is "completed" | |
2023-07-18 22:33:11.106168 D | op-osd: not creating deployment for OSD 52 which already exists | |
2023-07-18 22:33:11.106174 D | op-osd: not creating deployment for OSD 14 which already exists | |
2023-07-18 22:33:11.106180 D | op-osd: not creating deployment for OSD 39 which already exists | |
2023-07-18 22:33:11.106763 D | ceph-spec: object "rook-ceph-osd-c11.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.106786 D | ceph-spec: object "rook-ceph-osd-c11.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.106808 D | ceph-spec: do not reconcile on "rook-ceph-osd-c11.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.106818 D | ceph-spec: object "rook-ceph-osd-c11.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.183112 I | op-osd: OSD orchestration status for node r8.z1.sea.pahadi.net is "starting" | |
2023-07-18 22:33:11.183136 I | op-osd: OSD orchestration status for node c18.z1.sea.pahadi.net is "orchestrating" | |
2023-07-18 22:33:11.183149 I | op-osd: OSD orchestration status for node r7.z1.sea.pahadi.net is "orchestrating" | |
2023-07-18 22:33:11.184518 D | ceph-spec: object "rook-ceph-osd-c12.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.184534 D | ceph-spec: object "rook-ceph-osd-c12.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.184550 D | ceph-spec: object "rook-ceph-osd-c12.z1.sea.pahadi.net-status" did not match on delete | |
2023-07-18 22:33:11.184561 D | ceph-spec: do not reconcile on "rook-ceph-osd-c12.z1.sea.pahadi.net-status" config map changes | |
2023-07-18 22:33:11.185444 D | op-osd: not processing DELETED event for object "rook-ceph-osd-r5.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.185769 D | op-osd: not processing DELETED event for object "rook-ceph-osd-c16.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.186448 D | op-osd: not processing DELETED event for object "rook-ceph-osd-r3.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.186772 D | op-osd: not processing DELETED event for object "rook-ceph-osd-c13.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.187057 D | op-osd: not processing DELETED event for object "rook-ceph-osd-c17.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.187390 D | op-osd: not processing DELETED event for object "rook-ceph-osd-c10.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.187613 D | op-osd: not processing DELETED event for object "rook-ceph-osd-c14.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.188189 D | op-osd: not processing DELETED event for object "rook-ceph-osd-r4.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.188534 D | op-osd: not processing DELETED event for object "rook-ceph-osd-c11.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.188792 D | op-osd: not processing DELETED event for object "rook-ceph-osd-c12.z1.sea.pahadi.net-status" | |
2023-07-18 22:33:11.289624 D | exec: Running command: ceph osd ls --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:33:11.317072 D | ceph-nodedaemon-controller: "rook-ceph-osd-prepare-r8.z1.sea.pahadi.net-nmn5w" is a ceph pod! | |
2023-07-18 22:33:11.317131 D | ceph-nodedaemon-controller: reconciling node: "r8.z1.sea.pahadi.net" | |
2023-07-18 22:33:11.318174 D | ceph-spec: ceph version found "17.2.6-0" | |
2023-07-18 22:33:11.332199 D | ceph-nodedaemon-controller: crash collector successfully reconciled for node "r8.z1.sea.pahadi.net". operation: "updated" | |
2023-07-18 22:33:11.332223 I | ceph-nodedaemon-controller: Skipping exporter reconcile on ceph version "17.2.6-0 quincy" | |
2023-07-18 22:33:11.333152 D | ceph-nodedaemon-controller: deleting cronjob if it exists... | |
2023-07-18 22:33:11.336204 D | ceph-nodedaemon-controller: cronJob resource not found. Ignoring since object must be deleted. | |
2023-07-18 22:33:11.684911 D | exec: Running command: ceph osd tree --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:33:12.081472 D | exec: Running command: ceph osd ls --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:33:12.513427 D | exec: Running command: ceph osd ok-to-stop 10 --max=20 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json | |
2023-07-18 22:33:14.649506 D | ceph-spec: object "rook-ceph-osd-r8.z1.sea.pahadi.net-status" matched on update | |
2023-07-18 22:33:14.649532 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override" | |
2023-07-18 22:33:15.026665 D | ceph-spec: object "rook-ceph-osd-r7.z1.sea.pahadi.net-status" matched on update | |
2023-07-18 22:33:15.026720 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override" | |
2023-07-18 22:33:15.939834 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph" | |
2023-07-18 22:33:15.943135 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943159 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943204 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943214 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943257 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943266 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943301 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943310 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943346 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943355 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943390 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943399 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943434 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943443 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943478 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943487 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943522 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943531 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943564 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943572 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected | |
2023-07-18 22:33:15.943605 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain | |
2023-07-18 22:33:15.943614 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected
2023-07-18 22:33:15.943650 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.943659 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected
2023-07-18 22:33:15.943694 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.943702 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected
2023-07-18 22:33:15.943739 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.943747 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected
2023-07-18 22:33:15.943932 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected
2023-07-18 22:33:15.943979 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.943988 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected
2023-07-18 22:33:15.944027 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944037 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected
2023-07-18 22:33:15.944073 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944082 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected
2023-07-18 22:33:15.944116 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944125 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected
2023-07-18 22:33:15.944160 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944169 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected
2023-07-18 22:33:15.944206 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944215 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected
2023-07-18 22:33:15.944248 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944257 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected
2023-07-18 22:33:15.944291 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944299 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected
2023-07-18 22:33:15.944334 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944342 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected
2023-07-18 22:33:15.944394 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944404 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected
2023-07-18 22:33:15.944440 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944451 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected
2023-07-18 22:33:15.944486 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944494 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected
2023-07-18 22:33:15.944527 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944536 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected
2023-07-18 22:33:15.944571 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944580 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected
2023-07-18 22:33:15.944613 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944621 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected
2023-07-18 22:33:15.944655 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944663 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected
2023-07-18 22:33:15.944694 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944703 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected
2023-07-18 22:33:15.944737 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944746 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected
2023-07-18 22:33:15.944778 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944787 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected
2023-07-18 22:33:15.944819 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944828 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected
2023-07-18 22:33:15.944862 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944870 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected
2023-07-18 22:33:15.944906 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944915 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected
2023-07-18 22:33:15.944949 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944958 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected
2023-07-18 22:33:15.944991 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.944999 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected
2023-07-18 22:33:15.945031 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945040 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected
2023-07-18 22:33:15.945073 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945082 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected
2023-07-18 22:33:15.945115 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945124 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected
2023-07-18 22:33:15.945157 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945165 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected
2023-07-18 22:33:15.945197 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945205 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected
2023-07-18 22:33:15.945244 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945253 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected
2023-07-18 22:33:15.945285 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945294 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected
2023-07-18 22:33:15.945328 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945336 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected
2023-07-18 22:33:15.945369 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945378 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected
2023-07-18 22:33:15.945413 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945422 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected
2023-07-18 22:33:15.945456 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945464 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected
2023-07-18 22:33:15.945496 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945504 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected
2023-07-18 22:33:15.945536 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945544 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected
2023-07-18 22:33:15.945580 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:15.945589 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected
2023-07-18 22:33:15.945651 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:16.424673 I | clusterdisruption-controller: osd is down in failure domain "c10-z1-sea-pahadi-net" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}]"
2023-07-18 22:33:16.424720 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:16.834268 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.835351 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.836321 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.837283 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.838198 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.839115 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.839982 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.840858 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.841708 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.842589 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.843449 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.844305 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.844345 D | clusterdisruption-controller: deleting default pdb with maxUnavailable=1 for all osd
2023-07-18 22:33:16.845228 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:16.882194 D | clusterdisruption-controller: reconciling "rook-ceph/"
2023-07-18 22:33:16.889725 D | clusterdisruption-controller: osd "0" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.889753 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down and a possible node drain is detected
2023-07-18 22:33:16.889798 D | clusterdisruption-controller: osd "1" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.889808 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected
2023-07-18 22:33:16.889846 D | clusterdisruption-controller: osd "43" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.889855 I | clusterdisruption-controller: osd "rook-ceph-osd-43" is down and a possible node drain is detected
2023-07-18 22:33:16.889892 D | clusterdisruption-controller: osd "40" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.889902 I | clusterdisruption-controller: osd "rook-ceph-osd-40" is down and a possible node drain is detected
2023-07-18 22:33:16.889935 D | clusterdisruption-controller: osd "42" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.889944 I | clusterdisruption-controller: osd "rook-ceph-osd-42" is down and a possible node drain is detected
2023-07-18 22:33:16.889980 D | clusterdisruption-controller: osd "10" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.889989 I | clusterdisruption-controller: osd "rook-ceph-osd-10" is down and a possible node drain is detected
2023-07-18 22:33:16.890022 D | clusterdisruption-controller: osd "38" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890031 I | clusterdisruption-controller: osd "rook-ceph-osd-38" is down and a possible node drain is detected
2023-07-18 22:33:16.890066 D | clusterdisruption-controller: osd "16" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890075 I | clusterdisruption-controller: osd "rook-ceph-osd-16" is down and a possible node drain is detected
2023-07-18 22:33:16.890107 D | clusterdisruption-controller: osd "2" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890115 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected
2023-07-18 22:33:16.890152 D | clusterdisruption-controller: osd "4" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890161 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected
2023-07-18 22:33:16.890197 D | clusterdisruption-controller: osd "12" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890206 I | clusterdisruption-controller: osd "rook-ceph-osd-12" is down and a possible node drain is detected
2023-07-18 22:33:16.890241 D | clusterdisruption-controller: osd "31" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890250 I | clusterdisruption-controller: osd "rook-ceph-osd-31" is down and a possible node drain is detected
2023-07-18 22:33:16.890284 D | clusterdisruption-controller: osd "34" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890293 I | clusterdisruption-controller: osd "rook-ceph-osd-34" is down and a possible node drain is detected
2023-07-18 22:33:16.890326 D | clusterdisruption-controller: osd "39" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890335 I | clusterdisruption-controller: osd "rook-ceph-osd-39" is down and a possible node drain is detected
2023-07-18 22:33:16.890370 D | clusterdisruption-controller: osd "45" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890379 I | clusterdisruption-controller: osd "rook-ceph-osd-45" is down and a possible node drain is detected
2023-07-18 22:33:16.890413 D | clusterdisruption-controller: osd "13" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890421 I | clusterdisruption-controller: osd "rook-ceph-osd-13" is down and a possible node drain is detected
2023-07-18 22:33:16.890455 D | clusterdisruption-controller: osd "27" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890463 I | clusterdisruption-controller: osd "rook-ceph-osd-27" is down and a possible node drain is detected
2023-07-18 22:33:16.890498 D | clusterdisruption-controller: osd "32" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890507 I | clusterdisruption-controller: osd "rook-ceph-osd-32" is down and a possible node drain is detected
2023-07-18 22:33:16.890540 D | clusterdisruption-controller: osd "7" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890548 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down and a possible node drain is detected
2023-07-18 22:33:16.890581 D | clusterdisruption-controller: osd "15" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890589 I | clusterdisruption-controller: osd "rook-ceph-osd-15" is down and a possible node drain is detected
2023-07-18 22:33:16.890623 D | clusterdisruption-controller: osd "23" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890632 I | clusterdisruption-controller: osd "rook-ceph-osd-23" is down and a possible node drain is detected
2023-07-18 22:33:16.890667 D | clusterdisruption-controller: osd "8" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890676 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected
2023-07-18 22:33:16.890709 D | clusterdisruption-controller: osd "44" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890718 I | clusterdisruption-controller: osd "rook-ceph-osd-44" is down and a possible node drain is detected
2023-07-18 22:33:16.890750 D | clusterdisruption-controller: osd "47" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890758 I | clusterdisruption-controller: osd "rook-ceph-osd-47" is down and a possible node drain is detected
2023-07-18 22:33:16.890790 D | clusterdisruption-controller: osd "5" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890799 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected
2023-07-18 22:33:16.890838 D | clusterdisruption-controller: osd "29" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890846 I | clusterdisruption-controller: osd "rook-ceph-osd-29" is down and a possible node drain is detected
2023-07-18 22:33:16.890879 D | clusterdisruption-controller: osd "35" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890888 I | clusterdisruption-controller: osd "rook-ceph-osd-35" is down and a possible node drain is detected
2023-07-18 22:33:16.890921 D | clusterdisruption-controller: osd "30" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890929 I | clusterdisruption-controller: osd "rook-ceph-osd-30" is down and a possible node drain is detected
2023-07-18 22:33:16.890960 D | clusterdisruption-controller: osd "21" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.890968 I | clusterdisruption-controller: osd "rook-ceph-osd-21" is down and a possible node drain is detected
2023-07-18 22:33:16.891001 D | clusterdisruption-controller: osd "46" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891009 I | clusterdisruption-controller: osd "rook-ceph-osd-46" is down and a possible node drain is detected
2023-07-18 22:33:16.891041 D | clusterdisruption-controller: osd "3" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891050 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected
2023-07-18 22:33:16.891082 D | clusterdisruption-controller: osd "22" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891091 I | clusterdisruption-controller: osd "rook-ceph-osd-22" is down and a possible node drain is detected
2023-07-18 22:33:16.891123 D | clusterdisruption-controller: osd "33" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891131 I | clusterdisruption-controller: osd "rook-ceph-osd-33" is down and a possible node drain is detected
2023-07-18 22:33:16.891164 D | clusterdisruption-controller: osd "20" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891172 I | clusterdisruption-controller: osd "rook-ceph-osd-20" is down and a possible node drain is detected
2023-07-18 22:33:16.891206 D | clusterdisruption-controller: osd "17" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891214 I | clusterdisruption-controller: osd "rook-ceph-osd-17" is down and a possible node drain is detected
2023-07-18 22:33:16.891245 D | clusterdisruption-controller: osd "24" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891254 I | clusterdisruption-controller: osd "rook-ceph-osd-24" is down and a possible node drain is detected
2023-07-18 22:33:16.891289 D | clusterdisruption-controller: osd "50" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891297 I | clusterdisruption-controller: osd "rook-ceph-osd-50" is down and a possible node drain is detected
2023-07-18 22:33:16.891473 I | clusterdisruption-controller: osd "rook-ceph-osd-52" is down but no node drain is detected
2023-07-18 22:33:16.891517 D | clusterdisruption-controller: osd "9" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891527 I | clusterdisruption-controller: osd "rook-ceph-osd-9" is down and a possible node drain is detected
2023-07-18 22:33:16.891563 D | clusterdisruption-controller: osd "18" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891571 I | clusterdisruption-controller: osd "rook-ceph-osd-18" is down and a possible node drain is detected
2023-07-18 22:33:16.891606 D | clusterdisruption-controller: osd "28" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891614 I | clusterdisruption-controller: osd "rook-ceph-osd-28" is down and a possible node drain is detected
2023-07-18 22:33:16.891649 D | clusterdisruption-controller: osd "37" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891658 I | clusterdisruption-controller: osd "rook-ceph-osd-37" is down and a possible node drain is detected
2023-07-18 22:33:16.891693 D | clusterdisruption-controller: osd "49" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891701 I | clusterdisruption-controller: osd "rook-ceph-osd-49" is down and a possible node drain is detected
2023-07-18 22:33:16.891734 D | clusterdisruption-controller: osd "11" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891743 I | clusterdisruption-controller: osd "rook-ceph-osd-11" is down and a possible node drain is detected
2023-07-18 22:33:16.891778 D | clusterdisruption-controller: osd "19" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891787 I | clusterdisruption-controller: osd "rook-ceph-osd-19" is down and a possible node drain is detected
2023-07-18 22:33:16.891821 D | clusterdisruption-controller: osd "36" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891829 I | clusterdisruption-controller: osd "rook-ceph-osd-36" is down and a possible node drain is detected
2023-07-18 22:33:16.891863 D | clusterdisruption-controller: osd "48" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891872 I | clusterdisruption-controller: osd "rook-ceph-osd-48" is down and a possible node drain is detected
2023-07-18 22:33:16.891904 D | clusterdisruption-controller: osd "51" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891913 I | clusterdisruption-controller: osd "rook-ceph-osd-51" is down and a possible node drain is detected
2023-07-18 22:33:16.891946 D | clusterdisruption-controller: osd "6" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891954 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down and a possible node drain is detected
2023-07-18 22:33:16.891989 D | clusterdisruption-controller: osd "14" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.891997 I | clusterdisruption-controller: osd "rook-ceph-osd-14" is down and a possible node drain is detected
2023-07-18 22:33:16.892029 D | clusterdisruption-controller: osd "25" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.892037 I | clusterdisruption-controller: osd "rook-ceph-osd-25" is down and a possible node drain is detected
2023-07-18 22:33:16.892071 D | clusterdisruption-controller: osd "26" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.892079 I | clusterdisruption-controller: osd "rook-ceph-osd-26" is down and a possible node drain is detected
2023-07-18 22:33:16.892111 D | clusterdisruption-controller: osd "41" POD is not assigned to any node. assuming node drain
2023-07-18 22:33:16.892119 I | clusterdisruption-controller: osd "rook-ceph-osd-41" is down and a possible node drain is detected
2023-07-18 22:33:16.892172 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:17.384411 I | clusterdisruption-controller: osd is down in failure domain "c10-z1-sea-pahadi-net" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:186} {StateName:down Count:9} {StateName:incomplete Count:1}]"
2023-07-18 22:33:17.384437 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2023-07-18 22:33:17.786396 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.787184 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.788206 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.789148 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.790086 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.791021 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.791920 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.792773 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.793551 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.794374 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.795208 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.796026 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:17.796060 D | clusterdisruption-controller: deleting default pdb with maxUnavailable=1 for all osd
2023-07-18 22:33:17.796823 D | op-k8sutil: kubernetes version fetched 1.27.2
2023-07-18 22:33:19.012117 D | ceph-spec: object "rook-ceph-osd-c18.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:33:19.012142 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
2023-07-18 22:33:19.019962 D | ceph-spec: object "rook-ceph-osd-r8.z1.sea.pahadi.net-status" matched on update
2023-07-18 22:33:19.019972 D | ceph-spec: do not reconcile on configmap that is not "rook-config-override"
sa@r0:~$