Created January 30, 2020 16:10
Rook-Ceph-Operator Logs
2020-01-30 15:36:02.110633 I | rookcmd: starting Rook v1.2.2 with arguments '/usr/local/bin/rook ceph operator'
2020-01-30 15:36:02.110768 I | rookcmd: flag values: --add_dir_header=false, --alsologtostderr=false, --csi-attacher-image=quay.io/k8scsi/csi-attacher:v1.2.0, --csi-ceph-image=quay.io/cephcsi/cephcsi:v1.2.2, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-dep-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-dep.yaml, --csi-cephfs-provisioner-sts-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-sts.yaml, --csi-driver-name-prefix=, --csi-enable-cephfs=true, --csi-enable-grpc-metrics=true, --csi-enable-rbd=true, --csi-kubelet-dir-path=/var/lib/kubelet, --csi-provisioner-image=quay.io/k8scsi/csi-provisioner:v1.4.0, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-dep-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-dep.yaml, --csi-rbd-provisioner-sts-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-sts.yaml, --csi-registrar-image=quay.io/k8scsi/csi-node-driver-registrar:v1.1.0, --csi-snapshotter-image=quay.io/k8scsi/csi-snapshotter:v1.2.2, --enable-discovery-daemon=true, --enable-flex-driver=false, --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-flush-frequency=5s, --log-level=INFO, --log_backtrace_at=:0, --log_dir=, --log_file=, --log_file_max_size=1800, --logtostderr=true, --master=, --mon-healthcheck-interval=45s, --mon-out-timeout=10m0s, --operator-image=, --service-account=, --skip_headers=false, --skip_log_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2020-01-30 15:36:02.110773 I | cephcmd: starting operator
2020-01-30 15:36:02.152736 I | op-discover: rook-discover daemonset started
2020-01-30 15:36:02.155345 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
I0130 15:36:02.155681      12 leaderelection.go:217] attempting to acquire leader lease  rook-ceph/ceph.rook.io-block...
2020-01-30 15:36:02.155977 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
I0130 15:36:02.156083      12 leaderelection.go:217] attempting to acquire leader lease  rook-ceph/rook.io-block...
2020-01-30 15:36:02.156103 I | operator: Watching all namespaces for cluster CRDs
2020-01-30 15:36:02.156167 I | op-cluster: start watching clusters in all namespaces
2020-01-30 15:36:02.156202 I | op-cluster: Enabling hotplug orchestration: ROOK_DISABLE_DEVICE_HOTPLUG=false
2020-01-30 15:36:02.156301 I | operator: setting up the controller-runtime manager
2020-01-30 15:36:02.165482 I | op-cluster: starting cluster in namespace rook-ceph
2020-01-30 15:36:02.168962 I | op-cluster: Cluster rook-ceph is not ready. Skipping orchestration.
2020-01-30 15:36:02.168994 I | op-cluster: Cluster rook-ceph is not ready. Skipping orchestration.
2020-01-30 15:36:02.169003 I | op-cluster: Cluster rook-ceph is not ready. Skipping orchestration.
2020-01-30 15:36:02.169008 I | op-cluster: Cluster rook-ceph is not ready. Skipping orchestration.
2020-01-30 15:36:03.130183 I | ceph-csi: CSIDriver CRD already had been registered for "rook-ceph.rbd.csi.ceph.com"
2020-01-30 15:36:03.135313 I | ceph-csi: CSIDriver CRD already had been registered for "rook-ceph.cephfs.csi.ceph.com"
2020-01-30 15:36:03.135338 I | operator: successfully started Ceph CSI driver(s)
2020-01-30 15:36:03.570362 I | operator: starting the controller-runtime manager
2020-01-30 15:36:09.135580 I | op-cluster: detecting the ceph image version for image ceph/ceph:v14.2.6...
2020-01-30 15:36:11.962027 I | op-cluster: Detected ceph image version: "14.2.6-0 nautilus"
2020-01-30 15:36:11.978951 I | op-mon: parsing mon endpoints: c=10.104.0.218:6789,a=10.104.8.20:6789,b=10.104.12.173:6789
2020-01-30 15:36:11.979128 I | op-mon: loaded: maxMonID=2, mons=map[c:0xc000c87460 a:0xc000c874a0 b:0xc000c875a0], mapping=&{Node:map[a:0xc0011ba750 b:0xc0011ba7e0 c:0xc0011ba810]}
2020-01-30 15:36:11.980622 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-30 15:36:11.980757 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-30 15:36:11.980935 I | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/076302325
2020-01-30 15:36:12.646296 I | op-cluster: CephCluster "rook-ceph" status: "Creating".
2020-01-30 15:36:12.672488 I | op-mon: start running mons
2020-01-30 15:36:12.684662 I | op-mon: parsing mon endpoints: c=10.104.0.218:6789,a=10.104.8.20:6789,b=10.104.12.173:6789
2020-01-30 15:36:12.685887 I | op-mon: loaded: maxMonID=2, mons=map[b:0xc001162340 c:0xc0011622c0 a:0xc001162300], mapping=&{Node:map[a:0xc00116c7e0 b:0xc00116c810 c:0xc00116c840]}
2020-01-30 15:36:12.693907 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"a":{"Name":"gke-tc-dev-1-tc-dev-big-2-74756ee2-34np","Hostname":"gke-tc-dev-1-tc-dev-big-2-74756ee2-34np","Address":"10.160.15.209"},"b":{"Name":"gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c","Hostname":"gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c","Address":"10.160.15.206"},"c":{"Name":"gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62","Hostname":"gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62","Address":"10.160.15.208"}}} csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.104.0.218:6789","10.104.8.20:6789","10.104.12.173:6789"]}] data:c=10.104.0.218:6789,a=10.104.8.20:6789,b=10.104.12.173:6789 maxMonId:2]
2020-01-30 15:36:12.702978 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-30 15:36:12.703754 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-30 15:36:13.748588 I | op-mon: targeting the mon count 3
2020-01-30 15:36:13.748826 I | exec: Running command: ceph config set global mon_allow_pool_delete true --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/498015440
2020-01-30 15:36:13.791072 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-34np will be 7bd82ab0fa96ccd23ff842822b152aa5
2020-01-30 15:36:13.816339 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c
2020-01-30 15:36:13.853916 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:36:13.912389 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:36:13.937275 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-e5d1c93b4aac1091862f4e71488f7edc": the object has been modified; please apply your changes to the latest version and try again
2020-01-30 15:36:14.560693 I | exec: Running command: ceph config set global rbd_default_features 3 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/619805679
2020-01-30 15:36:15.015269 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:36:15.983619 I | op-mon: checking for basic quorum with existing mons
2020-01-30 15:36:16.006289 I | op-mon: mon "c" endpoint are [v2:10.104.0.218:3300,v1:10.104.0.218:6789]
2020-01-30 15:36:16.045647 I | op-mon: mon "a" endpoint are [v2:10.104.8.20:3300,v1:10.104.8.20:6789]
2020-01-30 15:36:16.085552 I | op-mon: mon "b" endpoint are [v2:10.104.12.173:3300,v1:10.104.12.173:6789]
2020-01-30 15:36:16.188947 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"a":{"Name":"gke-tc-dev-1-tc-dev-big-2-74756ee2-34np","Hostname":"gke-tc-dev-1-tc-dev-big-2-74756ee2-34np","Address":"10.160.15.209"},"b":{"Name":"gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c","Hostname":"gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c","Address":"10.160.15.206"},"c":{"Name":"gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62","Hostname":"gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62","Address":"10.160.15.208"}}} csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.104.0.218:6789","10.104.8.20:6789","10.104.12.173:6789"]}] data:c=10.104.0.218:6789,a=10.104.8.20:6789,b=10.104.12.173:6789 maxMonId:2]
2020-01-30 15:36:16.789258 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-30 15:36:16.789435 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-30 15:36:17.188406 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-30 15:36:17.188633 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-30 15:36:17.199219 I | op-mon: deployment for mon rook-ceph-mon-c already exists. updating if needed
2020-01-30 15:36:17.203119 I | op-k8sutil: updating deployment rook-ceph-mon-c
I0130 15:36:18.153325      12 leaderelection.go:227] successfully acquired lease rook-ceph/ceph.rook.io-block
I0130 15:36:18.153542      12 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph", Name:"ceph.rook.io-block", UID:"bbbeeadf-51db-4755-b55d-a77d44c803dc", APIVersion:"v1", ResourceVersion:"25214087", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-755c77d5f9-dhvxq_38832cb8-4376-11ea-ab85-dadbc8dc6b45 became leader
I0130 15:36:18.153636      12 controller.go:769] Starting provisioner controller ceph.rook.io/block_rook-ceph-operator-755c77d5f9-dhvxq_38832cb8-4376-11ea-ab85-dadbc8dc6b45!
I0130 15:36:18.254800      12 controller.go:818] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-755c77d5f9-dhvxq_38832cb8-4376-11ea-ab85-dadbc8dc6b45!
2020-01-30 15:36:18.848074 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:36:18.897334 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
I0130 15:36:19.175365      12 leaderelection.go:227] successfully acquired lease rook-ceph/rook.io-block
I0130 15:36:19.176071      12 controller.go:769] Starting provisioner controller rook.io/block_rook-ceph-operator-755c77d5f9-dhvxq_38835256-4376-11ea-ab85-dadbc8dc6b45!
I0130 15:36:19.176318      12 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph", Name:"rook.io-block", UID:"2d7298e2-5598-4d8d-ae58-8ef49acf7c7a", APIVersion:"v1", ResourceVersion:"25214104", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-755c77d5f9-dhvxq_38835256-4376-11ea-ab85-dadbc8dc6b45 became leader
2020-01-30 15:36:19.229217 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-c
2020-01-30 15:36:19.229271 I | op-mon: waiting for mon quorum with [c a b]
I0130 15:36:19.876431      12 controller.go:818] Started provisioner controller rook.io/block_rook-ceph-operator-755c77d5f9-dhvxq_38835256-4376-11ea-ab85-dadbc8dc6b45!
2020-01-30 15:36:20.795666 I | op-mon: mons running: [c a b]
2020-01-30 15:36:20.795847 I | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/226505346
2020-01-30 15:36:21.507818 I | op-mon: Monitors in quorum: [a b c]
2020-01-30 15:36:21.512014 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2020-01-30 15:36:21.516545 I | op-k8sutil: updating deployment rook-ceph-mon-a
2020-01-30 15:36:23.534284 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-a
2020-01-30 15:36:23.534340 I | op-mon: waiting for mon quorum with [c a b]
2020-01-30 15:36:23.558924 I | op-mon: mons running: [c a b]
2020-01-30 15:36:23.559129 I | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/545310713
2020-01-30 15:36:24.402503 I | op-mon: Monitors in quorum: [a b c]
2020-01-30 15:36:24.409382 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2020-01-30 15:36:24.413447 I | op-k8sutil: updating deployment rook-ceph-mon-b
2020-01-30 15:36:26.451045 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-b
2020-01-30 15:36:26.451083 I | op-mon: waiting for mon quorum with [c a b]
2020-01-30 15:36:26.475058 I | op-mon: mons running: [c a b]
2020-01-30 15:36:26.475208 I | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/659526404
2020-01-30 15:36:27.117355 I | op-mon: Monitors in quorum: [a b c]
2020-01-30 15:36:27.117389 I | op-mon: mons created: 3
2020-01-30 15:36:27.117657 I | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/265697427
2020-01-30 15:36:27.718146 I | op-mon: waiting for mon quorum with [c a b]
2020-01-30 15:36:27.740071 I | op-mon: mons running: [c a b]
2020-01-30 15:36:27.740422 I | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/485678294
2020-01-30 15:36:28.462084 I | op-mon: Monitors in quorum: [a b c]
2020-01-30 15:36:28.462231 I | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/996468285
2020-01-30 15:36:28.991199 I | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd osd profile rbd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/570255992
2020-01-30 15:36:29.616748 I | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-provisioner mon allow r mgr allow rw osd allow rw tag cephfs metadata=* --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/251203447
2020-01-30 15:36:30.246008 I | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-node mon allow r mgr allow rw osd allow rw tag cephfs *=* mds allow rw --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/098954346
2020-01-30 15:36:30.960065 I | ceph-csi: created kubernetes csi secrets for cluster "rook-ceph"
2020-01-30 15:36:30.960282 I | exec: Running command: ceph auth get-or-create-key client.crash mon allow profile crash mgr allow profile crash --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/000878273
2020-01-30 15:36:31.588530 I | ceph-crashcollector-controller: created kubernetes crash collector secret for cluster "rook-ceph"
2020-01-30 15:36:31.588688 I | exec: Running command: ceph version --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/171922220
2020-01-30 15:36:32.140786 I | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/471184539
2020-01-30 15:36:32.709746 I | exec: Running command: ceph mon enable-msgr2 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/070695230
2020-01-30 15:36:33.218847 I | cephclient: successfully enabled msgr2 protocol
2020-01-30 15:36:33.218886 I | op-mgr: start running mgr
2020-01-30 15:36:33.219268 I | exec: Running command: ceph auth get-or-create-key mgr.a mon allow * mds allow * osd allow * --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/225712005
2020-01-30 15:36:33.873672 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2020-01-30 15:36:33.877495 I | op-k8sutil: updating deployment rook-ceph-mgr-a
2020-01-30 15:36:35.906353 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mgr-a
2020-01-30 15:36:35.944601 I | op-mgr: dashboard service already exists
2020-01-30 15:36:35.956204 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/024809248
2020-01-30 15:36:35.956514 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/587221641
2020-01-30 15:36:35.957094 I | exec: Running command: ceph mgr module enable crash --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/476478290
2020-01-30 15:36:35.957307 I | op-mgr: successful modules: mgr module(s) from the spec
2020-01-30 15:36:35.957478 I | exec: Running command: ceph mgr module enable prometheus --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/185881599
2020-01-30 15:36:35.962652 I | exec: Running command: ceph mgr module enable rook --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/962167892
2020-01-30 15:36:37.398779 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/543364003
2020-01-30 15:36:37.929382 I | exec: Running command: ceph config get mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/551905446
2020-01-30 15:36:38.165667 I | exec: module 'dashboard' is already enabled
2020-01-30 15:36:38.214654 I | exec: module 'prometheus' is already enabled
2020-01-30 15:36:38.215114 I | op-mgr: successful modules: prometheus
2020-01-30 15:36:38.220339 I | exec: module 'rook' is already enabled
2020-01-30 15:36:38.220772 I | exec: Running command: ceph mgr module enable orchestrator_cli --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/281531341
2020-01-30 15:36:38.246293 I | exec: module 'crash' is already enabled (always-on)
2020-01-30 15:36:38.246404 I | op-mgr: successful modules: crash
2020-01-30 15:36:38.811445 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/051584200
2020-01-30 15:36:39.201796 I | exec: module 'orchestrator_cli' is already enabled (always-on)
2020-01-30 15:36:39.202127 I | exec: Running command: ceph orchestrator set backend rook --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/052728199
2020-01-30 15:36:39.543574 I | exec: Running command: ceph config get mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/597151546
2020-01-30 15:36:40.049542 I | op-mgr: successful modules: orchestrator modules
2020-01-30 15:36:40.215951 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/668632401
2020-01-30 15:36:40.873530 I | exec: Running command: ceph config get mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/859800188
2020-01-30 15:36:41.494582 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/804363691
2020-01-30 15:36:42.300930 I | op-mgr: successful modules: http bind settings
2020-01-30 15:36:43.174103 I | op-mgr: the dashboard secret was already generated
2020-01-30 15:36:43.174258 I | exec: Running command: ceph dashboard create-self-signed-cert --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/655116558
2020-01-30 15:36:43.853763 I | op-mgr: Running command: ceph dashboard set-login-credentials admin *******
2020-01-30 15:36:44.886990 I | exec: Running command: ceph config get mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/155983728
2020-01-30 15:36:45.574835 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/465775631
2020-01-30 15:36:46.183994 I | exec: Running command: ceph config get mgr.a mgr/dashboard/ssl --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/059558946
2020-01-30 15:36:46.743401 I | exec: Running command: ceph config set mgr.a mgr/dashboard/ssl true --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/141903129
2020-01-30 15:36:47.297219 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_port --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/801793956
2020-01-30 15:36:47.907852 I | exec: Running command: ceph config set mgr.a mgr/dashboard/server_port 8443 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/680358579
2020-01-30 15:36:48.478030 I | exec: Running command: ceph config get mgr.a mgr/dashboard/ssl_server_port --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/584375926
2020-01-30 15:36:49.410959 I | exec: Running command: ceph config set mgr.a mgr/dashboard/ssl_server_port 8443 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/338800477
2020-01-30 15:36:50.044206 I | op-mgr: dashboard config has changed. restarting the dashboard module.
2020-01-30 15:36:50.044237 I | op-mgr: restarting the mgr module
2020-01-30 15:36:50.044330 I | exec: Running command: ceph mgr module disable dashboard --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/161049880
2020-01-30 15:36:51.460070 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/181631383
2020-01-30 15:36:52.525879 I | op-mgr: successful modules: dashboard
2020-01-30 15:36:52.551643 I | op-mgr: mgr metrics service already exists
2020-01-30 15:36:52.551829 I | op-osd: start running osds in namespace rook-ceph
2020-01-30 15:36:52.551907 I | op-osd: start provisioning the osds on pvcs, if needed
2020-01-30 15:36:52.563729 I | op-osd: successfully provisioned osd for storageClassDeviceSet set1 of set 0
2020-01-30 15:36:52.567939 I | op-osd: successfully provisioned osd for storageClassDeviceSet set1 of set 1
2020-01-30 15:36:52.572026 I | op-osd: successfully provisioned osd for storageClassDeviceSet set1 of set 2
2020-01-30 15:36:52.588540 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-set1-0-data-2kfjk to start a new one
2020-01-30 15:36:52.599545 I | op-k8sutil: batch job rook-ceph-osd-prepare-set1-0-data-2kfjk still exists
2020-01-30 15:36:54.603106 I | op-k8sutil: batch job rook-ceph-osd-prepare-set1-0-data-2kfjk deleted
2020-01-30 15:36:54.612997 I | op-osd: osd provision job started for node set1-0-data-2kfjk
2020-01-30 15:36:54.641394 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-set1-1-data-5zsh5 to start a new one
2020-01-30 15:36:54.701240 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c
2020-01-30 15:36:54.727275 I | op-k8sutil: batch job rook-ceph-osd-prepare-set1-1-data-5zsh5 still exists
2020-01-30 15:36:56.730045 I | op-k8sutil: batch job rook-ceph-osd-prepare-set1-1-data-5zsh5 deleted
2020-01-30 15:36:56.748463 I | op-osd: osd provision job started for node set1-1-data-5zsh5
2020-01-30 15:36:56.764931 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-set1-2-data-c5mwp to start a new one
2020-01-30 15:36:56.790112 I | op-k8sutil: batch job rook-ceph-osd-prepare-set1-2-data-c5mwp still exists
2020-01-30 15:36:56.856331 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:36:58.793653 I | op-k8sutil: batch job rook-ceph-osd-prepare-set1-2-data-c5mwp deleted
2020-01-30 15:36:58.802503 I | op-osd: osd provision job started for node set1-2-data-c5mwp
2020-01-30 15:36:58.802672 I | op-osd: start osds after provisioning is completed, if needed
2020-01-30 15:36:58.808999 I | op-osd: osd orchestration status for node set1-0-data-2kfjk is starting
2020-01-30 15:36:58.809042 I | op-osd: osd orchestration status for node set1-1-data-5zsh5 is starting
2020-01-30 15:36:58.809058 I | op-osd: osd orchestration status for node set1-2-data-c5mwp is starting
2020-01-30 15:36:58.809068 I | op-osd: 0/3 node(s) completed osd provisioning, resource version 25214452
2020-01-30 15:36:58.859336 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c
2020-01-30 15:37:15.050481 I | op-osd: osd orchestration status for node set1-0-data-2kfjk is computingDiff
2020-01-30 15:37:15.099125 I | op-osd: osd orchestration status for node set1-0-data-2kfjk is orchestrating
2020-01-30 15:37:16.835960 I | op-osd: osd orchestration status for node set1-1-data-5zsh5 is computingDiff
2020-01-30 15:37:16.867141 I | op-osd: osd orchestration status for node set1-1-data-5zsh5 is orchestrating
2020-01-30 15:37:19.103192 I | op-osd: osd orchestration status for node set1-2-data-c5mwp is computingDiff
2020-01-30 15:37:19.138960 I | op-osd: osd orchestration status for node set1-2-data-c5mwp is orchestrating
2020-01-30 15:37:20.731229 I | op-osd: osd orchestration status for node set1-0-data-2kfjk is completed
2020-01-30 15:37:20.731271 I | op-osd: starting 1 osd daemons on pvc set1-0-data-2kfjk
2020-01-30 15:37:20.731481 I | exec: Running command: ceph auth get-or-create-key osd.1 osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/322793482
2020-01-30 15:37:21.348445 I | op-osd: deployment for osd 1 already exists. updating if needed
2020-01-30 15:37:21.373218 I | op-k8sutil: updating deployment rook-ceph-osd-1
2020-01-30 15:37:29.323018 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:37:29.467203 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c
2020-01-30 15:37:37.493616 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-1
2020-01-30 15:37:37.493656 I | op-osd: started deployment for osd 1 (dir=false, type=)
2020-01-30 15:37:37.669414 I | op-osd: osd orchestration status for node set1-1-data-5zsh5 is completed
2020-01-30 15:37:37.669439 I | op-osd: starting 1 osd daemons on pvc set1-1-data-5zsh5
2020-01-30 15:37:37.669927 I | exec: Running command: ceph auth get-or-create-key osd.0 osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/929302497
2020-01-30 15:37:38.151002 I | op-osd: deployment for osd 0 already exists. updating if needed
2020-01-30 15:37:38.169884 I | op-k8sutil: updating deployment rook-ceph-osd-0
2020-01-30 15:37:48.317017 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:37:48.466680 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
W0130 15:41:31.172765      12 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:179: watch of *v1.ConfigMap ended with: too old resource version: 25214670 (25216054)
2020-01-30 15:43:39.395557 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-0
2020-01-30 15:43:39.395602 I | op-osd: started deployment for osd 0 (dir=false, type=)
2020-01-30 15:43:39.400610 I | op-osd: osd orchestration status for node set1-2-data-c5mwp is completed
2020-01-30 15:43:39.400642 I | op-osd: starting 1 osd daemons on pvc set1-2-data-c5mwp
2020-01-30 15:43:39.400911 I | exec: Running command: ceph auth get-or-create-key osd.2 osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/104701388
2020-01-30 15:43:39.628064 I | op-cluster: skip orchestration on device config map update for OSDs on PVC
2020-01-30 15:43:40.148872 I | op-osd: deployment for osd 2 already exists. updating if needed
2020-01-30 15:43:40.168705 I | op-k8sutil: updating deployment rook-ceph-osd-2
2020-01-30 15:43:49.377354 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:43:49.648817 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc
2020-01-30 15:44:06.287042 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-2 | |
2020-01-30 15:44:06.287078 I | op-osd: started deployment for osd 2 (dir=false, type=) | |
2020-01-30 15:44:06.295887 I | op-osd: 3/3 node(s) completed osd provisioning | |
2020-01-30 15:44:06.295950 I | op-osd: start provisioning the osds on nodes, if needed | |
2020-01-30 15:44:06.312027 I | op-osd: 4 of the 4 storage nodes are valid | |
2020-01-30 15:44:06.331701 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-34np to start a new one | |
2020-01-30 15:44:06.348123 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-34np still exists | |
2020-01-30 15:44:06.413352 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-34np will be 7bd82ab0fa96ccd23ff842822b152aa5 | |
2020-01-30 15:44:08.351278 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-34np deleted | |
2020-01-30 15:44:08.368307 I | op-osd: osd provision job started for node gke-tc-dev-1-tc-dev-big-2-74756ee2-34np | |
2020-01-30 15:44:08.393395 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 to start a new one | |
2020-01-30 15:44:08.438998 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-34np will be 7bd82ab0fa96ccd23ff842822b152aa5 | |
2020-01-30 15:44:08.446488 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 still exists | |
2020-01-30 15:44:09.901593 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:10.454050 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 deleted | |
2020-01-30 15:44:10.463030 I | op-osd: osd provision job started for node gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 | |
2020-01-30 15:44:10.473695 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c to start a new one | |
2020-01-30 15:44:10.486340 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c still exists | |
2020-01-30 15:44:10.554983 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc | |
2020-01-30 15:44:10.660729 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:12.489420 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c deleted | |
2020-01-30 15:44:12.504723 I | op-osd: osd provision job started for node gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c | |
2020-01-30 15:44:12.519290 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-sjb5 to start a new one | |
2020-01-30 15:44:12.537749 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-sjb5 still exists | |
2020-01-30 15:44:12.662853 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc | |
2020-01-30 15:44:14.540790 I | op-k8sutil: batch job rook-ceph-osd-prepare-gke-tc-dev-1-tc-dev-big-2-74756ee2-sjb5 deleted | |
2020-01-30 15:44:14.555163 I | op-osd: osd provision job started for node gke-tc-dev-1-tc-dev-big-2-74756ee2-sjb5 | |
2020-01-30 15:44:14.555200 I | op-osd: start osds after provisioning is completed, if needed | |
2020-01-30 15:44:14.561222 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-34np is completed | |
2020-01-30 15:44:14.561253 I | op-osd: starting 0 osd daemons on node gke-tc-dev-1-tc-dev-big-2-74756ee2-34np | |
2020-01-30 15:44:14.574067 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 is orchestrating | |
2020-01-30 15:44:14.574107 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c is starting | |
2020-01-30 15:44:14.574124 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-sjb5 is starting | |
2020-01-30 15:44:14.574133 I | op-osd: 1/4 node(s) completed osd provisioning, resource version 25217896 | |
2020-01-30 15:44:15.000904 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c is computingDiff | |
2020-01-30 15:44:15.661832 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c is orchestrating | |
2020-01-30 15:44:18.074651 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-sjb5 is computingDiff | |
2020-01-30 15:44:18.506455 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-sjb5 is orchestrating | |
2020-01-30 15:44:19.828928 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 is completed | |
2020-01-30 15:44:19.828960 I | op-osd: starting 1 osd daemons on node gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 | |
2020-01-30 15:44:19.829276 I | exec: Running command: ceph auth get-or-create-key osd.1 osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/951862971 | |
2020-01-30 15:44:20.521365 I | op-osd: deployment for osd 1 already exists. updating if needed | |
2020-01-30 15:44:20.559218 I | op-k8sutil: updating deployment rook-ceph-osd-1 | |
2020-01-30 15:44:29.325469 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:29.486457 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:29.552228 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:29.590160 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:29.640275 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:29.743201 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:34.478418 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:44:34.639677 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:45:21.097879 I | op-cluster: skip orchestration on device config map update for OSDs on PVC | |
W0130 15:47:03.183087 12 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:179: watch of *v1.ConfigMap ended with: too old resource version: 25218452 (25218600) | |
2020-01-30 15:47:32.366908 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
2020-01-30 15:47:37.866584 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-8d62 will be b8a19b94e148e22560cac62ded7c898c | |
W0130 15:52:06.202424 12 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:179: watch of *v1.ConfigMap ended with: too old resource version: 25219177 (25220939) | |
W0130 16:00:53.212402 12 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:179: watch of *v1.ConfigMap ended with: too old resource version: 25221508 (25224805) | |
2020-01-30 16:04:24.348302 E | op-osd: failed to update osd deployment 1. gave up waiting for deployment rook-ceph-osd-1 to update | |
2020-01-30 16:04:24.348341 I | op-osd: started deployment for osd 1 (dir=false, type=) | |
2020-01-30 16:04:24.355171 I | op-osd: osd orchestration status for node gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c is completed | |
2020-01-30 16:04:24.355207 I | op-osd: starting 2 osd daemons on node gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c | |
2020-01-30 16:04:24.355489 I | exec: Running command: ceph auth get-or-create-key osd.0 osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/175115486 | |
2020-01-30 16:04:24.936579 I | op-osd: deployment for osd 0 already exists. updating if needed | |
2020-01-30 16:04:24.954345 I | op-k8sutil: updating deployment rook-ceph-osd-0 | |
2020-01-30 16:04:59.320675 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc | |
2020-01-30 16:04:59.544034 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc | |
2020-01-30 16:05:09.147738 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-0 | |
2020-01-30 16:05:09.147767 I | op-osd: started deployment for osd 0 (dir=false, type=) | |
2020-01-30 16:05:09.147889 I | exec: Running command: ceph auth get-or-create-key osd.2 osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/867728037 | |
2020-01-30 16:05:09.730753 I | op-osd: deployment for osd 2 already exists. updating if needed | |
2020-01-30 16:05:09.743824 I | op-k8sutil: updating deployment rook-ceph-osd-2 | |
2020-01-30 16:05:19.369737 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc | |
2020-01-30 16:05:19.633230 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName gke-tc-dev-1-tc-dev-big-2-74756ee2-cl5c will be e5d1c93b4aac1091862f4e71488f7edc | |
2020-01-30 16:06:33.489371 I | op-cluster: skip orchestration on device config map update for OSDs on PVC |