Created June 12, 2020 22:58
2020-06-12 22:50:57.634149 I | rookcmd: starting Rook v1.3.1 with arguments '/usr/local/bin/rook ceph operator'
2020-06-12 22:50:57.634820 I | rookcmd: flag values: --add_dir_header=false, --alsologtostderr=false, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-dep-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-dep.yaml, --csi-cephfs-provisioner-sts-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-sts.yaml, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-dep-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-dep.yaml, --csi-rbd-provisioner-sts-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-sts.yaml, --enable-discovery-daemon=true, --enable-flex-driver=false, --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-flush-frequency=5s, --log-level=INFO, --log_backtrace_at=:0, --log_dir=, --log_file=, --log_file_max_size=1800, --logtostderr=true, --master=, --mon-healthcheck-interval=45s, --mon-out-timeout=10m0s, --operator-image=, --service-account=, --skip_headers=false, --skip_log_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2020-06-12 22:50:57.634826 I | cephcmd: starting operator
2020-06-12 22:50:57.693809 I | op-discover: rook-discover daemonset started
2020-06-12 22:50:57.696214 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
I0612 22:50:57.696332       7 leaderelection.go:242] attempting to acquire leader lease rook-ceph/ceph.rook.io-block...
2020-06-12 22:50:57.696441 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
2020-06-12 22:50:57.696460 I | operator: Watching all namespaces for cluster CRDs
2020-06-12 22:50:57.696467 I | op-cluster: start watching clusters in all namespaces
2020-06-12 22:50:57.696490 I | op-cluster: Enabling hotplug orchestration: ROOK_DISABLE_DEVICE_HOTPLUG=false
I0612 22:50:57.696499       7 leaderelection.go:242] attempting to acquire leader lease rook-ceph/rook.io-block...
2020-06-12 22:50:57.696598 I | operator: setting up the controller-runtime manager
2020-06-12 22:50:57.701883 I | op-cluster: ConfigMap "rook-ceph-operator-config" changes detected. Updating configurations
I0612 22:50:57.722357       7 leaderelection.go:252] successfully acquired lease rook-ceph/rook.io-block
I0612 22:50:57.722458       7 controller.go:780] Starting provisioner controller rook.io/block_rook-ceph-operator-599765ff49-nzq8p_b93923b7-1444-4904-95ab-dfd52b69e1f5!
I0612 22:50:57.722588       7 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph", Name:"rook.io-block", UID:"2a0257cb-91b2-44dd-b640-23a25bb1b6f6", APIVersion:"v1", ResourceVersion:"225117", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-599765ff49-nzq8p_b93923b7-1444-4904-95ab-dfd52b69e1f5 became leader
I0612 22:50:57.731355       7 leaderelection.go:252] successfully acquired lease rook-ceph/ceph.rook.io-block
I0612 22:50:57.731458       7 controller.go:780] Starting provisioner controller ceph.rook.io/block_rook-ceph-operator-599765ff49-nzq8p_c1ca0472-92fa-4d41-9e2a-3983cbca75d8!
I0612 22:50:57.731479       7 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph", Name:"ceph.rook.io-block", UID:"f0e2a2da-7245-4276-b217-7d350f3251b5", APIVersion:"v1", ResourceVersion:"225118", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-599765ff49-nzq8p_c1ca0472-92fa-4d41-9e2a-3983cbca75d8 became leader
2020-06-12 22:50:58.152033 I | operator: starting the controller-runtime manager
I0612 22:50:58.322770       7 controller.go:829] Started provisioner controller rook.io/block_rook-ceph-operator-599765ff49-nzq8p_b93923b7-1444-4904-95ab-dfd52b69e1f5!
I0612 22:50:59.332128       7 controller.go:829] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-599765ff49-nzq8p_c1ca0472-92fa-4d41-9e2a-3983cbca75d8!
2020-06-12 22:51:08.497852 I | op-cluster: starting cluster in namespace rook-ceph
2020-06-12 22:51:08.566702 I | ceph-csi: detecting the ceph csi image version for image "quay.io/cephcsi/cephcsi:v2.0.1"
2020-06-12 22:51:11.572561 I | ceph-csi: Detected ceph CSI image version: "v2.0.1"
2020-06-12 22:51:11.610997 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2020-06-12 22:51:15.539496 I | ceph-csi: CSIDriver object created for driver "rook-ceph.rbd.csi.ceph.com"
2020-06-12 22:51:15.551527 I | ceph-csi: CSIDriver object created for driver "rook-ceph.cephfs.csi.ceph.com"
2020-06-12 22:51:15.551560 I | operator: successfully started Ceph CSI driver(s)
2020-06-12 22:51:21.622085 I | op-cluster: detecting the ceph image version for image ceph/ceph:v14.2.8...
2020-06-12 22:51:24.569180 I | op-cluster: Detected ceph image version: "14.2.8-0 nautilus"
2020-06-12 22:51:24.576808 E | cephconfig: clusterInfo: <nil>
2020-06-12 22:51:24.576838 I | op-cluster: cluster "rook-ceph": version "14.2.8-0 nautilus" detected for image "ceph/ceph:v14.2.8"
2020-06-12 22:51:24.622106 I | op-config: CephCluster "rook-ceph" status: "Progressing". "Cluster is creating"
2020-06-12 22:51:24.644859 I | op-mon: start running mons
2020-06-12 22:51:24.801033 I | op-mon: creating mon secrets for a new cluster
2020-06-12 22:51:24.828689 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":[]}] data: mapping:{"node":{}} maxMonId:-1]
2020-06-12 22:51:25.000755 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-06-12 22:51:25.000916 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-06-12 22:51:26.607433 I | op-mon: targeting the mon count 3
2020-06-12 22:51:26.622112 I | op-mon: sched-mon: created canary deployment rook-ceph-mon-a-canary
2020-06-12 22:51:27.399710 I | op-mon: sched-mon: canary monitor deployment rook-ceph-mon-a-canary scheduled to wkf-sre-kube-worker-1
2020-06-12 22:51:27.399743 I | op-mon: assignmon: mon a assigned to node wkf-sre-kube-worker-1
2020-06-12 22:51:27.413963 I | op-mon: sched-mon: created canary deployment rook-ceph-mon-b-canary
2020-06-12 22:51:27.800188 I | op-mon: sched-mon: canary monitor deployment rook-ceph-mon-b-canary scheduled to wkf-sre-kube-worker-2
2020-06-12 22:51:27.800211 I | op-mon: assignmon: mon b assigned to node wkf-sre-kube-worker-2
2020-06-12 22:51:27.811758 I | op-mon: sched-mon: created canary deployment rook-ceph-mon-c-canary
2020-06-12 22:51:28.398683 I | op-mon: sched-mon: canary monitor deployment rook-ceph-mon-c-canary scheduled to nht-sre-kube-worker-4
2020-06-12 22:51:28.398709 I | op-mon: assignmon: mon c assigned to node nht-sre-kube-worker-4
2020-06-12 22:51:28.411963 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-a-canary" and canary pvc "".
2020-06-12 22:51:28.412006 I | op-k8sutil: removing deployment rook-ceph-mon-a-canary if it exists
2020-06-12 22:51:28.427345 I | op-k8sutil: Removed deployment rook-ceph-mon-a-canary
2020-06-12 22:51:28.438085 I | op-k8sutil: "rook-ceph-mon-a-canary" still found. waiting...
2020-06-12 22:51:38.485792 I | op-k8sutil: "rook-ceph-mon-a-canary" still found. waiting...
2020-06-12 22:51:42.505745 I | op-k8sutil: confirmed rook-ceph-mon-a-canary does not exist
2020-06-12 22:51:42.505780 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-b-canary" and canary pvc "".
2020-06-12 22:51:42.505822 I | op-k8sutil: removing deployment rook-ceph-mon-b-canary if it exists
2020-06-12 22:51:42.523357 I | op-k8sutil: Removed deployment rook-ceph-mon-b-canary
2020-06-12 22:51:42.533958 I | op-k8sutil: "rook-ceph-mon-b-canary" still found. waiting...
2020-06-12 22:51:52.577335 I | op-k8sutil: "rook-ceph-mon-b-canary" still found. waiting...
2020-06-12 22:51:54.585596 I | op-k8sutil: confirmed rook-ceph-mon-b-canary does not exist
2020-06-12 22:51:54.585666 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-c-canary" and canary pvc "".
2020-06-12 22:51:54.585679 I | op-k8sutil: removing deployment rook-ceph-mon-c-canary if it exists
2020-06-12 22:51:54.605870 I | op-k8sutil: Removed deployment rook-ceph-mon-c-canary
2020-06-12 22:51:54.613071 I | op-k8sutil: "rook-ceph-mon-c-canary" still found. waiting...
2020-06-12 22:51:58.629149 I | op-k8sutil: confirmed rook-ceph-mon-c-canary does not exist
2020-06-12 22:51:58.629190 I | op-mon: creating mon a
2020-06-12 22:51:58.657607 I | op-mon: mon "a" endpoint are [v2:10.233.27.29:3300,v1:10.233.27.29:6789]
2020-06-12 22:51:58.683462 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.27.29:6789"]}] data:a=10.233.27.29:6789 mapping:{"node":{"a":{"Name":"wkf-sre-kube-worker-1","Hostname":"wkf-sre-kube-worker-1","Address":"10.200.105.222"},"b":{"Name":"wkf-sre-kube-worker-2","Hostname":"wkf-sre-kube-worker-2","Address":"10.200.105.223"},"c":{"Name":"nht-sre-kube-worker-4","Hostname":"nht-sre-kube-worker-4","Address":"10.200.106.222"}}} maxMonId:2]
2020-06-12 22:51:58.703185 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-06-12 22:51:58.703399 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-06-12 22:51:58.724340 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-06-12 22:51:58.724539 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-06-12 22:51:58.731428 I | op-mon: 0 of 1 expected mon deployments exist. creating new deployment(s).
2020-06-12 22:51:58.758955 I | op-mon: waiting for mon quorum with [a]
2020-06-12 22:51:58.771108 I | op-mon: mon a is not yet running
2020-06-12 22:51:58.771147 I | op-mon: mons running: []
2020-06-12 22:52:19.029139 I | op-mon: mons running: [a]
2020-06-12 22:52:39.188830 I | op-mon: mons running: [a]
2020-06-12 22:52:59.359215 I | op-mon: mons running: [a]
2020-06-12 22:53:19.534751 I | op-mon: mons running: [a]
2020-06-12 22:53:39.675658 I | op-mon: mons running: [a]
2020-06-12 22:53:59.850963 I | op-mon: mons running: [a]
2020-06-12 22:54:20.043374 I | op-mon: mons running: [a]
2020-06-12 22:54:40.191941 I | op-mon: mons running: [a]
2020-06-12 22:55:00.330339 I | op-mon: mons running: [a]
2020-06-12 22:55:20.484850 I | op-mon: mons running: [a]
2020-06-12 22:55:40.635487 I | op-mon: mons running: [a]
2020-06-12 22:56:00.767881 I | op-mon: mons running: [a]
2020-06-12 22:56:20.909266 I | op-mon: mons running: [a]
2020-06-12 22:56:41.069283 I | op-mon: mons running: [a]
2020-06-12 22:57:01.199164 I | op-mon: mons running: [a]
2020-06-12 22:57:21.329399 I | op-mon: mons running: [a]
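
The log ends with the operator stuck reporting "mons running: [a]" while waiting for quorum, so mon "a" was created but the operator never confirmed it in quorum. A few standard kubectl commands that could help diagnose this; a sketch only, assuming the default Rook labels and the namespace/resource names shown in the log above:

```shell
# List mon pods (Rook labels them app=rook-ceph-mon) to see whether
# mon "a" is actually Running or crash-looping
kubectl -n rook-ceph get pods -l app=rook-ceph-mon -o wide

# Inspect mon "a" itself; the deployment name matches the log above
kubectl -n rook-ceph logs deploy/rook-ceph-mon-a

# Check the CephCluster status as the operator records it
kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.status.phase}'
```

If the mon pod is Running but quorum never forms, a common cause is the operator pod being unable to reach the mon service on ports 3300/6789 (the v2/v1 endpoints logged for mon "a").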