@peterska
Created October 3, 2018 08:30
Rook Ceph operator logs from a failed 0.8.3 upgrade
[peters@troy ceph]$ kubectl logs -n rook-ceph-system -l app=rook-ceph-operator
2018-10-03 07:42:35.535543 I | rookcmd: starting Rook v0.8.3 with arguments '/usr/local/bin/rook ceph operator'
2018-10-03 07:42:35.535617 I | rookcmd: flag values: --help=false, --log-level=INFO, --mon-healthcheck-interval=45s, --mon-out-timeout=5m0s
2018-10-03 07:42:35.536379 I | cephcmd: starting operator
2018-10-03 07:42:35.597170 I | op-agent: getting flexvolume dir path from FLEXVOLUME_DIR_PATH env var
2018-10-03 07:42:35.597191 I | op-agent: flexvolume dir path env var FLEXVOLUME_DIR_PATH is not provided. Defaulting to: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
2018-10-03 07:42:35.597196 I | op-agent: discovered flexvolume dir path from source default. value: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
2018-10-03 07:42:35.611939 I | op-agent: rook-ceph-agent daemonset already exists, updating ...
2018-10-03 07:42:35.625192 I | op-discover: rook-discover daemonset already exists, updating ...
2018-10-03 07:42:35.631921 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
2018-10-03 07:42:35.632710 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
2018-10-03 07:42:35.632718 I | op-cluster: start watching clusters in all namespaces
2018-10-03 07:42:35.635442 I | op-cluster: skipping watching for legacy rook cluster events (legacy cluster CRD probably doesn't exist): the server could not find the requested resource (get clusters.rook.io)
2018-10-03 07:42:35.660929 I | op-cluster: starting cluster in namespace rook-ceph
2018-10-03 07:42:41.680028 I | op-k8sutil: verified the ownerref can be set on resources
2018-10-03 07:42:41.683477 I | op-mon: start running mons
2018-10-03 07:42:41.690848 I | cephmon: parsing mon endpoints: rook-ceph-mon13=192.168.3.68:6794,rook-ceph-mon8=192.168.3.1:6791,rook-ceph-mon15=192.168.3.74:6790
2018-10-03 07:42:41.691114 I | op-mon: loaded: maxMonID=15, mons=map[rook-ceph-mon13:0xc4202a9660 rook-ceph-mon8:0xc4202a9880 rook-ceph-mon15:0xc4202a9d00], mapping=&{Node:map[rook-ceph-mon13:0xc4202cc480 rook-ceph-mon15:0xc4202cc570 rook-ceph-mon8:0xc4202cc5a0] Port:map[kronos.swdevel.serendipity-software.com.au:6795 raid2.swdevel.serendipity-software.com.au:6794 raid3.swdevel.serendipity-software.com.au:6790]}
2018-10-03 07:42:41.697550 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"rook-ceph-mon13":{"Name":"raid2.swdevel.serendipity-software.com.au","Hostname":"raid2.swdevel.serendipity-software.com.au","Address":"192.168.3.68"},"rook-ceph-mon15":{"Name":"raid3.swdevel.serendipity-software.com.au","Hostname":"raid3.swdevel.serendipity-software.com.au","Address":"192.168.3.74"},"rook-ceph-mon8":{"Name":"kronos.swdevel.serendipity-software.com.au","Hostname":"kronos.swdevel.serendipity-software.com.au","Address":"192.168.3.1"}},"port":{"kronos.swdevel.serendipity-software.com.au":6795,"raid2.swdevel.serendipity-software.com.au":6794,"raid3.swdevel.serendipity-software.com.au":6790}} data:rook-ceph-mon13=192.168.3.68:6794,rook-ceph-mon8=192.168.3.1:6791,rook-ceph-mon15=192.168.3.74:6790 maxMonId:15]
2018-10-03 07:42:41.698075 I | cephmon: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2018-10-03 07:42:41.698185 I | cephmon: copying config to /etc/ceph/ceph.conf
2018-10-03 07:42:41.698261 I | cephmon: generated admin config in /var/lib/rook/rook-ceph
2018-10-03 07:42:42.004329 I | op-mgr: start running mgr
2018-10-03 07:42:42.006939 I | op-mgr: the mgr keyring was already generated
2018-10-03 07:42:42.011503 I | op-mgr: rook-ceph-mgr-a deployment already exists
2018-10-03 07:42:42.011609 I | exec: Running command: ceph mgr module enable prometheus --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/422506645
2018-10-03 07:42:43.321259 I | op-mgr: mgr metrics service already exists
2018-10-03 07:42:43.321382 I | exec: Running command: ceph mgr module enable dashboard --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/265758448
2018-10-03 07:42:44.387838 I | op-mgr: dashboard service already exists
2018-10-03 07:42:44.387869 I | op-osd: start running osds in namespace rook-ceph
2018-10-03 07:42:44.388014 I | exec: Running command: ceph osd set noscrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/185977743
2018-10-03 07:42:45.616510 I | exec: noscrub is set
2018-10-03 07:42:45.616637 I | exec: Running command: ceph osd set nodeep-scrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/388245410
2018-10-03 07:42:46.686042 I | exec: nodeep-scrub is set
2018-10-03 07:42:46.690980 I | op-osd: 3 of the 3 storage nodes are valid
2018-10-03 07:42:46.690992 I | op-osd: checking if orchestration is still in progress
2018-10-03 07:42:46.695041 I | op-osd: start provisioning the osds on nodes, if needed
2018-10-03 07:42:46.712345 I | op-osd: avail devices for node raid2.swdevel.serendipity-software.com.au: []
2018-10-03 07:42:46.717036 I | op-osd: Removing previous provision job for node raid2.swdevel.serendipity-software.com.au to start a new one
2018-10-03 07:42:46.722279 I | op-osd: batch job rook-ceph-osd-prepare-raid2.swdevel.serendipity-software.com.au still exists
2018-10-03 07:42:48.726087 I | op-osd: batch job rook-ceph-osd-prepare-raid2.swdevel.serendipity-software.com.au deleted
2018-10-03 07:42:48.731670 I | op-osd: osd provision job started for node raid2.swdevel.serendipity-software.com.au
2018-10-03 07:42:48.758095 I | op-osd: avail devices for node raid3.swdevel.serendipity-software.com.au: []
2018-10-03 07:42:48.760210 I | op-osd: Removing previous provision job for node raid3.swdevel.serendipity-software.com.au to start a new one
2018-10-03 07:42:48.765247 I | op-osd: batch job rook-ceph-osd-prepare-raid3.swdevel.serendipity-software.com.au still exists
2018-10-03 07:42:50.768623 I | op-osd: batch job rook-ceph-osd-prepare-raid3.swdevel.serendipity-software.com.au deleted
2018-10-03 07:42:50.773603 I | op-osd: osd provision job started for node raid3.swdevel.serendipity-software.com.au
2018-10-03 07:42:50.796697 I | op-osd: avail devices for node salak.swdevel.serendipity-software.com.au: []
2018-10-03 07:42:50.798854 I | op-osd: Removing previous provision job for node salak.swdevel.serendipity-software.com.au to start a new one
2018-10-03 07:42:50.806502 I | op-osd: batch job rook-ceph-osd-prepare-salak.swdevel.serendipity-software.com.au still exists
2018-10-03 07:42:52.809209 I | op-osd: batch job rook-ceph-osd-prepare-salak.swdevel.serendipity-software.com.au deleted
2018-10-03 07:42:52.813416 I | op-osd: osd provision job started for node salak.swdevel.serendipity-software.com.au
2018-10-03 07:42:52.813430 I | op-osd: start osds after provisioning is completed, if needed
2018-10-03 07:42:52.816825 I | op-osd: osd orchestration status for node raid2.swdevel.serendipity-software.com.au is completed
2018-10-03 07:42:52.816840 I | op-osd: starting 0 osd daemons on node raid2.swdevel.serendipity-software.com.au
2018-10-03 07:42:52.826377 I | op-osd: osd orchestration status for node raid3.swdevel.serendipity-software.com.au is completed
2018-10-03 07:42:52.826400 I | op-osd: starting 0 osd daemons on node raid3.swdevel.serendipity-software.com.au
2018-10-03 07:42:52.835399 I | op-osd: osd orchestration status for node salak.swdevel.serendipity-software.com.au is starting
2018-10-03 07:42:52.835415 I | op-osd: 2/3 node(s) completed osd provisioning, resource version 12393591
2018-10-03 07:42:53.739382 I | op-osd: osd orchestration status for node salak.swdevel.serendipity-software.com.au is computingDiff
2018-10-03 07:42:54.033256 I | op-osd: osd orchestration status for node salak.swdevel.serendipity-software.com.au is orchestrating
2018-10-03 07:42:54.038908 I | op-osd: osd orchestration status for node salak.swdevel.serendipity-software.com.au is completed
2018-10-03 07:42:54.038928 I | op-osd: starting 0 osd daemons on node salak.swdevel.serendipity-software.com.au
2018-10-03 07:42:54.043796 I | op-osd: 3/3 node(s) completed osd provisioning
2018-10-03 07:42:54.043848 I | op-osd: checking if any nodes were removed
2018-10-03 07:42:54.058050 I | op-osd: processing 0 removed nodes
2018-10-03 07:42:54.058071 I | op-osd: done processing removed nodes
2018-10-03 07:42:54.058080 I | op-osd: completed running osds in namespace rook-ceph
2018-10-03 07:42:54.058203 I | exec: Running command: ceph osd unset noscrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/159326361
2018-10-03 07:42:54.535509 I | exec: noscrub is unset
2018-10-03 07:42:54.535620 I | exec: Running command: ceph osd unset nodeep-scrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/127389476
2018-10-03 07:42:55.572836 I | exec: nodeep-scrub is unset
2018-10-03 07:42:55.572911 I | op-cluster: Done creating rook instance in namespace rook-ceph
2018-10-03 07:42:55.581340 I | op-pool: start watching pool resources in namespace rook-ceph
2018-10-03 07:42:55.582352 I | op-pool: skipping watching for legacy rook pool events (legacy pool CRD probably doesn't exist): the server could not find the requested resource (get pools.rook.io)
2018-10-03 07:42:55.582366 I | op-object: start watching object store resources in namespace rook-ceph
2018-10-03 07:42:55.583169 I | op-object: skipping watching for legacy rook objectstore events (legacy objectstore CRD probably doesn't exist): the server could not find the requested resource (get objectstores.rook.io)
2018-10-03 07:42:55.583188 I | op-file: start watching filesystem resource in namespace rook-ceph
2018-10-03 07:42:55.584231 I | op-file: skipping watching for legacy rook filesystem events (legacy filesystem CRD probably doesn't exist): the server could not find the requested resource (get filesystems.rook.io)
2018-10-03 07:42:55.586593 I | op-cluster: finalizer already set on cluster rook-ceph
2018-10-03 07:42:55.586849 I | exec: Running command: ceph osd crush dump --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/280540211
2018-10-03 07:42:55.587007 I | op-cluster: update event for cluster rook-ceph
2018-10-03 07:42:55.587149 I | op-cluster: update event for cluster rook-ceph is not supported
2018-10-03 07:42:55.587238 I | op-cluster: update event for cluster rook-ceph
2018-10-03 07:42:55.587454 I | op-cluster: update event for cluster rook-ceph is not supported
2018-10-03 07:42:55.791395 I | op-pool: creating pool kubernetes-pool in namespace rook-ceph
2018-10-03 07:42:55.791460 I | exec: Running command: ceph osd crush rule create-simple kubernetes-pool default osd --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/392863222
2018-10-03 07:42:56.031039 I | exec: rule kubernetes-pool already exists
2018-10-03 07:42:56.031173 I | exec: Running command: ceph osd pool create kubernetes-pool 0 replicated kubernetes-pool --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/796885213
2018-10-03 07:42:56.261920 I | exec: pool 'kubernetes-pool' already exists
2018-10-03 07:42:56.262061 I | exec: Running command: ceph osd pool set kubernetes-pool size 2 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/148212888
2018-10-03 07:42:56.882603 I | exec: set pool 3 size to 2
2018-10-03 07:42:56.882733 I | exec: Running command: ceph osd pool application enable kubernetes-pool kubernetes-pool --yes-i-really-mean-it --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/149820183
2018-10-03 07:42:57.969561 I | exec: enabled application 'kubernetes-pool' on pool 'kubernetes-pool'
2018-10-03 07:42:57.969655 I | cephclient: creating replicated pool kubernetes-pool succeeded, buf:
2018-10-03 07:42:57.969664 I | op-pool: created pool kubernetes-pool
2018-10-03 07:42:57.969747 I | exec: Running command: ceph osd crush dump --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/141938570
2018-10-03 07:42:58.169997 I | op-pool: creating pool libvirt-pool in namespace rook-ceph
2018-10-03 07:42:58.170062 I | exec: Running command: ceph osd crush rule create-simple libvirt-pool default osd --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/290648417
2018-10-03 07:42:58.401235 I | exec: rule libvirt-pool already exists
2018-10-03 07:42:58.401373 I | exec: Running command: ceph osd pool create libvirt-pool 0 replicated libvirt-pool --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/698555724
2018-10-03 07:42:58.619605 I | exec: pool 'libvirt-pool' already exists
2018-10-03 07:42:58.619741 I | exec: Running command: ceph osd pool set libvirt-pool size 2 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/869251131
2018-10-03 07:42:59.255599 I | exec: set pool 6 size to 2
2018-10-03 07:42:59.255741 I | exec: Running command: ceph osd pool application enable libvirt-pool libvirt-pool --yes-i-really-mean-it --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/670055518
2018-10-03 07:43:00.387913 I | exec: enabled application 'libvirt-pool' on pool 'libvirt-pool'
2018-10-03 07:43:00.388019 I | cephclient: creating replicated pool libvirt-pool succeeded, buf:
2018-10-03 07:43:00.388028 I | op-pool: created pool libvirt-pool
2018-10-03 07:43:00.388121 I | exec: Running command: ceph osd crush dump --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/146644517
2018-10-03 07:43:00.604282 I | op-pool: creating pool serendipity-pool in namespace rook-ceph
2018-10-03 07:43:00.604362 I | exec: Running command: ceph osd crush rule create-simple serendipity-pool default osd --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/105731904
2018-10-03 07:43:00.814487 I | exec: rule serendipity-pool already exists
2018-10-03 07:43:00.814628 I | exec: Running command: ceph osd pool create serendipity-pool 0 replicated serendipity-pool --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/579556767
2018-10-03 07:43:01.041668 I | exec: pool 'serendipity-pool' already exists
2018-10-03 07:43:01.041792 I | exec: Running command: ceph osd pool set serendipity-pool size 2 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/101108850
2018-10-03 07:43:01.616231 I | exec: set pool 4 size to 2
2018-10-03 07:43:01.616385 I | exec: Running command: ceph osd pool application enable serendipity-pool serendipity-pool --yes-i-really-mean-it --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/137491241
2018-10-03 07:43:02.798509 I | exec: enabled application 'serendipity-pool' on pool 'serendipity-pool'
2018-10-03 07:43:02.798587 I | cephclient: creating replicated pool serendipity-pool succeeded, buf:
2018-10-03 07:43:02.798593 I | op-pool: created pool serendipity-pool
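The operator pass above reports the mons, mgr, OSD provisioning and all three pools as completed, so the log alone does not show where the upgrade failed. A minimal sketch of follow-up checks one might run next, assuming the rook-ceph-system and rook-ceph namespaces from the log; the rook-ceph-tools toolbox pod is an assumption and does not appear in these logs:

[peters@troy ceph]$ kubectl -n rook-ceph-system get pods -o wide                         # operator, agent and discover pods and the image they run
[peters@troy ceph]$ kubectl -n rook-ceph get pods                                        # mon, mgr and osd pods created by the operator
[peters@troy ceph]$ kubectl -n rook-ceph exec -it rook-ceph-tools -- ceph status         # cluster health via the toolbox (assumed to be deployed)
[peters@troy ceph]$ kubectl -n rook-ceph exec -it rook-ceph-tools -- ceph versions       # confirm which Ceph version each daemon is actually running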