@sgoings
Created October 16, 2015 19:27
  • Save sgoings/f8461c011b784e473dc4 to your computer and use it in GitHub Desktop.
● ▴ ■
■ ● ▴ Installing Deis...
▴ ■ ●
Storage subsystem...
deis-store-monitor.service: loaded
deis-store-daemon.service: loaded
deis-store-metadata.service: loaded
deis-store-volume.service: loaded
deis-store-gateway@1.service: loaded
Logging subsystem...
deis-logspout.service: loaded
deis-logger.service: loaded
Control plane...
deis-registry@1.service: loaded
deis-builder.service: loaded
deis-controller.service: loaded
deis-database.service: loaded
Data plane...
deis-publisher.service: loaded
Router mesh...
deis-router@1.service: loaded
deis-router@2.service: loaded
deis-router@3.service: loaded
Done.
Please run `deisctl start platform` to boot up Deis.
● ▴ ■
■ ● ▴ Starting Deis...
▴ ■ ●
Storage subsystem...
deis-store-monitor.service: active/running
Could not find unit: deis-store-daemon.service
deis-store-metadata.service: active/running
deis-store-gateway@1.service: active/running
deis-store-volume.service: active/running
Logging subsystem...
deis-logger.service: active/running
deis-logspout.service: active/running
Control plane...
Could not find unit: deis-controller.service
Could not find unit: deis-registry@1.service
deis-database.service: active/running
Could not find unit: deis-builder.service
Data plane...
deis-publisher.service: active/running
Router mesh...
Could not find unit: deis-router@3.service
deis-router@2.service: active/running
Could not find unit: deis-router@1.service
Done.
Please set up an administrative account. See 'deis help register'
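Several units above came back as "Could not find unit" during start. Not part of the original transcript, but a minimal sketch for pulling those unit names out of a saved copy of the start output (the filename deisctl-start.log is a hypothetical assumption):

```shell
# Hypothetical post-processing: list the unique unit names that deisctl
# reported as missing, assuming the start output was saved to deisctl-start.log.
grep 'Could not find unit' deisctl-start.log | awk -F': ' '{print $2}' | sort -u
```

Against the output above this would surface the builder, controller, registry, store-daemon, and two router units as candidates for a follow-up `deisctl list` or restart.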
Checking infrastructure on azure...
Warning: Permanently added '40.78.25.128' (RSA) to the list of known hosts.
==> etcdctl cluster-health
member 3793770fbf860754 is healthy: got healthy result from http://10.0.0.6:2379
member 4c45d104ba0842cd is healthy: got healthy result from http://10.0.0.5:2379
member d3f582f9c3f15958 is unhealthy: got unhealthy result from http://10.0.0.4:2379
cluster is healthy
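Note that etcdctl still prints "cluster is healthy" above even though one member is unhealthy; it considers the cluster healthy as long as a quorum responds. Not from the original transcript, a small sketch for isolating the unhealthy member IDs from a saved copy of that output (cluster-health.log is a hypothetical filename):

```shell
# Hypothetical post-processing: print the member IDs flagged as unhealthy,
# assuming the cluster-health output was saved to cluster-health.log.
awk '/is unhealthy/ {print $2}' cluster-health.log
```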
==> fleetctl list-machines
MACHINE      IP            METADATA
1402a119...  40.78.26.140  controlPlane=true,dataPlane=true,routerMesh=true
98077168...  40.78.26.140  controlPlane=true,dataPlane=true,routerMesh=true
dbb75ecc...  40.78.26.140  controlPlane=true,dataPlane=true,routerMesh=true
==> fleetctl list-units
UNIT                          MACHINE                   ACTIVE      SUB
deis-controller.service       98077168.../40.78.26.140  activating  start-pre
deis-database.service         dbb75ecc.../40.78.26.140  active      running
deis-logspout.service         98077168.../40.78.26.140  active      running
deis-logspout.service         dbb75ecc.../40.78.26.140  active      running
deis-publisher.service        98077168.../40.78.26.140  active      running
deis-publisher.service        dbb75ecc.../40.78.26.140  active      running
deis-registry@1.service       98077168.../40.78.26.140  activating  start-pre
deis-router@2.service         98077168.../40.78.26.140  active      running
deis-store-daemon.service     98077168.../40.78.26.140  active      running
deis-store-daemon.service     dbb75ecc.../40.78.26.140  active      running
deis-store-gateway@1.service  98077168.../40.78.26.140  activating  start-pre
deis-store-metadata.service   98077168.../40.78.26.140  active      running
deis-store-metadata.service   dbb75ecc.../40.78.26.140  active      running
deis-store-monitor.service    98077168.../40.78.26.140  active      running
deis-store-volume.service     98077168.../40.78.26.140  active      running
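Not part of the original transcript: a hypothetical one-liner for filtering that table down to the units that are not yet active, which matches the set still in activating/start-pre above (fleetctl-units.log is an assumed filename for the saved output):

```shell
# Hypothetical post-processing: skip the header row and print any unit whose
# ACTIVE column is not "active", with its ACTIVE/SUB state.
awk 'NR > 1 && $3 != "active" {print $1, $3 "/" $4}' fleetctl-units.log
```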
==> journalctl -u deis-store-monitor
-- Logs begin at Fri 2015-10-16 17:13:35 UTC, end at Fri 2015-10-16 17:48:27 UTC. --
Oct 16 17:41:25 deisNode0 systemd[1]: Starting deis-store-monitor...
Oct 16 17:41:31 deisNode0 sh[1529]: 3.2: Pulling from alpine
Oct 16 17:41:31 deisNode0 sh[1529]: f4fddc471ec2: Pulling fs layer
Oct 16 17:41:32 deisNode0 sh[1529]: f4fddc471ec2: Verifying Checksum
Oct 16 17:41:32 deisNode0 sh[1529]: f4fddc471ec2: Download complete
Oct 16 17:41:32 deisNode0 sh[1529]: f4fddc471ec2: Pull complete
Oct 16 17:41:32 deisNode0 sh[1529]: Digest: sha256:b4769592c47ebc65f85999c61cf33dcdd5f04a7660c4225b454f875c57ef79fd
Oct 16 17:41:32 deisNode0 sh[1529]: Status: Downloaded newer image for alpine:3.2
Oct 16 17:41:36 deisNode0 sh[1666]: git-2f401a9: Pulling from deisci/store-monitor
Oct 16 17:41:36 deisNode0 sh[1666]: 29abf451e777: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: 1b166211e055: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: 96684b875775: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: a8fac952a98a: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: 19496d155985: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: c98fead7aaa3: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: 4fa39bce9832: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: 405f1a676f2f: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: 5543851ad3ca: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: e181d746e6bd: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: e181d746e6bd: Pulling fs layer
Oct 16 17:41:36 deisNode0 sh[1666]: e181d746e6bd: Layer already being pulled by another client. Waiting.
Oct 16 17:41:37 deisNode0 sh[1666]: 4fa39bce9832: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: 4fa39bce9832: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: a8fac952a98a: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: a8fac952a98a: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: e181d746e6bd: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: e181d746e6bd: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: e181d746e6bd: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: 405f1a676f2f: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: 405f1a676f2f: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: 1b166211e055: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: 1b166211e055: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: 5543851ad3ca: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: 5543851ad3ca: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: c98fead7aaa3: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: c98fead7aaa3: Download complete
Oct 16 17:41:37 deisNode0 sh[1666]: 96684b875775: Verifying Checksum
Oct 16 17:41:37 deisNode0 sh[1666]: 96684b875775: Download complete
Oct 16 17:41:41 deisNode0 sh[1666]: 29abf451e777: Verifying Checksum
Oct 16 17:41:41 deisNode0 sh[1666]: 29abf451e777: Download complete
Oct 16 17:41:49 deisNode0 sh[1666]: 19496d155985: Verifying Checksum
Oct 16 17:41:49 deisNode0 sh[1666]: 19496d155985: Download complete
Oct 16 17:41:50 deisNode0 sh[1666]: 29abf451e777: Pull complete
Oct 16 17:41:51 deisNode0 sh[1666]: 1b166211e055: Pull complete
Oct 16 17:41:51 deisNode0 sh[1666]: 96684b875775: Pull complete
Oct 16 17:41:52 deisNode0 sh[1666]: a8fac952a98a: Pull complete
Oct 16 17:42:08 deisNode0 sh[1666]: 19496d155985: Pull complete
Oct 16 17:42:20 deisNode0 sh[1666]: c98fead7aaa3: Pull complete
Oct 16 17:42:20 deisNode0 sh[1666]: 4fa39bce9832: Pull complete
Oct 16 17:42:21 deisNode0 sh[1666]: 405f1a676f2f: Pull complete
Oct 16 17:42:22 deisNode0 sh[1666]: 5543851ad3ca: Pull complete
Oct 16 17:42:23 deisNode0 sh[1666]: e181d746e6bd: Pull complete
Oct 16 17:42:23 deisNode0 sh[1666]: e181d746e6bd: Already exists
Oct 16 17:42:23 deisNode0 sh[1666]: Digest: sha256:ad29fcee95643d2a3e32eb4aa26a9f494b0db900f142f432cb3e777ddeca935f
Oct 16 17:42:23 deisNode0 sh[1666]: Status: Downloaded newer image for deisci/store-monitor:git-2f401a9
Oct 16 17:42:24 deisNode0 systemd[1]: Started deis-store-monitor.
Oct 16 17:42:27 deisNode0 sh[1773]: 2015-10-16 17:42:27.426044 7f3d4c384700 0 -- :/1000055 >> 10.0.0.6:6789/0 pipe(0x7f3d4402e010 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3d44024980).fault
Oct 16 17:42:30 deisNode0 sh[1773]: 2015-10-16 17:42:30.425852 7f3d4c182700 0 -- 10.0.0.5:0/1000055 >> 10.0.0.5:6789/0 pipe(0x7f3d34000c00 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3d34004ea0).fault
Oct 16 17:42:33 deisNode0 sh[1773]: got monmap epoch 1
Oct 16 17:42:34 deisNode0 sh[1773]: creating /tmp/ceph.mon.keyring
Oct 16 17:42:34 deisNode0 sh[1773]: importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
Oct 16 17:42:34 deisNode0 sh[1773]: importing contents of /etc/ceph/ceph.mon.keyring into /tmp/ceph.mon.keyring
Oct 16 17:42:34 deisNode0 sh[1773]: ceph-mon: set fsid to 34a89aa0-a605-4298-8f20-f4f633dfe19b
Oct 16 17:42:34 deisNode0 sh[1773]: ceph-mon: created monfs at /var/lib/ceph/mon/ceph-deisNode0 for mon.deisNode0
Oct 16 17:42:34 deisNode0 sh[1773]: 2015-10-16 17:42:34.508720 7f86581bb8c0 0 ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b), process ceph-mon, pid 1
Oct 16 17:42:34 deisNode0 sh[1773]: 2015-10-16 17:42:34.784902 7f86581bb8c0 0 mon.deisNode0 does not exist in monmap, will attempt to join an existing cluster
Oct 16 17:42:34 deisNode0 sh[1773]: 2015-10-16 17:42:34.785144 7f86581bb8c0 0 starting mon.deisNode0 rank -1 at 10.0.0.5:6789/0 mon_data /var/lib/ceph/mon/ceph-deisNode0 fsid 34a89aa0-a605-4298-8f20-f4f633dfe19b
Oct 16 17:42:34 deisNode0 sh[1773]: using public_addr 10.0.0.5:6789/0 -> 10.0.0.5:6789/0
Oct 16 17:42:34 deisNode0 sh[1773]: 2015-10-16 17:42:34.785200 7f86581bb8c0 0 starting mon.deisNode0 rank -1 at 10.0.0.5:6789/0 mon_data /var/lib/ceph/mon/ceph-deisNode0 fsid 34a89aa0-a605-4298-8f20-f4f633dfe19b
Oct 16 17:42:34 deisNode0 sh[1773]: 2015-10-16 17:42:34.785438 7f86581bb8c0 1 mon.deisNode0@-1(probing) e0 preinit fsid 34a89aa0-a605-4298-8f20-f4f633dfe19b
Oct 16 17:42:34 deisNode0 sh[1773]: 2015-10-16 17:42:34.785511 7f86581bb8c0 1 mon.deisNode0@-1(probing) e0 initial_members deisNode1, filtering seed monmap
Oct 16 17:42:46 deisNode0 sh[1773]: 2015-10-16 17:42:46.550462 7f864e9ab700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x5390000 sd=11 :49838 s=2 pgs=4 cs=1 l=0 c=0x52234a0).fault with nothing to send, going to standby
Oct 16 17:42:46 deisNode0 sh[1773]: 2015-10-16 17:42:46.553611 7f864e8aa700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x5395000 sd=20 :6789 s=0 pgs=0 cs=0 l=0 c=0x52247e0).accept connect_seq 0 vs existing 1 state standby
Oct 16 17:42:46 deisNode0 sh[1773]: 2015-10-16 17:42:46.553636 7f864e8aa700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x5395000 sd=20 :6789 s=0 pgs=0 cs=0 l=0 c=0x52247e0).accept peer reset, then tried to connect to us, replacing
Oct 16 17:42:56 deisNode0 sh[1773]: 2015-10-16 17:42:56.279466 7f86509af700 0 mon.deisNode0@-1(probing) e2 my rank is now 0 (was -1)
Oct 16 17:42:56 deisNode0 sh[1773]: 2015-10-16 17:42:56.279699 7f86509af700 0 log_channel(cluster) log [INF] : mon.deisNode0 calling new monitor election
Oct 16 17:42:56 deisNode0 sh[1773]: 2015-10-16 17:42:56.279806 7f86509af700 1 mon.deisNode0@0(electing).elector(1) init, last seen epoch 1
Oct 16 17:42:56 deisNode0 sh[1773]: 2015-10-16 17:42:56.282874 7f864e8aa700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x5395000 sd=20 :6789 s=0 pgs=0 cs=0 l=0 c=0x5224940).accept connect_seq 2 vs existing 0 state connecting
Oct 16 17:42:56 deisNode0 sh[1773]: 2015-10-16 17:42:56.282895 7f864e8aa700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x5395000 sd=20 :6789 s=0 pgs=0 cs=0 l=0 c=0x5224940).accept we reset (peer sent cseq 2, 0x5390000.cseq = 0), sending RESETSESSION
Oct 16 17:42:56 deisNode0 sh[1773]: 2015-10-16 17:42:56.325010 7f864e8aa700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x5390000 sd=11 :49875 s=2 pgs=7 cs=1 l=0 c=0x52247e0).reader missed message? skipped from seq 0 to 10805953
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.358538 7f86509af700 0 log_channel(cluster) log [INF] : mon.deisNode0 calling new monitor election
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.358610 7f86509af700 1 mon.deisNode0@0(electing).elector(3) init, last seen epoch 3
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.407665 7f86509af700 0 log_channel(cluster) log [INF] : mon.deisNode0@0 won leader election with quorum 0,1
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.837961 7f86509af700 1 mon.deisNode0@0(leader) e2 apply_quorum_to_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code}
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.901266 7f86509af700 0 log_channel(cluster) log [INF] : HEALTH_ERR; no osds
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.980794 7f86509af700 1 mon.deisNode0@0(leader).paxosservice(pgmap 1..2) refresh upgraded, format 0 -> 1
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.980803 7f86509af700 1 mon.deisNode0@0(leader).pg v0 on_upgrade discarding in-core PGMap
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.981405 7f86509af700 0 mon.deisNode0@0(leader).mds e1 print_map
Oct 16 17:43:01 deisNode0 sh[1773]: epoch 1
Oct 16 17:43:01 deisNode0 sh[1773]: flags 0
Oct 16 17:43:01 deisNode0 sh[1773]: created 0.000000
Oct 16 17:43:01 deisNode0 sh[1773]: modified 2015-10-16 17:42:29.317586
Oct 16 17:43:01 deisNode0 sh[1773]: tableserver 0
Oct 16 17:43:01 deisNode0 sh[1773]: root 0
Oct 16 17:43:01 deisNode0 sh[1773]: session_timeout 0
Oct 16 17:43:01 deisNode0 sh[1773]: session_autoclose 0
Oct 16 17:43:01 deisNode0 sh[1773]: max_file_size 0
Oct 16 17:43:01 deisNode0 sh[1773]: last_failure 0
Oct 16 17:43:01 deisNode0 sh[1773]: last_failure_osd_epoch 0
Oct 16 17:43:01 deisNode0 sh[1773]: compat compat={},rocompat={},incompat={}
Oct 16 17:43:01 deisNode0 sh[1773]: max_mds 0
Oct 16 17:43:01 deisNode0 sh[1773]: in
Oct 16 17:43:01 deisNode0 sh[1773]: up {}
Oct 16 17:43:01 deisNode0 sh[1773]: failed
Oct 16 17:43:01 deisNode0 sh[1773]: stopped
Oct 16 17:43:01 deisNode0 sh[1773]: data_pools
Oct 16 17:43:01 deisNode0 sh[1773]: metadata_pool 0
Oct 16 17:43:01 deisNode0 sh[1773]: inline_data disabled
Oct 16 17:43:01 deisNode0 sh[1773]: 2015-10-16 17:43:01.981544 7f86509af700 1 mon.deisNode0@0(leader).osd e1 e1: 0 osds: 0 up, 0 in
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.026266 7f86509af700 0 mon.deisNode0@0(leader).osd e1 crush map has features 1107558400, adjusting msgr requires
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.026327 7f86509af700 0 mon.deisNode0@0(leader).osd e1 crush map has features 1107558400, adjusting msgr requires
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.026335 7f86509af700 0 mon.deisNode0@0(leader).osd e1 crush map has features 1107558400, adjusting msgr requires
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.026358 7f86509af700 0 mon.deisNode0@0(leader).osd e1 crush map has features 1107558400, adjusting msgr requires
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.026511 7f86509af700 1 mon.deisNode0@0(leader).paxosservice(auth 1..2) refresh upgraded, format 0 -> 1
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.057052 7f86509af700 0 log_channel(cluster) log [INF] : monmap e2: 2 mons at {deisNode0=10.0.0.5:6789/0,deisNode1=10.0.0.6:6789/0}
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.057147 7f86509af700 0 log_channel(cluster) log [INF] : pgmap v2: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.057211 7f86509af700 0 log_channel(cluster) log [INF] : mdsmap e1: 0/0/0 up
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.057270 7f86509af700 0 log_channel(cluster) log [INF] : osdmap e1: 0 osds: 0 up, 0 in
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.354139 7f86509af700 0 mon.deisNode0@0(leader) e2 handle_command mon_command({"prefix": "osd create"} v 0) v1
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.354223 7f86509af700 0 log_channel(audit) log [INF] : from='client.4103 :/0' entity='client.admin' cmd=[{"prefix": "osd create"}]: dispatch
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.385579 7f86509af700 0 mon.deisNode0@0(leader) e2 handle_command mon_command({"prefix": "osd create"} v 0) v1
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.385654 7f86509af700 0 log_channel(audit) log [INF] : from='client.4101 :/0' entity='client.admin' cmd=[{"prefix": "osd create"}]: dispatch
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.481244 7f86521b2700 1 mon.deisNode0@0(leader).osd e2 e2: 1 osds: 0 up, 0 in
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.541805 7f86521b2700 0 log_channel(audit) log [INF] : from='client.4103 :/0' entity='client.admin' cmd='[{"prefix": "osd create"}]': finished
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.605501 7f86521b2700 0 log_channel(cluster) log [INF] : osdmap e2: 1 osds: 0 up, 0 in
Oct 16 17:43:02 deisNode0 sh[1773]: 2015-10-16 17:43:02.713807 7f86521b2700 0 log_channel(cluster) log [INF] : pgmap v3: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
Oct 16 17:43:04 deisNode0 sh[1773]: 2015-10-16 17:43:04.810804 7f86521b2700 1 mon.deisNode0@0(leader).osd e3 e3: 2 osds: 0 up, 0 in
Oct 16 17:43:04 deisNode0 sh[1773]: 2015-10-16 17:43:04.869592 7f86521b2700 0 log_channel(audit) log [INF] : from='client.4101 :/0' entity='client.admin' cmd='[{"prefix": "osd create"}]': finished
Oct 16 17:43:04 deisNode0 sh[1773]: 2015-10-16 17:43:04.932647 7f86521b2700 0 log_channel(cluster) log [INF] : osdmap e3: 2 osds: 0 up, 0 in
Oct 16 17:43:04 deisNode0 sh[1773]: 2015-10-16 17:43:04.994648 7f86521b2700 0 log_channel(cluster) log [INF] : pgmap v4: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.553815 7f86509af700 1 mon.deisNode0@0(leader) e2 adding peer 10.0.0.4:6789/0 to list of hints
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.705147 7f86509af700 1 mon.deisNode0@0(leader) e2 adding peer 10.0.0.4:6789/0 to list of hints
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.705897 7f86509af700 0 mon.deisNode0@0(leader).monmap v2 adding/updating deisNode2 at 10.0.0.4:6789/0 to monitor cluster
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.746910 7f86509af700 0 mon.deisNode0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow profile osd"]} v 0) v1
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.747364 7f86509af700 0 log_channel(audit) log [INF] : from='client.? 10.0.0.6:0/1000079' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]: dispatch
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.853030 7f864e8aa700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x5390000 sd=11 :49875 s=2 pgs=7 cs=1 l=0 c=0x52247e0).fault, initiating reconnect
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.855045 7f864e4a6700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x54a4000 sd=22 :6789 s=0 pgs=0 cs=0 l=0 c=0x5225b20).accept connect_seq 0 vs existing 2 state connecting
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.855061 7f864e4a6700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x54a4000 sd=22 :6789 s=0 pgs=0 cs=0 l=0 c=0x5225b20).accept peer reset, then tried to connect to us, replacing
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.855916 7f86521b2700 0 mon.deisNode0@0(leader) e3 my rank is now 1 (was 0)
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.856552 7f86521b2700 1 mon.deisNode0@1(probing) e3 _ms_dispatch dropping stray message mon_command({"prefix": "auth get-or-create", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow profile osd"]} v 0) v1 from client.14098 10.0.0.6:0/1000079
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.858406 7f864e7a9700 0 -- 10.0.0.5:6789/0 >> 10.0.0.6:6789/0 pipe(0x540b000 sd=21 :6789 s=0 pgs=0 cs=0 l=0 c=0x52255a0).accept connect_seq 0 vs existing 0 state connecting
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.902004 7f864e7a9700 0 -- 10.0.0.5:6789/0 >> 10.0.0.4:6789/0 pipe(0x5390000 sd=11 :46610 s=2 pgs=6 cs=1 l=0 c=0x5225700).reader missed message? skipped from seq 0 to 1693890849
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.902117 7f86509af700 0 log_channel(cluster) log [INF] : mon.deisNode0 calling new monitor election
Oct 16 17:43:06 deisNode0 sh[1773]: 2015-10-16 17:43:06.902221 7f86509af700 1 mon.deisNode0@1(electing).elector(4) init, last seen epoch 4
Oct 16 17:43:08 deisNode0 sh[1773]: 2015-10-16 17:43:08.712139 7f864e7a9700 0 -- 10.0.0.5:6789/0 >> 10.0.0.4:6789/0 pipe(0x5390000 sd=11 :46610 s=2 pgs=6 cs=1 l=0 c=0x5225700).fault, initiating reconnect
Oct 16 17:43:08 deisNode0 sh[1773]: 2015-10-16 17:43:08.713613 7f864e4a6700 0 -- 10.0.0.5:6789/0 >> 10.0.0.4:6789/0 pipe(0x54b3000 sd=21 :6789 s=0 pgs=0 cs=0 l=0 c=0x53a6420).accept connect_seq 0 vs existing 2 state connecting
Oct 16 17:43:08 deisNode0 sh[1773]: 2015-10-16 17:43:08.713640 7f864e4a6700 0 -- 10.0.0.5:6789/0 >> 10.0.0.4:6789/0 pipe(0x54b3000 sd=21 :6789 s=0 pgs=0 cs=0 l=0 c=0x53a6420).accept peer reset, then tried to connect to us, replacing
Oct 16 17:43:11 deisNode0 sh[1773]: 2015-10-16 17:43:11.981437 7f86511b0700 0 log_channel(cluster) log [INF] : mon.deisNode0@1 won leader election with quorum 1,2
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.048534 7f86511b0700 0 log_channel(cluster) log [INF] : HEALTH_WARN; 64 pgs stuck inactive; 64 pgs stuck unclean; 1 mons down, quorum 1,2 deisNode0,deisNode1
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.088526 7f86509af700 0 log_channel(cluster) log [INF] : monmap e3: 3 mons at {deisNode0=10.0.0.5:6789/0,deisNode1=10.0.0.6:6789/0,deisNode2=10.0.0.4:6789/0}
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.088620 7f86509af700 0 log_channel(cluster) log [INF] : pgmap v4: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.088770 7f86509af700 0 log_channel(cluster) log [INF] : mdsmap e1: 0/0/0 up
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.088869 7f86509af700 0 log_channel(cluster) log [INF] : osdmap e3: 2 osds: 0 up, 0 in
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.092898 7f86509af700 0 mon.deisNode0@1(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow profile osd"]} v 0) v1
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.092973 7f86509af700 0 log_channel(audit) log [INF] : from='client.14098 10.0.0.6:0/1000079' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]: dispatch
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.199642 7f86521b2700 0 log_channel(audit) log [INF] : from='client.14098 10.0.0.6:0/1000079' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "osd.1", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]': finished
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.378668 7f86509af700 0 mon.deisNode0@1(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow profile osd"]} v 0) v1
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.378706 7f86509af700 0 log_channel(audit) log [INF] : from='client.4106 :/0' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]: dispatch
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.480375 7f86521b2700 0 log_channel(audit) log [INF] : from='client.4106 :/0' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "osd.0", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]': finished
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.557586 7f86509af700 0 mon.deisNode0@1(leader) e3 handle_command mon_command({"prefix": "osd crush add", "args": ["root=default", "host=deisNode1"], "id": 1, "weight": 1.0} v 0) v1
Oct 16 17:43:12 deisNode0 sh[1773]: 2015-10-16 17:43:12.557644 7f86509af700 0 log_channel(audit) log [INF] : from='client.4109 :/0' entity='client.admin' cmd=[{"prefix": "osd crush add", "args": ["root=default", "host=deisNode1"], "id": 1, "weight": 1.0}]: dispatch
Oct 16 17:43:13 deisNode0 sh[1773]: 2015-10-16 17:43:13.589518 7f86521b2700 1 mon.deisNode0@1(leader).osd e4 e4: 2 osds: 0 up, 0 in
Oct 16 17:43:13 deisNode0 sh[1773]: 2015-10-16 17:43:13.623816 7f86521b2700 0 log_channel(audit) log [INF] : from='client.4109 :/0' entity='client.admin' cmd='[{"prefix": "osd crush add", "args": ["root=default", "host=deisNode1"], "id": 1, "weight": 1.0}]': finished
Oct 16 17:43:13 deisNode0 sh[1773]: 2015-10-16 17:43:13.669211 7f86521b2700 0 log_channel(cluster) log [INF] : osdmap e4: 2 osds: 0 up, 0 in
Oct 16 17:43:13 deisNode0 sh[1773]: 2015-10-16 17:43:13.731435 7f86521b2700 0 log_channel(cluster) log [INF] : pgmap v5: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
Oct 16 17:43:13 deisNode0 sh[1773]: 2015-10-16 17:43:13.731609 7f86509af700 0 log_channel(cluster) log [INF] : mon.deisNode0 calling new monitor election
Oct 16 17:43:13 deisNode0 sh[1773]: 2015-10-16 17:43:13.731750 7f86509af700 1 mon.deisNode0@1(electing).elector(6) init, last seen epoch 6
Oct 16 17:43:20 deisNode0 sh[1773]: 2015-10-16 17:43:19.969455 7f86509af700 0 mon.deisNode0@1(peon) e3 handle_command mon_command({"prefix": "osd crush add", "args": ["root=default", "host=deisNode0"], "id": 0, "weight": 1.0} v 0) v1
Oct 16 17:43:20 deisNode0 sh[1773]: 2015-10-16 17:43:19.969526 7f86509af700 0 log_channel(audit) log [INF] : from='client.? 10.0.0.5:0/1000108' entity='client.admin' cmd=[{"prefix": "osd crush add", "args": ["root=default", "host=deisNode0"], "id": 0, "weight": 1.0}]: dispatch
Oct 16 17:43:21 deisNode0 sh[1773]: 2015-10-16 17:43:21.764786 7f86509af700 1 mon.deisNode0@1(peon).osd e5 e5: 2 osds: 1 up, 1 in
Oct 16 17:43:22 deisNode0 sh[1773]: 2015-10-16 17:43:22.315822 7f86509af700 0 mon.deisNode0@1(peon) e3 handle_command mon_command({"prefix": "osd create"} v 0) v1
Oct 16 17:43:22 deisNode0 sh[1773]: 2015-10-16 17:43:22.315892 7f86509af700 0 log_channel(audit) log [INF] : from='client.? 10.0.0.4:0/1000020' entity='client.admin' cmd=[{"prefix": "osd create"}]: dispatch
Oct 16 17:43:22 deisNode0 sh[1773]: 2015-10-16 17:43:22.474598 7f86509af700 0 mon.deisNode0@1(peon) e3 handle_command mon_command({"prefix": "osd pool create", "pg_num": 64, "pool": "data"} v 0) v1
Oct 16 17:43:22 deisNode0 sh[1773]: 2015-10-16 17:43:22.474647 7f86509af700 0 log_channel(audit) log [INF] : from='client.? 10.0.0.6:0/1000058' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 64, "pool": "data"}]: dispatch
Oct 16 17:43:23 deisNode0 sh[1773]: 2015-10-16 17:43:23.059402 7f86509af700 1 mon.deisNode0@1(peon).osd e6 e6: 3 osds: 1 up, 1 in
Oct 16 17:43:23 deisNode0 sh[1773]: 2015-10-16 17:43:23.818171 7f86509af700 0 mon.deisNode0@1(peon) e3 handle_command mon_command({"prefix": "osd lspools"} v 0) v1
Oct 16 17:43:23 deisNode0 sh[1773]: 2015-10-16 17:43:23.818232 7f86509af700 0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1000089' entity='client.admin' cmd=[{"prefix": "osd lspools"}]: dispatch
Oct 16 17:43:24 deisNode0 sh[1773]: 2015-10-16 17:43:24.181972 7f86509af700 0 mon.deisNode0@1(peon) e3 handle_command mon_command({"prefix": "osd pool create", "pg_num": 64, "pool": "metadata"} v 0) v1
Oct 16 17:43:24 deisNode0 sh[1773]: 2015-10-16 17:43:24.182074 7f86509af700 0 log_channel(audit) log [INF] : from='client.? 10.0.0.6:0/1000121' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 64, "pool": "metadata"}]: dispatch
Oct 16 17:43:26 deisNode0 sh[1773]: 2015-10-16 17:43:26.325780 7f86509af700 1 mon.deisNode0@1(peon).osd e7 e7: 3 osds: 2 up, 2 in
Oct 16 17:43:27 deisNode0 sh[1773]: 2015-10-16 17:43:27.418259 7f86509af700 0 mon.deisNode0@1(peon).mds e2 print_map
Oct 16 17:43:27 deisNode0 sh[1773]: epoch 2
Oct 16 17:43:27 deisNode0 sh[1773]: flags 0
Oct 16 17:43:27 deisNode0 sh[1773]: created 2015-10-16 17:43:27.156447
Oct 16 17:43:27 deisNode0 sh[1773]: modified 2015-10-16 17:43:27.156487
Oct 16 17:43:27 deisNode0 sh[1773]: tableserver 0
Oct 16 17:43:27 deisNode0 sh[1773]: root 0
Oct 16 17:43:27 deisNode0 sh[1773]: session_timeout 60
Oct 16 17:43:27 deisNode0 sh[1773]: session_autoclose 300
Oct 16 17:43:27 deisNode0 sh[1773]: max_file_size 1099511627776
Oct 16 17:43:27 deisNode0 sh[1773]: last_failure 0
Oct 16 17:43:27 deisNode0 sh[1773]: last_failure_osd_epoch 0
Oct 16 17:43:27 deisNode0 sh[1773]: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
Oct 16 17:43:27 deisNode0 sh[1773]: max_mds 1
Oct 16 17:43:27 deisNode0 sh[1773]: in
Oct 16 17:43:27 deisNode0 sh[1773]: up {}
Oct 16 17:43:27 deisNode0 sh[1773]: failed
Oct 16 17:43:27 deisNode0 sh[1773]: stopped
Oct 16 17:43:27 deisNode0 sh[1773]: data_pools 1
Oct 16 17:43:27 deisNode0 sh[1773]: metadata_pool 2
Oct 16 17:43:27 deisNode0 sh[1773]: inline_data disabled
Oct 16 17:43:27 deisNode0 sh[1773]: 2015-10-16 17:43:27.418494 7f86509af700 1 mon.deisNode0@1(peon).osd e8 e8: 3 osds: 2 up, 2 in
Oct 16 17:43:28 deisNode0 sh[1773]: 2015-10-16 17:43:28.724646 7f86509af700 1 mon.deisNode0@1(peon).osd e9 e9: 3 osds: 2 up, 2 in
Oct 16 17:43:29 deisNode0 sh[1773]: 2015-10-16 17:43:29.722339 7f86509af700 0 mon.deisNode0@1(peon).mds e3 print_map
Oct 16 17:43:29 deisNode0 sh[1773]: epoch 3
Oct 16 17:43:29 deisNode0 sh[1773]: flags 0
Oct 16 17:43:29 deisNode0 sh[1773]: created 2015-10-16 17:43:27.156447
Oct 16 17:43:29 deisNode0 sh[1773]: modified 2015-10-16 17:43:29.402202
Oct 16 17:43:29 deisNode0 sh[1773]: tableserver 0
Oct 16 17:43:29 deisNode0 sh[1773]: root 0
Oct 16 17:43:29 deisNode0 sh[1773]: session_timeout 60
Oct 16 17:43:29 deisNode0 sh[1773]: session_autoclose 300
Oct 16 17:43:29 deisNode0 sh[1773]: max_file_size 1099511627776
Oct 16 17:43:29 deisNode0 sh[1773]: last_failure 0
Oct 16 17:43:29 deisNode0 sh[1773]: last_failure_osd_epoch 0
Oct 16 17:43:29 deisNode0 sh[1773]: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
Oct 16 17:43:29 deisNode0 sh[1773]: max_mds 1
Oct 16 17:43:29 deisNode0 sh[1773]: in
Oct 16 17:43:29 deisNode0 sh[1773]: up {}
Oct 16 17:43:29 deisNode0 sh[1773]: failed
Oct 16 17:43:29 deisNode0 sh[1773]: stopped
Oct 16 17:43:29 deisNode0 sh[1773]: data_pools 1
Oct 16 17:43:29 deisNode0 sh[1773]: metadata_pool 2
Oct 16 17:43:29 deisNode0 sh[1773]: inline_data disabled
Oct 16 17:43:29 deisNode0 sh[1773]: 14106: 10.0.0.6:6804/1 'deisNode1' mds.-1.0 up:standby seq 1
Oct 16 17:43:29 deisNode0 sh[1773]: 2015-10-16 17:43:29.889427 7f86509af700 0 mon.deisNode0@1(peon).mds e4 print_map
Oct 16 17:43:29 deisNode0 sh[1773]: epoch 4
Oct 16 17:43:29 deisNode0 sh[1773]: flags 0
Oct 16 17:43:29 deisNode0 sh[1773]: created 2015-10-16 17:43:27.156447
Oct 16 17:43:29 deisNode0 sh[1773]: modified 2015-10-16 17:43:29.622586
Oct 16 17:43:29 deisNode0 sh[1773]: tableserver 0
Oct 16 17:43:29 deisNode0 sh[1773]: root 0
Oct 16 17:43:29 deisNode0 sh[1773]: session_timeout 60
Oct 16 17:43:29 deisNode0 sh[1773]: session_autoclose 300
Oct 16 17:43:29 deisNode0 sh[1773]: max_file_size 1099511627776
Oct 16 17:43:29 deisNode0 sh[1773]: last_failure 0
Oct 16 17:43:29 deisNode0 sh[1773]: last_failure_osd_epoch 0
Oct 16 17:43:29 deisNode0 sh[1773]: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
Oct 16 17:43:29 deisNode0 sh[1773]: max_mds 1
Oct 16 17:43:29 deisNode0 sh[1773]: in 0
Oct 16 17:43:29 deisNode0 sh[1773]: up {0=14106}
Oct 16 17:43:29 deisNode0 sh[1773]: failed
Oct 16 17:43:29 deisNode0 sh[1773]: stopped
Oct 16 17:43:29 deisNode0 sh[1773]: data_pools 1
Oct 16 17:43:29 deisNode0 sh[1773]: metadata_pool 2
Oct 16 17:43:29 deisNode0 sh[1773]: inline_data disabled
Oct 16 17:43:29 deisNode0 sh[1773]: 14106: 10.0.0.6:6804/1 'deisNode1' mds.0.1 up:creating seq 1
Oct 16 17:43:30 deisNode0 sh[1773]: 2015-10-16 17:43:30.327974 7f86509af700 1 mon.deisNode0@1(peon).osd e10 e10: 3 osds: 3 up, 3 in
Oct 16 17:43:31 deisNode0 sh[1773]: 2015-10-16 17:43:31.450956 7f86509af700 1 mon.deisNode0@1(peon).osd e11 e11: 3 osds: 3 up, 3 in
Oct 16 17:43:33 deisNode0 sh[1773]: 2015-10-16 17:43:33.266457 7f86509af700 0 mon.deisNode0@1(peon).mds e5 print_map
Oct 16 17:43:33 deisNode0 sh[1773]: epoch 5
Oct 16 17:43:33 deisNode0 sh[1773]: flags 0
Oct 16 17:43:33 deisNode0 sh[1773]: created 2015-10-16 17:43:27.156447
Oct 16 17:43:33 deisNode0 sh[1773]: modified 2015-10-16 17:43:33.077047
Oct 16 17:43:33 deisNode0 sh[1773]: tableserver 0
Oct 16 17:43:33 deisNode0 sh[1773]: root 0
Oct 16 17:43:33 deisNode0 sh[1773]: session_timeout 60
Oct 16 17:43:33 deisNode0 sh[1773]: session_autoclose 300
Oct 16 17:43:33 deisNode0 sh[1773]: max_file_size 1099511627776
Oct 16 17:43:33 deisNode0 sh[1773]: last_failure 0
Oct 16 17:43:33 deisNode0 sh[1773]: last_failure_osd_epoch 0
Oct 16 17:43:33 deisNode0 sh[1773]: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
Oct 16 17:43:33 deisNode0 sh[1773]: max_mds 1
Oct 16 17:43:33 deisNode0 sh[1773]: in 0
Oct 16 17:43:33 deisNode0 sh[1773]: up {0=14106}
Oct 16 17:43:33 deisNode0 sh[1773]: failed
Oct 16 17:43:33 deisNode0 sh[1773]: stopped
Oct 16 17:43:33 deisNode0 sh[1773]: data_pools 1
Oct 16 17:43:33 deisNode0 sh[1773]: metadata_pool 2
Oct 16 17:43:33 deisNode0 sh[1773]: inline_data disabled
Oct 16 17:43:33 deisNode0 sh[1773]: 14115: 10.0.0.5:6804/1 'deisNode0' mds.-1.0 up:standby seq 1
Oct 16 17:43:33 deisNode0 sh[1773]: 14106: 10.0.0.6:6804/1 'deisNode1' mds.0.1 up:creating seq 1
Oct 16 17:43:33 deisNode0 sh[1773]: 2015-10-16 17:43:33.754610 7f86509af700 1 mon.deisNode0@1(peon).osd e12 e12: 3 osds: 3 up, 3 in
Oct 16 17:43:39 deisNode0 sh[1773]: 2015-10-16 17:43:39.087426 7f86511b0700 0 mon.deisNode0@1(peon).data_health(8) update_stats avail 74% total 27787 MB, used 5666 MB, avail 20667 MB
Oct 16 17:43:39 deisNode0 sh[1773]: 2015-10-16 17:43:39.343696 7f86509af700 -1 mon.deisNode0@1(peon).paxos(paxos updating c 1..54) lease_expire from mon.0 10.0.0.4:6789/0 is 0.363248 seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Oct 16 17:43:40 deisNode0 sh[1773]: 2015-10-16 17:43:40.718698 7f86509af700 1 mon.deisNode0@1(peon).osd e13 e13: 3 osds: 3 up, 3 in
Oct 16 17:43:41 deisNode0 sh[1773]: 2015-10-16 17:43:41.969561 7f86509af700 1 mon.deisNode0@1(peon).osd e14 e14: 3 osds: 3 up, 3 in
Oct 16 17:43:50 deisNode0 sh[1773]: 2015-10-16 17:43:50.017572 7f86509af700 0 mon.deisNode0@1(peon).mds e6 print_map
Oct 16 17:43:50 deisNode0 sh[1773]: epoch 6
Oct 16 17:43:50 deisNode0 sh[1773]: flags 0
Oct 16 17:43:50 deisNode0 sh[1773]: created 2015-10-16 17:43:27.156447
Oct 16 17:43:50 deisNode0 sh[1773]: modified 2015-10-16 17:43:49.828963
Oct 16 17:43:50 deisNode0 sh[1773]: tableserver 0
Oct 16 17:43:50 deisNode0 sh[1773]: root 0
Oct 16 17:43:50 deisNode0 sh[1773]: session_timeout 60
Oct 16 17:43:50 deisNode0 sh[1773]: session_autoclose 300
Oct 16 17:43:50 deisNode0 sh[1773]: max_file_size 1099511627776
Oct 16 17:43:50 deisNode0 sh[1773]: last_failure 0
Oct 16 17:43:50 deisNode0 sh[1773]: last_failure_osd_epoch 0
Oct 16 17:43:50 deisNode0 sh[1773]: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
Oct 16 17:43:50 deisNode0 sh[1773]: max_mds 1
Oct 16 17:43:50 deisNode0 sh[1773]: in 0
Oct 16 17:43:50 deisNode0 sh[1773]: up {0=14106}
Oct 16 17:43:50 deisNode0 sh[1773]: failed
Oct 16 17:43:50 deisNode0 sh[1773]: stopped
Oct 16 17:43:50 deisNode0 sh[1773]: data_pools 1
Oct 16 17:43:50 deisNode0 sh[1773]: metadata_pool 2
Oct 16 17:43:50 deisNode0 sh[1773]: inline_data disabled
Oct 16 17:43:50 deisNode0 sh[1773]: 14115: 10.0.0.5:6804/1 'deisNode0' mds.-1.0 up:standby seq 1
Oct 16 17:43:50 deisNode0 sh[1773]: 14121: 10.0.0.4:6804/1 'deisNode2' mds.-1.0 up:standby seq 1
Oct 16 17:43:50 deisNode0 sh[1773]: 14106: 10.0.0.6:6804/1 'deisNode1' mds.0.1 up:creating seq 1
Oct 16 17:44:03 deisNode0 sh[1773]: 2015-10-16 17:44:03.413269 7f86509af700 1 mon.deisNode0@1(peon).osd e15 e15: 3 osds: 3 up, 3 in
Oct 16 17:44:19 deisNode0 sh[1773]: 2015-10-16 17:44:19.526951 7f86509af700 1 mon.deisNode0@1(peon).osd e16 e16: 3 osds: 3 up, 3 in
Oct 16 17:44:20 deisNode0 sh[1773]: 2015-10-16 17:44:20.884498 7f86509af700 1 mon.deisNode0@1(peon).osd e17 e17: 3 osds: 3 up, 3 in
Oct 16 17:44:22 deisNode0 sh[1773]: 2015-10-16 17:44:22.228502 7f86509af700 1 mon.deisNode0@1(peon).osd e18 e18: 3 osds: 3 up, 3 in
Oct 16 17:44:39 deisNode0 sh[1773]: 2015-10-16 17:44:39.343727 7f86511b0700 0 mon.deisNode0@1(peon).data_health(8) update_stats avail 74% total 27787 MB, used 5672 MB, avail 20661 MB
Oct 16 17:44:41 deisNode0 sh[1773]: 2015-10-16 17:44:41.114719 7f86509af700 0 mon.deisNode0@1(peon).mds e7 print_map
Oct 16 17:44:41 deisNode0 sh[1773]: epoch 7
Oct 16 17:44:41 deisNode0 sh[1773]: flags 0
Oct 16 17:44:41 deisNode0 sh[1773]: created 2015-10-16 17:43:27.156447
Oct 16 17:44:41 deisNode0 sh[1773]: modified 2015-10-16 17:44:40.622326
Oct 16 17:44:41 deisNode0 sh[1773]: tableserver 0
Oct 16 17:44:41 deisNode0 sh[1773]: root 0
Oct 16 17:44:41 deisNode0 sh[1773]: session_timeout 60
Oct 16 17:44:41 deisNode0 sh[1773]: session_autoclose 300
Oct 16 17:44:41 deisNode0 sh[1773]: max_file_size 1099511627776
Oct 16 17:44:41 deisNode0 sh[1773]: last_failure 0
Oct 16 17:44:41 deisNode0 sh[1773]: last_failure_osd_epoch 0
Oct 16 17:44:41 deisNode0 sh[1773]: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
Oct 16 17:44:41 deisNode0 sh[1773]: max_mds 1
Oct 16 17:44:41 deisNode0 sh[1773]: in 0
Oct 16 17:44:41 deisNode0 sh[1773]: up {0=14106}
Oct 16 17:44:41 deisNode0 sh[1773]: failed
Oct 16 17:44:41 deisNode0 sh[1773]: stopped
Oct 16 17:44:41 deisNode0 sh[1773]: data_pools 1
Oct 16 17:44:41 deisNode0 sh[1773]: metadata_pool 2
Oct 16 17:44:41 deisNode0 sh[1773]: inline_data disabled
Oct 16 17:44:41 deisNode0 sh[1773]: 14115: 10.0.0.5:6804/1 'deisNode0' mds.-1.0 up:standby seq 1
Oct 16 17:44:41 deisNode0 sh[1773]: 14121: 10.0.0.4:6804/1 'deisNode2' mds.-1.0 up:standby seq 1
Oct 16 17:44:41 deisNode0 sh[1773]: 14106: 10.0.0.6:6804/1 'deisNode1' mds.0.1 up:active seq 19
Oct 16 17:44:49 deisNode0 sh[1773]: 2015-10-16 17:44:49.177726 7f86509af700 1 mon.deisNode0@1(peon).osd e19 e19: 3 osds: 3 up, 3 in
Oct 16 17:44:50 deisNode0 sh[1773]: 2015-10-16 17:44:50.587474 7f86509af700 1 mon.deisNode0@1(peon).osd e20 e20: 3 osds: 3 up, 3 in
Oct 16 17:45:09 deisNode0 sh[1773]: 2015-10-16 17:45:09.573936 7f86509af700 1 mon.deisNode0@1(peon).osd e21 e21: 3 osds: 3 up, 3 in
Oct 16 17:45:11 deisNode0 sh[1773]: 2015-10-16 17:45:11.106181 7f86509af700 1 mon.deisNode0@1(peon).osd e22 e22: 3 osds: 3 up, 3 in
Oct 16 17:45:12 deisNode0 sh[1773]: 2015-10-16 17:45:12.496928 7f86509af700 1 mon.deisNode0@1(peon).osd e23 e23: 3 osds: 3 up, 3 in
Oct 16 17:45:13 deisNode0 sh[1773]: 2015-10-16 17:45:13.901634 7f86509af700 1 mon.deisNode0@1(peon).osd e24 e24: 3 osds: 3 up, 3 in
Oct 16 17:45:16 deisNode0 sh[1773]: 2015-10-16 17:45:16.191805 7f86509af700 1 mon.deisNode0@1(peon).osd e25 e25: 3 osds: 3 up, 3 in
Oct 16 17:45:18 deisNode0 sh[1773]: 2015-10-16 17:45:18.669354 7f86509af700 1 mon.deisNode0@1(peon).osd e26 e26: 3 osds: 3 up, 3 in
Oct 16 17:45:39 deisNode0 sh[1773]: 2015-10-16 17:45:39.344018 7f86511b0700 0 mon.deisNode0@1(peon).data_health(8) update_stats avail 74% total 27787 MB, used 5678 MB, avail 20655 MB
Oct 16 17:45:54 deisNode0 sh[1773]: 2015-10-16 17:45:54.413811 7f86509af700 1 mon.deisNode0@1(peon).osd e27 e27: 3 osds: 3 up, 3 in
Oct 16 17:45:55 deisNode0 sh[1773]: 2015-10-16 17:45:55.915932 7f86509af700 1 mon.deisNode0@1(peon).osd e28 e28: 3 osds: 3 up, 3 in
Oct 16 17:46:14 deisNode0 sh[1773]: 2015-10-16 17:46:14.553069 7f864e8aa700 0 -- 10.0.0.5:6789/0 >> 10.0.0.5:0/1423366644 pipe(0x5ea8000 sd=11 :6789 s=0 pgs=0 cs=0 l=0 c=0x53a8c00).accept peer addr is really 10.0.0.5:0/1423366644 (socket is 10.0.0.5:44002/0)
Oct 16 17:46:39 deisNode0 sh[1773]: 2015-10-16 17:46:39.344332 7f86511b0700 0 mon.deisNode0@1(peon).data_health(8) update_stats avail 74% total 27787 MB, used 5701 MB, avail 20632 MB
Oct 16 17:47:06 deisNode0 sh[1773]: 2015-10-16 17:47:06.305507 7f86511b0700 1 mon.deisNode0@1(peon).paxos(paxos updating c 1..228) lease_timeout -- calling new election
Oct 16 17:47:07 deisNode0 sh[1773]: 2015-10-16 17:47:06.306829 7f86509af700 0 log_channel(cluster) log [INF] : mon.deisNode0 calling new monitor election
Oct 16 17:47:07 deisNode0 sh[1773]: 2015-10-16 17:47:06.306971 7f86509af700 1 mon.deisNode0@1(electing).elector(8) init, last seen epoch 8
Oct 16 17:47:41 deisNode0 sh[1773]: 2015-10-16 17:47:41.896375 7f86509af700 0 log_channel(cluster) log [INF] : mon.deisNode0 calling new monitor election
Oct 16 17:47:41 deisNode0 sh[1773]: 2015-10-16 17:47:41.896625 7f86509af700 1 mon.deisNode0@1(electing).elector(11) init, last seen epoch 11
Oct 16 17:47:41 deisNode0 sh[1773]: 2015-10-16 17:47:41.896932 7f86511b0700 0 mon.deisNode0@1(electing).data_health(8) update_stats avail 72% total 27787 MB, used 6124 MB, avail 20209 MB