@seffyroff
Created October 15, 2018 21:38
Juju CephFS Deploy Log
Deployment of the cs:ceph-fs-16 charm as unit ceph-fs/3 into an existing ceph-mon/ceph-osd model on Juju 2.5-beta1 (Ubuntu bionic).
unit-ceph-mon-1: 13:50:05 DEBUG unit.ceph-mon/1.juju-log Hardening function 'install'
unit-ceph-mon-1: 13:50:05 DEBUG unit.ceph-mon/1.juju-log Hardening function 'config_changed'
unit-ceph-mon-1: 13:50:05 DEBUG unit.ceph-mon/1.juju-log Hardening function 'upgrade_charm'
unit-ceph-mon-1: 13:50:05 DEBUG unit.ceph-mon/1.juju-log Hardening function 'update_status'
unit-ceph-mon-1: 13:50:05 DEBUG unit.ceph-mon/1.juju-log No hardening applied to 'update_status'
unit-ceph-mon-1: 13:50:05 INFO unit.ceph-mon/1.juju-log Updating status.
unit-ceph-mon-1: 13:50:11 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-osd-4: 13:50:41 DEBUG unit.ceph-osd/4.juju-log Hardening function 'install'
unit-ceph-osd-4: 13:50:41 DEBUG unit.ceph-osd/4.juju-log Hardening function 'config_changed'
unit-ceph-osd-4: 13:50:41 DEBUG unit.ceph-osd/4.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-4: 13:50:41 DEBUG unit.ceph-osd/4.juju-log Hardening function 'update_status'
unit-ceph-osd-4: 13:50:41 DEBUG unit.ceph-osd/4.juju-log No hardening applied to 'update_status'
unit-ceph-osd-4: 13:50:41 INFO unit.ceph-osd/4.juju-log Updating status.
unit-ceph-osd-4: 13:50:45 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-osd-7: 13:51:55 DEBUG unit.ceph-osd/7.juju-log Hardening function 'install'
unit-ceph-osd-7: 13:51:55 DEBUG unit.ceph-osd/7.juju-log Hardening function 'config_changed'
unit-ceph-osd-7: 13:51:55 DEBUG unit.ceph-osd/7.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-7: 13:51:55 DEBUG unit.ceph-osd/7.juju-log Hardening function 'update_status'
unit-ceph-osd-7: 13:51:55 DEBUG unit.ceph-osd/7.juju-log No hardening applied to 'update_status'
unit-ceph-osd-7: 13:51:55 INFO unit.ceph-osd/7.juju-log Updating status.
unit-ceph-osd-6: 13:51:55 DEBUG unit.ceph-osd/6.juju-log Hardening function 'install'
unit-ceph-osd-6: 13:51:56 DEBUG unit.ceph-osd/6.juju-log Hardening function 'config_changed'
unit-ceph-osd-6: 13:51:56 DEBUG unit.ceph-osd/6.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-6: 13:51:56 DEBUG unit.ceph-osd/6.juju-log Hardening function 'update_status'
unit-ceph-osd-6: 13:51:56 DEBUG unit.ceph-osd/6.juju-log No hardening applied to 'update_status'
unit-ceph-osd-6: 13:51:56 INFO unit.ceph-osd/6.juju-log Updating status.
unit-ceph-osd-7: 13:51:58 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-osd-6: 13:52:00 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-osd-5: 13:52:14 DEBUG unit.ceph-osd/5.juju-log Hardening function 'install'
unit-ceph-osd-5: 13:52:14 DEBUG unit.ceph-osd/5.juju-log Hardening function 'config_changed'
unit-ceph-osd-5: 13:52:14 DEBUG unit.ceph-osd/5.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-5: 13:52:15 DEBUG unit.ceph-osd/5.juju-log Hardening function 'update_status'
unit-ceph-osd-5: 13:52:15 DEBUG unit.ceph-osd/5.juju-log No hardening applied to 'update_status'
unit-ceph-osd-5: 13:52:16 INFO unit.ceph-osd/5.juju-log Updating status.
unit-ceph-mon-2: 13:52:20 DEBUG unit.ceph-mon/2.juju-log Hardening function 'install'
unit-ceph-mon-2: 13:52:20 DEBUG unit.ceph-mon/2.juju-log Hardening function 'config_changed'
unit-ceph-mon-2: 13:52:20 DEBUG unit.ceph-mon/2.juju-log Hardening function 'upgrade_charm'
unit-ceph-mon-2: 13:52:20 DEBUG unit.ceph-mon/2.juju-log Hardening function 'update_status'
unit-ceph-mon-2: 13:52:21 DEBUG unit.ceph-mon/2.juju-log No hardening applied to 'update_status'
unit-ceph-mon-2: 13:52:21 INFO unit.ceph-mon/2.juju-log Updating status.
unit-ceph-mon-2: 13:52:26 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-osd-5: 13:52:27 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-mon-0: 13:53:51 DEBUG unit.ceph-mon/0.juju-log Hardening function 'install'
unit-ceph-mon-0: 13:53:51 DEBUG unit.ceph-mon/0.juju-log Hardening function 'config_changed'
unit-ceph-mon-0: 13:53:51 DEBUG unit.ceph-mon/0.juju-log Hardening function 'upgrade_charm'
unit-ceph-mon-0: 13:53:51 DEBUG unit.ceph-mon/0.juju-log Hardening function 'update_status'
unit-ceph-mon-0: 13:53:51 DEBUG unit.ceph-mon/0.juju-log No hardening applied to 'update_status'
unit-ceph-mon-0: 13:53:51 INFO unit.ceph-mon/0.juju-log Updating status.
unit-ceph-mon-0: 13:53:55 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-mon-1: 13:54:26 DEBUG unit.ceph-mon/1.juju-log Hardening function 'install'
unit-ceph-mon-1: 13:54:26 DEBUG unit.ceph-mon/1.juju-log Hardening function 'config_changed'
unit-ceph-mon-1: 13:54:26 DEBUG unit.ceph-mon/1.juju-log Hardening function 'upgrade_charm'
unit-ceph-mon-1: 13:54:26 DEBUG unit.ceph-mon/1.juju-log Hardening function 'update_status'
unit-ceph-mon-1: 13:54:27 DEBUG unit.ceph-mon/1.juju-log No hardening applied to 'update_status'
unit-ceph-mon-1: 13:54:27 INFO unit.ceph-mon/1.juju-log Updating status.
unit-ceph-mon-1: 13:54:32 INFO juju.worker.uniter.operation ran "update-status" hook
machine-0: 13:55:37 INFO juju.worker.deployer checking unit "ceph-fs/3"
unit-ceph-osd-7: 13:55:37 INFO juju.worker.upgrader desired agent binary version: 2.5-beta1
unit-ceph-mon-0: 13:55:37 INFO juju.worker.upgrader desired agent binary version: 2.5-beta1
machine-0: 13:55:37 INFO juju.api.common no addresses observed on interface "enp2s0"
machine-0: 13:55:37 INFO juju.api.common no addresses observed on interface "virbr0-nic"
machine-0: 13:55:37 INFO juju.worker.deployer deploying unit "ceph-fs/3"
machine-0: 13:55:38 INFO juju.service Installing and starting service &{Service:{Name:jujud-unit-ceph-fs-3 Conf:{Desc:juju unit agent for ceph-fs/3 Transient:false AfterStopped: Env:map[JUJU_CONTAINER_TYPE:] Limit:map[] Timeout:300 ExecStart:/lib/systemd/system/jujud-unit-ceph-fs-3/exec-start.sh ExecStopPost: Logfile:/var/log/juju/unit-ceph-fs-3.log ExtraScript: ServiceBinary:/var/lib/juju/tools/unit-ceph-fs-3/jujud ServiceArgs:[unit --data-dir /var/lib/juju --unit-name ceph-fs/3 --debug]}} ConfName:jujud-unit-ceph-fs-3.service UnitName:jujud-unit-ceph-fs-3.service DirName:/lib/systemd/system/jujud-unit-ceph-fs-3 FallBackDirName:/var/lib/juju/init Script:[35 33 47 117 115 114 47 98 105 110 47 101 110 118 32 98 97 115 104 10 10 35 32 83 101 116 32 117 112 32 108 111 103 103 105 110 103 46 10 116 111 117 99 104 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 99 101 112 104 45 102 115 45 51 46 108 111 103 39 10 99 104 111 119 110 32 115 121 115 108 111 103 58 115 121 115 108 111 103 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 99 101 112 104 45 102 115 45 51 46 108 111 103 39 10 99 104 109 111 100 32 48 54 48 48 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 99 101 112 104 45 102 115 45 51 46 108 111 103 39 10 101 120 101 99 32 62 62 32 39 47 118 97 114 47 108 111 103 47 106 117 106 117 47 117 110 105 116 45 99 101 112 104 45 102 115 45 51 46 108 111 103 39 10 101 120 101 99 32 50 62 38 49 10 10 35 32 82 117 110 32 116 104 101 32 115 99 114 105 112 116 46 10 39 47 118 97 114 47 108 105 98 47 106 117 106 117 47 116 111 111 108 115 47 117 110 105 116 45 99 101 112 104 45 102 115 45 51 47 106 117 106 117 100 39 32 117 110 105 116 32 45 45 100 97 116 97 45 100 105 114 32 39 47 118 97 114 47 108 105 98 47 106 117 106 117 39 32 45 45 117 110 105 116 45 110 97 109 101 32 99 101 112 104 45 102 115 47 51 32 45 45 100 101 98 117 103] newDBus:0xae5410}
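Note: the Script field above is the unit agent's exec-start.sh printed by juju as a raw Go byte slice (decimal ASCII codes). Decoded, it sets up /var/log/juju/unit-ceph-fs-3.log (touch, chown syslog:syslog, chmod 0600), redirects stdout/stderr to it, and execs jujud with the same arguments shown in ServiceArgs. A minimal Python sketch for recovering the text from such a dump (only the first few codes are reproduced here; paste the full sequence from the line above to decode the whole script):

    # Decode a juju "Script:[...]" byte-slice dump back into shell text.
    codes = [35, 33, 47, 117, 115, 114, 47, 98, 105, 110, 47, 101, 110, 118,
             32, 98, 97, 115, 104, 10]  # decodes to "#!/usr/bin/env bash\n"
    print(bytes(codes).decode('utf-8'))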
unit-ceph-fs-3: 13:55:40 INFO juju.cmd running jujud [2.5-beta1 gc go1.10]
unit-ceph-fs-3: 13:55:40 DEBUG juju.cmd args: []string{"/var/lib/juju/tools/unit-ceph-fs-3/jujud", "unit", "--data-dir", "/var/lib/juju", "--unit-name", "ceph-fs/3", "--debug"}
unit-ceph-fs-3: 13:55:40 DEBUG juju.agent read agent config, format "2.0"
unit-ceph-fs-3: 13:55:40 INFO juju.worker.upgradesteps upgrade steps for 2.5-beta1 have already been run.
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "migration-inactive-flag" manifold worker stopped: "api-caller" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "charm-dir" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "leadership-tracker" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "metric-spool" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrade-steps-gate" manifold worker started
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "log-sender" manifold worker stopped: "api-caller" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrade-check-flag" manifold worker stopped: "upgrade-check-gate" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrade-check-gate" manifold worker started
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "metric-sender" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "agent" manifold worker started
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "uniter" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrade-steps-runner" manifold worker stopped: "agent" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "migration-minion" manifold worker stopped: "api-caller" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "hook-retry-strategy" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "logging-config-updater" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "api-address-updater" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "metric-collect" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "api-config-watcher" manifold worker started
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrade-steps-flag" manifold worker started
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.apicaller connecting with old password
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrader" manifold worker stopped: "api-caller" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.introspection introspection worker listening on "@jujud-unit-ceph-fs-3"
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "migration-fortress" manifold worker stopped: "upgrade-check-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "proxy-config-updater" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "meter-status" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.introspection stats worker now serving
unit-ceph-fs-3: 13:55:40 DEBUG juju.api looked up juju-metal -> [10.0.10.122]
unit-ceph-fs-3: 13:55:40 DEBUG juju.api successfully dialed "wss://10.0.10.122:17070/model/0c4673d1-f469-4d73-866e-fcc458de14e1/api"
unit-ceph-fs-3: 13:55:40 INFO juju.api connection established to "wss://10.0.10.122:17070/model/0c4673d1-f469-4d73-866e-fcc458de14e1/api"
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrade-check-flag" manifold worker started
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "metric-spool" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "upgrade-steps-runner" manifold worker stopped: "api-caller" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "uniter" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "leadership-tracker" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "metric-sender" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:40 DEBUG juju.worker.dependency "migration-fortress" manifold worker stopped: "upgrade-check-flag" not set: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.apicaller connected
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.apicaller changing password...
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.apicaller password changed
unit-ceph-fs-3: 13:55:41 DEBUG juju.api RPC connection died
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "api-caller" manifold worker stopped: restart immediately
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.apicaller connecting with current password
unit-ceph-fs-3: 13:55:41 DEBUG juju.api looked up juju-metal -> [10.0.10.122]
unit-ceph-fs-3: 13:55:41 DEBUG juju.api successfully dialed "wss://10.0.10.122:17070/model/0c4673d1-f469-4d73-866e-fcc458de14e1/api"
unit-ceph-fs-3: 13:55:41 INFO juju.api connection established to "wss://10.0.10.122:17070/model/0c4673d1-f469-4d73-866e-fcc458de14e1/api"
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.apicaller connected
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "api-caller" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "logging-config-updater" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "migration-minion" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "upgrade-steps-runner" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "upgrade-steps-runner" manifold worker stopped: <nil>
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "hook-retry-strategy" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "proxy-config-updater" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "api-address-updater" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "uniter" manifold worker stopped: "migration-inactive-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "migration-inactive-flag" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "meter-status" manifold worker stopped: <nil>
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "leadership-tracker" manifold worker stopped: <nil>
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "metric-sender" manifold worker stopped: <nil>
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "log-sender" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "hook-retry-strategy" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "upgrader" manifold worker started
unit-ceph-fs-3: 13:55:41 INFO juju.worker.upgrader abort check blocked until version event received
unit-ceph-fs-3: 13:55:41 INFO juju.worker.upgrader unblocking abort check
unit-ceph-fs-3: 13:55:41 INFO juju.worker.upgrader desired agent binary version: 2.5-beta1
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "upgrade-check-flag" manifold worker stopped: gate unlocked
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "uniter" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "metric-collect" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "logging-config-updater" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "meter-status" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "proxy-config-updater" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "migration-fortress" manifold worker stopped: "upgrade-check-flag" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "leadership-tracker" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "api-address-updater" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "metric-spool" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "charm-dir" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "metric-sender" manifold worker stopped: "migration-fortress" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "upgrade-check-flag" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "migration-fortress" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "migration-minion" manifold worker started
unit-ceph-fs-3: 13:55:41 INFO juju.worker.migrationminion migration phase is now: NONE
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "metric-sender" manifold worker stopped: "metric-spool" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "api-address-updater" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "uniter" manifold worker stopped: "leadership-tracker" not running: dependency not available
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "metric-spool" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "meter-status" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "metric-collect" manifold worker stopped: <nil>
unit-ceph-fs-3: 13:55:41 DEBUG juju.network no lxc bridge addresses to filter for machine
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.meterstatus got meter status change signal from watcher
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.logger initial log config: "<root>=DEBUG"
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.dependency "logging-config-updater" manifold worker started
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.logger logger setup
unit-ceph-fs-3: 13:55:41 DEBUG juju.network cannot get "lxdbr0" addresses: route ip+net: no such network interface (ignoring)
unit-ceph-fs-3: 13:55:41 DEBUG juju.network "virbr0" has addresses [192.168.122.1/24]
unit-ceph-fs-3: 13:55:41 DEBUG juju.network not filtering invalid IP: "juju-metal"
unit-ceph-fs-3: 13:55:41 DEBUG juju.network including address public:juju-metal for machine
unit-ceph-fs-3: 13:55:41 DEBUG juju.network including address local-cloud:10.0.10.122 for machine
unit-ceph-fs-3: 13:55:41 DEBUG juju.network including address local-machine:127.0.0.1 for machine
unit-ceph-fs-3: 13:55:41 DEBUG juju.network including address local-machine:::1 for machine
unit-ceph-fs-3: 13:55:41 DEBUG juju.network addresses after filtering: [public:juju-metal local-cloud:10.0.10.122 local-machine:127.0.0.1 local-machine:::1]
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.apiaddressupdater updating API hostPorts to [[juju-metal:17070 10.0.10.122:17070 127.0.0.1:17070 [::1]:17070]]
unit-ceph-fs-3: 13:55:41 DEBUG juju.agent API server address details [["juju-metal:17070" "10.0.10.122:17070" "127.0.0.1:17070" "[::1]:17070"]] written to agent config as ["10.0.10.122:17070" "juju-metal:17070"]
unit-ceph-fs-3: 13:55:41 DEBUG juju.worker.logger reconfiguring logging from "<root>=DEBUG" to "<root>=INFO;unit=DEBUG"
unit-ceph-fs-3: 13:55:42 INFO juju.worker.leadership ceph-fs/3 promoted to leadership of ceph-fs
unit-ceph-fs-3: 13:55:42 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-ceph-fs-3
unit-ceph-fs-3: 13:55:42 INFO juju.agent.tools was a symlink, now looking at /var/lib/juju/tools/2.5-beta1-bionic-amd64
unit-ceph-fs-3: 13:55:42 INFO juju.worker.uniter unit "ceph-fs/3" started
unit-ceph-fs-3: 13:55:42 INFO juju.worker.uniter resuming charm install
unit-ceph-fs-3: 13:55:42 INFO juju.worker.uniter.charm downloading cs:ceph-fs-16 from API server
unit-ceph-fs-3: 13:55:42 INFO juju.downloader downloading from cs:ceph-fs-16
unit-ceph-fs-3: 13:55:42 INFO juju.downloader download complete ("cs:ceph-fs-16")
unit-ceph-fs-3: 13:55:42 INFO juju.downloader download verified ("cs:ceph-fs-16")
unit-ceph-fs-3: 13:55:45 INFO juju.worker.uniter hooks are retried true
unit-ceph-fs-3: 13:55:46 INFO juju.worker.uniter.storage initial storage attachments ready
unit-ceph-fs-3: 13:55:46 INFO juju.worker.uniter found queued "install" hook
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install Reading package lists...
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install Building dependency tree...
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install Reading state information...
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install python3-setuptools is already the newest version (39.0.1-2).
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install python3-yaml is already the newest version (3.12-1build2).
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install python3-dev is already the newest version (3.6.5-3ubuntu1).
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install python3-pip is already the newest version (9.0.1-2.3~ubuntu1).
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
unit-ceph-fs-3: 13:55:46 DEBUG unit.ceph-fs/3.install Reading package lists...
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install Building dependency tree...
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install Reading state information...
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install Reading package lists...
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install Building dependency tree...
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install Reading state information...
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install virtualenv is already the newest version (15.1.0+ds-1.1).
unit-ceph-fs-3: 13:55:47 DEBUG unit.ceph-fs/3.install 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
unit-ceph-fs-3: 13:55:48 DEBUG unit.ceph-fs/3.install Already using interpreter /usr/bin/python3
unit-ceph-fs-3: 13:55:48 DEBUG unit.ceph-fs/3.install Using base prefix '/usr'
unit-ceph-fs-3: 13:55:48 DEBUG unit.ceph-fs/3.install New python executable in /var/lib/juju/agents/unit-ceph-fs-3/.venv/bin/python3
unit-ceph-fs-3: 13:55:48 DEBUG unit.ceph-fs/3.install Also creating executable in /var/lib/juju/agents/unit-ceph-fs-3/.venv/bin/python
unit-ceph-fs-3: 13:55:48 DEBUG unit.ceph-fs/3.install Please make sure you remove any previous custom paths from your /root/.pydistutils.cfg file.
unit-ceph-fs-3: 13:55:50 DEBUG unit.ceph-fs/3.install Installing setuptools, pkg_resources, pip, wheel...done.
unit-ceph-fs-3: 13:55:52 DEBUG unit.ceph-fs/3.install Requirement already up-to-date: pip in /var/lib/juju/agents/unit-ceph-fs-3/.venv/lib/python3.6/site-packages
unit-ceph-fs-3: 13:55:53 DEBUG unit.ceph-fs/3.install Requirement already up-to-date: setuptools in /var/lib/juju/agents/unit-ceph-fs-3/.venv/lib/python3.6/site-packages
unit-ceph-fs-3: 13:55:53 DEBUG unit.ceph-fs/3.install Collecting setuptools-scm
unit-ceph-fs-3: 13:55:54 DEBUG unit.ceph-fs/3.install Building wheels for collected packages: setuptools-scm
unit-ceph-fs-3: 13:55:54 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for setuptools-scm: started
unit-ceph-fs-3: 13:55:54 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for setuptools-scm: finished with status 'done'
unit-ceph-fs-3: 13:55:54 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/8e/e7/22/071791b43e1d1f2ceae5e901863fe11f991e5a3e4aec8afb04
unit-ceph-fs-3: 13:55:54 DEBUG unit.ceph-fs/3.install Successfully built setuptools-scm
unit-ceph-fs-3: 13:55:54 DEBUG unit.ceph-fs/3.install Installing collected packages: setuptools-scm
unit-ceph-fs-3: 13:55:54 DEBUG unit.ceph-fs/3.install Successfully installed setuptools-scm-1.17.0
unit-ceph-fs-3: 13:55:56 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/Jinja2-2.10.tar.gz
unit-ceph-fs-3: 13:55:56 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/MarkupSafe-1.0.tar.gz
unit-ceph-fs-3: 13:55:57 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/PyYAML-3.13.tar.gz
unit-ceph-fs-3: 13:55:57 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/Tempita-0.5.2.tar.gz
unit-ceph-fs-3: 13:55:58 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/ceph_api-0.4.0.tar.gz
unit-ceph-fs-3: 13:55:58 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/charmhelpers-0.19.2.tar.gz
unit-ceph-fs-3: 13:55:59 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/charms.reactive-1.0.0.tar.gz
unit-ceph-fs-3: 13:55:59 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/dnspython-1.15.0.zip
unit-ceph-fs-3: 13:55:59 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/dnspython3-1.15.0.zip
unit-ceph-fs-3: 13:56:00 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/netaddr-0.7.19.tar.gz
unit-ceph-fs-3: 13:56:00 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/netifaces-0.10.7.tar.gz
unit-ceph-fs-3: 13:56:01 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/pip-8.1.2.tar.gz
unit-ceph-fs-3: 13:56:01 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/pyaml-17.12.1.tar.gz
unit-ceph-fs-3: 13:56:02 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/setuptools-39.0.1.zip
unit-ceph-fs-3: 13:56:02 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/setuptools_scm-1.17.0.tar.gz
unit-ceph-fs-3: 13:56:03 DEBUG unit.ceph-fs/3.install Processing ./wheelhouse/six-1.11.0.tar.gz
unit-ceph-fs-3: 13:56:03 DEBUG unit.ceph-fs/3.install Requirement already up-to-date: MarkupSafe>=0.23 in /usr/lib/python3/dist-packages (from Jinja2==2.10)
unit-ceph-fs-3: 13:56:03 DEBUG unit.ceph-fs/3.install Requirement already up-to-date: six in /usr/lib/python3/dist-packages (from ceph-api==0.4.0)
unit-ceph-fs-3: 13:56:03 DEBUG unit.ceph-fs/3.install Requirement already up-to-date: netaddr in /usr/lib/python3/dist-packages (from charmhelpers==0.19.2)
unit-ceph-fs-3: 13:56:03 DEBUG unit.ceph-fs/3.install Collecting pyaml (from charms.reactive==1.0.0)
unit-ceph-fs-3: 13:56:03 DEBUG unit.ceph-fs/3.install Building wheels for collected packages: Jinja2, PyYAML, Tempita, ceph-api, charmhelpers, charms.reactive, pyaml, dnspython, dnspython3, netifaces, pip, setuptools, setuptools-scm
unit-ceph-fs-3: 13:56:03 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for Jinja2: started
unit-ceph-fs-3: 13:56:04 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for Jinja2: finished with status 'done'
unit-ceph-fs-3: 13:56:04 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/b2/6e/be/784cf17ed4d2e976bc6dfc8bd51dfad4f8000cf313c0650a39
unit-ceph-fs-3: 13:56:04 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for PyYAML: started
unit-ceph-fs-3: 13:56:04 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for PyYAML: finished with status 'done'
unit-ceph-fs-3: 13:56:04 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/cf/0a/13/91e4211bbd0549563966fed1ddfdc4e9e21f916c3a11cf062f
unit-ceph-fs-3: 13:56:04 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for Tempita: started
unit-ceph-fs-3: 13:56:06 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for Tempita: finished with status 'done'
unit-ceph-fs-3: 13:56:06 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/57/5e/fd/083a452485bb30ef9a2917374ba1a284264dd2fb3241994049
unit-ceph-fs-3: 13:56:06 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for ceph-api: started
unit-ceph-fs-3: 13:56:07 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for ceph-api: finished with status 'done'
unit-ceph-fs-3: 13:56:07 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/99/b0/14/2d779b1b7874dab423f4425f6cb39470842440a95a1b37bb3e
unit-ceph-fs-3: 13:56:07 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for charmhelpers: started
unit-ceph-fs-3: 13:56:07 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for charmhelpers: finished with status 'done'
unit-ceph-fs-3: 13:56:07 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/8b/ed/68/16e9b7dca86a8be8ee92dba18302d2e18002e0fb51f0ec1d0b
unit-ceph-fs-3: 13:56:07 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for charms.reactive: started
unit-ceph-fs-3: 13:56:08 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for charms.reactive: finished with status 'done'
unit-ceph-fs-3: 13:56:08 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/9c/19/42/49b61cb2b19bb239faf13ea1c61534b68ec9a8be33707901d8
unit-ceph-fs-3: 13:56:08 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for pyaml: started
unit-ceph-fs-3: 13:56:08 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for pyaml: finished with status 'done'
unit-ceph-fs-3: 13:56:08 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/cb/be/56/215582babaf60620ac1799c6ac0a0801bf52cabb6bea2e4f6f
unit-ceph-fs-3: 13:56:08 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for dnspython: started
unit-ceph-fs-3: 13:56:09 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for dnspython: finished with status 'done'
unit-ceph-fs-3: 13:56:09 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/ef/86/7c/3d02fc0cb9ce9d29e335adfabfdd6ff46c112bd068932eaa7a
unit-ceph-fs-3: 13:56:09 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for dnspython3: started
unit-ceph-fs-3: 13:56:09 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for dnspython3: finished with status 'done'
unit-ceph-fs-3: 13:56:09 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/08/57/5b/549ffcbd1da1ca092a10b4a7db70c538a2db0868cbf16e0810
unit-ceph-fs-3: 13:56:09 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for netifaces: started
unit-ceph-fs-3: 13:56:12 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for netifaces: finished with status 'done'
unit-ceph-fs-3: 13:56:12 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/7c/83/1c/c4af38c8ee98aa40eccc4e050000fdc39fa7ccf69916b9a1a6
unit-ceph-fs-3: 13:56:12 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for pip: started
unit-ceph-fs-3: 13:56:13 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for pip: finished with status 'done'
unit-ceph-fs-3: 13:56:13 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/10/ec/2e/833df038563fc3c383a7c2cbc7611b1f399a564dc2a27a193d
unit-ceph-fs-3: 13:56:13 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for setuptools: started
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for setuptools: finished with status 'done'
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/82/9f/52/63cfdfe6d8227b3d742694df0cacf7e1b28dda76d0d14723db
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for setuptools-scm: started
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Running setup.py bdist_wheel for setuptools-scm: finished with status 'done'
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Stored in directory: /root/.cache/pip/wheels/8e/e7/22/071791b43e1d1f2ceae5e901863fe11f991e5a3e4aec8afb04
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Successfully built Jinja2 PyYAML Tempita ceph-api charmhelpers charms.reactive pyaml dnspython dnspython3 netifaces pip setuptools setuptools-scm
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Installing collected packages: Jinja2, PyYAML, Tempita, ceph-api, charmhelpers, pyaml, charms.reactive, dnspython, dnspython3, netifaces, pip, setuptools, setuptools-scm
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Found existing installation: Jinja2 2.10
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Not uninstalling jinja2 at /usr/lib/python3/dist-packages, outside environment /var/lib/juju/agents/unit-ceph-fs-3/.venv
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Found existing installation: PyYAML 3.12
unit-ceph-fs-3: 13:56:14 DEBUG unit.ceph-fs/3.install Not uninstalling pyyaml at /usr/lib/python3/dist-packages, outside environment /var/lib/juju/agents/unit-ceph-fs-3/.venv
unit-ceph-fs-3: 13:56:15 DEBUG unit.ceph-fs/3.install Found existing installation: dnspython 1.15.0
unit-ceph-fs-3: 13:56:15 DEBUG unit.ceph-fs/3.install Not uninstalling dnspython at /usr/lib/python3/dist-packages, outside environment /var/lib/juju/agents/unit-ceph-fs-3/.venv
unit-ceph-fs-3: 13:56:15 DEBUG unit.ceph-fs/3.install Found existing installation: netifaces 0.10.4
unit-ceph-fs-3: 13:56:15 DEBUG unit.ceph-fs/3.install Not uninstalling netifaces at /usr/lib/python3/dist-packages, outside environment /var/lib/juju/agents/unit-ceph-fs-3/.venv
unit-ceph-fs-3: 13:56:15 DEBUG unit.ceph-fs/3.install Found existing installation: pip 9.0.1
unit-ceph-fs-3: 13:56:15 DEBUG unit.ceph-fs/3.install Uninstalling pip-9.0.1:
unit-ceph-fs-3: 13:56:15 DEBUG unit.ceph-fs/3.install Successfully uninstalled pip-9.0.1
unit-ceph-fs-3: 13:56:16 DEBUG unit.ceph-fs/3.install Found existing installation: setuptools 39.0.1
unit-ceph-fs-3: 13:56:16 DEBUG unit.ceph-fs/3.install Uninstalling setuptools-39.0.1:
unit-ceph-fs-3: 13:56:16 DEBUG unit.ceph-fs/3.install Successfully uninstalled setuptools-39.0.1
unit-ceph-fs-3: 13:56:16 DEBUG unit.ceph-fs/3.install Found existing installation: setuptools-scm 1.17.0
unit-ceph-fs-3: 13:56:16 DEBUG unit.ceph-fs/3.install Uninstalling setuptools-scm-1.17.0:
unit-ceph-fs-3: 13:56:16 DEBUG unit.ceph-fs/3.install Successfully uninstalled setuptools-scm-1.17.0
unit-ceph-fs-3: 13:56:16 DEBUG unit.ceph-fs/3.install Successfully installed Jinja2-2.10 PyYAML-3.13 Tempita-0.5.2 ceph-api-0.4.0 charmhelpers-0.19.2 charms.reactive-1.0.0 dnspython-1.15.0 dnspython3-1.15.0 netifaces-0.10.7 pip-8.1.2 pyaml-17.12.1 setuptools-39.0.1 setuptools-scm-1.17.0
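Note: the install output above is the reactive charm's basic layer bootstrapping a virtualenv at /var/lib/juju/agents/unit-ceph-fs-3/.venv and installing the charm's bundled ./wheelhouse tarballs into it. A rough sketch of the equivalent steps, assuming the paths shown in the log (an approximation; the layer's actual bootstrap code differs in detail):

    # Approximate the wheelhouse bootstrap seen in the install output.
    import glob
    import subprocess

    venv = '/var/lib/juju/agents/unit-ceph-fs-3/.venv'
    subprocess.check_call(['virtualenv', '--python', '/usr/bin/python3', venv])
    # Corresponds to the "Processing ./wheelhouse/..." lines above.
    subprocess.check_call([venv + '/bin/pip', 'install'] +
                          sorted(glob.glob('./wheelhouse/*')))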
unit-ceph-fs-3: 13:56:17 INFO unit.ceph-fs/3.juju-log Reactive main running for hook install
unit-ceph-fs-3: 13:56:17 INFO unit.ceph-fs/3.juju-log Initializing Apt Layer
unit-ceph-fs-3: 13:56:17 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/apt.py:38:update
unit-ceph-fs-3: 13:56:18 INFO unit.ceph-fs/3.juju-log status-set: maintenance: Updating apt cache
unit-ceph-fs-3: 13:56:18 DEBUG unit.ceph-fs/3.install Hit:1 http://repo.saltstack.com/apt/ubuntu/18.04/amd64/latest bionic InRelease
unit-ceph-fs-3: 13:56:18 DEBUG unit.ceph-fs/3.install Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
unit-ceph-fs-3: 13:56:18 DEBUG unit.ceph-fs/3.install Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
unit-ceph-fs-3: 13:56:19 DEBUG unit.ceph-fs/3.install Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
unit-ceph-fs-3: 13:56:19 DEBUG unit.ceph-fs/3.install Get:5 http://archive.ubuntu.com/ubuntu bionic-security InRelease [83.2 kB]
unit-ceph-fs-3: 13:56:19 DEBUG unit.ceph-fs/3.install Get:6 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [403 kB]
unit-ceph-fs-3: 13:56:20 DEBUG unit.ceph-fs/3.install Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [150 kB]
unit-ceph-fs-3: 13:56:20 DEBUG unit.ceph-fs/3.install Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [564 kB]
unit-ceph-fs-3: 13:56:20 DEBUG unit.ceph-fs/3.install Get:9 http://archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [147 kB]
unit-ceph-fs-3: 13:56:21 DEBUG unit.ceph-fs/3.install Get:10 http://archive.ubuntu.com/ubuntu bionic-security/main amd64 Packages [183 kB]
unit-ceph-fs-3: 13:56:21 DEBUG unit.ceph-fs/3.install Get:11 http://archive.ubuntu.com/ubuntu bionic-security/main Translation-en [71.2 kB]
unit-ceph-fs-3: 13:56:21 DEBUG unit.ceph-fs/3.install Get:12 http://archive.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [88.6 kB]
unit-ceph-fs-3: 13:56:21 DEBUG unit.ceph-fs/3.install Get:13 http://archive.ubuntu.com/ubuntu bionic-security/universe Translation-en [48.2 kB]
unit-ceph-fs-3: 13:56:24 DEBUG unit.ceph-fs/3.install Fetched 1901 kB in 4s (510 kB/s)
unit-ceph-fs-3: 13:56:25 DEBUG unit.ceph-fs/3.install Reading package lists...
unit-ceph-fs-3: 13:56:25 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/ceph_fs.py:60:install_ceph_base
unit-ceph-fs-3: 13:56:25 INFO unit.ceph-fs/3.juju-log Unknown source: ''
unit-ceph-fs-3: 13:56:25 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/ceph_fs.py:66:install_cephfs
unit-ceph-fs-3: 13:56:25 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/apt.py:38:update
unit-ceph-fs-3: 13:56:25 INFO unit.ceph-fs/3.juju-log status-set: maintenance: Updating apt cache
unit-ceph-fs-3: 13:56:26 DEBUG unit.ceph-fs/3.install Hit:1 http://repo.saltstack.com/apt/ubuntu/18.04/amd64/latest bionic InRelease
unit-ceph-fs-3: 13:56:26 DEBUG unit.ceph-fs/3.install Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
unit-ceph-fs-3: 13:56:26 DEBUG unit.ceph-fs/3.install Hit:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
unit-ceph-fs-3: 13:56:26 DEBUG unit.ceph-fs/3.install Hit:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
unit-ceph-fs-3: 13:56:26 DEBUG unit.ceph-fs/3.install Hit:5 http://archive.ubuntu.com/ubuntu bionic-security InRelease
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install Reading package lists...
unit-ceph-fs-3: 13:56:28 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/apt.py:43:install_queued
unit-ceph-fs-3: 13:56:28 INFO unit.ceph-fs/3.juju-log status-set: maintenance: Installing btrfs-tools,ceph,ceph-mds,gdisk,ntp,python-ceph,python3-pyxattr,xfsprogs
unit-ceph-fs-3: 13:56:28 INFO unit.ceph-fs/3.juju-log Installing ['btrfs-tools', 'ceph', 'ceph-mds', 'gdisk', 'ntp', 'python-ceph', 'python3-pyxattr', 'xfsprogs'] with options: ['--option=Dpkg::Options::=--force-confold']
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install Reading package lists...
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install Building dependency tree...
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install Reading state information...
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install btrfs-tools is already the newest version (4.15.1-1build1).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install gdisk is already the newest version (1.0.3-1).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install python3-pyxattr is already the newest version (0.6.0-2build2).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install xfsprogs is already the newest version (4.9.0+nmu1ubuntu2).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install ceph is already the newest version (12.2.7-0ubuntu0.18.04.1).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install python-ceph is already the newest version (12.2.7-0ubuntu0.18.04.1).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install ceph-mds is already the newest version (12.2.7-0ubuntu0.18.04.1).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install ntp is already the newest version (1:4.2.8p10+dfsg-5ubuntu7.1).
unit-ceph-fs-3: 13:56:28 DEBUG unit.ceph-fs/3.install 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
unit-ceph-fs-3: 13:56:29 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:56:29 INFO unit.ceph-fs/3.juju-log Unholding packages gdisk,python-ceph,python3-pyxattr,ceph,ntp,xfsprogs,btrfs-tools,ceph-mds
unit-ceph-fs-3: 13:56:29 INFO unit.ceph-fs/3.juju-log Marking {'gdisk', 'python-ceph', 'python3-pyxattr', 'ceph', 'ntp', 'xfsprogs', 'btrfs-tools', 'ceph-mds'} as unhold
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install gdisk was already not hold.
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install python-ceph was already not hold.
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install python3-pyxattr was already not hold.
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install ceph was already not hold.
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install ntp was already not hold.
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install xfsprogs was already not hold.
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install btrfs-tools was already not hold.
unit-ceph-fs-3: 13:56:29 DEBUG unit.ceph-fs/3.install ceph-mds was already not hold.
unit-ceph-fs-3: 13:56:29 INFO unit.ceph-fs/3.juju-log status-set failed: active
unit-ceph-fs-3: 13:56:29 INFO juju.worker.uniter.operation ran "install" hook
unit-ceph-fs-3: 13:56:29 INFO juju.worker.uniter found queued "leader-elected" hook
unit-ceph-fs-3: 13:56:30 INFO unit.ceph-fs/3.juju-log Reactive main running for hook leader-elected
unit-ceph-fs-3: 13:56:30 INFO unit.ceph-fs/3.juju-log Initializing Apt Layer
unit-ceph-fs-3: 13:56:30 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:56:30 INFO unit.ceph-fs/3.juju-log status-set failed: active
unit-ceph-fs-3: 13:56:31 INFO juju.worker.uniter.operation ran "leader-elected" hook
unit-ceph-fs-3: 13:56:31 INFO unit.ceph-fs/3.juju-log Reactive main running for hook config-changed
unit-ceph-fs-3: 13:56:31 INFO unit.ceph-fs/3.juju-log Initializing Apt Layer
unit-ceph-fs-3: 13:56:31 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:56:32 INFO unit.ceph-fs/3.juju-log status-set failed: active
unit-ceph-osd-4: 13:56:32 DEBUG unit.ceph-osd/4.juju-log Hardening function 'install'
unit-ceph-fs-3: 13:56:32 INFO juju.worker.uniter.operation ran "config-changed" hook
unit-ceph-fs-3: 13:56:32 INFO juju.worker.uniter found queued "start" hook
unit-ceph-osd-4: 13:56:32 DEBUG unit.ceph-osd/4.juju-log Hardening function 'config_changed'
unit-ceph-osd-4: 13:56:32 DEBUG unit.ceph-osd/4.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-4: 13:56:32 DEBUG unit.ceph-osd/4.juju-log Hardening function 'update_status'
unit-ceph-fs-3: 13:56:32 INFO unit.ceph-fs/3.juju-log Reactive main running for hook start
unit-ceph-osd-4: 13:56:32 DEBUG unit.ceph-osd/4.juju-log No hardening applied to 'update_status'
unit-ceph-osd-4: 13:56:32 INFO unit.ceph-osd/4.juju-log Updating status.
unit-ceph-fs-3: 13:56:32 INFO unit.ceph-fs/3.juju-log Initializing Apt Layer
unit-ceph-fs-3: 13:56:32 INFO unit.ceph-fs/3.juju-log Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:56:33 INFO unit.ceph-fs/3.juju-log status-set failed: active
unit-ceph-fs-3: 13:56:33 INFO juju.worker.uniter.operation ran "start" hook
unit-ceph-fs-3: 13:56:33 INFO juju.worker.uniter.relation joining relation "ceph-fs:ceph-mds ceph-mon:mds"
unit-ceph-fs-3: 13:56:33 INFO juju.worker.uniter.relation joined relation "ceph-fs:ceph-mds ceph-mon:mds"
unit-ceph-fs-3: 13:56:33 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Reactive main running for hook ceph-mds-relation-joined
unit-ceph-fs-3: 13:56:34 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Initializing Apt Layer
unit-ceph-mon-1: 13:56:34 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'install'
unit-ceph-fs-3: 13:56:34 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: hooks/relations/ceph-mds/requires.py:23:joined
unit-ceph-mon-1: 13:56:34 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'config_changed'
unit-ceph-mon-1: 13:56:34 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'upgrade_charm'
unit-ceph-fs-3: 13:56:34 DEBUG unit.ceph-fs/3.juju-log ceph-mds:3: Sending request ccd29e22-d0bc-11e8-94c3-36e896d6062d
unit-ceph-mon-1: 13:56:34 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'update_status'
unit-ceph-fs-3: 13:56:34 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:56:34 INFO unit.ceph-fs/3.juju-log ceph-mds:3: status-set failed: active
unit-ceph-fs-3: 13:56:35 INFO juju.worker.uniter.operation ran "ceph-mds-relation-joined" hook
unit-ceph-mon-0: 13:56:35 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'install'
unit-ceph-mon-0: 13:56:35 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'config_changed'
unit-ceph-mon-0: 13:56:35 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'upgrade_charm'
unit-ceph-mon-0: 13:56:35 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'update_status'
unit-ceph-osd-4: 13:56:36 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-mon-1: 13:56:37 INFO unit.ceph-mon/1.juju-log mds:3: mon cluster in quorum and OSDs related- providing mds client with keys
unit-ceph-mon-0: 13:56:37 INFO unit.ceph-mon/0.juju-log mds:3: mon cluster in quorum and OSDs related- providing mds client with keys
unit-ceph-mon-2: 13:56:37 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'install'
unit-ceph-mon-2: 13:56:37 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'config_changed'
unit-ceph-mon-2: 13:56:38 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'upgrade_charm'
unit-ceph-mon-2: 13:56:38 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'update_status'
unit-ceph-mon-0: 13:56:39 DEBUG unit.ceph-mon/0.juju-log mds:3: Calling check_output: ['sudo', '-u', 'ceph', 'ceph', '--name', 'mon.', '--keyring', '/var/lib/ceph/mon/ceph-celery/keyring', 'auth', 'get-or-create', 'mds.celery', 'osd', 'allow *', 'mds', 'allow', 'mon', 'allow rwx']
unit-ceph-mon-1: 13:56:39 DEBUG unit.ceph-mon/1.juju-log mds:3: Calling check_output: ['sudo', '-u', 'ceph', 'ceph', '--name', 'mon.', '--keyring', '/var/lib/ceph/mon/ceph-lazarus/keyring', 'auth', 'get-or-create', 'mds.celery', 'osd', 'allow *', 'mds', 'allow', 'mon', 'allow rwx']
unit-ceph-mon-2: 13:56:40 INFO unit.ceph-mon/2.juju-log mds:3: mon cluster in quorum and OSDs related- providing mds client with keys
unit-ceph-mon-0: 13:56:41 DEBUG unit.ceph-mon/0.juju-log mds:3: Not leader - ignoring mds broker request
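Note: all three mons mint the same mds.celery key with ceph auth get-or-create (idempotent: repeat calls return the existing key), granting the MDS full access to the OSDs, MDS capabilities, and rwx on the mons; only the leader mon goes on to process the pool/filesystem broker request, which is why the non-leaders log "ignoring mds broker request". The call as the charm itself logs it, mirrored in Python:

    # The check_output call the ceph-mon units log above; the keyring
    # path varies per mon host (ceph-celery, ceph-lazarus, ceph-inspiral).
    import subprocess

    key = subprocess.check_output(
        ['sudo', '-u', 'ceph', 'ceph', '--name', 'mon.',
         '--keyring', '/var/lib/ceph/mon/ceph-celery/keyring',
         'auth', 'get-or-create', 'mds.celery',
         'osd', 'allow *', 'mds', 'allow', 'mon', 'allow rwx'])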
unit-ceph-mon-1: 13:56:42 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing request ccd29e22-d0bc-11e8-94c3-36e896d6062d
unit-ceph-mon-1: 13:56:42 INFO unit.ceph-mon/1.juju-log mds:3: Processing 3 ceph broker requests
unit-ceph-mon-1: 13:56:43 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing op='create-pool'
unit-ceph-mon-2: 13:56:43 DEBUG unit.ceph-mon/2.juju-log mds:3: Calling check_output: ['sudo', '-u', 'ceph', 'ceph', '--name', 'mon.', '--keyring', '/var/lib/ceph/mon/ceph-inspiral/keyring', 'auth', 'get-or-create', 'mds.celery', 'osd', 'allow *', 'mds', 'allow', 'mon', 'allow rwx']
unit-ceph-mon-1: 13:56:44 DEBUG unit.ceph-mon/1.juju-log mds:3: Pool 'ceph-fs_data' already exists - skipping create
unit-ceph-mon-1: 13:56:44 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing op='create-pool'
unit-ceph-mon-0: 13:56:44 INFO juju.worker.uniter.operation ran "mds-relation-joined" hook
unit-ceph-fs-3: 13:56:45 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Reactive main running for hook ceph-mds-relation-changed
unit-ceph-fs-3: 13:56:45 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Initializing Apt Layer
unit-ceph-fs-3: 13:56:45 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: hooks/relations/ceph-mds/requires.py:29:changed
unit-ceph-mon-1: 13:56:45 DEBUG unit.ceph-mon/1.juju-log mds:3: Pool 'ceph-fs_metadata' already exists - skipping create
unit-ceph-mon-1: 13:56:45 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing op='create-cephfs'
unit-ceph-fs-3: 13:56:45 INFO unit.ceph-fs/3.juju-log ceph-mds:3: changed broker_req: [{'op': 'create-pool', 'name': 'ceph-fs_data', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-pool', 'name': 'ceph-fs_metadata', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-cephfs', 'mds_name': 'ceph-fs', 'data_pool': 'ceph-fs_data', 'metadata_pool': 'ceph-fs_metadata'}]
unit-ceph-fs-3: 13:56:46 INFO unit.ceph-fs/3.juju-log ceph-mds:3: incomplete request. broker_req not found
unit-ceph-fs-3: 13:56:46 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:56:46 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/ceph_fs.py:85:config_changed
unit-ceph-fs-3: 13:56:46 INFO unit.ceph-fs/3.juju-log ceph-mds:3: status-set failed: active
unit-ceph-fs-3: 13:56:46 INFO juju.worker.uniter.operation ran "ceph-mds-relation-changed" hook
unit-ceph-mon-2: 13:56:46 DEBUG unit.ceph-mon/2.juju-log mds:3: Not leader - ignoring mds broker request
unit-ceph-mon-0: 13:56:47 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'install'
unit-ceph-mon-0: 13:56:47 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'config_changed'
unit-ceph-mon-0: 13:56:47 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'upgrade_charm'
unit-ceph-mon-0: 13:56:47 DEBUG unit.ceph-mon/0.juju-log mds:3: Hardening function 'update_status'
unit-ceph-mon-1: 13:56:48 DEBUG unit.ceph-mon/1.mds-relation-joined Error EINVAL: pool 'ceph-fs_data' (id '1') has a non-CephFS application enabled.
unit-ceph-mon-1: 13:56:48 INFO unit.ceph-mon/1.juju-log mds:3: CephFS already created
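Note: this EINVAL is the one real failure in the log. The create-cephfs op (effectively ceph fs new) refuses the pre-existing ceph-fs_data pool because it already carries a non-CephFS application tag, and the charm then concludes "CephFS already created". On Luminous (12.2.7, as installed above) the tags can be inspected and corrected with the pool application subcommands; a hedged sketch, to be adapted to whichever tag is actually set on the pool:

    # Inspect the application tags behind the EINVAL above
    # (subcommands available since Ceph Luminous).
    import subprocess

    print(subprocess.check_output(
        ['ceph', 'osd', 'pool', 'application', 'get', 'ceph-fs_data']))
    # If a stray tag such as 'rbd' is enabled, disabling it (or creating
    # the filesystem on fresh pools) lets 'ceph fs new' proceed:
    # subprocess.check_call(['ceph', 'osd', 'pool', 'application', 'disable',
    #                        'ceph-fs_data', 'rbd', '--yes-i-really-mean-it'])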
unit-ceph-mon-0: 13:56:49 INFO unit.ceph-mon/0.juju-log mds:3: mon cluster in quorum and OSDs related- providing mds client with keys
unit-ceph-mon-0: 13:56:50 DEBUG unit.ceph-mon/0.juju-log mds:3: Calling check_output: ['sudo', '-u', 'ceph', 'ceph', '--name', 'mon.', '--keyring', '/var/lib/ceph/mon/ceph-celery/keyring', 'auth', 'get-or-create', 'mds.celery', 'osd', 'allow *', 'mds', 'allow', 'mon', 'allow rwx']
unit-ceph-mon-2: 13:56:52 INFO juju.worker.uniter.operation ran "mds-relation-joined" hook
unit-ceph-mon-0: 13:56:52 DEBUG unit.ceph-mon/0.juju-log mds:3: Not leader - ignoring mds broker request
unit-ceph-mon-2: 13:56:53 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'install'
unit-ceph-mon-2: 13:56:53 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'config_changed'
unit-ceph-mon-2: 13:56:53 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'upgrade_charm'
unit-ceph-mon-2: 13:56:53 DEBUG unit.ceph-mon/2.juju-log mds:3: Hardening function 'update_status'
unit-ceph-mon-1: 13:56:54 INFO juju.worker.uniter.operation ran "mds-relation-joined" hook
unit-ceph-mon-1: 13:56:55 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'install'
unit-ceph-mon-1: 13:56:55 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'config_changed'
unit-ceph-mon-1: 13:56:55 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'upgrade_charm'
unit-ceph-mon-1: 13:56:55 DEBUG unit.ceph-mon/1.juju-log mds:3: Hardening function 'update_status'
unit-ceph-mon-0: 13:56:56 INFO juju.worker.uniter.operation ran "mds-relation-changed" hook
unit-ceph-mon-2: 13:56:56 INFO unit.ceph-mon/2.juju-log mds:3: mon cluster in quorum and OSDs related- providing mds client with keys
unit-ceph-fs-3: 13:56:56 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Reactive main running for hook ceph-mds-relation-joined
unit-ceph-fs-3: 13:56:56 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Initializing Apt Layer
unit-ceph-fs-3: 13:56:57 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: hooks/relations/ceph-mds/requires.py:23:joined
unit-ceph-fs-3: 13:56:57 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Json request: {"api-version": 1, "ops": [{"op": "create-pool", "name": "ceph-fs_data", "replicas": 3, "pg_num": null, "weight": null, "group": null, "group-namespace": null}, {"op": "create-pool", "name": "ceph-fs_metadata", "replicas": 3, "pg_num": null, "weight": null, "group": null, "group-namespace": null}, {"op": "create-cephfs", "mds_name": "ceph-fs", "data_pool": "ceph-fs_data", "metadata_pool": "ceph-fs_metadata"}], "request-id": "ccd29e22-d0bc-11e8-94c3-36e896d6062d"}
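Note: this is the full broker request the ceph-mds interface re-sends on each relation hook; its request-id is what ties the "Request already sent but not complete" messages and the leader mon's "Processing request ccd29e22-..." lines together. Parsing the JSON verbatim from the log shows the three ops:

    # Parse the broker request JSON logged above.
    import json

    raw = ('{"api-version": 1, "ops": ['
           '{"op": "create-pool", "name": "ceph-fs_data", "replicas": 3, '
           '"pg_num": null, "weight": null, "group": null, '
           '"group-namespace": null}, '
           '{"op": "create-pool", "name": "ceph-fs_metadata", "replicas": 3, '
           '"pg_num": null, "weight": null, "group": null, '
           '"group-namespace": null}, '
           '{"op": "create-cephfs", "mds_name": "ceph-fs", '
           '"data_pool": "ceph-fs_data", "metadata_pool": "ceph-fs_metadata"}], '
           '"request-id": "ccd29e22-d0bc-11e8-94c3-36e896d6062d"}')
    req = json.loads(raw)
    print(req['request-id'])
    for op in req['ops']:
        print(op['op'], op.get('name') or op.get('mds_name'))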
unit-ceph-fs-3: 13:56:57 DEBUG unit.ceph-fs/3.juju-log ceph-mds:3: Request already sent but not complete, not sending new request
unit-ceph-fs-3: 13:56:57 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:56:57 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/ceph_fs.py:85:config_changed
unit-ceph-mon-1: 13:56:58 INFO unit.ceph-mon/1.juju-log mds:3: mon cluster in quorum and OSDs related- providing mds client with keys
unit-ceph-fs-3: 13:56:58 INFO unit.ceph-fs/3.juju-log ceph-mds:3: status-set failed: active
unit-ceph-fs-3: 13:56:58 INFO juju.worker.uniter.operation ran "ceph-mds-relation-joined" hook
unit-ceph-mon-2: 13:56:58 DEBUG unit.ceph-mon/2.juju-log mds:3: Calling check_output: ['sudo', '-u', 'ceph', 'ceph', '--name', 'mon.', '--keyring', '/var/lib/ceph/mon/ceph-inspiral/keyring', 'auth', 'get-or-create', 'mds.celery', 'osd', 'allow *', 'mds', 'allow', 'mon', 'allow rwx']
unit-ceph-fs-3: 13:56:59 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Reactive main running for hook ceph-mds-relation-changed
unit-ceph-fs-3: 13:56:59 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Initializing Apt Layer
unit-ceph-fs-3: 13:56:59 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: hooks/relations/ceph-mds/requires.py:29:changed
unit-ceph-fs-3: 13:56:59 INFO unit.ceph-fs/3.juju-log ceph-mds:3: changed broker_req: [{'op': 'create-pool', 'name': 'ceph-fs_data', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-pool', 'name': 'ceph-fs_metadata', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-cephfs', 'mds_name': 'ceph-fs', 'data_pool': 'ceph-fs_data', 'metadata_pool': 'ceph-fs_metadata'}]
unit-ceph-fs-3: 13:57:00 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Setting ceph-mds.pools.available
unit-ceph-fs-3: 13:57:00 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:57:00 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/ceph_fs.py:71:setup_mds
unit-ceph-mon-1: 13:57:00 DEBUG unit.ceph-mon/1.juju-log mds:3: Calling check_output: ['sudo', '-u', 'ceph', 'ceph', '--name', 'mon.', '--keyring', '/var/lib/ceph/mon/ceph-lazarus/keyring', 'auth', 'get-or-create', 'mds.celery', 'osd', 'allow *', 'mds', 'allow', 'mon', 'allow rwx']
unit-ceph-mon-2: 13:57:02 DEBUG unit.ceph-mon/2.juju-log mds:3: Not leader - ignoring mds broker request
unit-ceph-fs-3: 13:57:02 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/ceph_fs.py:85:config_changed
unit-ceph-fs-3: 13:57:02 INFO unit.ceph-fs/3.juju-log ceph-mds:3: status-set failed: active Unit is ready (1 MDS)
unit-ceph-fs-3: 13:57:02 INFO juju.worker.uniter.operation ran "ceph-mds-relation-changed" hook
unit-ceph-osd-7: 13:57:03 DEBUG unit.ceph-osd/7.juju-log Hardening function 'install'
unit-ceph-osd-7: 13:57:03 DEBUG unit.ceph-osd/7.juju-log Hardening function 'config_changed'
unit-ceph-osd-7: 13:57:03 DEBUG unit.ceph-osd/7.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-7: 13:57:03 DEBUG unit.ceph-osd/7.juju-log Hardening function 'update_status'
unit-ceph-mon-1: 13:57:03 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing request ccd29e22-d0bc-11e8-94c3-36e896d6062d
unit-ceph-osd-7: 13:57:03 DEBUG unit.ceph-osd/7.juju-log No hardening applied to 'update_status'
unit-ceph-osd-7: 13:57:03 INFO unit.ceph-osd/7.juju-log Updating status.
unit-ceph-mon-1: 13:57:03 INFO unit.ceph-mon/1.juju-log mds:3: Processing 3 ceph broker requests
unit-ceph-mon-1: 13:57:04 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing op='create-pool'
unit-ceph-mon-1: 13:57:04 DEBUG unit.ceph-mon/1.juju-log mds:3: Pool 'ceph-fs_data' already exists - skipping create
unit-ceph-mon-1: 13:57:05 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing op='create-pool'
unit-ceph-mon-1: 13:57:05 DEBUG unit.ceph-mon/1.juju-log mds:3: Pool 'ceph-fs_metadata' already exists - skipping create
unit-ceph-mon-1: 13:57:06 DEBUG unit.ceph-mon/1.juju-log mds:3: Processing op='create-cephfs'
unit-ceph-osd-7: 13:57:06 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-fs-3: 13:57:07 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Reactive main running for hook ceph-mds-relation-joined
unit-ceph-fs-3: 13:57:07 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Initializing Apt Layer
unit-ceph-fs-3: 13:57:07 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: hooks/relations/ceph-mds/requires.py:23:joined
unit-ceph-mon-2: 13:57:07 INFO juju.worker.uniter.operation ran "mds-relation-changed" hook
unit-ceph-fs-3: 13:57:07 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Json request: {"api-version": 1, "ops": [{"op": "create-pool", "name": "ceph-fs_data", "replicas": 3, "pg_num": null, "weight": null, "group": null, "group-namespace": null}, {"op": "create-pool", "name": "ceph-fs_metadata", "replicas": 3, "pg_num": null, "weight": null, "group": null, "group-namespace": null}, {"op": "create-cephfs", "mds_name": "ceph-fs", "data_pool": "ceph-fs_data", "metadata_pool": "ceph-fs_metadata"}], "request-id": "ccd29e22-d0bc-11e8-94c3-36e896d6062d"}
unit-ceph-fs-3: 13:57:08 DEBUG unit.ceph-fs/3.juju-log ceph-mds:3: Request already sent but not complete, not sending new request
unit-ceph-fs-3: 13:57:08 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:57:08 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/ceph_fs.py:85:config_changed
unit-ceph-fs-3: 13:57:08 INFO unit.ceph-fs/3.juju-log ceph-mds:3: status-set failed: active Unit is ready (1 MDS)
unit-ceph-fs-3: 13:57:09 INFO juju.worker.uniter.operation ran "ceph-mds-relation-joined" hook
unit-ceph-mon-1: 13:57:09 DEBUG unit.ceph-mon/1.mds-relation-changed Error EINVAL: pool 'ceph-fs_data' (id '1') has a non-CephFS application enabled.
unit-ceph-fs-3: 13:57:09 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Reactive main running for hook ceph-mds-relation-changed
unit-ceph-mon-1: 13:57:09 INFO unit.ceph-mon/1.juju-log mds:3: CephFS already created
unit-ceph-fs-3: 13:57:09 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Initializing Apt Layer
unit-ceph-fs-3: 13:57:09 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: hooks/relations/ceph-mds/requires.py:29:changed
unit-ceph-fs-3: 13:57:10 INFO unit.ceph-fs/3.juju-log ceph-mds:3: changed broker_req: [{'op': 'create-pool', 'name': 'ceph-fs_data', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-pool', 'name': 'ceph-fs_metadata', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-cephfs', 'mds_name': 'ceph-fs', 'data_pool': 'ceph-fs_data', 'metadata_pool': 'ceph-fs_metadata'}]
unit-ceph-fs-3: 13:57:10 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Setting ceph-mds.pools.available
unit-ceph-fs-3: 13:57:10 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:57:10 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/ceph_fs.py:85:config_changed
unit-ceph-fs-3: 13:57:11 INFO unit.ceph-fs/3.juju-log ceph-mds:3: status-set failed: active Unit is ready (1 MDS)
unit-ceph-fs-3: 13:57:11 INFO juju.worker.uniter.operation ran "ceph-mds-relation-changed" hook
unit-ceph-fs-3: 13:57:11 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Reactive main running for hook ceph-mds-relation-changed
unit-ceph-fs-3: 13:57:12 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Initializing Apt Layer
unit-ceph-fs-3: 13:57:12 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: hooks/relations/ceph-mds/requires.py:29:changed
unit-ceph-fs-3: 13:57:12 INFO unit.ceph-fs/3.juju-log ceph-mds:3: changed broker_req: [{'op': 'create-pool', 'name': 'ceph-fs_data', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-pool', 'name': 'ceph-fs_metadata', 'replicas': 3, 'pg_num': None, 'weight': None, 'group': None, 'group-namespace': None}, {'op': 'create-cephfs', 'mds_name': 'ceph-fs', 'data_pool': 'ceph-fs_data', 'metadata_pool': 'ceph-fs_metadata'}]
unit-ceph-fs-3: 13:57:13 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Setting ceph-mds.pools.available
unit-ceph-fs-3: 13:57:13 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/apt.py:49:ensure_package_status
unit-ceph-fs-3: 13:57:13 INFO unit.ceph-fs/3.juju-log ceph-mds:3: Invoking reactive handler: reactive/ceph_fs.py:85:config_changed
unit-ceph-fs-3: 13:57:13 INFO unit.ceph-fs/3.juju-log ceph-mds:3: status-set failed: active Unit is ready (1 MDS)
unit-ceph-fs-3: 13:57:13 INFO juju.worker.uniter.operation ran "ceph-mds-relation-changed" hook
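
The relation-changed hook fires three times in under fifteen seconds and each run replays the identical broker request; the earlier "Request already sent but not complete, not sending new request" message is the guard that keeps those replays from stacking up duplicate ops. A rough sketch of that de-duplication, assuming the last request JSON is kept in relation data (the accessor names here are illustrative stand-ins, not charmhelpers' real API):

    import json

    def send_request_if_needed(new_request_json, relation_get, relation_set):
        """Skip re-sending a broker request whose ops are already in flight.

        relation_get/relation_set stand in for Juju's relation data
        accessors; only the comparison logic is the point here.
        """
        previous = relation_get("broker_req")
        if previous:
            prev_ops = json.loads(previous).get("ops")
            new_ops = json.loads(new_request_json).get("ops")
            if prev_ops == new_ops:
                # Same ops as the outstanding request: keep the original
                # request-id so the mon's reply still matches up.
                return
        relation_set(broker_req=new_request_json)
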
unit-ceph-mon-1: 13:57:15 INFO juju.worker.uniter.operation ran "mds-relation-changed" hook
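
The only real failure in this run is the EINVAL at 13:57:09: the create-cephfs op (`ceph fs new` under the hood) refuses ceph-fs_data because the pool is already tagged with a non-CephFS application, a safety check introduced in Luminous. The mon follows up with "CephFS already created", so the filesystem itself exists; if the stale tag did need clearing by hand, the stock pool-application subcommands would do it. A sketch, with the pool name from the log and the old application name left as a placeholder to fill in from `ceph osd pool application get ceph-fs_data`:

    import subprocess

    def retag_pool_for_cephfs(pool, old_app):
        """Swap a pool's application tag over to cephfs.

        Uses the standard 'ceph osd pool application' subcommands
        (available since Luminous); removing an existing tag requires
        the --yes-i-really-mean-it override.
        """
        subprocess.check_call([
            "ceph", "osd", "pool", "application", "rm", pool, old_app,
            "--yes-i-really-mean-it"])
        subprocess.check_call([
            "ceph", "osd", "pool", "application", "enable", pool, "cephfs"])

    # e.g. retag_pool_for_cephfs("ceph-fs_data", "<old-app-from-get>")
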
unit-ceph-osd-6: 13:57:15 DEBUG unit.ceph-osd/6.juju-log Hardening function 'install'
unit-ceph-osd-6: 13:57:15 DEBUG unit.ceph-osd/6.juju-log Hardening function 'config_changed'
unit-ceph-osd-6: 13:57:16 DEBUG unit.ceph-osd/6.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-6: 13:57:16 DEBUG unit.ceph-osd/6.juju-log Hardening function 'update_status'
unit-ceph-osd-6: 13:57:16 DEBUG unit.ceph-osd/6.juju-log No hardening applied to 'update_status'
unit-ceph-osd-6: 13:57:16 INFO unit.ceph-osd/6.juju-log Updating status.
unit-ceph-osd-6: 13:57:21 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-mon-2: 13:57:45 DEBUG unit.ceph-mon/2.juju-log Hardening function 'install'
unit-ceph-mon-2: 13:57:45 DEBUG unit.ceph-mon/2.juju-log Hardening function 'config_changed'
unit-ceph-mon-2: 13:57:45 DEBUG unit.ceph-mon/2.juju-log Hardening function 'upgrade_charm'
unit-ceph-mon-2: 13:57:46 DEBUG unit.ceph-mon/2.juju-log Hardening function 'update_status'
unit-ceph-mon-2: 13:57:46 DEBUG unit.ceph-mon/2.juju-log No hardening applied to 'update_status'
unit-ceph-mon-2: 13:57:46 INFO unit.ceph-mon/2.juju-log Updating status.
unit-ceph-mon-2: 13:57:51 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-osd-5: 13:58:00 DEBUG unit.ceph-osd/5.juju-log Hardening function 'install'
unit-ceph-osd-5: 13:58:01 DEBUG unit.ceph-osd/5.juju-log Hardening function 'config_changed'
unit-ceph-osd-5: 13:58:01 DEBUG unit.ceph-osd/5.juju-log Hardening function 'upgrade_charm'
unit-ceph-osd-5: 13:58:01 DEBUG unit.ceph-osd/5.juju-log Hardening function 'update_status'
unit-ceph-osd-5: 13:58:02 DEBUG unit.ceph-osd/5.juju-log No hardening applied to 'update_status'
unit-ceph-mon-0: 13:58:02 DEBUG unit.ceph-mon/0.juju-log Hardening function 'install'
unit-ceph-osd-5: 13:58:02 INFO unit.ceph-osd/5.juju-log Updating status.
unit-ceph-mon-0: 13:58:02 DEBUG unit.ceph-mon/0.juju-log Hardening function 'config_changed'
unit-ceph-mon-0: 13:58:02 DEBUG unit.ceph-mon/0.juju-log Hardening function 'upgrade_charm'
unit-ceph-mon-0: 13:58:02 DEBUG unit.ceph-mon/0.juju-log Hardening function 'update_status'
unit-ceph-mon-0: 13:58:02 DEBUG unit.ceph-mon/0.juju-log No hardening applied to 'update_status'
unit-ceph-mon-0: 13:58:03 INFO unit.ceph-mon/0.juju-log Updating status.
unit-ceph-mon-0: 13:58:06 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-osd-5: 13:58:14 INFO juju.worker.uniter.operation ran "update-status" hook
unit-ceph-mon-1: 13:58:29 DEBUG unit.ceph-mon/1.juju-log Hardening function 'install'