ceph-volume inventory: --format=json vs plain
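Both runs below are on the same host: the plain ceph-volume inventory prints a device table, while the --format=json run aborts with KeyError: 'ceph.cluster_name', which in turn breaks 'ceph orchestrator osd create' through cephadm.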
admin:~ # ceph-volume inventory
stderr: unable to read label for /dev/vda2: (2) No such file or directory
stderr: unable to read label for /dev/vda3: (2) No such file or directory
stderr: unable to read label for /dev/vda1: (2) No such file or directory
stderr: unable to read label for /dev/vda: (2) No such file or directory
stderr: unable to read label for /dev/vdb: (2) No such file or directory
stderr: unable to read label for /dev/vdc: (2) No such file or directory
stderr: unable to read label for /dev/vdd: (2) No such file or directory
stderr: unable to read label for /dev/vde: (2) No such file or directory
stderr: unable to read label for /dev/vdf: (2) No such file or directory
stderr: unable to read label for /dev/vdg: (2) No such file or directory
stderr: unable to read label for /dev/vdh: (2) No such file or directory
stderr: unable to read label for /dev/vdi: (2) No such file or directory
stderr: unable to read label for /dev/vdj: (2) No such file or directory
stderr: unable to read label for /dev/vdk1: (2) No such file or directory
stderr: unable to read label for /dev/vdk: (2) No such file or directory
stderr: unable to read label for /dev/vdl: (2) No such file or directory
stderr: unable to read label for /dev/vdm: (2) No such file or directory
stderr: unable to read label for /dev/vdn: (2) No such file or directory
stderr: unable to read label for /dev/vdo: (2) No such file or directory
stderr: unable to read label for /dev/vdp: (2) No such file or directory
stderr: unable to read label for /dev/vdq: (2) No such file or directory
stderr: unable to read label for /dev/vdr: (2) No such file or directory
stderr: unable to read label for /dev/vds: (2) No such file or directory
stderr: unable to read label for /dev/vdt: (2) No such file or directory
stderr: unable to read label for /dev/vdu: (2) No such file or directory
Device Path               Size         rotates available Model name
/dev/vdc                  25.00 GB     True    True
/dev/vdd                  25.00 GB     True    True
/dev/vde                  25.00 GB     True    True
/dev/vdf                  25.00 GB     True    True
/dev/vdh                  25.00 GB     True    True
/dev/vdi                  25.00 GB     True    True
/dev/vdj                  25.00 GB     True    True
/dev/vdl                  25.00 GB     True    True
/dev/vdm                  25.00 GB     True    True
/dev/vdn                  25.00 GB     True    True
/dev/vdo                  25.00 GB     True    True
/dev/vdp                  25.00 GB     True    True
/dev/vdr                  25.00 GB     True    True
/dev/vda                  42.00 GB     True    False
/dev/vdb                  25.00 GB     True    False
/dev/vdg                  25.00 GB     True    False
/dev/vdk                  25.00 GB     True    False
/dev/vdq                  25.00 GB     True    False
/dev/vds                  25.00 GB     True    False
/dev/vdt                  25.00 GB     True    False
/dev/vdu                  25.00 GB     True    False
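Note that the plain-text report above completes normally on this host; the same inventory with --format=json below aborts before printing any report.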
admin:~ # ceph-volume inventory --format=json
stderr: unable to read label for /dev/vda2: (2) No such file or directory
stderr: unable to read label for /dev/vda3: (2) No such file or directory
stderr: unable to read label for /dev/vda1: (2) No such file or directory
stderr: unable to read label for /dev/vda: (2) No such file or directory
stderr: unable to read label for /dev/vdb: (2) No such file or directory
stderr: unable to read label for /dev/vdc: (2) No such file or directory
stderr: unable to read label for /dev/vdd: (2) No such file or directory
stderr: unable to read label for /dev/vde: (2) No such file or directory
stderr: unable to read label for /dev/vdf: (2) No such file or directory
stderr: unable to read label for /dev/vdg: (2) No such file or directory
stderr: unable to read label for /dev/vdh: (2) No such file or directory
stderr: unable to read label for /dev/vdi: (2) No such file or directory
stderr: unable to read label for /dev/vdj: (2) No such file or directory
stderr: unable to read label for /dev/vdk1: (2) No such file or directory
stderr: unable to read label for /dev/vdk: (2) No such file or directory
stderr: unable to read label for /dev/vdl: (2) No such file or directory
stderr: unable to read label for /dev/vdm: (2) No such file or directory
stderr: unable to read label for /dev/vdn: (2) No such file or directory
stderr: unable to read label for /dev/vdo: (2) No such file or directory
stderr: unable to read label for /dev/vdp: (2) No such file or directory
stderr: unable to read label for /dev/vdq: (2) No such file or directory
stderr: unable to read label for /dev/vdr: (2) No such file or directory
stderr: unable to read label for /dev/vds: (2) No such file or directory
stderr: unable to read label for /dev/vdt: (2) No such file or directory
stderr: unable to read label for /dev/vdu: (2) No such file or directory
--> KeyError: 'ceph.cluster_name'
admin:~ #
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 39, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 150, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 38, in main
    self.format_report(Devices())
  File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 42, in format_report
    print(json.dumps(inventory.json_report()))
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 51, in json_report
    output.append(device.json_report())
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 196, in json_report
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 196, in <listcomp>
    output['lvs'] = [lv.report() for lv in self.lvs]
  File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 945, in report
    'cluster_name': self.tags['ceph.cluster_name'],
KeyError: 'ceph.cluster_name'
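Root cause, as the traceback shows: report() in ceph_volume/api/lvm.py indexes self.tags['ceph.cluster_name'] directly, so any LV whose tags lack that key (e.g. an LV not created, or not fully tagged, by ceph-volume) aborts the entire JSON report. The plain report succeeds, presumably because it only prints device-level fields and never serializes LV tags. A minimal standalone sketch of the failure mode and a defensive alternative, assuming 'tags' stands in for the LV tag dict ceph-volume builds from lvs output (not the upstream patch):

# The key name comes straight from the traceback above.
tags = {}  # an LV carrying no Ceph tags

# What api/lvm.py does at line 945 -- direct indexing raises KeyError:
try:
    cluster_name = tags['ceph.cluster_name']
except KeyError as err:
    print('report() aborts the whole JSON inventory with KeyError:', err)

# Defensive alternative: degrade to None for untagged LVs instead of
# crashing 'ceph-volume inventory --format=json' for every device.
cluster_name = tags.get('ceph.cluster_name')
print('defensive lookup yields:', cluster_name)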
jxs@zulu ~/projects/ceph/build ±drive_group_ssh⚡ » ceph orchestrator osd create ssh-dev1:foo/bar
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-01-28T12:33:13.301+0100 7f2eecc50700 -1 WARNING: all dangerous and experimental features are enabled.
2020-01-28T12:33:13.333+0100 7f2eecc50700 -1 WARNING: all dangerous and experimental features are enabled.
Error EINVAL: Traceback (most recent call last):
  File "/home/jxs/projects/ceph/src/pybind/mgr/mgr_module.py", line 1064, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/home/jxs/projects/ceph/src/pybind/mgr/mgr_module.py", line 304, in call
    return self.func(mgr, **kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/orchestrator.py", line 140, in wrapper
    return func(*args, **kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/orchestrator_cli/module.py", line 366, in _create_osd
    orchestrator.raise_if_exception(completion)
  File "/home/jxs/projects/ceph/src/pybind/mgr/orchestrator.py", line 655, in raise_if_exception
    raise e
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 132, in do_work
    res = self._on_complete_(*args, **kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 189, in <lambda>
    return cls(on_complete=lambda x: f(*x), value=value, name=name, **c_kwargs)
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 922, in _get_inventory
    ['--', 'inventory', '--format=json'])
  File "/home/jxs/projects/ceph/src/pybind/mgr/cephadm/module.py", line 681, in _run_cephadm
    code, '\n'.join(err)))
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:/usr/bin/podman:stderr --> KeyError: 'ceph.cluster_name'
Traceback (most recent call last):
  File "<stdin>", line 2705, in <module>
  File "<stdin>", line 545, in _infer_fsid
  File "<stdin>", line 1981, in command_ceph_volume
  File "<stdin>", line 474, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --privileged -e CONTAINER_IMAGE=ceph/daemon-base:latest-master-devel -e NODE_NAME=admin -v /var/log/ceph/307fa37f-5447-4436-8266-3366ed055a60:/var/log/ceph:z -v /var/lib/ceph/307fa37f-5447-4436-8266-3366ed055a60/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/sbin/ceph-volume ceph/daemon-base:latest-master-devel inventory --format=json
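For context on how the KeyError surfaces through the orchestrator: cephadm runs ceph-volume inside a podman container and, per the traceback, its call_throws helper turns any non-zero exit into the RuntimeError above. A rough sketch of that pattern; the helper body below is an assumption, only the name and the 'Failed command:' message come from the log:

import subprocess

def call_throws(cmd):
    # Sketch only: run the command and raise on non-zero exit, the way the
    # traceback shows cephadm doing for the containerized ceph-volume call.
    # (stdout/stderr PIPE + universal_newlines for Python 3.6 compatibility.)
    proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True)
    if proc.returncode != 0:
        # The KeyError inside the container becomes this RuntimeError, which
        # the mgr then reports as 'Error EINVAL' to 'ceph orchestrator osd create'.
        raise RuntimeError('Failed command: %s' % ' '.join(cmd))
    return proc.stdout, proc.stderr

# e.g. call_throws(['/usr/bin/podman', 'run', '--rm', '--privileged',
#                   '--entrypoint', '/usr/sbin/ceph-volume',
#                   'ceph/daemon-base:latest-master-devel',
#                   'inventory', '--format=json'])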