Glance
List Images
root@csgsnm001os1:/var/tmp/HEAT# glance image-list
+--------------------------------------+----------------------+
| ID                                   | Name                 |
+--------------------------------------+----------------------+
| 30e4700c-17a6-476a-b0e1-7d2e451b9948 | cirros               |
| cf759a78-0d49-4eaf-b380-b361583887dc | cirros.img           |
| fc1e170d-b48d-458d-a2b1-25a4c2f44a38 | ubuntu-kvm           |
| 515288e6-4448-4a70-aec5-e9766613365e | ubuntu-kvm.img       |
| a71c9ea3-19e5-4276-bee9-2ece1430704c | ubuntu-webserver     |
| d9ccac96-7c3b-4b99-a3a3-f64e1000d858 | ubuntu-webserver.img |
+--------------------------------------+----------------------+
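Add an image (a rough sketch; the image name, file name and formats are placeholders for whatever you are uploading):
glance image-create --name my-image --disk-format raw --container-format bare --file ./my-image.img --progress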
Direct URL
root@csgsnm001os1:/var/tmp/HEAT# openstack-config --get /etc/glance/glance-api.conf DEFAULT show_image_direct_url
true
# restart glance-api service if you change this property
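To flip it on, something like this should work (openstack-config --set writes the ini key; the glance-api service name assumes Ubuntu/Debian packaging):
openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url true
service glance-api restart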
Image details
root@csgsnm001os1:/var/tmp/HEAT# glance --os-image-api-version 2 image-show 515288e6-4448-4a70-aec5-e9766613365e
+------------------+--------------------------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------------------------+
| checksum | 981dbbba3b2816ab8afdd748c87df1e8 |
| container_format | bare |
| created_at | 2016-12-20T17:24:12Z |
| description | |
| direct_url | file:///var/lib/glance/images/515288e6-4448-4a70-aec5-e9766613365e |
| disk_format | raw |
| id | 515288e6-4448-4a70-aec5-e9766613365e |
| min_disk | 0 |
| min_ram | 0 |
| name | ubuntu-kvm.img |
| owner | ad20031984404f54be0241a64ac3168d |
| protected | False |
| size | 1027604480 |
| status | active |
| tags | [] |
| updated_at | 2016-12-20T17:25:49Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------------------------------------+
with Ceph:
root@gngbnm037d:~# glance image-show a59e3e1d-2db5-429b-9ba1-0be5343e759a
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2017-03-21T06:59:53Z |
| direct_url | rbd://baac22f7-0041-4476-b86b-1a87f9e8ac25/images/a59e3e1d-2db5-429b- |
| | 9ba1-0be5343e759a/snap |
| disk_format | raw |
| id | a59e3e1d-2db5-429b-9ba1-0be5343e759a |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-image |
| owner | 6a49908217b24babb4322f4e5d83d273 |
| protected | False |
| size | 13287936 |
| status | active |
| tags | [] |
| updated_at | 2017-03-21T06:59:58Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
Create an image from a Ceph RBD object
see https://www.sebastien-han.fr/blog/2014/11/11/openstack-glance-import-images-and-convert-them-directly-in-ceph/
$ glance image-create --id 4f460d8c-2af3-4041-a28d-12c3631a305f --name CirrosImport --store rbd --disk-format raw --container-format bare --location rbd://$(sudo ceph fsid)/imajeez/4f460d8c-2af3-4041-a28d-12c3631a305f/snap
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2014-11-10T17:00:02 |
| deleted | False |
| deleted_at | None |
| disk_format | raw |
| id | 4f460d8c-2af3-4041-a28d-12c3631a305f |
| is_public | False |
| min_disk | 0 |
| min_ram | 0 |
| name | CirrosImport |
| owner | 2f314f86ca9048ac828baedb5e8e4e2a |
| protected | False |
| size | 41126400 |
| status | active |
| updated_at | 2014-11-10T17:00:02 |
| virtual_size | None |
+------------------+--------------------------------------+
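The rbd-side prep behind that import, per the blog above (pool imajeez and the image id come from the example; the source file name is just illustrative):
image_id=4f460d8c-2af3-4041-a28d-12c3631a305f   # or generate one with uuidgen
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw   # Glance locations on RBD want raw
rbd -p imajeez import cirros.raw ${image_id}
rbd -p imajeez snap create ${image_id}@snap
rbd -p imajeez snap protect ${image_id}@snap
# then pass rbd://$(sudo ceph fsid)/imajeez/${image_id}/snap as the --location above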
Cinder
Cinder list
root@r11-ru1-s1:~# cinder list
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| ID                                   | Status    | Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| e1908789-fe9d-427c-9bb4-0f95269e0b26 | available | myvol | 12   | -           | false    |             |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
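Create a volume like the one above (rough example; --name needs the v2 API, older clients take --display-name; the type name is from the type-list further down):
cinder create --name myvol 12
cinder create --name myvol2 --volume-type ocs-block-disk 12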
Cinder show
root@r11-ru1-s1:~# cinder show myvol
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-03-24T14:48:01.000000 |
| description | None |
| encrypted | False |
| id | e1908789-fe9d-427c-9bb4-0f95269e0b26 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | myvol |
| os-vol-host-attr:host | r11-ru1-s1@rbd-disk#RBD |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1ec8359918374641ae6b2c20bf675988 |
| replication_status | disabled |
| size | 12 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2017-03-24T14:48:03.000000 |
| user_id | aa33e4b0a8014006bbb2c570dbe72e79 |
| volume_type | None |
+--------------------------------+--------------------------------------+
Purge Old
cinder-manage db purge [<number of days>]
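e.g. purge soft-deleted rows older than 30 days (run on the controller node where cinder.conf lives; 30 is just an example):
cinder-manage db purge 30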
Cinder types
root@gngbnm037d:~# cinder type-list
+--------------------------------------+-----------------------------+-------------+-----------+
| ID                                   | Name                        | Description | Is_Public |
+--------------------------------------+-----------------------------+-------------+-----------+
| 6f85d47d-1195-4b32-9706-885e746fff66 | ocs-block-disk              | -           | True      |
| 6fbe65b6-425c-4bda-9345-02e187cf6a36 | ocs-block-hdd_Pool_a-disk   | -           | True      |
| a0e645bd-9a47-480f-927e-fea2a16ec612 | ocs-block-hdd_Pool_b-disk   | -           | True      |
| a990f9f7-a162-4199-b777-acc1b8b10700 | ocs-block-hdd_Pool_1tb-disk | -           | True      |
| ce607ec9-5c0d-41f0-8651-b5439b7c9a0e | ocs-block-hdd_Pool_c-disk   | -           | True      |
| cebb6b8c-5375-4ddb-a6ed-fbc9e9835246 | ocs-block-hdd_Pool_d-disk   | -           | True      |
| f008f24a-c47e-4877-8879-3327e3c5e690 | nimble02                    | -           | True      |
+--------------------------------------+-----------------------------+-------------+-----------+
Cinder type metadata
root@gngbnm037d:~# cinder extra-specs-list
+--------------------------------------+-----------------------------+-------------------------------------------------+
| ID                                   | Name                        | extra_specs                                     |
+--------------------------------------+-----------------------------+-------------------------------------------------+
| 6f85d47d-1195-4b32-9706-885e746fff66 | ocs-block-disk              | {'volume_backend_name': 'RBD'}                  |
| 6fbe65b6-425c-4bda-9345-02e187cf6a36 | ocs-block-hdd_Pool_a-disk   | {'volume_backend_name': 'VOLUMES_HDD_POOL_A'}   |
| a0e645bd-9a47-480f-927e-fea2a16ec612 | ocs-block-hdd_Pool_b-disk   | {'volume_backend_name': 'VOLUMES_HDD_POOL_B'}   |
| a990f9f7-a162-4199-b777-acc1b8b10700 | ocs-block-hdd_Pool_1tb-disk | {'volume_backend_name': 'VOLUMES_HDD_POOL_1TB'} |
| ce607ec9-5c0d-41f0-8651-b5439b7c9a0e | ocs-block-hdd_Pool_c-disk   | {'volume_backend_name': 'VOLUMES_HDD_POOL_C'}   |
| cebb6b8c-5375-4ddb-a6ed-fbc9e9835246 | ocs-block-hdd_Pool_d-disk   | {'volume_backend_name': 'VOLUMES_HDD_POOL_D'}   |
| f008f24a-c47e-4877-8879-3327e3c5e690 | nimble02                    | {'volume_backend_name': 'nimble02'}             |
+--------------------------------------+-----------------------------+-------------------------------------------------+
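To add a type like these, roughly (my-new-type is a placeholder; volume_backend_name must match a backend stanza in cinder.conf):
cinder type-create my-new-type
cinder type-key my-new-type set volume_backend_name=RBD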
Ceph
Get status
ceph health detail
ceph -s
ceph -w
PG Calculations: http://ceph.com/pgcalc/
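Rough arithmetic behind the calculator (illustrative numbers): aim for ~100 PGs per OSD, so with 36 OSDs, replica size 2 and data split evenly over 3 pools, each pool gets about 100 * 36 / 2 / 3 = 600, rounded to a power of two, i.e. pg_num 512 (or 1024 if the pool will grow).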
Get Pools
root@r11-ru1-s1:~# ceph osd pool ls detail
pool 1 'internal' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 5 flags hashpspool stripe_width 0
pool 2 'volumes' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 576 pgp_num 576 last_change 54 flags hashpspool stripe_width 0
pool 3 'images' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 576 pgp_num 576 last_change 56 flags hashpspool stripe_width 0
or
root@gngbnm039d:~# ceph osd pool ls detail
pool 1 'volumes' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 18048 pgp_num 18048 last_change 4927 flags hashpspool stripe_width 0
pool 2 'images' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 18048 pgp_num 18048 last_change 4957 flags hashpspool stripe_width 0
removed_snaps [1~3]
pool 3 'internal' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 4919 flags hashpspool stripe_width 0
pool 4 'volumes_hdd_Pool_a' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 8192 pgp_num 8192 last_change 4971 flags hashpspool stripe_width 0
removed_snaps [1~3]
pool 5 'volumes_hdd_Pool_b' replicated size 2 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 8192 pgp_num 8192 last_change 4976 flags hashpspool stripe_width 0
pool 6 'volumes_hdd_Pool_c' replicated size 2 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 8192 pgp_num 8192 last_change 4979 flags hashpspool stripe_width 0
pool 7 'volumes_hdd_Pool_d' replicated size 2 min_size 2 crush_ruleset 4 object_hash rjenkins pg_num 8192 pgp_num 8192 last_change 4981 flags hashpspool stripe_width 0
pool 8 'volumes_hdd_Pool_1tb' replicated size 2 min_size 2 crush_ruleset 5 object_hash rjenkins pg_num 720 pgp_num 720 last_change 4922 flags hashpspool stripe_width 0
Get Pool Use:
root@r11-ru1-s1:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    16759G     16758G         746M             0
POOLS:
    NAME         ID     USED     %USED     MAX AVAIL     OBJECTS
    internal     1       476         0        16758G          15
    volumes      2         0         0         8379G           0
    images       3         0         0         8379G           0
root@r11-ru4-s2:~# ceph osd lspools
1 internal,2 volumes,3 images,
Create Pools http://docs.ceph.com/docs/master/rados/operations/pools/
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
[crush-ruleset-name] [expected-num-objects]
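e.g. (pool name and PG counts are placeholders; take pg_num from the calculator above):
ceph osd pool create volumes_ssd 512 512 replicated
ceph osd pool set volumes_ssd size 2
ceph osd pool set volumes_ssd min_size 1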
Ceph OSD tree
root@r11-ru1-s1:~# ceph osd tree
ID  WEIGHT   TYPE NAME                     UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -8 16.36798 root cdefault
 -9  5.45599     chassis chassis-default-0
 -4  2.72800         host r11-ru2-s2
  0  0.90900             osd.0                  up  1.00000          1.00000
  7  0.90900             osd.7                  up  1.00000          1.00000
 15  0.90900             osd.15                 up  1.00000          1.00000
 -7  2.72800         host r11-ru2-s1
  3  0.90900             osd.3                  up  1.00000          1.00000
 11  0.90900             osd.11                 up  1.00000          1.00000
 17  0.90900             osd.17                 up  1.00000          1.00000
-10  2.72800     chassis chassis-default-1
Find OSD location
root@r11-ru1-s1:~# ceph osd find 2
{
    "osd": 2,
    "ip": "172.16.80.107:6800\/19854",
    "crush_location": {
        "host": "r11-ru4-s1",
        "root": "default"
    }
}
Check OSD behavior
ceph osd perf | sort -n -k 2
Change a CRUSH map
root@gngbnm039d:~# ceph osd getcrushmap -o /tmp/mycrush
got crush map from osdmap epoch 5037
root@gngbnm039d:~# crushtool -d /tmp/mycrush -o /tmp/mycrush.txt
root@gngbnm039d:~# vi /tmp/mycrush.txt
root@gngbnm039d:~# crushtool -c /tmp/mycrush.txt -o /tmp/mycrush_new
root@gngbnm039d:~# ceph osd setcrushmap -i /tmp/mycrush_new
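A typical edit inside mycrush.txt is adding a rule like the sketch below (the Pool_x bucket and ruleset number are made up; this is pre-Luminous syntax, matching the crush_ruleset fields shown in the pool listing above):
rule volumes_hdd_Pool_x {
        ruleset 6
        type replicated
        min_size 1
        max_size 10
        step take Pool_x
        step chooseleaf firstn 0 type host
        step emit
}
# after setcrushmap, point a pool at the new rule:
ceph osd pool set volumes_hdd_Pool_x crush_ruleset 6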
Find ceph disks
root@r11-ru4-s2:~# ceph-disk list
/dev/sda :
/dev/sda1 ceph journal, for /dev/sdd1
/dev/sda2 ceph journal, for /dev/sde1
/dev/sda3 ceph journal, for /dev/sdf1
/dev/sdb :
/dev/sdb2 other, 0x5
/dev/sdb5 other, LVM2_member
/dev/sdb1 other, ext2, mounted on /boot
/dev/sdc :
/dev/sdc1 other, LVM2_member
/dev/sdd :
/dev/sdd1 ceph data, active, cluster ceph, osd.5, journal /dev/sda1
/dev/sde :
/dev/sde1 ceph data, active, cluster ceph, osd.10, journal /dev/sda2
/dev/sdf :
/dev/sdf1 ceph data, active, cluster ceph, osd.16, journal /dev/sda3
Remove OSD http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
# on ceph master node
osd=6
host=gngsvc002a
ceph osd reweight ${osd} 0
# wait until rebalance is complete
ceph osd out ${osd}
ssh ${host} service ceph-osd stop id=${osd}
# clean up journal? see ceph-disk list
ssh ${host} umount /var/lib/ceph/osd/ceph-${osd}
ceph osd crush remove osd.${osd}
ceph auth del osd.${osd}
ceph osd rm ${osd}
Add OSD http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
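e.g. with the ceph-deploy of that era, host:data-disk:journal style (host and devices are placeholders; newer ceph-deploy releases use --data/--journal flags instead):
ceph-deploy --overwrite-conf osd create gngsvc002a:/dev/sdg:/dev/sda4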
Remove a mon (e.g. wrong IP) http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mon/
ceph-deploy --overwrite-conf mon destroy myhost
Add a mon
ceph-deploy --overwrite-conf mon create myhost
Compact a mon
id=gngsvc010a ; sudo ceph tell mon.${id} compact
Broken pgs
(good article: http://www.sebastien-han.fr/blog/2015/04/27/ceph-manually-repair-object/)
usually enough to do
ceph health detail
and then
ceph pg repair <badpg>
for p in $(ceph health detail | awk '/^pg/{print $2}'); do ceph pg repair $p ; done
Push ceph configs
ceph-deploy --overwrite-conf config push $(ceph osd tree | awk '/host/{print $4}'|sort | uniq)
Check configs
ceph --admin-daemon /var/run/ceph/ceph-osd.268.asok config show
ceph --admin-daemon /var/run/ceph/ceph-mon.gngsvm011d.asok config show
ceph daemon osd.173 config show
ceph daemon mon.r11-ru1-s1 config show
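To pull just one value instead of the whole dump (the key name is only an example):
ceph daemon osd.173 config get osd_max_backfills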