Notes on setting up an AIO running ceph-client roles from os-ansible-deployment
Note: this is against an existing ceph cluster running on the 172.16.200.0/24 network:

```
172.16.200.3 mon0
172.16.200.4 mon1
172.16.200.5 mon2
172.16.200.6 osd0
172.16.200.7 osd1
```
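Before running any plays it's worth confirming the AIO host can actually reach those monitors. A minimal probe sketch (my own helper, assuming the mons listen on ceph's default monitor port 6789):

```python
import socket

def mon_reachable(host: str, port: int = 6789, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it against each entry in the monitor list above before anything else; a `False` here means the rest of the deployment will fail in less obvious ways.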
- You will need to install `libvirt-bin` so that the `virsh` command is available.
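A quick pre-flight sketch to confirm `virsh` is on the PATH before the ceph_client role needs it (a hypothetical helper, nothing beyond the Python standard library):

```python
import shutil

def have_command(name: str) -> bool:
    """True if `name` resolves to an executable on PATH."""
    return shutil.which(name) is not None

# have_command("virsh") should return True once libvirt-bin is installed
```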
The storage host should be configured with:

```yaml
container_vars:
  cinder_backends:
    limit_container_types: cinder_volume
    volumes_hdd:
      volume_driver: cinder.volume.drivers.rbd.RBDDriver
      rbd_pool: volumes_hdd
      rbd_ceph_conf: /etc/ceph/ceph.conf
      rbd_flatten_volume_from_snapshot: 'false'
      rbd_max_clone_depth: 5
      rbd_store_chunk_size: 4
      rados_connect_timeout: -1
      glance_api_version: 2
      volume_backend_name: volumes_hdd
      rbd_user: "{{ cinder_ceph_client }}"
      rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
```
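Getting the indentation or a key wrong in that block tends to fail in non-obvious ways much later, so a quick sanity check on the backend definition can save time. A rough sketch (the required-key list is my own guess at what the RBD driver minimally needs, not an authoritative list):

```python
REQUIRED_RBD_KEYS = {
    "volume_driver",
    "rbd_pool",
    "rbd_ceph_conf",
    "rbd_user",
    "rbd_secret_uuid",
    "volume_backend_name",
}

def missing_rbd_keys(backend: dict) -> set:
    """Return the required RBD-driver keys absent from a backend definition."""
    return REQUIRED_RBD_KEYS - set(backend)

# Example: a backend missing its rbd_user / rbd_secret_uuid entries
backend = {
    "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
    "rbd_pool": "volumes_hdd",
    "rbd_ceph_conf": "/etc/ceph/ceph.conf",
    "volume_backend_name": "volumes_hdd",
}
# missing_rbd_keys(backend) -> {"rbd_user", "rbd_secret_uuid"}
```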
The following lines should be uncommented and configured appropriately.

Under `## Glance Options`:

```yaml
glance_default_store: rbd
glance_ceph_client: glance
glance_rbd_store_pool: images
glance_rbd_store_chunk_size: 8
```
Under `## Nova`:

```yaml
nova_libvirt_images_rbd_pool: vms
```
Under `## Cinder`:

```yaml
cinder_ceph_client: cinder
```
Under `## Ceph`:

```yaml
ceph_apt_repo_url_region: "www" # or "eu" for Netherlands based mirror
ceph_stable_release: hammer
ceph_fsid: d4ab416b-490c-4ab9-87e8-2364b524e0f2
ceph_conf:
  global:
    fsid: '{{ ceph_fsid }}'
    mon_initial_members: 'mon1.example.local,mon2.example.local,mon3.example.local'
    mon_host: '10.16.5.40,10.16.5.41,10.16.5.42'
    auth_cluster_required: cephx
    auth_service_required: cephx
    auth_client_required: cephx
```
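The `ceph_fsid` above is from my cluster; it has to match the fsid of the existing cluster you're pointing at (running `ceph fsid` on a mon prints it). If you're standing up a fresh cluster instead, an fsid is just a random UUID:

```python
import uuid

# A ceph fsid is simply a UUID; uuid4() produces a random one.
ceph_fsid = str(uuid.uuid4())
```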
The variables `ceph_fsid`, `mon_initial_members` and `mon_host` are specific to your environment.
Uncomment the following line:

```yaml
cinder_ceph_client_uuid:
```

and add the following line (this is a bug and needs to be fixed):

```yaml
nova_ceph_client_uuid:
```
The `ceph_mons` list will need to be changed from:

```yaml
ceph_mons: []
```

to include the IP addresses of your ceph monitor hosts:

```yaml
ceph_mons:
  - 172.16.200.3
  - 172.16.200.4
  - 172.16.200.5
```
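Note that the `mon_host` value in `ceph_conf` is just these same monitor addresses comma-joined, so the two settings have to agree. A sketch of deriving one from the other (my own helper, not something the role provides):

```python
ceph_mons = ["172.16.200.3", "172.16.200.4", "172.16.200.5"]

def mon_host_line(mons: list) -> str:
    """Build the ceph.conf mon_host value from a list of monitor IPs."""
    return ",".join(mons)

# mon_host_line(ceph_mons) -> '172.16.200.3,172.16.200.4,172.16.200.5'
```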
I've been running the `setup-*.yml` plays individually in the following order:

```shell
openstack-ansible setup-hosts.yml
openstack-ansible haproxy-install.yml
openstack-ansible setup-infrastructure.yml
```
For some reason the `ceilometer` user is not being created in keystone, so I run `openstack user create ceilometer` from the utility container.
I had to manually change `/tmp/nova-secret.xml` from:

```xml
<!-- Ansible managed: /opt/os-ansible-deployment/playbooks/roles/ceph_client/templates/secret.xml.j2 modified on 2015-08-03 22:08:34 by root on mon0 -->
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
```

to:

```xml
<!-- Ansible managed: /opt/os-ansible-deployment/playbooks/roles/ceph_client/templates/secret.xml.j2 modified on 2015-08-03 22:08:34 by root on mon0 -->
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
```

and then run `virsh secret-define --file /tmp/nova-secret.xml` to bypass the `nova_ceph_client_uuid` bug described below.
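That manual edit is mechanical, so it can be scripted. A sketch using the standard library's ElementTree to swap the secret name (a hypothetical helper; note ElementTree drops the Ansible-managed comment on round-trip, which libvirt doesn't care about):

```python
import xml.etree.ElementTree as ET

def rename_secret(xml_text: str, new_name: str = "client.nova secret") -> str:
    """Replace the <name> inside <usage> of a libvirt ceph secret definition."""
    root = ET.fromstring(xml_text)
    root.find("./usage/name").text = new_name
    return ET.tostring(root, encoding="unicode")

cinder_secret = """<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>"""

nova_secret = rename_secret(cinder_secret)
# nova_secret keeps the uuid but now names 'client.nova secret'
```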
Finally I run `openstack-ansible os-tempest-install.yml` to install tempest and test. (We are currently not passing tempest tests.)
- The `nova_ceph_client_uuid` is defined in multiple places and is hardcoded twice. This needs to be ironed out and defined properly. For some reason `svg` was using the same secret for both `cinder` and `nova`.
- We need to install `libvirt-bin` on the deployment host.
- I don't think we should be defining `ceph_mons` in `playbooks/roles/ceph_client/defaults/main.yml`; this should probably be pulled from `openstack_user_config.yml`.
- Why do I have to create the `ceilometer` user in keystone? Shouldn't this be done in the plays?
The default is to use the cinder value:

```yaml
nova_ceph_client_uuid: '{{ cinder_ceph_client_uuid | default() }}'
```

as nova needs access to both the cinder and nova ceph backends.
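For reference, that `default()` filter means nova falls back to the cinder secret only when `nova_ceph_client_uuid` isn't set itself. Roughly, in plain Python terms (my own approximation of the Jinja behavior, not code from the role):

```python
def resolve_nova_uuid(nova_uuid=None, cinder_uuid=None):
    """Approximate '{{ cinder_ceph_client_uuid | default() }}' being used
    as the default for nova_ceph_client_uuid."""
    if nova_uuid is not None:
        return nova_uuid  # explicitly set in user_variables wins
    # Jinja's default() with no argument yields an empty string
    return cinder_uuid if cinder_uuid is not None else ""
```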