@cloudnull
Last active September 24, 2015 07:59
existing-inventory-redeployment.rst

Redeploying using the existing inventory is entirely possible and will minimize the changes that need to be made on the F5.

Potential plan of action (these are just my mutterings and will likely need slight revision):

  1. Destroy all of the Juno containers throughout the whole environment.
ansible hosts -m shell -a 'for i in $(lxc-ls); do lxc-destroy -fn $i; done'
  2. Clean up the persistent data for all containers on all hosts.
ansible hosts -m shell -a 'rm -rf /openstack'
  3. Clone OSAD into place if it is not already there.
git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible
  4. If OSAD is already on disk, update it.
cd /opt/os-ansible-deployment
git fetch --all
  5. Check out the latest Kilo release.
git checkout 11.2.1 # latest upstream stable kilo
  6. Make a few edits to the upgrade script (details below).

At this point the upgrade script will need to be modified so that it does not stop when it fails in the following places:

The edit to these lines is simple: just add || true to the end of the commands. The change will force the exit status to be 0, which will allow the tasks to continue.
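For illustration only (the command shown here is hypothetical, not a line taken from the actual script), the edit pattern looks like this:

# before: a non-zero exit status here stops the upgrade script
some-upgrade-command --that-may-fail
# after: force a zero exit status so the remaining tasks continue
some-upgrade-command --that-may-fail || true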

The upgrade flag will need to be removed from the following line:

https://github.com/stackforge/os-ansible-deployment/blob/kilo/scripts/run-upgrade.sh#L659

# Modify 
RUN_TASKS+=("-e 'rabbitmq_upgrade=true' setup-infrastructure.yml")
# to this
RUN_TASKS+=("setup-infrastructure.yml")

Here is a gist of the Kilo upgrade script with the edits mentioned above.

  7. Make the variable changes for the service and admin users within the EXISTING /etc/rpc_deployment/user_variables.yml file.
# Service users
ceilometer_service_user_name: gap1-ceilometer
cinder_service_user_name: gap1-cinder
glance_service_user_name: gap1-glance
heat_service_user_name: gap1-heat
keystone_service_user_name: gap1-keystone
neutron_service_user_name: gap1-neutron
nova_service_user_name: gap1-nova
swift_service_user_name: gap1-swift

# Admin users
keystone_admin_user_name: gap1-keystone-admin
heat_stack_admin_user_name: gap1-heat-admin
  8. The galera monitoring user within the F5, in Juno, is set to "haproxy". To keep that functional in Kilo, set the following variable in the EXISTING /etc/rpc_deployment/user_variables.yml file.
galera_monitoring_user: haproxy
  9. You may need to update the existing LDAP config in the /etc/rpc_deployment/user_variables.yml file; the usernames there may need updating as well, but I have no idea what they currently have set.
  10. Execute the run-upgrade script.
./scripts/run-upgrade.sh
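After the upgrade finishes, a quick sanity check (just a sketch, assuming the admin credentials live at /root/openrc on the utility container and that the openstack CLI is installed there) is to confirm the renamed service and admin users exist in keystone:

# run from the utility container
source /root/openrc
openstack user list | grep gap1-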

Once the process is complete you'll have a functional cloud running Kilo, built using the old inventory, which will minimize the required changes on the F5. That said, this will not eliminate the need to update the F5. As Chris touched on, the repo infrastructure will need to be added to the F5 for the speed and HA capabilities it provides. However, that action can be done at a later date. The upgrade script sets the repo URL to a single container address instead of using the "internal_lb_vip_address". Once a change management request can be put in to update the F5 we can remove the user variable file "/etc/openstack_deploy/user_deleteme_post_upgrade_variables.yml" and simply rerun the pip_lock_down role on all hosts and containers.

Future task: update everything to use the repo VIP on the F5 once the change can be made.

rm /etc/openstack_deploy/user_deleteme_post_upgrade_variables.yml
cd /opt/os-ansible-deployment/playbooks
cat > /tmp/ensure_container_networking.yml <<EOF
- name: update repo bits
  hosts: "hosts:all_containers"
  gather_facts: false
  user: root
  roles:
    - pip_lock_down
EOF

openstack-ansible /tmp/ensure_container_networking.yml
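To confirm the rerun took effect, one rough check (assuming the pip_lock_down role writes its settings to /root/.pip/pip.conf inside the containers, which may differ by release) is to make sure the pip index now points at the internal LB VIP rather than a single repo container:

ansible hosts:all_containers -m shell -a 'cat /root/.pip/pip.conf'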
@mancdaz commented Sep 14, 2015

You'll need to do an lxc-stop before the lxc-destroy in the first step.
Also, you probably want to change that URL to the new openstack namespace.

@cloudnull (Author)

No need to "stop" and "start" the destroy command in step one is using the -fn flags which will forcibly destroy the containers.

@cloudnull (Author)

👍 I've updated the repo name for the change to big-tent.

@BjoernT commented Sep 15, 2015

@mancdaz I would have done that anyway :)

@BjoernT commented Sep 17, 2015

openrc_os_username: gap1-keystone-admin was missing.
heat_stack_domain_admin: gap1-heat-domain-admin was also missing.
