
@shannonmitchell
Last active July 23, 2019 07:37
Encrypting All Keystone SSL with openstack-ansible stable/queens

Encrypting Keystone internal/admin endpoints

Related Docs

Actions taken to set up the internal and admin endpoints for SSL

This was done on a small OSA AIO VM.

These are the changes that worked after many attempts. The keystone overrides only change the endpoint protocol; we also had to override the complete keystone haproxy configs to serve up the new https endpoints. Some services, such as glance and cinder, fail SSL cert verification once haproxy is added, so I then had to add a couple of overrides to make sure --insecure is used in the playbooks.

We also encountered a bug in the cinder qos playbooks that didn't add the CLI_OPTIONS including --insecure properly. This is reflected in the changes below.
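The CLI_OPTIONS variable is how the role's shell tasks are supposed to pass --insecure through to the client. A self-contained sketch of the expected behavior, with the openstack client mocked out so it runs anywhere (the mock is purely illustrative, not part of the role):

```shell
# Mock the openstack client so this sketch is runnable anywhere.
openstack() { echo "openstack called with: $*"; }

# With the insecure overrides set, --insecure is carried in CLI_OPTIONS;
# the variable must be expanded (unquoted) inside each task's shell snippet,
# which is exactly what the buggy cinder qos tasks failed to do.
CLI_OPTIONS="--insecure"
openstack ${CLI_OPTIONS} volume qos list
```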

# vi /etc/openstack_deploy/user_variables.yml
...

#################################
# Keystone Service Modifications
#################################

keystone_service_internaluri_proto: https
keystone_service_internaluri_insecure: True

keystone_service_adminuri_proto: https
keystone_service_adminuri_insecure: True

haproxy_extra_services:
  - service:
      haproxy_service_name: keystone_service
      haproxy_backend_nodes: "{{ groups['keystone_all'] | default([])  }}"
      haproxy_port: 5000
      haproxy_ssl: "{{ haproxy_ssl }}"
      haproxy_ssl_all_vips: True
      haproxy_balance_type: "http"
      haproxy_backend_options:
        - "httpchk HEAD / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
      haproxy_service_enabled: "{{ groups['keystone_all'] is defined and groups['keystone_all'] | length > 0 }}"
  - service:
      haproxy_service_name: keystone_admin
      haproxy_backend_nodes: "{{ groups['keystone_all'] | default([])  }}"
      haproxy_port: 35357
      haproxy_ssl: "{{ haproxy_ssl }}"
      haproxy_ssl_all_vips: True
      haproxy_balance_type: "http"
      haproxy_backend_options:
        - "httpchk HEAD / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
      haproxy_whitelist_networks: "{{ haproxy_keystone_admin_whitelist_networks }}"
      haproxy_service_enabled: "{{ groups['keystone_all'] is defined and groups['keystone_all'] | length > 0 }}"
# vi /etc/ansible/roles/os_cinder/tasks/cinder_qos.yml

...
    {{ cinder_bin }}/openstack ${CLI_OPTIONS} volume qos list --format value --column Name | grep -x {{ item.name }} || \
    {{ cinder_bin }}/openstack ${CLI_OPTIONS} volume qos create {{ item.name }} \
...
    if {{ cinder_bin }}/openstack ${CLI_OPTIONS} volume type show "{{ vtype }}"; then
      {{ cinder_bin }}/openstack ${CLI_OPTIONS} volume qos associate {{ item.name }} {{ vtype }}
...
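The `list | grep -x NAME || create` construct above is what keeps the task idempotent. A self-contained sketch of the pattern, with a plain string standing in for the `openstack volume qos list` output:

```shell
# Existing QoS names, standing in for `openstack volume qos list` output.
qos_list="high-iops
low-iops"

# grep -x matches whole lines only, so a partial name like "iops" cannot
# false-match; the || branch (the create) fires only when the name is absent.
name="bronze"
printf '%s\n' "$qos_list" | grep -qx "$name" || echo "would create $name"
```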


# cd /opt/openstack-ansible/playbooks/

# openstack-ansible haproxy-install.yml 

# openstack-ansible setup-openstack.yml

It made it all the way to the tempest playbooks and then bombed out on "TASK [os_tempest : Execute tempest tests]". We need to look into this.

After the modifications

  • Keystone Endpoints
root@aio1:/opt/openstack-ansible/playbooks# lxc-attach -n aio1_utility_container-61a0a38a
root@aio1-utility-container-61a0a38a:/# . /root/openrc 
root@aio1-utility-container-61a0a38a:/# openstack catalog show keystone
+-----------+-----------------------------------------+
| Field     | Value                                   |
+-----------+-----------------------------------------+
| endpoints | RegionOne                               |
|           |   admin: https://172.29.236.100:35357   |
|           | RegionOne                               |
|           |   internal: https://172.29.236.100:5000 |
|           | RegionOne                               |
|           |   public: https://172.20.41.29:5000     |
|           |                                         |
| id        | 9869ca7147114b338330ce666d9c6a42        |
| name      | keystone                                |
| type      | identity                                |
+-----------+-----------------------------------------+
root@aio1-utility-container-61a0a38a:/# exit
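A quick way to double-check that nothing is still advertising plain http; the endpoint list from the catalog output above is inlined here so the check is self-contained (on a live deploy you would feed it `openstack endpoint list` instead):

```shell
# Endpoints copied from the `openstack catalog show keystone` output above.
endpoints='admin: https://172.29.236.100:35357
internal: https://172.29.236.100:5000
public: https://172.20.41.29:5000'

# Count endpoints still using plain http:// (0 means everything is https).
echo "$endpoints" | awk '/http:\/\//{n++} END{print n+0}'
```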
  • Haproxy config for keystone
root@aio1:~# grep 'keystone_admin-front-1' /etc/haproxy/haproxy.cfg -A 70
frontend keystone_admin-front-1
    bind 172.20.41.29:35357 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    option httplog
    option forwardfor except 127.0.0.0/8
    option http-server-close
    acl white_list src 127.0.0.1/8 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
    tcp-request content accept if white_list
    tcp-request content reject
    reqadd X-Forwarded-Proto:\ https
    mode http
    default_backend keystone_admin-back

frontend keystone_admin-front-2
    bind 172.29.236.100:35357 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    option httplog
    option forwardfor except 127.0.0.0/8
    option http-server-close
    acl white_list src 127.0.0.1/8 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
    tcp-request content accept if white_list
    tcp-request content reject
    reqadd X-Forwarded-Proto:\ https
    mode http
    default_backend keystone_admin-back


backend keystone_admin-back
    mode http
    balance leastconn
    stick store-request src
    stick-table type ip size 256k expire 30m
    option forwardfor
    option httplog
    option httpchk HEAD / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck


    server aio1_keystone_container-624b9a65 172.29.238.157:35357 check port 35357 inter 12000 rise 1 fall 1

# Ansible managed

      
frontend keystone_service-front-1
    bind 172.20.41.29:5000 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    option httplog
    option forwardfor except 127.0.0.0/8
    option http-server-close
    reqadd X-Forwarded-Proto:\ https
    mode http
    default_backend keystone_service-back

frontend keystone_service-front-2
    bind 172.29.236.100:5000 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    option httplog
    option forwardfor except 127.0.0.0/8
    option http-server-close
    reqadd X-Forwarded-Proto:\ https
    mode http
    default_backend keystone_service-back


backend keystone_service-back
    mode http
    balance leastconn
    stick store-request src
    stick-table type ip size 256k expire 30m
    option forwardfor
    option httplog
    option httpchk HEAD / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck


    server aio1_keystone_container-624b9a65 172.29.238.157:5000 check port 5000 inter 12000 rise 1 fall 1
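The white_list ACL on the keystone_admin frontends above only admits loopback and RFC1918 sources; everything else hits `tcp-request content reject`. A self-contained shell approximation of those four ranges (glob patterns standing in for the CIDR match haproxy does):

```shell
# Approximate the ACL ranges 127.0.0.1/8, 192.168.0.0/16, 172.16.0.0/12
# and 10.0.0.0/8 with glob patterns (172.16.0.0/12 spans 172.16-172.31).
in_whitelist() {
  case "$1" in
    127.*|10.*|192.168.*)                  return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    *)                                     return 1 ;;
  esac
}

in_whitelist 172.29.236.100 && echo "172.29.236.100 accepted"
in_whitelist 8.8.8.8        || echo "8.8.8.8 rejected"
```

This is why the internal VIP (172.29.236.100) can reach the admin endpoint while arbitrary external sources cannot.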

Cinder Fix

Looks like the cinder fix is already in master as of 5 days ago:

Found the master review here:

I submitted two cherry-picks off of the master review:

The changes merged. They may not be in rpc-openstack queens yet; the SHAs in rpc-openstack should be updated automatically each week if no issues are found.

Tempest fix

The tempest playbooks will not update the workspace tempest.conf file on the utility container.
This leaves the non-SSL identity url/uri in place, causing the tempest tests to fail. I ended up having to blow away the venv in /openstack/venvs/tempest* and the cache file in /var/cache/tempest*.tgz. Re-running the playbook after that fixed everything and ran the tempest tests.
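The cleanup amounts to the two rm commands below. They are shown against a throwaway directory tree so the sketch is safe to run anywhere; on a real deploy the `root` prefix would simply be /, and the exact tempest version suffix will differ per environment:

```shell
# Build a throwaway tree mirroring the paths mentioned above.
root=$(mktemp -d)
mkdir -p "$root/openstack/venvs/tempest-17.1.8" "$root/var/cache"
touch "$root/var/cache/tempest-17.1.8.tgz"

# The actual cleanup: blow away the tempest venv and its cached tarball.
# Re-running the os_tempest playbook afterwards rebuilds both from scratch.
rm -rf "$root"/openstack/venvs/tempest*
rm -f  "$root"/var/cache/tempest*.tgz

ls "$root/openstack/venvs" "$root/var/cache"   # both listings now empty
rm -rf "$root"
```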

I found the following related bug that needs a fix.

https://bugs.launchpad.net/openstack-ansible/+bug/1763336

  • Temp workaround for queens (changed for rocky and above)
diff --git a/tasks/tempest_post_install.yml b/tasks/tempest_post_install.yml
index e7c0b55..373c604 100644
--- a/tasks/tempest_post_install.yml
+++ b/tasks/tempest_post_install.yml
@@ -53,6 +53,27 @@
     mode: "0644"
     config_overrides: "{{ tempest_tempest_conf_overrides }}"
     config_type: "ini"
+  register: copy_tempest_config
+
+- name: Move over workspace when config is updated
+  shell: |
+    if [ -d {{ tempest_venv_bin | dirname }}/workspace ]
+    then
+      . {{ tempest_venv_bin }}/activate
+      export CURDATE=$(date +"%d%^b%g_%H%M%S%Z")
+      tempest workspace rename --old-name workspace --new-name workspace_${CURDATE}
+      mv {{ tempest_venv_bin | dirname }}/workspace {{ tempest_venv_bin | dirname }}/workspace_${CURDATE}
+      tempest workspace move --name workspace_${CURDATE} --path {{ tempest_venv_bin | dirname }}/workspace_${CURDATE}/
+      exit 3
+    fi
+  args:
+    executable: /bin/bash
+  register: tempest_move_workspace
+  changed_when: tempest_move_workspace.rc == 3
+  failed_when:
+    - tempest_move_workspace.rc != 0
+    - tempest_move_workspace.rc != 3
+  when: copy_tempest_config.changed
 
 - name: Initialise tempest workspace
   shell: |
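
The workaround leans on an exit-code convention: the script exits 3 only when it actually renamed a workspace, so changed_when can map rc==3 to "changed" while failed_when fires only on other non-zero codes. A self-contained sketch of that convention (function and paths here are illustrative, not from the role):

```shell
# Exit 3 only when there is real work to do (a workspace directory exists),
# mirroring the task above; falling through with 0 means "nothing to move".
move_if_present() {
  if [ -d "$1" ]; then
    echo "would move workspace at $1"
    exit 3
  fi
}

ws=$(mktemp -d)
( move_if_present "$ws" ); rc=$?
echo "rc=$rc"      # rc=3 -> Ansible reports "changed", not "failed"
rm -rf "$ws"
```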

I put in the following reviews:

Glance Issues

It looks like we can't use insecure with the glance swift driver due to the following bug:

Applying the fix manually gets us past the tempest tests.

  • Changes made on the glance api container:
root@aio1-glance-container-eb2fa647:/openstack/venvs/glance-17.1.8/lib/python2.7/site-packages/glance_store/_drivers/swift# grep -n 'ks_session.Ses' /openstack/venvs/glance-17.1.8/lib/python2.7/site-packages/glance_store/_drivers/swift/store.py -A2
1334:        sess = ks_session.Session(auth=password, verify=not self.insecure)
1335-        return ks_client.Client(session=sess)
1336-
--
1455:        trustor_sess = ks_session.Session(auth=trustor_auth,
1456-                                           verify=not self.insecure)
1457-        trustor_client = ks_client.Client(session=trustor_sess)
--
1472:        trustee_sess = ks_session.Session(auth=password,
1473-                                          verify=not self.insecure)
1474-        trustee_client = ks_client.Client(session=trustee_sess)
--
1499:        client_sess = ks_session.Session(auth=client_password,
1500-                                          verify=not self.insecure)
1501-        return ks_client.Client(session=client_sess)
  • Tempest output after
(tempest-17.1.8) root@aio1-utility-container-a6190e4e:/openstack/venvs/tempest-17.1.8/workspace# tempest run  --whitelist-file /openstack/venvs/tempest-17.1.8/workspace/etc/tempest_whitelist.txt 

======
Totals
======
Ran: 123 tests in 202.0000 sec.
 - Passed: 119
 - Skipped: 4
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 746.5488 sec.
...

I put in a backport to glance's stable branch, as the fix is already in master and rocky.

TODO:

  • Look into encrypting keystone behind haproxy.

I was working on upgrading queens to rocky on the keystone ssl configured environment. The playbooks make some auth accesses and fail against the keystone admin port due to ssl being enabled. We may need to dig into this after everything else is fixed.
