@lbragstad · Last active October 12, 2017 16:42
Keystone Credential Migration
adminrc:

export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_PROJECT_NAME=admin
export OS_DEFAULT_DOMAIN=default
export OS_AUTH_URL=http://localhost:35357/v3/
export OS_IDENTITY_API_VERSION=3
keystone-1.conf:

[DEFAULT]
admin_token=ADMIN
debug=True
verbose=True
# We don't really use two endpoints, as we're only deploying v3
public_endpoint=http://localhost:35357/
admin_endpoint=http://localhost:35357/
# truncate collection responses for performance
list_limit=20
strict_password_check=True
max_token_size = 255
fatal_deprecations=false
policy_file=policy.json
read_only_mode=true
[eventlet_server]
admin_port = 35357
[security_compliance]
#unique_last_password_count=3
#password_regex=0
#password_expires_days=1
#lockout_failure_attempts=3
#lockout_duration=60
minimum_password_age=1
[assignment]
driver=sql
[identity]
driver=sql
[auth]
methods=password,token
[cache]
enabled=true
backend=dogpile.cache.memcached
expiration_time=600
backend_argument=url:127.0.0.1:11211
[catalog]
driver=sql
[database]
connection=mysql://keystone:keystone@127.0.0.1/keystone
mysql_sql_mode = TRADITIONAL
[paste_deploy]
config_file=/etc/keystone/paste.ini
[token]
expiration=3600
provider = uuid
[trust]
enabled=true
keystone-2.conf:

[DEFAULT]
admin_token=ADMIN
debug=True
verbose=True
# We don't really use two endpoints, as we're only deploying v3
public_endpoint=http://localhost:35357/
admin_endpoint=http://localhost:35357/
# truncate collection responses for performance
list_limit=20
strict_password_check=True
max_token_size = 255
fatal_deprecations=false
policy_file=policy.json
read_only_mode=true
[eventlet_server]
admin_port = 35357
[security_compliance]
#unique_last_password_count=3
#password_regex=0
#password_expires_days=1
#lockout_failure_attempts=3
#lockout_duration=60
minimum_password_age=1
[assignment]
driver=sql
[identity]
driver=sql
[auth]
methods=password,token
[cache]
enabled=true
backend=dogpile.cache.memcached
expiration_time=600
backend_argument=url:127.0.0.1:11211
[catalog]
driver=sql
[database]
connection=mysql://keystone:keystone@104.130.2.207/keystone
mysql_sql_mode = TRADITIONAL
[paste_deploy]
config_file=/etc/keystone/paste.ini
[token]
expiration=3600
provider = uuid
[trust]
enabled=true

Migrating Credentials from Mitaka to Newton

This is an example of migrating credentials from Mitaka to Newton.

Pre-Newton, keystone stored credentials as plaintext. The Newton codebase provides a path forward to encrypt all pre-existing credentials and introduces code to encrypt and decrypt credentials upon request.

Setup

This setup will include deploying stable/mitaka, creating credentials, and finally migrating them using code written for Newton that is still up for review. The deployment will consist of two keystone application nodes and a shared database.

Get the Mitaka source (keystone-1,keystone-2)

Start by getting the latest Mitaka source code on both keystone-1 and keystone-2:

git clone -b stable/mitaka https://github.com/openstack/keystone keystone-mitaka

Install pip (keystone-1,keystone-2)

This walkthrough uses Ubuntu 16.04, which ships pip 8.1.1. We can upgrade to pip 8.1.2:

sudo apt install python-pip
pip install --upgrade pip

Install virtualenv (keystone-1,keystone-2)

We can encapsulate the environments using virtualenv, so let's prepare one for Mitaka:

pip install virtualenv
virtualenv mitaka

Install Mitaka (keystone-1,keystone-2)

sudo apt install libssl-dev  # needed to install cryptography
sudo apt install libmysqlclient-dev  # needed for MySQL-python
source mitaka/bin/activate
pip install -e keystone-mitaka/
pip install python-memcached osprofiler mysql-python python-openstackclient

Deploy configuration directory (keystone-1,keystone-2)

sudo mkdir /etc/keystone
sudo chown ubuntu:ubuntu /etc/keystone/
cp keystone-1.conf /etc/keystone/keystone.conf  ## this is for keystone-1 only
cp keystone-2.conf /etc/keystone/keystone.conf  ## this is for keystone-2 only
cp paste.ini /etc/keystone/paste.ini
cp policy.json /etc/keystone/policy.json

Install and configure MariaDB (keystone-1)

Install MariaDB and create a database for keystone:

sudo apt install mariadb-server mariadb-client

This is a workaround for bug 1575688. Start by replacing all instances of utf8mb4 with utf8. For me this was in the following files:

/etc/mysql/mariadb.conf.d/50-server.cnf:character-set-server  = utf8mb4
/etc/mysql/mariadb.conf.d/50-server.cnf:collation-server      = utf8mb4_general_ci
/etc/mysql/mariadb.conf.d/50-mysql-clients.cnf:default-character-set = utf8mb4
/etc/mysql/mariadb.conf.d/50-client.cnf:default-character-set = utf8mb4
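
The replacement can be scripted. A sketch, assuming the default Ubuntu 16.04 config layout listed above (adjust the paths for your distribution):

```shell
# Swap utf8mb4 for utf8 in each affected MariaDB config file.
# The file list is the Ubuntu 16.04 default; verify with grep first.
for f in /etc/mysql/mariadb.conf.d/50-server.cnf \
         /etc/mysql/mariadb.conf.d/50-mysql-clients.cnf \
         /etc/mysql/mariadb.conf.d/50-client.cnf; do
    if [ -f "$f" ]; then
        sudo sed -i 's/utf8mb4/utf8/g' "$f"
    fi
done
```

The same substitution also rewrites utf8mb4_general_ci to utf8_general_ci, so one expression covers every listed line.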

Add the following config as /etc/mysql/conf.d/mysqld_openstack.cnf:

[client]
default-character-set = utf8

[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

[mysql]
default-character-set = utf8

You might also be required to change or remove the [mysqld] bind-address configuration option in order for keystone-2 to be able to connect to the database on keystone-1.
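
One way to do that, sketched here; the file path is the Ubuntu 16.04 default and an assumption:

```shell
# Comment out bind-address so MariaDB accepts connections from
# keystone-2, not just from localhost.
CNF=/etc/mysql/mariadb.conf.d/50-server.cnf
if [ -f "$CNF" ]; then
    sudo sed -i 's/^bind-address/#bind-address/' "$CNF"
fi
```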

Restart MariaDB

sudo service mysql restart

Setup database (keystone-1)

We will set up the database on keystone-1 and have both keystone applications share it. The keystone-2.conf file has a different database connection string that points to the database on keystone-1.

Create a keystone database and user, similar to the following:

sudo mysql
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

Migrate the database using keystone-manage:

keystone-manage --config-dir /etc/keystone/ db_sync

Bootstrap keystone (keystone-1)

keystone-manage --config-dir /etc/keystone/ bootstrap \
    --bootstrap-password password \
    --bootstrap-admin-url http://localhost:35357/v3/

Run Mitaka (keystone-1,keystone-2)

keystone-wsgi-admin -p 35357

Install and setup python-openstackclient (keystone-1,keystone-2)

virtualenv osc
source osc/bin/activate
pip install python-openstackclient
source adminrc

Verify your keystone installation by getting a token:

$ openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-08-30 21:01:07.590395+00:00 |
| id         | 977c90ac90c94a74a9d43d3b86ec0ebe |
| project_id | 2e6d5d5ef8b943398920d316cf0096bd |
| user_id    | f73fc8b0d13e436da2314896c936e193 |
+------------+----------------------------------+

Create credentials

Now that we have a Mitaka deployment, let's use it to create some credentials. This can be done from either keystone-1 or keystone-2.

(osc) ubuntu@keystone-1:~$ openstack credential create admin my-secret-stuff
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | my-secret-stuff                  |
| id         | b3c132c5cb55437ea7839c2dc62913fe |
| project_id | None                             |
| type       | cert                             |
| user_id    | f73fc8b0d13e436da2314896c936e193 |
+------------+----------------------------------+
(osc) ubuntu@keystone-1:~$ openstack credential create admin my-really-secret-stuff
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | my-really-secret-stuff           |
| id         | 97ed6272f2644610b6b7d4d6e93a3531 |
| project_id | None                             |
| type       | cert                             |
| user_id    | f73fc8b0d13e436da2314896c936e193 |
+------------+----------------------------------+
(osc) ubuntu@keystone-1:~$ openstack credential create --type ec2 --project admin admin '{"access":"181920","secret":"ec2-data"}'
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| blob       | {"access":"181920","secret":"ec2-data"}                          |
| id         | 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510 |
| project_id | 2e6d5d5ef8b943398920d316cf0096bd                                 |
| type       | ec2                                                              |
| user_id    | f73fc8b0d13e436da2314896c936e193                                 |
+------------+------------------------------------------------------------------+

We can take it one step further and verify credentials exist in the backend and are stored as plaintext:

MariaDB [keystone]> select * from credential\G
*************************** 1. row ***************************
        id: 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510
   user_id: f73fc8b0d13e436da2314896c936e193
project_id: 2e6d5d5ef8b943398920d316cf0096bd
      blob: "{\"access\":\"181920\",\"secret\":\"ec2-data\"}"
      type: ec2
     extra: {}
*************************** 2. row ***************************
        id: 97ed6272f2644610b6b7d4d6e93a3531
   user_id: f73fc8b0d13e436da2314896c936e193
project_id: NULL
      blob: "my-really-secret-stuff"
      type: cert
     extra: {}
*************************** 3. row ***************************
        id: b3c132c5cb55437ea7839c2dc62913fe
   user_id: f73fc8b0d13e436da2314896c936e193
project_id: NULL
      blob: "my-secret-stuff"
      type: cert
     extra: {}
3 rows in set (0.00 sec)

We should be able to list credentials from keystone-2 without any issues.

(osc) ubuntu@keystone-2:~$ openstack credential list
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| ID                                                               | Type | User ID                          | Data                                    | Project ID                       |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510 | ec2  | f73fc8b0d13e436da2314896c936e193 | {"access":"181920","secret":"ec2-data"} | 2e6d5d5ef8b943398920d316cf0096bd |
| 97ed6272f2644610b6b7d4d6e93a3531                                 | cert | f73fc8b0d13e436da2314896c936e193 | my-really-secret-stuff                  | None                             |
| b3c132c5cb55437ea7839c2dc62913fe                                 | cert | f73fc8b0d13e436da2314896c936e193 | my-secret-stuff                         | None                             |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+

Upgrade

Here's the fun part...

We'll start by taking keystone-1 down and deploying Newton to that node. But first, let's make sure we have the Newton source code and a virtualenv prepared for it.

git clone -b stable/newton https://github.com/openstack/keystone keystone-newton

Let's create a new virtualenv for our Newton source:

virtualenv newton
source newton/bin/activate
pip install -e keystone-newton/
pip install python-memcached osprofiler mysql-python

Even though we have the Newton source installed and configured, we're only using it to upgrade and migrate the backend. We'll start the Newton node after we ensure the credentials are migrated.

keystone-manage credential_setup
keystone-manage db_sync --expand

At this point we should be able to verify the new schema from either application node by inspecting the database manually.

MariaDB [keystone]> select * from credential\G
*************************** 1. row ***************************
            id: 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: 2e6d5d5ef8b943398920d316cf0096bd
          blob: "{\"access\":\"181920\",\"secret\":\"ec2-data\"}"
          type: ec2
         extra: {}
      key_hash: NULL
encrypted_blob: NULL
*************************** 2. row ***************************
            id: 97ed6272f2644610b6b7d4d6e93a3531
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: NULL
          blob: "my-really-secret-stuff"
          type: cert
         extra: {}
      key_hash: NULL
encrypted_blob: NULL
*************************** 3. row ***************************
            id: b3c132c5cb55437ea7839c2dc62913fe
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: NULL
          blob: "my-secret-stuff"
          type: cert
         extra: {}
      key_hash: NULL
encrypted_blob: NULL
3 rows in set (0.00 sec)

The --expand step of the migration installs triggers on the database to copy data between the old and new columns and tables. During the migration, triggers are also used to make the credential API read-only: throughout the upgrade, users can access existing credentials but won't be able to create new ones. This is because a SQL trigger would otherwise have to reimplement the same Fernet encryption that keystone performs in code. Let's test that keystone-2 can still read credentials, but can't create them.

(osc) ubuntu@keystone-2:~$ openstack credential list
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| ID                                                               | Type | User ID                          | Data                                    | Project ID                       |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510 | ec2  | f73fc8b0d13e436da2314896c936e193 | {"access":"181920","secret":"ec2-data"} | 2e6d5d5ef8b943398920d316cf0096bd |
| 97ed6272f2644610b6b7d4d6e93a3531                                 | cert | f73fc8b0d13e436da2314896c936e193 | my-really-secret-stuff                  | None                             |
| b3c132c5cb55437ea7839c2dc62913fe                                 | cert | f73fc8b0d13e436da2314896c936e193 | my-secret-stuff                         | None                             |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+

(osc) ubuntu@keystone-2:~$ openstack credential create admin stuff
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-64558152-54b6-420c-891e-fd5bea59b41c)

So, that's promising. Let's just verify the log messages to make sure.

ERROR keystone.common.wsgi [req-64558152-54b6-420c-891e-fd5bea59b41c f73fc8b0d13e436da2314896c936e193 2e6d5d5ef8b943398920d316cf0096bd - default default] (_mysql_exceptions.OperationalError) (1644, 'Credential migration in progress. Cannot perform writes to credential table.') [SQL: u'INSERT INTO credential (id, user_id, project_id, `blob`, type, extra) VALUES (%s, %s, %s, %s, %s, %s)'] [parameters: ('ea962539b4b1433188f92b2ca120e353', 'f73fc8b0d13e436da2314896c936e193', None, '"stuff"', 'cert', '{}')]
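
The read-only enforcement is just a database trigger raising an error on write. A toy reproduction of the technique using sqlite3 (keystone's real triggers live in MariaDB and have different names; the message text here mimics the log above):

```shell
# Create a throwaway database with a trigger that rejects INSERTs,
# mirroring what db_sync --expand does to the credential table.
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE credential (id TEXT, blob TEXT);
CREATE TRIGGER credential_read_only BEFORE INSERT ON credential
BEGIN SELECT RAISE(ABORT, 'Credential migration in progress. Cannot perform writes to credential table.'); END;"
# Any write now fails with the trigger's message:
sqlite3 "$DB" "INSERT INTO credential VALUES ('x', 'y');" 2>&1 || true
rm -f "$DB"
```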

Now that the new columns are in place and we can guarantee no new entities will be introduced through the API, we're ready to do our migration. Remember that keystone-manage credential_setup must have been run first so that Fernet keys exist to encrypt each credential; we did this in a previous step, so let's go ahead with the data migration.

keystone-manage db_sync --migrate

If the migration completed successfully, you should see the key_hash and encrypted_blob columns populated for each row.

MariaDB [keystone]> select * from credential\G
*************************** 1. row ***************************
            id: 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: 2e6d5d5ef8b943398920d316cf0096bd
          blob: "{\"access\":\"181920\",\"secret\":\"ec2-data\"}"
          type: ec2
         extra: {}
      key_hash: 26e1cc6e7b8a264aceba45ffb1a0fa82ada35587
encrypted_blob: gAAAAABXxez_wm1EQW1Iiwh84sEIsTKjgszd1cY_NEFqmV7ykx_YfxHTTyILL_ob_Nhsr-HEG6irFNwiqHVUzEVegGuTDRueEb6fbPu3vFn-chEXM1dKvTU9pKzli9CToDcb9e6iSyuS6yI5oRJsDHQWCUNDqChptg==
*************************** 2. row ***************************
            id: 97ed6272f2644610b6b7d4d6e93a3531
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: NULL
          blob: "my-really-secret-stuff"
          type: cert
         extra: {}
      key_hash: 26e1cc6e7b8a264aceba45ffb1a0fa82ada35587
encrypted_blob: gAAAAABXxez_1PioquVDVhKhGtFaWgfgQvAUGvezizpUxJxv1I34mmldZqjBiANZ84tKJOuQK-LSsWdWzrkAVzyH7UXQkkBaYnpLsSLz8_yFF5WBRQLsrjA=
*************************** 3. row ***************************
            id: b3c132c5cb55437ea7839c2dc62913fe
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: NULL
          blob: "my-secret-stuff"
          type: cert
         extra: {}
      key_hash: 26e1cc6e7b8a264aceba45ffb1a0fa82ada35587
encrypted_blob: gAAAAABXxez_BxiZKVyieJMtkf4sMnswWy202rTAmnfCyySs9dSVmkysZ7gFuzqxkIg-UTRhwTOTJrzKotWfdOT1EjlMSb8eVm8FAsIQyKHB6caudFhx7sA=
3 rows in set (0.00 sec)

Now, the Newton code understands the new schema and migrated data. Let's run keystone-1 to make sure we can list credentials.

(osc) ubuntu@keystone-1:~$ openstack credential list
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| ID                                                               | Type | User ID                          | Data                                    | Project ID                       |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510 | ec2  | f73fc8b0d13e436da2314896c936e193 | {"access":"181920","secret":"ec2-data"} | 2e6d5d5ef8b943398920d316cf0096bd |
| 97ed6272f2644610b6b7d4d6e93a3531                                 | cert | f73fc8b0d13e436da2314896c936e193 | "my-really-secret-stuff"                | None                             |
| b3c132c5cb55437ea7839c2dc62913fe                                 | cert | f73fc8b0d13e436da2314896c936e193 | "my-secret-stuff"                       | None                             |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+

At this point keystone-1 is running Newton and keystone-2 is running Mitaka. Both can read the credentials from the backend and neither can write credentials to the backend. Let's double check to make sure we can't create new credentials using keystone-1.

(osc) ubuntu@keystone-1:~$ openstack credential create admin stuff
An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-21efb9e0-fd52-4d11-9661-329032456354)

Let's check the logs on keystone-1.

2016-08-30 20:40:12.965 30022 ERROR keystone.common.wsgi OperationalError: (_mysql_exceptions.OperationalError) (1644, 'Credential migration in progress. Cannot perform writes to credential table.') [SQL: u'INSERT INTO credential (id, user_id, project_id, encrypted_blob, type, key_hash, extra) VALUES (%s, %s, %s, %s, %s, %s, %s)'] [parameters: ('e979f168620b4b18840dcbcc3097b705', 'f73fc8b0d13e436da2314896c936e193', None, 'gAAAAABXxe8sjyymmagf4GEmnkHv-0cKYvJ3vI0dVU6Rq8xo_cjokRgfbSnFF8Rmt4DMR0b8dnZg7n-jl4mdXXWSIc87hVfUQw==', 'cert', '26e1cc6e7b8a264aceba45ffb1a0fa82ada35587', '{}')]

Next, let's upgrade keystone-2 to Newton using the same steps we used for keystone-1. Before bringing keystone-2 back online, we need to copy the Fernet key repository (/etc/keystone/credential-keys/) from keystone-1. This is critical: the credentials have already been encrypted with keystone-1's keys, and a freshly generated repository would be unable to decrypt them. Double-check that the repositories match.
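
One way to ship the repository over, sketched below. The TARGET hostname and passwordless ssh with write access to /etc/keystone on the far side are assumptions; use whatever file distribution mechanism you trust:

```shell
# Stream the key repository from keystone-1 to keystone-2, preserving
# file names and modes. TARGET is hypothetical; set it to your
# keystone-2 host before running.
TARGET=${TARGET:-}
if [ -n "$TARGET" ]; then
    tar -C /etc/keystone -cf - credential-keys \
        | ssh "$TARGET" 'tar -C /etc/keystone -xf -'
fi
```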

keystone-1 key repository:

ubuntu@keystone-1:~$ cat /etc/keystone/credential-keys/0
kVS9rElRD4FXUz7a1kbSepDiE1woEoAKUtuuvy17A78=ubuntu@keystone-1:~$ 
ubuntu@keystone-1:~$ cat /etc/keystone/credential-keys/1
ai0jE0BW7TCICy5qe7z4HGQ_m8gX2tYmm3-mPvRhosE=ubuntu@keystone-1:~$ 

keystone-2 key repository:

ubuntu@keystone-2:~$ cat /etc/keystone/credential-keys/0
kVS9rElRD4FXUz7a1kbSepDiE1woEoAKUtuuvy17A78=ubuntu@keystone-2:~$ 
ubuntu@keystone-2:~$ cat /etc/keystone/credential-keys/1
ai0jE0BW7TCICy5qe7z4HGQ_m8gX2tYmm3-mPvRhosE=ubuntu@keystone-2:~$
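
Rather than eyeballing key material, you can compare checksums on each node; identical output on both nodes means the repositories match (sha256sum is an assumption, any digest tool works):

```shell
# Print a digest per key file; run on both nodes and diff the output.
KEY_DIR=${KEY_DIR:-/etc/keystone/credential-keys}
if [ -d "$KEY_DIR" ]; then
    (cd "$KEY_DIR" && sha256sum *)
fi
```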

Once we stand up keystone-2, we should be able to read credentials:

(osc) ubuntu@keystone-2:~$ openstack credential list
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| ID                                                               | Type | User ID                          | Data                                    | Project ID                       |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+
| 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510 | ec2  | f73fc8b0d13e436da2314896c936e193 | {"access":"181920","secret":"ec2-data"} | 2e6d5d5ef8b943398920d316cf0096bd |
| 97ed6272f2644610b6b7d4d6e93a3531                                 | cert | f73fc8b0d13e436da2314896c936e193 | "my-really-secret-stuff"                | None                             |
| b3c132c5cb55437ea7839c2dc62913fe                                 | cert | f73fc8b0d13e436da2314896c936e193 | "my-secret-stuff"                       | None                             |
+------------------------------------------------------------------+------+----------------------------------+-----------------------------------------+----------------------------------+

Now that we don't have any Mitaka code running, we can safely --contract the database.

keystone-manage db_sync --contract

Let's verify that the plaintext secrets were removed from the database, leaving only the encrypted values introduced during the --migrate step.

MariaDB [keystone]> select * from credential\G
*************************** 1. row ***************************
            id: 3d3367228f9c7665266604462ec60029bcd83ad89614021a80b2eb879c572510
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: 2e6d5d5ef8b943398920d316cf0096bd
          type: ec2
         extra: {}
      key_hash: 26e1cc6e7b8a264aceba45ffb1a0fa82ada35587
encrypted_blob: gAAAAABXxez_wm1EQW1Iiwh84sEIsTKjgszd1cY_NEFqmV7ykx_YfxHTTyILL_ob_Nhsr-HEG6irFNwiqHVUzEVegGuTDRueEb6fbPu3vFn-chEXM1dKvTU9pKzli9CToDcb9e6iSyuS6yI5oRJsDHQWCUNDqChptg==
*************************** 2. row ***************************
            id: 97ed6272f2644610b6b7d4d6e93a3531
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: NULL
          type: cert
         extra: {}
      key_hash: 26e1cc6e7b8a264aceba45ffb1a0fa82ada35587
encrypted_blob: gAAAAABXxez_1PioquVDVhKhGtFaWgfgQvAUGvezizpUxJxv1I34mmldZqjBiANZ84tKJOuQK-LSsWdWzrkAVzyH7UXQkkBaYnpLsSLz8_yFF5WBRQLsrjA=
*************************** 3. row ***************************
            id: b3c132c5cb55437ea7839c2dc62913fe
       user_id: f73fc8b0d13e436da2314896c936e193
    project_id: NULL
          type: cert
         extra: {}
      key_hash: 26e1cc6e7b8a264aceba45ffb1a0fa82ada35587
encrypted_blob: gAAAAABXxez_BxiZKVyieJMtkf4sMnswWy202rTAmnfCyySs9dSVmkysZ7gFuzqxkIg-UTRhwTOTJrzKotWfdOT1EjlMSb8eVm8FAsIQyKHB6caudFhx7sA=
3 rows in set (0.00 sec)

Now that the triggers have been removed, we should be able to create credentials again.

(osc) ubuntu@keystone-1:~$ openstack credential create admin stuff
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | stuff                            |
| id         | 08ba8a80973e431a948a2045a5e67c6f |
| project_id | None                             |
| type       | cert                             |
| user_id    | f73fc8b0d13e436da2314896c936e193 |
+------------+----------------------------------+

The new credential is decryptable on keystone-2 as well, since both nodes share the same key repository.

(osc) ubuntu@keystone-2:~$ openstack credential show 08ba8a80973e431a948a2045a5e67c6f
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | stuff                            |
| id         | 08ba8a80973e431a948a2045a5e67c6f |
| project_id | None                             |
| type       | cert                             |
| user_id    | f73fc8b0d13e436da2314896c936e193 |
+------------+----------------------------------+

Congratulations! You successfully upgraded a Mitaka keystone deployment to Newton. This involved migrating plaintext data and changing keystone's schema to encrypt user secrets for enhanced security. All of this was achieved with no downtime and only a limited read-only window for the credential API.

Encrypted Credential Key Management

As stated earlier, credentials are encrypted using cryptography's Fernet implementation. This is the same mechanism used to encrypt token payloads when using the Fernet token provider (CONF [token] provider = fernet). A key, or a set of keys, is required in order to encrypt anything. Even though credential ciphertext is never exposed through the API the way token ciphertext is, rotating your encryption keys regularly is still recommended.
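
To see the mechanism in isolation, here's a minimal sketch using the cryptography package directly — this is the primitive keystone builds on, not a keystone API:

```shell
python3 - <<'EOF'
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # plays the role of a file in credential-keys/
f = Fernet(key)
token = f.encrypt(b'my-secret-stuff')  # this ciphertext is what lands in encrypted_blob
assert f.decrypt(token) == b'my-secret-stuff'
print(token.decode()[:6])              # -> gAAAAA, the prefix visible in encrypted_blob above
EOF
```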

There are utility commands built into keystone-manage to help us manage keys for credential encryption. We already used keystone-manage credential_setup earlier in the walkthrough to populate our key repository. The credential_rotate and credential_migrate commands are used to manage keys throughout the life of the deployment. Remember that this key repository is entirely separate from the one used to encrypt tokens. The two repositories must live in different locations and be managed independently.
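
The two repositories are configured independently. A sketch of the relevant options, with the Newton-era names and default paths (verify against your release's configuration reference):

```ini
[fernet_tokens]
# keys for encrypting token payloads
key_repository = /etc/keystone/fernet-keys/

[credential]
# keys for encrypting credential blobs -- must be a different directory
key_repository = /etc/keystone/credential-keys/
```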

After upgrading to Newton, all credentials will have been encrypted and rewritten to the backend. Let's create some credentials:

(osc) ubuntu@keystone-1:~$ openstack credential create admin secret-stuff
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | secret-stuff                     |
| id         | 74228e3767f64f8d9f511e6c28574628 |
| project_id | None                             |
| type       | cert                             |
| user_id    | 8e6361803faa46eaa03725aeab3a111b |
+------------+----------------------------------+
(osc) ubuntu@keystone-1:~$ openstack credential create admin more-secret-stuff
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | more-secret-stuff                |
| id         | d617225ebbbb48118aca9e677990ac7a |
| project_id | None                             |
| type       | cert                             |
| user_id    | 8e6361803faa46eaa03725aeab3a111b |
+------------+----------------------------------+

We can verify these in the backend by inspecting the database directly:

MariaDB [keystone]> select * from credential\G
*************************** 1. row ***************************
            id: 74228e3767f64f8d9f511e6c28574628
       user_id: 8e6361803faa46eaa03725aeab3a111b
    project_id: NULL
          type: cert
         extra: {}
      key_hash: aaa1aa670c64ef0854d20370b88bfde866b3a54d
encrypted_blob: gAAAAABXyZxlFARfu2sX3p9NICoCieSYifZ1ydgzRIPJOlWIx2gknY2bUbgVQ26n89lbbZVQbfqkO0Osoa49_YxKVwMgPV0tpw==
*************************** 2. row ***************************
            id: d617225ebbbb48118aca9e677990ac7a
       user_id: 8e6361803faa46eaa03725aeab3a111b
    project_id: NULL
          type: cert
         extra: {}
      key_hash: aaa1aa670c64ef0854d20370b88bfde866b3a54d
encrypted_blob: gAAAAABXyZyUWjtS9-LI4rLo6UVZ3vfdrXFY-5aEVij2SILc_SOOzipGcrL_xTrx_unOAjVG1T1oMDkJe7-6-XmOAn0sXaY4CfqAElm36bLavSIiANKK8L8=
2 rows in set (0.00 sec)

We can see that all credentials have been encrypted with the same key, since the key_hash is consistent across entities. Let's do a key rotation:

(newton) ubuntu@keystone-1:~$ keystone-manage credential_rotate
2016-09-02 15:38:53.621 8490 INFO keystone.common.fernet_utils [-] Starting key rotation with 2 key files: ['/etc/keystone/credential-keys/0', '/etc/keystone/credential-keys/1']
2016-09-02 15:38:53.622 8490 INFO keystone.common.fernet_utils [-] Current primary key is: 1
2016-09-02 15:38:53.622 8490 INFO keystone.common.fernet_utils [-] Next primary key will be: 2
2016-09-02 15:38:53.622 8490 INFO keystone.common.fernet_utils [-] Promoted key 0 to be the primary: 2
2016-09-02 15:38:53.622 8490 INFO keystone.common.fernet_utils [-] Created a new key: /etc/keystone/credential-keys/0

We can still decrypt all credentials stored in the backend since key 1 is still available as a secondary key. Any new credentials will be encrypted with key 2. Let's create a new credential:

(osc) ubuntu@keystone-1:~$ openstack credential create admin even-more-secret-stuff
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | even-more-secret-stuff           |
| id         | 71dfce4362354696ae2adbe61eff65e0 |
| project_id | None                             |
| type       | cert                             |
| user_id    | 8e6361803faa46eaa03725aeab3a111b |
+------------+----------------------------------+

We can verify the key_hash is different by inspecting the backend:

MariaDB [keystone]> select * from credential\G
*************************** 1. row ***************************
            id: 71dfce4362354696ae2adbe61eff65e0
       user_id: 8e6361803faa46eaa03725aeab3a111b
    project_id: NULL
          type: cert
         extra: {}
      key_hash: d4c58c38c5dc2664fa8d037b10ae092a5a594253
encrypted_blob: gAAAAABXyZ1wchEcJjyibhf_5SYmVK5sac_rbBKsgwvP-d6ywuWyJuRD-xxKSpO79p5N0hVXS_wBidGRY4nofWYPLNrSKgew9dQk3Xg8O_J6gGJAyG27Gs8=

At this point all credentials are readable with the existing key repository, but we won't be able to do another credential_rotate. Before rotating, keystone-manage credential_rotate checks each credential in the backend to make sure its key_hash matches the hash of the current primary key. If any credential's key_hash doesn't match, the rotation is aborted, because not all credentials have been encrypted with the latest primary key. Let's attempt a credential_rotate to confirm it fails:

(newton) ubuntu@keystone-1:~$ cat /etc/keystone/credential-keys/2  # current primary key
rfKoEmosLvKwk_6RYjtajn7AgRKTmNbhP0zINHFmZGw=
(newton) ubuntu@keystone-1:~$ keystone-manage credential_rotate
Unable to rotate credential keys because not all credentials are encrypted with the primary key. Please make sure all credentials have been encrypted with the primary key using `keystone-manage credential_migrate`.
(newton) ubuntu@keystone-1:~$ cat /etc/keystone/credential-keys/2   # still the primary key because the rotation aborted
rfKoEmosLvKwk_6RYjtajn7AgRKTmNbhP0zINHFmZGw=
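The pre-rotation check can be sketched in a few lines; `safe_to_rotate`, `credentials`, and the truncated ids are illustrative names, not keystone internals:

```python
# Hedged sketch of the credential_rotate pre-check: every credential's
# key_hash must equal the SHA1 of the current primary key, otherwise the
# rotation aborts. Function and variable names are illustrative.
import hashlib

def primary_key_hash(primary_key: str) -> str:
    # keystone stores the SHA1 hex digest of the key used for encryption
    return hashlib.sha1(primary_key.encode()).hexdigest()

def safe_to_rotate(credentials, primary_key: str) -> bool:
    expected = primary_key_hash(primary_key)
    return all(cred["key_hash"] == expected for cred in credentials)

primary = "rfKoEmosLvKwk_6RYjtajn7AgRKTmNbhP0zINHFmZGw="
creds = [
    {"id": "cred-a", "key_hash": primary_key_hash(primary)},
    # still encrypted with an older key, so its key_hash differs
    {"id": "cred-b", "key_hash": "aaa1aa670c64ef0854d20370b88bfde866b3a54d"},
]
print(safe_to_rotate(creds, primary))  # False: the rotation must abort
```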

The rotation aborted as expected with an error message telling us our next step. In order to safely rotate keys, we need to make sure all credentials are encrypted with the latest primary key. This ensures that we don't remove encryption keys that are still needed to decrypt a credential. Once an encryption key is removed, it is impossible to recover the original data. Let's re-encrypt all credentials with the new primary key and inspect the backend manually:

(newton) ubuntu@keystone-1:~$ keystone-manage credential_migrate
MariaDB [keystone]> select * from credential\G
*************************** 1. row ***************************
            id: 71dfce4362354696ae2adbe61eff65e0
       user_id: 8e6361803faa46eaa03725aeab3a111b
    project_id: NULL
          type: cert
         extra: {}
      key_hash: d4c58c38c5dc2664fa8d037b10ae092a5a594253
encrypted_blob: gAAAAABXyZ1wchEcJjyibhf_5SYmVK5sac_rbBKsgwvP-d6ywuWyJuRD-xxKSpO79p5N0hVXS_wBidGRY4nofWYPLNrSKgew9dQk3Xg8O_J6gGJAyG27Gs8=
*************************** 2. row ***************************
            id: 74228e3767f64f8d9f511e6c28574628
       user_id: 8e6361803faa46eaa03725aeab3a111b
    project_id: NULL
          type: cert
         extra: {}
      key_hash: d4c58c38c5dc2664fa8d037b10ae092a5a594253
encrypted_blob: gAAAAABXyZ86ROVfwUpuNwiJsB1oSKLpJLQadpnzqSg-5mrCDTfIHOaZFFWyDXj0nTGHGGPo0gARlFvu8ABi8s_m5iZlYomlrQ==
*************************** 3. row ***************************
            id: d617225ebbbb48118aca9e677990ac7a
       user_id: 8e6361803faa46eaa03725aeab3a111b
    project_id: NULL
          type: cert
         extra: {}
      key_hash: d4c58c38c5dc2664fa8d037b10ae092a5a594253
encrypted_blob: gAAAAABXyZ86JUQp62KtVZSiqtJw2ycYGoUc5ufWlEP4ctczuUEmywpYdmH5vZbSZuKOTy9IxccsGIKSlwqDkQv87Mj2sgVxvhJ_O2rrcsFc9GpWCxr2mPw=
3 rows in set (0.00 sec)
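Conceptually, what credential_migrate did to each row is what `MultiFernet.rotate` does: decrypt with whichever repository key works, then re-encrypt under the primary. A sketch with generated keys (this is not keystone's actual code path):

```python
# Sketch of the per-credential migration step: decrypt with any available
# repository key, re-encrypt under the current primary. Uses the
# cryptography library's MultiFernet.rotate; keys are generated here for
# illustration only.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()
new_key = Fernet.generate_key()  # current primary

# A blob still encrypted under the old key.
blob = Fernet(old_key).encrypt(b"secret-stuff")

# Primary key first, secondaries after.
repo = MultiFernet([Fernet(new_key), Fernet(old_key)])
migrated = repo.rotate(blob)

# After migration, the old key is no longer needed to read the blob.
assert Fernet(new_key).decrypt(migrated) == b"secret-stuff"
```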

Now we can safely do another credential_rotate since we no longer have any credentials encrypted with the key corresponding to key_hash = aaa1aa670c64ef0854d20370b88bfde866b3a54d. All credentials have been encrypted with the current primary key, and we can double-check this by hashing the primary key ourselves:

(newton) ubuntu@keystone-1:~$ cat /etc/keystone/credential-keys/2
rfKoEmosLvKwk_6RYjtajn7AgRKTmNbhP0zINHFmZGw=
(newton) ubuntu@keystone-1:~$ python -c "import hashlib; print hashlib.sha1('rfKoEmosLvKwk_6RYjtajn7AgRKTmNbhP0zINHFmZGw=').hexdigest()"
d4c58c38c5dc2664fa8d037b10ae092a5a594253
(newton) ubuntu@keystone-1:~$
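The transcript above uses Python 2's print statement; on Python 3 the same check would look like this (hash value taken from the session above):

```python
# Python 3 equivalent of the hash check above: SHA1 of the primary key's
# contents, compared against the key_hash column in the backend.
import hashlib

key = b"rfKoEmosLvKwk_6RYjtajn7AgRKTmNbhP0zINHFmZGw="
print(hashlib.sha1(key).hexdigest())
# d4c58c38c5dc2664fa8d037b10ae092a5a594253
```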

Let's proceed with another rotation:

(newton) ubuntu@keystone-1:~$ keystone-manage credential_rotate
2016-09-02 15:52:27.128 9332 INFO keystone.common.fernet_utils [-] Starting key rotation with 3 key files: ['/etc/keystone/credential-keys/0', '/etc/keystone/credential-keys/1', '/etc/keystone/credential-keys/2']
2016-09-02 15:52:27.128 9332 INFO keystone.common.fernet_utils [-] Current primary key is: 2
2016-09-02 15:52:27.129 9332 INFO keystone.common.fernet_utils [-] Next primary key will be: 3
2016-09-02 15:52:27.129 9332 INFO keystone.common.fernet_utils [-] Promoted key 0 to be the primary: 3
2016-09-02 15:52:27.129 9332 INFO keystone.common.fernet_utils [-] Created a new key: /etc/keystone/credential-keys/0
2016-09-02 15:52:27.129 9332 INFO keystone.common.fernet_utils [-] Excess key to purge: /etc/keystone/credential-keys/1

We see that key 1 has been removed from the repository, meaning we can no longer decrypt anything that was encrypted with key 1. This is fine since we took extra measures to re-encrypt everything with key 2. Just as before, we'll need to execute credential_migrate before performing another rotation since we've introduced a new primary key. This keeps credential_migrate and credential_rotate in lock-step.

Last but not least, we should be able to read all of our credentials despite re-encrypting them behind the scenes:

(osc) ubuntu@keystone-1:~$ openstack credential list
+----------------------------------+------+----------------------------------+------------------------+------------+
| ID                               | Type | User ID                          | Data                   | Project ID |
+----------------------------------+------+----------------------------------+------------------------+------------+
| 71dfce4362354696ae2adbe61eff65e0 | cert | 8e6361803faa46eaa03725aeab3a111b | even-more-secret-stuff | None       |
| 74228e3767f64f8d9f511e6c28574628 | cert | 8e6361803faa46eaa03725aeab3a111b | secret-stuff           | None       |
| d617225ebbbb48118aca9e677990ac7a | cert | 8e6361803faa46eaa03725aeab3a111b | more-secret-stuff      | None       |
+----------------------------------+------+----------------------------------+------------------------+------------+
# Keystone PasteDeploy configuration file.
[filter:debug]
use = egg:oslo.middleware#debug
[filter:request_id]
use = egg:oslo.middleware#request_id
[filter:build_auth_context]
use = egg:keystone#build_auth_context
[filter:token_auth]
use = egg:keystone#token_auth
[filter:admin_token_auth]
# This is deprecated in the M release and will be removed in the O release.
# Use `keystone-manage bootstrap` and remove this from the pipelines below.
use = egg:keystone#admin_token_auth
[filter:json_body]
use = egg:keystone#json_body
[filter:cors]
use = egg:oslo.middleware#cors
oslo_config_project = keystone
[filter:ec2_extension]
use = egg:keystone#ec2_extension
[filter:ec2_extension_v3]
use = egg:keystone#ec2_extension_v3
[filter:s3_extension]
use = egg:keystone#s3_extension
[filter:url_normalize]
use = egg:keystone#url_normalize
[filter:sizelimit]
use = egg:oslo.middleware#sizelimit
[app:public_service]
use = egg:keystone#public_service
[app:service_v3]
use = egg:keystone#service_v3
[app:admin_service]
use = egg:keystone#admin_service
[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension public_service
[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension s3_extension admin_service
[pipeline:api_v3]
# The last item in this pipeline must be service_v3 or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
[app:public_version_service]
use = egg:keystone#public_version_service
[app:admin_version_service]
use = egg:keystone#admin_version_service
[pipeline:public_version_api]
pipeline = cors sizelimit osprofiler url_normalize public_version_service
[pipeline:admin_version_api]
pipeline = cors sizelimit osprofiler url_normalize admin_version_service
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
{
"admin_required": "role:admin or is_admin:1",
"service_role": "role:service",
"service_or_admin": "rule:admin_required or rule:service_role",
"owner" : "user_id:%(user_id)s",
"admin_or_owner": "rule:admin_required or rule:owner",
"token_subject": "user_id:%(target.token.user_id)s",
"admin_or_token_subject": "rule:admin_required or rule:token_subject",
"service_admin_or_token_subject": "rule:service_or_admin or rule:token_subject",
"default": "rule:admin_required",
"identity:get_region": "",
"identity:list_regions": "",
"identity:create_region": "rule:admin_required",
"identity:update_region": "rule:admin_required",
"identity:delete_region": "rule:admin_required",
"identity:get_service": "rule:admin_required",
"identity:list_services": "rule:admin_required",
"identity:create_service": "rule:admin_required",
"identity:update_service": "rule:admin_required",
"identity:delete_service": "rule:admin_required",
"identity:get_endpoint": "rule:admin_required",
"identity:list_endpoints": "rule:admin_required",
"identity:create_endpoint": "rule:admin_required",
"identity:update_endpoint": "rule:admin_required",
"identity:delete_endpoint": "rule:admin_required",
"identity:get_domain": "rule:admin_required",
"identity:list_domains": "rule:admin_required",
"identity:create_domain": "rule:admin_required",
"identity:update_domain": "rule:admin_required",
"identity:delete_domain": "rule:admin_required",
"identity:get_project": "rule:admin_required",
"identity:list_projects": "rule:admin_required",
"identity:list_user_projects": "rule:admin_or_owner",
"identity:create_project": "rule:admin_required",
"identity:update_project": "rule:admin_required",
"identity:delete_project": "rule:admin_required",
"identity:get_user": "rule:admin_required",
"identity:list_users": "rule:admin_required",
"identity:create_user": "rule:admin_required",
"identity:update_user": "rule:admin_required",
"identity:delete_user": "rule:admin_required",
"identity:change_password": "rule:admin_or_owner",
"identity:get_group": "rule:admin_required",
"identity:list_groups": "rule:admin_required",
"identity:list_groups_for_user": "rule:admin_or_owner",
"identity:create_group": "rule:admin_required",
"identity:update_group": "rule:admin_required",
"identity:delete_group": "rule:admin_required",
"identity:list_users_in_group": "rule:admin_required",
"identity:remove_user_from_group": "rule:admin_required",
"identity:check_user_in_group": "rule:admin_required",
"identity:add_user_to_group": "rule:admin_required",
"identity:get_credential": "rule:admin_required",
"identity:list_credentials": "rule:admin_required",
"identity:create_credential": "rule:admin_required",
"identity:update_credential": "rule:admin_required",
"identity:delete_credential": "rule:admin_required",
"identity:ec2_get_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",
"identity:ec2_list_credentials": "rule:admin_or_owner",
"identity:ec2_create_credential": "rule:admin_or_owner",
"identity:ec2_delete_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",
"identity:get_role": "rule:admin_required",
"identity:list_roles": "rule:admin_required",
"identity:create_role": "rule:admin_required",
"identity:update_role": "rule:admin_required",
"identity:delete_role": "rule:admin_required",
"identity:check_grant": "rule:admin_required",
"identity:list_grants": "rule:admin_required",
"identity:create_grant": "rule:admin_required",
"identity:revoke_grant": "rule:admin_required",
"identity:list_role_assignments": "rule:admin_required",
"identity:get_policy": "rule:admin_required",
"identity:list_policies": "rule:admin_required",
"identity:create_policy": "rule:admin_required",
"identity:update_policy": "rule:admin_required",
"identity:delete_policy": "rule:admin_required",
"identity:check_token": "rule:admin_or_token_subject",
"identity:validate_token": "rule:service_admin_or_token_subject",
"identity:validate_token_head": "rule:service_or_admin",
"identity:revocation_list": "rule:service_or_admin",
"identity:revoke_token": "rule:admin_or_token_subject",
"identity:create_trust": "user_id:%(trust.trustor_user_id)s",
"identity:list_trusts": "",
"identity:list_roles_for_trust": "",
"identity:get_role_for_trust": "",
"identity:delete_trust": "",
"identity:create_consumer": "rule:admin_required",
"identity:get_consumer": "rule:admin_required",
"identity:list_consumers": "rule:admin_required",
"identity:delete_consumer": "rule:admin_required",
"identity:update_consumer": "rule:admin_required",
"identity:authorize_request_token": "rule:admin_required",
"identity:list_access_token_roles": "rule:admin_required",
"identity:get_access_token_role": "rule:admin_required",
"identity:list_access_tokens": "rule:admin_required",
"identity:get_access_token": "rule:admin_required",
"identity:delete_access_token": "rule:admin_required",
"identity:list_projects_for_endpoint": "rule:admin_required",
"identity:add_endpoint_to_project": "rule:admin_required",
"identity:check_endpoint_in_project": "rule:admin_required",
"identity:list_endpoints_for_project": "rule:admin_required",
"identity:remove_endpoint_from_project": "rule:admin_required",
"identity:create_endpoint_group": "rule:admin_required",
"identity:list_endpoint_groups": "rule:admin_required",
"identity:get_endpoint_group": "rule:admin_required",
"identity:update_endpoint_group": "rule:admin_required",
"identity:delete_endpoint_group": "rule:admin_required",
"identity:list_projects_associated_with_endpoint_group": "rule:admin_required",
"identity:list_endpoints_associated_with_endpoint_group": "rule:admin_required",
"identity:get_endpoint_group_in_project": "rule:admin_required",
"identity:list_endpoint_groups_for_project": "rule:admin_required",
"identity:add_endpoint_group_to_project": "rule:admin_required",
"identity:remove_endpoint_group_from_project": "rule:admin_required",
"identity:create_identity_provider": "rule:admin_required",
"identity:list_identity_providers": "rule:admin_required",
"identity:get_identity_providers": "rule:admin_required",
"identity:update_identity_provider": "rule:admin_required",
"identity:delete_identity_provider": "rule:admin_required",
"identity:create_protocol": "rule:admin_required",
"identity:update_protocol": "rule:admin_required",
"identity:get_protocol": "rule:admin_required",
"identity:list_protocols": "rule:admin_required",
"identity:delete_protocol": "rule:admin_required",
"identity:create_mapping": "rule:admin_required",
"identity:get_mapping": "rule:admin_required",
"identity:list_mappings": "rule:admin_required",
"identity:delete_mapping": "rule:admin_required",
"identity:update_mapping": "rule:admin_required",
"identity:create_service_provider": "rule:admin_required",
"identity:list_service_providers": "rule:admin_required",
"identity:get_service_provider": "rule:admin_required",
"identity:update_service_provider": "rule:admin_required",
"identity:delete_service_provider": "rule:admin_required",
"identity:get_auth_catalog": "",
"identity:get_auth_projects": "",
"identity:get_auth_domains": "",
"identity:list_projects_for_groups": "",
"identity:list_domains_for_groups": "",
"identity:list_revoke_events": "",
"identity:create_policy_association_for_endpoint": "rule:admin_required",
"identity:check_policy_association_for_endpoint": "rule:admin_required",
"identity:delete_policy_association_for_endpoint": "rule:admin_required",
"identity:create_policy_association_for_service": "rule:admin_required",
"identity:check_policy_association_for_service": "rule:admin_required",
"identity:delete_policy_association_for_service": "rule:admin_required",
"identity:create_policy_association_for_region_and_service": "rule:admin_required",
"identity:check_policy_association_for_region_and_service": "rule:admin_required",
"identity:delete_policy_association_for_region_and_service": "rule:admin_required",
"identity:get_policy_for_endpoint": "rule:admin_required",
"identity:list_endpoints_for_policy": "rule:admin_required",
"identity:create_domain_config": "rule:admin_required",
"identity:get_domain_config": "rule:admin_required",
"identity:update_domain_config": "rule:admin_required",
"identity:delete_domain_config": "rule:admin_required"
}