
OpenStack Zed on Rocky Linux 9.1 - Using DevStack

Let's see how to deploy OpenStack Zed on Rocky Linux 9.1.

This time, I will try to use DevStack.

Rocky Linux 9.1

Install Rocky Linux 9.1, minimal setup, using DVD ISO

I used a VirtualBox VM with 2 vCPUs, 8 GB RAM, 30 GB storage, and a bridged network (to ease operation from a remote machine).

Make sure the root account allows remote login.
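
For example, a minimal sketch (assuming the stock Rocky 9 sshd configuration, with no conflicting drop-ins) to allow root password logins over SSH:

# Sketch: enable PermitRootLogin in the default sshd config, then restart sshd
sudo sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd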

Create stack user on VM

sudo useradd -s /bin/bash -d /opt/stack -m stack
sudo chmod +x /opt/stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
sudo -u stack -i

Prepare Environment

Make sure the locale is en_US with UTF-8 (i.e. in /etc/environment):

LANG=en_US.utf-8
LC_ALL=en_US.utf-8
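
For instance, a sketch to set this system-wide (assuming /etc/environment does not already define these variables):

# Append the locale settings to /etc/environment (assumption: not set yet)
echo 'LANG=en_US.utf-8' | sudo tee -a /etc/environment
echo 'LC_ALL=en_US.utf-8' | sudo tee -a /etc/environment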

Disable SELinux

sudo setenforce 0

Make it permanent by editing /etc/sysconfig/selinux

SELINUX=disabled
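
If you prefer to script it, a one-line sketch (on RHEL-family systems /etc/sysconfig/selinux is usually a symlink to /etc/selinux/config, so edit the real file to avoid breaking the link with sed -i):

# Switch SELINUX to disabled in place
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config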

Reboot

sudo reboot

Verify after reboot

sestatus 

You should see

SELinux status:                 disabled

Connect to VM as stack user

You should be connected as a non-root user; let's use the stack account.
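
For example (a sketch; the stack user created above has no password by default, so setting one is only needed if you want to SSH in as stack directly):

# option 1: switch from an existing root/sudo session on the VM
sudo -u stack -i

# option 2: SSH in as stack (set a password or install a key first)
sudo passwd stack
ssh stack@10.0.0.41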

Prepare DevStack

# Install git 
sudo dnf install git -y

# Download DevStack
git clone https://opendev.org/openstack/devstack
cd devstack
# Go to Zed Stable
git checkout stable/zed

Configure DevStack

Create a local.conf configuration file in the devstack folder. Specify the VM IP, for example 10.0.0.41:

[[local|localrc]]
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.0.0.41
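
If you are not sure which address to use for HOST_IP, a quick sketch to check the VM's primary IPv4 address:

# Print the first address reported for this host
hostname -I | awk '{print $1}'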

Patch Zed DevStack

Backport the change found in master:

--- /opt/stack/devstack/functions-common.orig	2023-03-10 11:15:35.706271496 +0100
+++ /opt/stack/devstack/functions-common	2023-03-10 11:22:21.742556662 +0100
@@ -418,6 +418,9 @@
         os_RELEASE=${VERSION_ID}
         os_CODENAME="n/a"
         os_VENDOR=$(echo $NAME | tr -d '[:space:]')
+    elif [[ "${ID}${VERSION}" =~ "rocky9" ]]; then
+        os_VENDOR="Rocky"
+        os_RELEASE=${VERSION_ID}
     else
         _ensure_lsb_release
 
@@ -466,6 +469,7 @@
         "$os_VENDOR" =~ (AlmaLinux) || \
         "$os_VENDOR" =~ (Scientific) || \
         "$os_VENDOR" =~ (OracleServer) || \
+        "$os_VENDOR" =~ (Rocky) || \
         "$os_VENDOR" =~ (Virtuozzo) ]]; then
         # Drop the . release as we assume it's compatible
         # XXX re-evaluate when we get RHEL10
@@ -513,7 +517,7 @@
 
 
 # Determine if current distribution is a Fedora-based distribution
-# (Fedora, RHEL, CentOS, etc).
+# (Fedora, RHEL, CentOS, Rocky, etc).
 # is_fedora
 function is_fedora {
     if [[ -z "$os_VENDOR" ]]; then
@@ -523,6 +527,7 @@
     [ "$os_VENDOR" = "Fedora" ] || [ "$os_VENDOR" = "Red Hat" ] || \
         [ "$os_VENDOR" = "RedHatEnterpriseServer" ] || \
         [ "$os_VENDOR" = "RedHatEnterprise" ] || \
+        [ "$os_VENDOR" = "Rocky" ] || \
         [ "$os_VENDOR" = "CentOS" ] || [ "$os_VENDOR" = "CentOSStream" ] || \
         [ "$os_VENDOR" = "AlmaLinux" ] || \
         [ "$os_VENDOR" = "OracleServer" ] || [ "$os_VENDOR" = "Virtuozzo" ]

Install OpenStack

Logged in as the stack user, from the /opt/stack/devstack directory:

cd /opt/stack/devstack
./stack.sh

For now, it is stuck at:

+lib/neutron_plugins/ovn_agent:wait_for_sock_file:191  die 191 'Socket /var/run/openvswitch/ovnnb_db.sock not found'
+functions-common:die:264                  local exitcode=0
+functions-common:die:265                  set +o xtrace
[Call Trace]
./stack.sh:1273:start_ovn_services
/opt/stack/devstack/lib/neutron-legacy:521:start_ovn
/opt/stack/devstack/lib/neutron_plugins/ovn_agent:715:wait_for_sock_file
/opt/stack/devstack/lib/neutron_plugins/ovn_agent:191:die
[ERROR] /opt/stack/devstack/lib/neutron_plugins/ovn_agent:191 Socket /var/run/openvswitch/ovnnb_db.sock not found

Some hope here? => https://stackoverflow.com/questions/68001501/error-opt-stack-devstack-lib-neutron-plugins-ovn-agent174-socket
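
Before trying workarounds, a quick look at what OVN actually did can help (a sketch; with OVN installed from packages the relevant unit is ovn-northd.service, while a from-source build uses DevStack's devstack@* units):

# Check OVN/OVS related units and run directories
systemctl list-units --no-pager | grep -Ei 'ovn|openvswitch'
sudo journalctl -u ovn-northd --no-pager | tail -n 50
ls -l /var/run/openvswitch/ /var/run/ovn/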

The dirty way?

Uninstall all OVN packages after running ./unstack.sh and ./clean.sh:

./unstack.sh
./clean.sh
sudo dnf remove -y ovn-common ovn-controller-vtep ovn-host ovn-central

Try to reinstall:

./stack.sh
+lib/neutron_plugins/ovn_agent:install_ovn:372  sudo chown stack /var/run/openvswitch
+lib/neutron_plugins/ovn_agent:install_ovn:376  sudo ln -s /var/run/openvswitch /var/run/ovn
ln: failed to create symbolic link '/var/run/ovn/openvswitch': File exists
+lib/neutron_plugins/ovn_agent:install_ovn:1  exit_trap

Same symlink issue...

In master, they removed [this symlink](https://opendev.org/openstack/devstack/commit/71c99655479174750bcedfe458328328a1596766) to fix a [bug](https://bugs.launchpad.net/devstack/+bug/1980421); let's remove this symlink too (it may not be enough).

--- lib/neutron_plugins/ovn_agent.orig	2023-03-10 13:40:48.218521413 +0100
+++ lib/neutron_plugins/ovn_agent	2023-03-10 13:40:58.115464335 +0100
@@ -373,7 +373,7 @@
     # NOTE(lucasagomes): To keep things simpler, let's reuse the same
     # RUNDIR for both OVS and OVN. This way we avoid having to specify the
     # --db option in the ovn-{n,s}bctl commands while playing with DevStack
-    sudo ln -s $OVS_RUNDIR $OVN_RUNDIR
+    # sudo ln -s $OVS_RUNDIR $OVN_RUNDIR
 
     if [[ "$OVN_BUILD_FROM_SOURCE" == "True" ]]; then
         # If OVS is already installed, remove it, because we're about to

Same problem; let's grab the master version of ovn_agent as of that point in time:

--- lib/neutron_plugins/ovn_agent.orig	2023-03-10 13:40:48.218521413 +0100
+++ lib/neutron_plugins/ovn_agent	2023-03-10 13:51:56.539450583 +0100
@@ -244,11 +244,12 @@
     local cmd="$2"
     local stop_cmd="$3"
     local group=$4
-    local user=${5:-$STACK_USER}
+    local user=$5
+    local rundir=${6:-$OVS_RUNDIR}
 
     local systemd_service="devstack@$service.service"
     local unit_file="$SYSTEMD_DIR/$systemd_service"
-    local environment="OVN_RUNDIR=$OVS_RUNDIR OVN_DBDIR=$OVN_DATADIR OVN_LOGDIR=$LOGDIR OVS_RUNDIR=$OVS_RUNDIR OVS_DBDIR=$OVS_DATADIR OVS_LOGDIR=$LOGDIR"
+    local environment="OVN_RUNDIR=$OVN_RUNDIR OVN_DBDIR=$OVN_DATADIR OVN_LOGDIR=$LOGDIR OVS_RUNDIR=$OVS_RUNDIR OVS_DBDIR=$OVS_DATADIR OVS_LOGDIR=$LOGDIR"
 
     echo "Starting $service executed command": $cmd
 
@@ -264,14 +265,14 @@
 
     _start_process $systemd_service
 
-    local testcmd="test -e $OVS_RUNDIR/$service.pid"
+    local testcmd="test -e $rundir/$service.pid"
     test_with_retry "$testcmd" "$service did not start" $SERVICE_TIMEOUT 1
     local service_ctl_file
-    service_ctl_file=$(ls $OVS_RUNDIR | grep $service | grep ctl)
+    service_ctl_file=$(ls $rundir | grep $service | grep ctl)
     if [ -z "$service_ctl_file" ]; then
         die $LINENO "ctl file for service $service is not present."
     fi
-    sudo ovs-appctl -t $OVS_RUNDIR/$service_ctl_file vlog/set console:off syslog:info file:info
+    sudo ovs-appctl -t $rundir/$service_ctl_file vlog/set console:off syslog:info file:info
 }
 
 function clone_repository {
@@ -370,10 +371,6 @@
 
     sudo mkdir -p $OVS_RUNDIR
     sudo chown $(whoami) $OVS_RUNDIR
-    # NOTE(lucasagomes): To keep things simpler, let's reuse the same
-    # RUNDIR for both OVS and OVN. This way we avoid having to specify the
-    # --db option in the ovn-{n,s}bctl commands while playing with DevStack
-    sudo ln -s $OVS_RUNDIR $OVN_RUNDIR
 
     if [[ "$OVN_BUILD_FROM_SOURCE" == "True" ]]; then
         # If OVS is already installed, remove it, because we're about to
@@ -590,7 +587,6 @@
     rm -f $OVS_DATADIR/.*.db.~lock~
     sudo rm -f $OVN_DATADIR/*.db
     sudo rm -f $OVN_DATADIR/.*.db.~lock~
-    sudo rm -f $OVN_RUNDIR/*.sock
 }
 
 function _start_ovs {
@@ -617,12 +613,12 @@
                 dbcmd+=" --remote=db:hardware_vtep,Global,managers $OVS_DATADIR/vtep.db"
             fi
             dbcmd+=" $OVS_DATADIR/conf.db"
-            _run_process ovsdb-server "$dbcmd" "" "$STACK_GROUP" "root"
+            _run_process ovsdb-server "$dbcmd" "" "$STACK_GROUP" "root" "$OVS_RUNDIR"
 
             # Note: ovn-controller will create and configure br-int once it is started.
             # So, no need to create it now because nothing depends on that bridge here.
             local ovscmd="$OVS_SBINDIR/ovs-vswitchd --log-file --pidfile --detach"
-            _run_process ovs-vswitchd "$ovscmd" "" "$STACK_GROUP" "root"
+            _run_process ovs-vswitchd "$ovscmd" "" "$STACK_GROUP" "root" "$OVS_RUNDIR"
         else
             _start_process "$OVSDB_SERVER_SERVICE"
             _start_process "$OVS_VSWITCHD_SERVICE"
@@ -661,7 +657,7 @@
 
             enable_service ovs-vtep
             local vtepcmd="$OVS_SCRIPTDIR/ovs-vtep --log-file --pidfile --detach br-v"
-            _run_process ovs-vtep "$vtepcmd" "" "$STACK_GROUP" "root"
+            _run_process ovs-vtep "$vtepcmd" "" "$STACK_GROUP" "root" "$OVS_RUNDIR"
 
             vtep-ctl set-manager tcp:$HOST_IP:6640
         fi
@@ -705,26 +701,26 @@
             local cmd="/bin/bash $SCRIPTDIR/ovn-ctl --no-monitor start_northd"
             local stop_cmd="/bin/bash $SCRIPTDIR/ovn-ctl stop_northd"
 
-            _run_process ovn-northd "$cmd" "$stop_cmd" "$STACK_GROUP" "root"
+            _run_process ovn-northd "$cmd" "$stop_cmd" "$STACK_GROUP" "root" "$OVN_RUNDIR"
         else
             _start_process "$OVN_NORTHD_SERVICE"
         fi
 
         # Wait for the service to be ready
         # Check for socket and db files for both OVN NB and SB
-        wait_for_sock_file $OVS_RUNDIR/ovnnb_db.sock
-        wait_for_sock_file $OVS_RUNDIR/ovnsb_db.sock
+        wait_for_sock_file $OVN_RUNDIR/ovnnb_db.sock
+        wait_for_sock_file $OVN_RUNDIR/ovnsb_db.sock
         wait_for_db_file $OVN_DATADIR/ovnnb_db.db
         wait_for_db_file $OVN_DATADIR/ovnsb_db.db
 
         if is_service_enabled tls-proxy; then
-            sudo ovn-nbctl --db=unix:$OVS_RUNDIR/ovnnb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
-            sudo ovn-sbctl --db=unix:$OVS_RUNDIR/ovnsb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
+            sudo ovn-nbctl --db=unix:$OVN_RUNDIR/ovnnb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
+            sudo ovn-sbctl --db=unix:$OVN_RUNDIR/ovnsb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
         fi
-        sudo ovn-nbctl --db=unix:$OVS_RUNDIR/ovnnb_db.sock set-connection p${OVN_PROTO}:6641:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
-        sudo ovn-sbctl --db=unix:$OVS_RUNDIR/ovnsb_db.sock set-connection p${OVN_PROTO}:6642:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
-        sudo ovs-appctl -t $OVS_RUNDIR/ovnnb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
-        sudo ovs-appctl -t $OVS_RUNDIR/ovnsb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
+        sudo ovn-nbctl --db=unix:$OVN_RUNDIR/ovnnb_db.sock set-connection p${OVN_PROTO}:6641:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
+        sudo ovn-sbctl --db=unix:$OVN_RUNDIR/ovnsb_db.sock set-connection p${OVN_PROTO}:6642:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
+        sudo ovs-appctl -t $OVN_RUNDIR/ovnnb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
+        sudo ovs-appctl -t $OVN_RUNDIR/ovnsb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
     fi
 
     if is_service_enabled ovn-controller ; then
@@ -732,7 +728,7 @@
             local cmd="/bin/bash $SCRIPTDIR/ovn-ctl --no-monitor start_controller"
             local stop_cmd="/bin/bash $SCRIPTDIR/ovn-ctl stop_controller"
 
-            _run_process ovn-controller "$cmd" "$stop_cmd" "$STACK_GROUP" "root"
+            _run_process ovn-controller "$cmd" "$stop_cmd" "$STACK_GROUP" "root" "$OVN_RUNDIR"
         else
             _start_process "$OVN_CONTROLLER_SERVICE"
         fi
@@ -741,7 +737,7 @@
     if is_service_enabled ovn-controller-vtep ; then
         if [[ "$OVN_BUILD_FROM_SOURCE" == "True" ]]; then
             local cmd="$OVS_BINDIR/ovn-controller-vtep --log-file --pidfile --detach --ovnsb-db=$OVN_SB_REMOTE"
-            _run_process ovn-controller-vtep "$cmd" "" "$STACK_GROUP" "root"
+            _run_process ovn-controller-vtep "$cmd" "" "$STACK_GROUP" "root" "$OVN_RUNDIR"
         else
             _start_process "$OVN_CONTROLLER_VTEP_SERVICE"
         fi

Sadly, it didn't fix the problem:

Mar 10 14:01:00 localhost.localdomain neutron-ovn-metadata-agent[274711]: INFO eventlet.wsgi.server [-] (274711) wsgi exited, is_accepting=True
Mar 10 14:01:00 localhost.localdomain neutron-ovn-metadata-agent[274711]: DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" {{(pid=274711) lock /usr/local/lib/pyt…ils.py:312}}
Mar 10 14:01:00 localhost.localdomain neutron-ovn-metadata-agent[274710]: DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" {{(pid=274710) lock /usr/local/lib/pyt…ils.py:312}}
Mar 10 14:01:00 localhost.localdomain neutron-ovn-metadata-agent[274710]: DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" {{(pid=274710) lock /usr/local/lib/pyth…ils.py:315}}
Mar 10 14:01:00 localhost.localdomain neutron-ovn-metadata-agent[274711]: DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" {{(pid=274711) lock /usr/local/lib/pyth…ils.py:315}}
Mar 10 14:01:00 localhost.localdomain neutron-ovn-metadata-agent[274710]: DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" {{(pid=274710) lock /usr/local/lib/pyt…ils.py:333}}
Mar 10 14:01:00 localhost.localdomain neutron-ovn-metadata-agent[274711]: DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" {{(pid=274711) lock /usr/local/lib/pyt…ils.py:333}}
Mar 10 14:01:00 localhost.localdomain systemd[1]: devstack@q-ovn-metadata-agent.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 14:01:00 localhost.localdomain systemd[1]: devstack@q-ovn-metadata-agent.service: Failed with result 'exit-code'.
Mar 10 14:01:00 localhost.localdomain systemd[1]: devstack@q-ovn-metadata-agent.service: Consumed 1.050s CPU time.
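
To see why the metadata agent keeps dying, its full journal can be inspected (a sketch, using the unit name from the log above):

sudo journalctl -u devstack@q-ovn-metadata-agent.service --no-pager -l | tail -n 100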

The folders /var/run/ovn and /var/run/openvswitch are still there; let's do more housekeeping:

./unstack.sh 
./clean.sh 
sudo dnf remove -y ovn-common ovn-controller-vtep ovn-host ovn-central 

sudo dnf remove openstack-network-scripts
sudo rm -rf /var/run/ovn 
sudo rm -rf /var/run/openvswitch
sudo rm /var/run/openvswitch.useropts

./stack.sh
Mar 10 14:22:11 localhost.localdomain neutron-ovn-metadata-agent[381109]: DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" {{(pid=381109) lock /usr/local/lib/pyt…ils.py:333}}
Mar 10 14:22:11 localhost.localdomain systemd[1]: devstack@q-ovn-metadata-agent.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 14:22:11 localhost.localdomain systemd[1]: devstack@q-ovn-metadata-agent.service: Failed with result 'exit-code'.
Mar 10 14:22:11 localhost.localdomain systemd[1]: devstack@q-ovn-metadata-agent.service: Unit process 381109 (neutron-ovn-met) remains running after unit stopped.
Mar 10 14:22:11 localhost.localdomain systemd[1]: devstack@q-ovn-metadata-agent.service: Consumed 1.035s CPU time.
Hint: Some lines were ellipsized, use -l to show in full.

When it doesn't want to work, it really doesn't want to work.

Zed + Zed => Failed

I cannot deploy OpenStack Zed, the latest stable release, on Rocky Linux 9.1 using DevStack (stable/zed branch).

The problem is not related to Rocky Linux 9.1 itself, since that was quickly patched from master, but to OpenStack Zed and DevStack Zed, which don't seem to be really aligned.

Next try, latest OpenStack on Rocky Linux 9.1

I recreated a new VM and redid the operations, this time using the latest DevStack (master).

local.conf used

[[local|localrc]]
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.0.0.42
HOST_IPV6=fe80::a00:27ff:fe45:dc5f/64
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
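
SWIFT_HASH just needs to be a random, unique value; for example, one way to generate one (a sketch):

# Generate a random 32-character hex string to use as SWIFT_HASH
openssl rand -hex 16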

Installation started

./stack.sh

A few minutes later, victory:

=========================
DevStack Component Timing
 (times are in seconds)  
=========================
wait_for_service       9
async_wait           136
osc                  147
test_with_retry        4
yum_install          173
dbsync                11
pip_install          138
run_process           19
git_timed            177
-------------------------
Unaccounted time     147
=========================
Total runtime        961

=================
 Async summary
=================
 Time spent in the background minus waits: 355 sec
 Elapsed time: 961 sec
 Time if we did everything serially: 1316 sec
 Speedup:  1.36941


Post-stack database query stats:
+------------+-----------+-------+
| db         | op        | count |
+------------+-----------+-------+
| keystone   | SELECT    | 41625 |
| keystone   | INSERT    |    98 |
| glance     | SELECT    |   999 |
| glance     | CREATE    |    65 |
| glance     | INSERT    |   256 |
| glance     | SHOW      |     8 |
| glance     | UPDATE    |    17 |
| glance     | ALTER     |     9 |
| glance     | DROP      |     1 |
| neutron    | SELECT    |  4790 |
| neutron    | CREATE    |   319 |
| neutron    | SHOW      |    39 |
| cinder     | SELECT    |   165 |
| cinder     | SHOW      |     1 |
| cinder     | CREATE    |    74 |
| cinder     | SET       |     1 |
| cinder     | ALTER     |    18 |
| neutron    | INSERT    |  1128 |
| neutron    | UPDATE    |   232 |
| neutron    | ALTER     |   153 |
| neutron    | DROP      |    53 |
| neutron    | DELETE    |    25 |
| nova_cell1 | SELECT    |   145 |
| nova_cell1 | SHOW      |    35 |
| nova_cell1 | CREATE    |   157 |
| nova_cell0 | SELECT    |   132 |
| nova_cell0 | SHOW      |    35 |
| nova_cell0 | CREATE    |   157 |
| nova_cell0 | ALTER     |     2 |
| nova_cell1 | ALTER     |     2 |
| placement  | SELECT    |    38 |
| placement  | INSERT    |    54 |
| placement  | SET       |     1 |
| nova_api   | SELECT    |   114 |
| nova_cell0 | INSERT    |     4 |
| placement  | UPDATE    |     3 |
| nova_cell1 | INSERT    |     4 |
| nova_cell1 | UPDATE    |    19 |
| cinder     | INSERT    |     5 |
| nova_cell0 | UPDATE    |    19 |
| cinder     | UPDATE    |    14 |
| nova_api   | INSERT    |    20 |
| nova_api   | SAVEPOINT |    10 |
| nova_api   | RELEASE   |    10 |
| cinder     | DELETE    |     1 |
| keystone   | DELETE    |     5 |
+------------+-----------+-------+



This is your host IP address: 10.0.0.42
This is your host IPv6 address: fe80::a00:27ff:fe45:dc5f/64
Horizon is now available at http://10.0.0.42/dashboard
Keystone is serving at http://10.0.0.42/identity/
The default users are: admin and demo
The password: nomoresecret

Services are running under systemd unit files.
For more information see: 
https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: 2023.1
Change: ab8e51eb49068a8c5004007c18fdfb9b1fcc0954 Merge "Disable memory_tracker and file_tracker in unstask.sh properly" 2023-02-28 06:13:08 +0000
OS Version: Rocky 9.1 

To make it reachable from outside, disable the firewall:

sudo systemctl stop iptables
sudo systemctl disable iptables
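
If you would rather keep a firewall and just open what Horizon needs, a sketch (assuming firewalld is what is actually active on your install):

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload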

The httpd and memcached services weren't enabled; to make them survive a reboot, start and enable them:

sudo systemctl start httpd.service
sudo systemctl enable httpd.service

sudo systemctl start memcached.service
sudo systemctl enable memcached.service
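
Equivalently, both can be started and enabled in one go:

sudo systemctl enable --now httpd.service memcached.service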

Horizon

While browsing around the UI, I noticed errors

for http://10.0.0.42/dashboard/project/instances/

2023-03-10 16:18:16.908037 DEBUG novaclient.v2.client GET call to compute for http://10.0.0.42/compute/v2.1/flavors/detail used request id req-09556b52-adb7-47f3-aef5-b7cf21bec613
2023-03-10 16:18:16.952418 ERROR django.request Internal Server Error: /dashboard/project/instances/
2023-03-10 16:18:16.952440 Traceback (most recent call last):
2023-03-10 16:18:16.952441   File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
2023-03-10 16:18:16.952442     response = get_response(request)
2023-03-10 16:18:16.952443   File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
2023-03-10 16:18:16.952444     response = wrapped_callback(request, *callback_args, **callback_kwargs)
2023-03-10 16:18:16.952445   File "/opt/stack/horizon/horizon/decorators.py", line 51, in dec
2023-03-10 16:18:16.952447     return view_func(request, *args, **kwargs)
2023-03-10 16:18:16.952447   File "/opt/stack/horizon/horizon/decorators.py", line 35, in dec
2023-03-10 16:18:16.952448     return view_func(request, *args, **kwargs)

cinder status failed

Define admin.rc

export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=nomoresecret
export OS_AUTH_URL=http://10.0.0.42/identity/
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
source admin.rc
openstack volume list

Call to Cinder API failed

cinder list
> ERROR: Internal Server Error (HTTP 500)

openstack volume list 

> Internal Server Error (HTTP 500)

Is anyone listening for Cinder? (checking /var/log/messages)

Mar 10 17:30:13 rocky2 devstack@c-api.service[3870]: --- no python application found, check your startup logs for errors ---
Mar 10 17:30:13 rocky2 devstack@c-api.service[3870]: [pid: 3870|app: -1|req: -1/12] 10.0.0.42 () {64 vars in 1164 bytes} [Fri Mar 10 17:30:13 2023] GET /volume/ => generated 21 bytes in 0 msecs (HTTP/1.1 500) 3 headers in 102 bytes (0 switches on core 0)
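
The Cinder API unit can also be restarted and its log followed directly (a sketch, using the unit name from the message above):

sudo systemctl restart devstack@c-api.service
sudo journalctl -u devstack@c-api.service -f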

Or is the block storage damaged?

Mar 10 17:32:37 rocky2 kernel: XFS (dm-0): Metadata corruption detected at xfs_dir3_data_reada_verify+0x3c/0x70 [xfs], xfs_dir3_data_reada block 0x2831068 
Mar 10 17:32:37 rocky2 kernel: XFS (dm-0): Unmount and run xfs_repair
Mar 10 17:32:37 rocky2 kernel: XFS (dm-0): First 128 bytes of corrupted metadata buffer:
Mar 10 17:32:37 rocky2 kernel: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: XFS (dm-0): Metadata CRC error detected at xfs_dir3_block_read_verify+0xe0/0x120 [xfs], xfs_dir3_block block 0x2831068 
Mar 10 17:32:37 rocky2 kernel: XFS (dm-0): Unmount and run xfs_repair
Mar 10 17:32:37 rocky2 kernel: XFS (dm-0): First 128 bytes of corrupted metadata buffer:
Mar 10 17:32:37 rocky2 kernel: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: 00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 10 17:32:37 rocky2 kernel: XFS (dm-0): metadata I/O error in "xfs_da_read_buf+0xe1/0x140 [xfs]" at daddr 0x2831068 len 8 error 74
Mar 10 17:32:37 rocky2 devstack@keystone.service[3884]: #033[00;32mDEBUG keystone.server.flask.request_processing.req_logging [#033[01;36mNone req-9b9bc8a1-6b10-4820-bfac-c90bfddcd51a #033[00;36mNone None#033[00;32m] #033[01;35m#033[00;32mREQUEST_METHOD: `GET`#033[00m #033[00;33m{{(pid=3884) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:27}}#033[00m#033[00m
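
Following the kernel's advice, the filesystem would have to be checked offline; dm-0 is the root volume here, so this has to run from rescue media with the filesystem unmounted (a sketch; the device name is taken from the log above):

# From a rescue environment, with the filesystem unmounted
xfs_repair /dev/dm-0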

Conclusion

You can install OpenStack (latest master, DevStack commit ab8e51eb49) on Rocky Linux 9.1.

But there are some issues, and since I'm not a big fan of using master/trunk contents, I should find another way to run OpenStack inside my homelab :)
