
Deploy

User with sudo

This is only needed on a clean hypervisor. If you are on a laptop where your user already has sudo, you can skip the user creation and sudo setup. As root:

useradd -s /bin/bash -d /home/gibi -m gibi
echo "gibi ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/gibi
sudo -u gibi -i

Tools

As your normal user:

sudo dnf install git ansible-core
git clone https://github.com/openstack-k8s-operators/install_yamls.git
cd install_yamls
cd devsetup
make download_tools

On some OSes the go binary won't be in the user's PATH. Check with go version; if go is not visible, add the following to ~/.bashrc

PATH="/usr/local/bin/:$PATH"
export PATH
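
After editing ~/.bashrc, reload it and verify that go is now visible:

source ~/.bashrc
go version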

Pull secret

Go to https://cloud.redhat.com/openshift/create/local and download the pull secret and put it to ~/pull-secret.txt
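
If you want to sanity-check the downloaded file, it should be valid JSON (this assumes jq is installed, e.g. via sudo dnf install jq):

jq . ~/pull-secret.txt > /dev/null && echo "pull secret looks valid"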

CRC

CRC is OCP in a single VM. You might be able to use a bit less memory and disk, but these values are known to be enough

PULL_SECRET=~/pull-secret.txt CPUS=6 MEMORY=24576 DISK=100 make crc

This will take a while; go get coffee
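
You can check on the cluster state with

crc status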

Get access to openshift

eval $(crc oc-env)
oc login -u kubeadmin https://api.crc.testing:6443
oc get nodes
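
If the login prompts for a password, crc can print the generated kubeadmin credentials:

crc console --credentials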

OpenStack operators

make crc_attach_default_interface
cd ~/install_yamls/
make crc_storage
make openstack_wait

This will take a while. You can follow the progress in another terminal with

oc get pods -n openstack-operators --watch

OSP18 control plane

make openstack_wait_deploy

The execution time varies, and the make target might time out with the error "timed out waiting for the condition on openstackcontrolplanes/openstack-galera-network-isolation". Don't worry, you can continue waiting with

oc kustomize /home/gibi/install_yamls/out/openstack/openstack/cr | oc wait --for condition=Ready --timeout=300s -f -

You can follow the progress in another terminal with

oc get pods -n openstack --watch

Create a VM to be used as a compute node

cd ~/install_yamls/devsetup
EDPM_TOTAL_NODES=1 make edpm_compute

If you have enough RAM you can deploy more than one compute; 4G of RAM is needed per compute.
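
For example, to create two compute VMs (each VM needs 4G of RAM):

EDPM_TOTAL_NODES=2 make edpm_compute

If you do this, pass the matching count as DATAPLANE_TOTAL_NODES in the data plane step below.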

OSP18 data plane

cd ~/install_yamls
TIMEOUT=600s DATAPLANE_TOTAL_NODES=1 make edpm_wait_deploy

You can follow the progress at the Ansible role level with

oc get pods -n openstack --watch

Or you can look at the ansible execution logs with

while true; do oc logs -n openstack -f $(oc get pods -n openstack | grep 'openstack-edpm-' | grep Running | cut -d ' ' -f1) 2>/dev/null || echo -n .; sleep 1; done

The deploy might also time out; you can keep waiting with

oc kustomize /home/gibi/install_yamls/out/openstack/dataplane/cr | oc wait --for condition=Ready --timeout=20m -f -

If the deploy only succeeded after the extra wait, then you also need to run

make edpm_nova_discover_hosts
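
You can verify that the new compute was discovered by listing the hypervisors from the openstackclient pod (see the next section):

oc exec -t openstackclient -- openstack hypervisor list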

Look around in your new OSP18 env

You can see your compute node and the OpenShift VM with

sudo virsh list --all

You can interact with the control plane via the OpenStackControlPlane CR

oc get OpenStackControlPlane -o yaml

You can interact with the data plane via the OpenStackDataPlaneNodeSet and OpenStackDataPlaneDeployment CRs

oc get OpenStackDataPlaneNodeSet -o yaml
oc get OpenStackDataPlaneDeployment -o yaml

You can interact with the OpenStack APIs via the openstackclient pod

oc exec -t openstackclient -- openstack compute service list
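
Any other openstack CLI command works the same way, for example:

oc exec -t openstackclient -- openstack catalog list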

You can ssh into your compute node

ssh root@192.168.122.100

If you deployed more computes, the next one is .101, and so on. You can get the ssh key for the computes with

oc get secret/dataplane-ansible-ssh-private-key-secret -o go-template --template '{{index .data "ssh-privatekey"}}' | base64 --decode
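
For example, to save the key to a file and use it for the login above (the path ~/edpm-key is an arbitrary choice):

oc get secret/dataplane-ansible-ssh-private-key-secret -o go-template --template '{{index .data "ssh-privatekey"}}' | base64 --decode > ~/edpm-key
chmod 600 ~/edpm-key
ssh -i ~/edpm-key root@192.168.122.100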

Retry the deployment

If you need to retry the whole deployment, destroy the env with

crc delete
cd ~/install_yamls/devsetup
EDPM_TOTAL_NODES=1 make edpm_compute_cleanup

and then go back to the CRC step.

Structure of the setup

┌─────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│your machine ▲                                                                                           │
│             │libvirt crc net: 192.168.130.11                                                            │
│ ┌───────────┴──────────────────────────────────────────┐       .122.100 ┌─────────────────────────────┐ │
│ │CRC VM                                                │      ┌────────►│EDPM-0 VM                    │ │
│ │ ┌──────────────────────────────────────────────────┐ │      │         │                             │ │
│ │ │k8s node                                          │ │      │         │ systemd-service:libvirt     │ │
│ │ │ ┌──────────────────────────────────────────────┐ │ │      │         │ podman:nova_compute         │ │
│ │ │ │namespace:openstack-operator                  │ │ │      │         │ podman:neutron-ovn-metadata │ │
│ │ │ ├──────────────────────────────────────────────┤ │ │      │         │ ...                         │ │
│ │ │ │ pod:openstack-operator-controller-manager    │ │ │      │         │                             │ │
│ │ │ │ pod:nova-operator-operator-controller-manager│ │ │      │         └─────────────────────────────┘ │
│ │ │ │ ...                                          │ │ │      │                                         │
│ │ │ │                                              │ │ │      │                                         │
│ │ │ └──────────────────────────────────────────────┘ │ │      │                                         │
│ │ │                                                  │ │      │                                         │
│ │ │ ┌────────────────────────┐                       │ │◄─────┘                                         │
│ │ │ │namespace:openstack     │                       │ │ 192.168.122.65                                 │
│ │ │ ├────────────────────────┤                       │ │ libvirt default net                            │
│ │ │ │ pod:openstack-galera-0 │                       │ │      │                                         │
│ │ │ │ pod:rabbitmq-server-0  │                       │ │      │                                         │
│ │ │ │ pod:nova-api-0         │ 172.17.0.80           │ │      │.122.101 ┌─────────────────────────────┐ │
│ │ │ │ pod:placement-api◄─────┼─────────────────      │ │      └────────►│EDPM-1 VM                    │ │
│ │ │ │ ...                    │ metallb:internalapi   │ │                │                             │ │
│ │ │ └────────────────────────┘                │      │ │    vlan:20     │ systemd-service:libvirt     │ │
│ │ │                                           └──────┼─┼────────────────┤►podman:nova_compute         │ │
│ │ │  ...                                             │ │    172.17.0.101│ podman:neutron-ovn-metadata │ │
│ │ └──────────────────────────────────────────────────┘ │                │ ...                         │ │
│ │                                                      │                │                             │ │
│ └──────────────────────────────────────────────────────┘                └─────────────────────────────┘ │
│                                                                                                         │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────┘