Launching bridge from 4.0 installer libvirt

Running console on installer via libvirt

The goal here is to run a local copy of console connected to an OpenShift 4.0 cluster created with the installer.

Set up the installer on libvirt

Follow the steps here to set up the installer via libvirtd.

Some tips that you may find helpful:

  • Obtain the Pull Secret from Step 4 of the OpenShift Install Developer Preview. This requires an openshift.com account. Beneath Step 4 there should be one link to download the secret and another to copy it. Copying the secret provides a CLI-safe format, which can be pasted at the installer prompt. I recommend downloading it and storing the file somewhere safe, NOT the installer repo! Set the OPENSHIFT_INSTALL_PULL_SECRET_PATH environment variable to the path of the downloaded file; this saves you from having to copy the secret on every run (see the first snippet after this list).

  • When running iptables and subsequent commands that use the IP address 192.168.124.1, you should instead use the IP address printed by:

    ip -4 a show dev virbr0

    or

    virsh --connect qemu:///system net-dumpxml default

    The howto does say to use these commands rather than the default it provides, but it's worth emphasizing, since the default it chose isn't the one my peers or I have encountered (the second snippet after this list shows capturing the address into a variable).

  • Sometimes the installer will hang or fail partway through downloading or moving the cluster OS image. When this happens, I clear the cache by deleting the ~/.cache/openshift-install/ directory so that the failed pull doesn't affect the next run of the installer.

  • Each time the installer fails, you need to destroy the cluster resources and any metadata that was created. The following commands delete ALL libvirt resources, so be careful! Running the ./bin/openshift-install destroy cluster command works well if the cluster actually came up; otherwise, run ./scripts/maintenance/virsh-cleanup.sh to remove everything from libvirt. Also run git clean -fd to remove any metadata created in your git directory (a combined cleanup sketch follows this list).

  • When you run the installer, it does some preliminary setup via terraform and prints Apply complete! Resources: 9 added, 0 changed, 0 destroyed. on success. It then produces logs so you can track your cluster as it starts up. However, the install command might print Killed after some time and exit. From what I've seen, this only means the installer gave up waiting; as long as the Apply complete! message was printed, the installer was just waiting for the bootstrap node to finish. You can check the bootstrap node's progress via the suggestions in the Exploring your cluster section.
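
To make the pull-secret tip concrete, here's a minimal sketch; the ~/.config/openshift-install/pull-secret.json location (and the assumption that your browser saved the file as ~/Downloads/pull-secret) are my own choices, not anything the installer requires:

# keep the downloaded secret somewhere safe, outside the installer repo
mkdir -p ~/.config/openshift-install
mv ~/Downloads/pull-secret ~/.config/openshift-install/pull-secret.json

# point the installer at it so you never have to paste the secret again
export OPENSHIFT_INSTALL_PULL_SECRET_PATH=~/.config/openshift-install/pull-secret.json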
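
For the IP-address tip, here's one way to capture the actual virbr0 address into a shell variable before running the iptables commands; the LIBVIRT_IP name is just illustrative:

# grab the host's IPv4 address on the libvirt bridge instead of assuming 192.168.124.1
LIBVIRT_IP=$(ip -4 a show dev virbr0 | awk '/inet /{sub(/\/.*/, "", $2); print $2}')
echo "libvirt bridge address: $LIBVIRT_IP"
# substitute $LIBVIRT_IP wherever the howto hardcodes 192.168.124.1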
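
And a combined cleanup sketch for the failure cases above, run from the installer repo root. Again: this removes ALL libvirt resources and all untracked files in the repo, so only use it on a dedicated dev setup:

# clear a possibly-corrupted image cache
rm -rf ~/.cache/openshift-install/

# destroy the cluster if it came up; fall back to wiping libvirt entirely
./bin/openshift-install destroy cluster || ./scripts/maintenance/virsh-cleanup.sh

# drop generated metadata from the working tree
git clean -fd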

Patience is important: from personal experience, it can take 30 minutes or more for the cluster to become ready after the bootstrap node completes. If the cluster isn't ready by then, it may be worth waiting a little longer (one way to watch the bootstrap node is sketched below).
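
If you want to watch the bootstrap node while you wait, one approach is to find its address in libvirt's DHCP leases and follow the bootstrap logs over SSH. The journalctl unit name below comes from the installer's troubleshooting docs; treat these commands as a sketch and cross-check them with the Exploring your cluster section:

# list DHCP leases on the default libvirt network to find the bootstrap node's IP
virsh --connect qemu:///system net-dhcp-leases default

# replace $BOOTSTRAP_IP with the address from the lease table
ssh core@$BOOTSTRAP_IP journalctl -b -f -u bootkube.service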

Launching Console

Once you have a cluster running via libvirt, make sure the KUBECONFIG environment variable points to the credentials for your cluster (default path shown below):

export KUBECONFIG=$GOPATH/src/github.com/openshift/installer/auth/kubeconfig
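
Before launching bridge, it's worth a quick sanity check that the kubeconfig actually works; assuming you have oc (or kubectl) on your PATH:

# should list your nodes without prompting for credentials
oc get nodes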

Then, from the console repo, follow the steps for Native Kubernetes:

source ./contrib/environment.sh
./bin/bridge

The console should now be running at localhost:9000.

In case you don't have privileges to view anything in console, look at the username in the top right of the console (mine was set to system:serviceaccount:kube-system:default) and run the following command, replacing $USERNAME with your assigned username:

oc adm policy add-cluster-role-to-user cluster-admin $USERNAME
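
For example, with the service account username shown above, that would be:

oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:kube-system:default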

This should give you access to view all resources on the cluster.

The oc-environment.sh script doesn't work here because it requires a user token, and you will be logged in as system:admin, which doesn't use token authentication.
