This document shows how to deploy an OpenShift instance on a server using CodeReady Containers (crc) that can be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers. Deploying this way also allows a user to create an instance that uses more CPU and memory resources than may be available on their laptop.
While there are benefits to this type of deployment, please note that the primary use case for crc is to deploy a local OpenShift instance on a workstation or laptop and access it directly from the same machine. The headless setup is configured completely outside of crc itself, and supporting a headless setup is beyond the mission of the crc development team. Please do not ask for changes to crc to support this type of deployment; it will only cost the team time as they politely decline :)
The instructions here were tested with Fedora on both the server (F30) and a laptop (F29).
Thanks to Marcel Wysocki from Red Hat for the haproxy solution and the entire CodeReady Containers team for crc!
Red Hat blog article on CodeReady Containers
Download page on cloud.redhat.com
CRC documentation on github.io
Go to the download page and get crc for Linux. You’ll also need the pull secret listed there during the installation process. Make sure to copy the crc binary to /usr/local/bin or somewhere on your path.
The initial setup command only needs to be run once, and it creates a ~/.crc directory. Your user must have sudo privileges since crc will install dependencies for libvirt and modify the NetworkManager config:
$ crc setup
Note: occasionally on some systems this may fail with "Failed to restart NetworkManager". Just rerun crc setup a few times until it works.
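If you hit that error, a simple retry loop can save some typing (the loop itself is plain shell; nothing crc-specific about it):

```shell
# Rerun crc setup until it succeeds, up to 5 attempts
for attempt in 1 2 3 4 5; do
    crc setup && break
    echo "crc setup failed (attempt $attempt); retrying..." >&2
    sleep 5
done
```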
$ crc start
You will be asked for the pull secret from the download page; paste it at the prompt.
Optionally, use the -m and -c flags to increase the VM size, for example a 32 GiB VM with 8 CPUs:
$ crc start -m 32768 -c 8
See the documentation or crc -h for other things you can do.
If you just want to use crc locally on this machine, you can stop here; you're all set!
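Before moving on to the remote-access setup, it's worth confirming the instance is actually up. crc can report its own state:

```shell
# Check the state of the crc VM and the OpenShift cluster inside it
crc status
# Print the cluster URLs and kubeadmin credentials again if needed
crc console --credentials
```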
$ sudo dnf -y install haproxy policycoreutils-python-utils jq
$ sudo systemctl start firewalld
$ sudo firewall-cmd --add-port=80/tcp --permanent
$ sudo firewall-cmd --add-port=6443/tcp --permanent
$ sudo firewall-cmd --add-port=443/tcp --permanent
$ sudo systemctl restart firewalld
$ sudo semanage port -a -t http_port_t -p tcp 6443
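You can verify that the firewall and SELinux changes took effect before continuing (the grep is just a convenience filter):

```shell
# Confirm the ports are open in the active zone
sudo firewall-cmd --list-ports
# Confirm 6443 was added to the http_port_t SELinux type
sudo semanage port -l | grep http_port_t
```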
The steps below will create an haproxy config file with placeholders, update the SERVER_IP and CRC_IP using sed, and copy the new file to the correct location. If you would like to edit the file manually, feel free :)
$ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
$ tee haproxy.cfg &>/dev/null <<EOF
global
    debug
defaults
    log global
    mode http
    timeout connect 5000
    timeout client 5000
    timeout server 5000
frontend apps
    bind SERVER_IP:80
    bind SERVER_IP:443
    option tcplog
    mode tcp
    default_backend apps
backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP check
frontend api
    bind SERVER_IP:6443
    option tcplog
    mode tcp
    default_backend api
backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check
EOF
$ export SERVER_IP=$(hostname --ip-address)
Note: on some systems hostname --ip-address returns a loopback address; verify that SERVER_IP holds the machine's external IP before continuing.
$ export CRC_IP=$(crc ip)
$ sed -i "s/SERVER_IP/$SERVER_IP/g" haproxy.cfg
$ sed -i "s/CRC_IP/$CRC_IP/g" haproxy.cfg
$ sudo cp haproxy.cfg /etc/haproxy/haproxy.cfg
$ sudo systemctl start haproxy
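At this point haproxy should be listening on ports 80, 443, and 6443. One way to check (ss comes with iproute2; the exact output format varies by version):

```shell
# Validate the config file before (re)starting haproxy
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Confirm haproxy is bound to the expected ports
sudo ss -tlnp | grep haproxy
```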
NetworkManager needs to be configured to use dnsmasq for DNS. Make sure you have dnsmasq installed:
$ sudo dnf install dnsmasq
Add a file to /etc/NetworkManager/conf.d to enable use of dnsmasq. (Some systems may already have this setting in an existing file, depending on what's been done in the past. If that's the case, continue on without creating a new file)
$ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
[main]
dns=dnsmasq
EOF
Add DNS entries for crc:
$ tee external-crc.conf &>/dev/null <<EOF
address=/apps-crc.testing/SERVER_IP
address=/api.crc.testing/SERVER_IP
EOF
$ export SERVER_IP="your server's external IP address"
$ sed -i "s/SERVER_IP/$SERVER_IP/g" external-crc.conf
$ sudo cp external-crc.conf /etc/NetworkManager/dnsmasq.d/external-crc.conf
$ sudo systemctl reload NetworkManager
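To confirm the client now resolves the crc hostnames to the server, query them directly (dig is in the bind-utils package on Fedora; note that the apps entry is a wildcard, so any subdomain of apps-crc.testing should resolve):

```shell
# Both names should return the server's external IP
dig +short api.crc.testing
dig +short foo.apps-crc.testing
```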
Note: if you've previously run crc locally on the client machine, you likely have a /etc/NetworkManager/dnsmasq.d/crc.conf file that sets up dns for a local VM. Comment out those entries.
If you don't already have it, you can get the oc client here:
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/
The password for the kubeadmin account is printed when crc starts, but if you don't have it handy you can do this as the user running crc on the server:
$ crc console --credentials
Now just log in to OpenShift from your client machine using the standard crc URL:
$ oc login -u kubeadmin -p <kubeadmin password> https://api.crc.testing:6443
The OpenShift console will be available at https://console-openshift-console.apps-crc.testing
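After logging in, a quick sanity check from the client confirms the API is reachable through haproxy:

```shell
# List cluster nodes; a single crc node is expected
oc get nodes
# Show client and cluster versions
oc version
```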
Beginning in version 1.2.0, CodeReady Containers will renew embedded certificates when they expire (prior to 1.2.0 it was necessary to download and install a new version). When the certificates need to be renewed, this will be noted in the CRC log output and may take up to 5 minutes.
tl;dr I need a firewalld guru to teach me the dark art of firewall rule making.
The Kiali team's crc-openshift.sh script has remote CRC access working. I now also have it where I can change the base domain name from crc.testing to <host-ip>.nip.io (got some hints from @TheiLLeniumStudios in an earlier comment). See this PR which enhances the crc-openshift.sh for that base domain stuff. This will allow me to run multiple clusters of OpenShift in my home network on different computers. That PR has test instructions to see it work.

HOWEVER! I cannot get it all to work with the firewall enabled. If the firewall rules are enabled, everything works from the OC CLI perspective (I can oc login, I can oc get, etc). However, the OpenShift Console UI fails to log the user in (it gives me the login screen but it can't log me in). The reason is the console cannot call out to the nip.io address. For example, to see this, go to any pod in the cluster and try curl <ip>.nip.io - it will fail. But as soon as I stop firewalld on the host, it all works - I can log into the Console and everything is fine.

More specifically, if you attempt to log into the Console UI and it fails, look at the console logs:
oc logs -n openshift-console -l component=ui
it shows this error:
Turn off the firewalld, and it is fine - I can log in and can go about my work.
Does anyone know the firewall black magic to get this to work?
In short, I need firewalld rule(s) that allow pods in the CRC cluster to talk to the host's HAProxy (which is where the nip.io hostname goes to).
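A rich rule of roughly this shape is the kind of thing needed here (a sketch only - the libvirt zone name and the 192.168.130.0/24 subnet are assumptions based on the default crc network, and I haven't confirmed this fixes the Console):

```shell
# Allow pods on the CRC subnet to reach the host's HAProxy on 443 and 6443.
# Zone name and subnet are assumptions; check "sudo firewall-cmd --get-active-zones".
sudo firewall-cmd --permanent --zone=libvirt \
  --add-rich-rule='rule family="ipv4" source address="192.168.130.0/24" port port="443" protocol="tcp" accept'
sudo firewall-cmd --permanent --zone=libvirt \
  --add-rich-rule='rule family="ipv4" source address="192.168.130.0/24" port port="6443" protocol="tcp" accept'
sudo firewall-cmd --reload
```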
This is what I got so far (this enables OC CLI to work, but Console UI is still blocked):
I tried rich-rules - but... well, ChatGPT told me to try those but ChatGPT hates me and never tells me anything that works.
FWIW, CRC's IP is 192.168.130.11.
(For what it's worth, that crc-openshift.sh works if you disable firewalld. So have at it if you want a way to install CRC that can be remotely accessed and where you can install multiple CRCs on different machines in your cluster, as long as you don't care if firewalld is disabled. It's configurable in a bunch of ways so you don't have to remember a lot of the CRC settings - see --help for details)

UPDATE: I don't get it, but I can log in sometimes once I removed these (see below, these passthroughs were needed when I was using another type of openshift cluster). firewalld is a mystery wrapped in a riddle. Maybe I fixed it, maybe I didn't... I have no idea. I'll keep playing with it. But now I can log in once these were removed, but only sometimes (might be due to the browser caching some cookies containing login tokens, I dunno).
but still seeing internal errors:

authentication reporting: OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.192.168.1.20.nip.io/healthz": dial tcp 192.168.1.20:443: connect: connection refused

console reporting: RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.192.168.1.20.nip.io): Get "https://console-openshift-console.apps.192.168.1.20.nip.io": dial tcp 192.168.1.20:443: connect: connection refused