@tmckayus
Last active November 7, 2024 21:53
Running 'crc' on a remote server

Overview: running crc on a remote server

This document shows how to deploy an OpenShift instance on a server using CodeReady Containers (crc) that can be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers. Deploying this way also allows a user to create an instance that uses more CPU and memory resources than may be available on their laptop.

While there are benefits to this type of deployment, please note that the primary use case for crc is to deploy a local OpenShift instance on a workstation or laptop and access it directly from the same machine. The headless setup is configured completely outside of crc itself, and supporting a headless setup is beyond the mission of the crc development team. Please do not ask for changes to crc to support this type of deployment; it will only cost the team time as they politely decline :)

The instructions here were tested with Fedora on both the server (F30) and a laptop (F29).

Thanks to

Thanks to Marcel Wysocki from Red Hat for the haproxy solution and the entire CodeReady Containers team for crc!

Useful links

Red Hat blog article on CodeReady Containers

Download page on cloud.redhat.com

CRC documentation on github.io

Project sources on github

Download and setup CRC on a server

Go to the download page and get crc for Linux. You’ll also need the pull secret listed there during the installation process. Make sure to copy the crc binary to /usr/local/bin or somewhere on your path.
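
As a rough sketch of that install step (assuming you grabbed the Linux tar.xz archive from the download page; the exact archive and directory names vary by release):

$ tar -xvf crc-linux-amd64.tar.xz
$ sudo cp crc-linux-*-amd64/crc /usr/local/bin/
$ crc version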

The initial setup command only needs to be run once, and it creates a ~/.crc directory. Your user must have sudo privileges since crc will install dependencies for libvirt and modify the NetworkManager config:

$ crc setup

Note: occasionally on some systems this may fail with "Failed to restart NetworkManager". Just rerun crc setup a few times until it works.

Create an OpenShift Instance with CRC

$ crc start

You will be asked for the pull secret from the download page; paste it at the prompt.

Optionally, use the -m and -c flags to increase the VM size, for example a 32 GiB VM with 8 CPUs:

$ crc start -m 32768 -c 8

See the documentation or crc -h for other things you can do.

If you want to just use crc locally on this machine, you can stop here; you're all set!

Make sure you have haproxy and a few other things

sudo dnf -y install haproxy policycoreutils-python-utils jq

Modify the firewall on the server

$ sudo systemctl start firewalld
$ sudo firewall-cmd --add-port=80/tcp --permanent
$ sudo firewall-cmd --add-port=6443/tcp --permanent
$ sudo firewall-cmd --add-port=443/tcp --permanent
$ sudo systemctl restart firewalld
$ sudo semanage port -a -t http_port_t -p tcp 6443
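
Optionally, confirm the new rules and the SELinux port label are in place (standard firewall-cmd and semanage queries, nothing crc-specific):

$ sudo firewall-cmd --list-ports
$ sudo semanage port -l | grep http_port_t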

Configure haproxy on the server

The steps below will create an haproxy config file with placeholders, update the SERVER_IP and CRC_IP using sed, and copy the new file to the correct location. If you would like to edit the file manually, feel free :)

$ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
$ tee haproxy.cfg &>/dev/null <<EOF
global
        debug

defaults
        log global
        mode    http
        timeout connect 5000
        timeout client 5000
        timeout server 5000

frontend apps
    bind SERVER_IP:80
    bind SERVER_IP:443
    option tcplog
    mode tcp
    default_backend apps

backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP check

frontend api
    bind SERVER_IP:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check
EOF

$ export SERVER_IP=$(hostname --ip-address)
$ export CRC_IP=$(crc ip)
$ sed -i "s/SERVER_IP/$SERVER_IP/g" haproxy.cfg
$ sed -i "s/CRC_IP/$CRC_IP/g" haproxy.cfg
$ sudo cp haproxy.cfg /etc/haproxy/haproxy.cfg
$ sudo systemctl start haproxy
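
Optionally, validate the config and enable haproxy at boot so the proxy survives a reboot (standard haproxy and systemd commands):

$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl enable haproxy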

Setup NetworkManager on the client machine

NetworkManager needs to be configured to use dnsmasq for DNS. Make sure you have dnsmasq installed:

$ sudo dnf install dnsmasq

Add a file to /etc/NetworkManager/conf.d to enable use of dnsmasq. (Some systems may already have this setting in an existing file, depending on what's been done in the past. If that's the case, continue on without creating a new file)

$ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
[main]
dns=dnsmasq
EOF

Add DNS entries for crc:

$ tee external-crc.conf &>/dev/null <<EOF
address=/apps-crc.testing/SERVER_IP
address=/api.crc.testing/SERVER_IP
EOF

$ export SERVER_IP="your server's external IP address"
$ sed -i "s/SERVER_IP/$SERVER_IP/g" external-crc.conf
$ sudo cp external-crc.conf /etc/NetworkManager/dnsmasq.d/external-crc.conf
$ sudo systemctl reload NetworkManager

Note: if you've previously run crc locally on the client machine, you likely have a /etc/NetworkManager/dnsmasq.d/crc.conf file that sets up DNS for a local VM. Comment out those entries.
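
To confirm the client-side DNS changes took effect, resolving the crc names should now return your server's external IP (dig comes from the bind-utils package; nslookup works as well):

$ dig +short api.crc.testing
$ dig +short console-openshift-console.apps-crc.testing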

Get the oc binary on the client machine

If you don't already have it, you can get the oc client here:

https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/
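
For example, on a Linux client the archive is typically openshift-client-linux.tar.gz (the name varies by platform and release); extract it and put oc somewhere on your PATH:

$ curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
$ tar -xzf openshift-client-linux.tar.gz oc
$ sudo cp oc /usr/local/bin/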

Log in to the OpenShift instance from the client machine

The password for the kubeadmin account is printed when crc starts, but if you don't have it handy you can do this as the user running crc on the server:

$ crc console --credentials

Now just log in to OpenShift from your client machine using the standard crc URL:

$ oc login -u kubeadmin -p <kubeadmin password>  https://api.crc.testing:6443

The OpenShift console will be available at https://console-openshift-console.apps-crc.testing
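
As a quick sanity check of the remote instance after logging in, the usual oc commands should work from the client:

$ oc whoami
$ oc get nodes
$ oc get clusterversion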

Renewal of expired certificates

Beginning in version 1.2.0, CodeReady Containers will renew embedded certificates when they expire (prior to 1.2.0 it was necessary to download and install a new version). When the certificates need to be renewed, this will be noted in the CRC log output and may take up to 5 minutes.

@EmmanuelKasper

EmmanuelKasper commented Aug 22, 2020

With a recent haproxy, I am using the following configuration:

$ rpm -qi haproxy
haproxy-1.8.23-3.el8.x86_64

It works well and does not spit warnings everywhere in the haproxy log :)

$ cat /etc/haproxy.cfg

global
        debug
        log 127.0.0.1 local0

defaults
        log global
        mode http
        timeout connect 0
        timeout client 0
        timeout server 0

listen  apps-http
        bind HOST_IP:80
        option tcplog
        mode tcp
        server webserver1 CRC_IP:80 check

listen  apps-https
        bind HOST_IP:443
        option tcplog
        mode tcp
        option ssl-hello-chk
        server webserver1 CRC_IP:443 check

listen  api
        bind HOST_IP:6443
        option tcplog
        mode tcp
        option ssl-hello-chk
        server webserver1 CRC_IP:6443 check

@jboxman

jboxman commented Aug 22, 2020

That works also with:

rpm -qi haproxy
Name        : haproxy
Version     : 2.1.7
Release     : 1.fc32

Thanks!

@evanshortiss

If you run into issues with Pod logs in the UI failing to load, give this haproxy.cfg a try. It will support the WebSocket upgrade:

defaults
    mode http
    log global
    option httplog
    option  http-server-close
    option  dontlognull
    option  redispatch
    option  contstats
    retries 3
    backlog 10000
    timeout client          25s
    timeout connect          5s
    timeout server          25s
    timeout tunnel        3600s
    timeout http-keep-alive  1s
    timeout http-request    15s
    timeout queue           30s
    timeout tarpit          60s
    default-server inter 3s rise 2 fall 3
    option forwardfor

frontend apps
    bind HOST_IP:80
    bind HOST_IP:443
    option tcplog
    mode tcp
    default_backend apps

backend apps
    mode tcp
    balance roundrobin
    option tcp-check
    server webserver1 CRC_IP check port 80

frontend api
    bind HOST_IP:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option tcp-check
    server webserver1 CRC_IP:6443 check port 6443

@rlzh

rlzh commented Oct 28, 2020

For anyone trying to use a macOS client machine:

Thanks for this - I have the web portal working remotely but cannot get the command line working remotely - oc. Did you get the command line working with this method?

I am currently facing the same situation. Were you able to resolve this?

@2ECC71

2ECC71 commented Oct 28, 2020

For anyone trying to use a macOS client machine:

Thanks for this - I have the web portal working remotely but cannot get the command line working remotely - oc. Did you get the command line working with this method?

I am currently facing the same situation. Were you able to resolve this?

Hm, if I remember correctly I had it working. But I don’t have the setup around anymore so I can’t confirm.

@evanshortiss

Can confirm the CLI works for me on macOS Mojave. Details in this blogpost in the "Configure DNS on your Development Machine" section.

@rlzh

rlzh commented Nov 5, 2020

Can confirm the CLI works for me on macOS Mojave. Details in this blogpost in the "Configure DNS on your Development Machine" section.

Thanks for the comment and blog post. Turns out I was missing a firewall rule in GCP to allow traffic via TCP 6443 in my case.

@rsletten

@evanshortiss

Very nice. Added the docs link as the first line of my blogpost.

@willisad

willisad commented Apr 26, 2021

Has anybody managed to get console-openshift-console.apps-crc.testing running on a remote (cloud) server?
I am trying to get it to work like my Ansible console with a ProxyPass "/" "https://console-openshift-console.apps-crc.testing/" in Apache, but instead of returning the console, it seems to try to resolve console-openshift-console.apps-crc.testing as a public IP.

My problem is I have a public IP listening for request in the cloud and then it forwards traffic to an internal IP (running CRC).
As an example:
#Ansible works OK with
ProxyPass https://192.168.130.111:443;

CRC tries to resolve

ProxyPass https://console-openshift-console.apps-crc.testing;

-- Update on this;
I have added console-openshift-console.apps-crc.testing/ to my local DNS config, which resolves to the correct IP, but still this doesn't work.
Although Apache is acting as a reverse proxy, I am not sure I need to add the IP as a proxy server. This is my next step. Will keep you informed.

@bicycleboy

Very useful guide @tmckayus!

Just a little comment: the haproxy config file is missing the port in the backend apps (CRC_IP instead of CRC_IP:PORT), and haproxy fails at start:

$ systemctl status haproxy -l
● haproxy.service - HAProxy Load Balancer
      Active: failed (Result: exit-code) since Thu 2020-03-26 14:33:00 CET; 4s ago
         [/etc/haproxy/haproxy.cfg:22] : server webserver1 has neither service port nor check port nor tcp_check rule 'connect' with port information. Check has been disabled

I fixed it by adding port 443 to that line:

server webserver1 CRC_IP:443 check

With this config, from a remote machine I could access the console but not an application hosted by CRC. I used
server webserver1 CRC_IP check port 443
and can access both the console and deployed apps.

@phvajjha

I am using Windows 10 Laptop. How do I configure DNS client with the settings mentioned in the dnsmasq settings?

Any luck with a windows client machine?

@efrozo23

It worked for me by editing the hosts file:

IP_PUBLIC api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing

The bad thing is that for each pod I have to add the host to the file

@berndbausch

It worked for me by editing the hosts file:

Same solution as efrozo23 on June 17, and same problem having to manually add all pods' domain names to /etc/hosts.

My next idea was to forward crc.testing and apps-crc.testing DNS requests to the CRC VM at 192.168.130.11. I configured forwarding zones on my BIND DNS server. The problem is that my BIND uses DNSSEC (that's the default, and I don't want to change it), and I can't set up DNSSEC on the CRC's DNS server. Therefore, BIND refuses to forward queries to the CRC VM.

@cg2p

cg2p commented Dec 21, 2021

Thank you @tmckayus

To you and the group - any ideas on the problem I have?

  • two servers (VMs on IBM Cloud) - host A and host B

  • I wireguard into the VPN and then shell into host A and host B on their private IPs

  • install CRC on host A as per the guide with haproxy

  • nothing is in host A /etc/hosts

  • so I am not using dnsmasq on host B that is the remote client to CRC host A

  • I am using IBM Cloud DNS service and set up various 'A' records for the "acme.com" domain

  • I have four A records that all point at external private IP of host A (10... not the 192.168.130.11 address):

oauth-openshift.apps-crc.testing
default-route-openshift-image-registry.apps-crc.testing
console-openshift-console.apps-crc.testing
apps-crc.testing
  • on host B I can curl -kLv https://console-openshift-console.apps-crc.testing and I get the console HTML e.g. ..<title>Red Hat OpenShift Container Platform</title>...
  • but then on host B if I do curl -kLv https://console-openshift-console.apps-crc.testing.acme.com I get ...<h1>Application is not available</h1>...

Host A is getting through to host B, going through the haproxy and CRC is serving back HTML, so what could it be? ... something in OpenShift routes, DNS, maybe DNS servers that either host A or B can see ...

Thanks for support

@robertluwang

I am using Windows 10 Laptop. How do I configure DNS client with the settings mentioned in the dnsmasq settings?

This remedy works for me: I installed a CentOS VM on top of VMware Workstation on my Win10 laptop and access the CentOS VM via NAT.

  1. set up haproxy on the CentOS VM; keep in mind to change all SERVER_IP entries in /etc/haproxy/haproxy.cfg
    SERVER_IP = NAT IP (for example ens33, 100.94.195.133)
  2. no DNS setup needed on Win10; just add an entry to the Win10 hosts file as Administrator
    100.94.195.133 api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing
  3. download and install the oc client on Win10 and run as below,
    oc login -u kubeadmin -p <kubeadmin password> https://api.crc.testing:6443
  4. GUI access on local win10 browser
    https://console-openshift-console.apps-crc.testing

@ultraJeff

It worked for me by editing the hosts file:

Same solution as efrozo23 on June 17, and same problem having to manually add all pods' domain names to /etc/hosts.

If it helps anyone, I had this same problem too and here's how I solved it:

I'm on OS X Big Sur and followed the guide here https://www.stevenrombauts.be/2018/01/use-dnsmasq-instead-of-etc-hosts/.

Despite the article, brew install dnsmasq uses the dnsmasq.conf file at /opt/homebrew/etc/dnsmasq.conf and not the stated /usr/etc/dnsmasq.conf location. Adding

address=/apps-crc.testing/SERVER_IP
address=/api.crc.testing/SERVER_IP

to /opt/homebrew/etc/dnsmasq.conf eliminated the need for the manual entries in the /etc/hosts file for me.

P.S. I wrote the /etc/resolver/testing file that's needed basically the same way:

nameserver 127.0.0.1
domain testing

@imperialguy

imperialguy commented Jan 5, 2023

On Mac OS Ventura 13.1, I have the following:

$ cat /usr/local/etc/dnsmasq.d/development.conf
address=/apps-crc.testing/<openshift_server_ip_addr>
address=/api.crc.testing/<openshift_server_ip_addr>

The following works perfectly in my browser --> https://console-openshift-console.apps-crc.testing. Meaning, I am able to access the openshift console from Mac OS.

But, the following fails:

$ oc login https://api.crc.testing:6443 --insecure-skip-tls-verify=true --token=sha256~xxxxxxxxx
error: dial tcp: lookup api.crc.testing on 8.8.8.8:53: no such host - verify you have provided the correct host and port and that the server is currently running.

I tried putting the following line in both /etc/resolv.conf and /etc/resolver/testing files as well:

nameserver <openshift_server_ip_addr>

Any ideas?

@evanshortiss

I think that

nameserver <openshift_server_ip_addr>

Should be:

nameserver 127.0.0.1

This should make sure dnsmasq is used for resolving the address, since right now it seems to be using 8.8.8.8?

@jboxman

jboxman commented Jan 5, 2023

So the issue here is because reasons, Go doesn't support OS X's magic way of dealing with DNS resolution, so for the CLI binary you need to actually edit your /etc/hosts on OS X and put in the API server IP for this to work. For example I have:

192.168.70.30 api.crc.testing oauth-openshift.apps-crc.testing
192.168.70.30 default-route-openshift-image-registry.apps-crc.testing

I filed a bug on this like 4 years ago, but it was punted as a problem with the Go toolchain itself.

@Rusver

Rusver commented Feb 3, 2023

Maybe a stupid question, but... If I have CRC running in a CentOS docker container, how can I apply this to access the CRC web console from the host machine? I can't figure out how to do it!

Thanks for the question! This is a good one for the community; as with several of the questions here I have no idea :)

@tmckayus @kambei @willisad

I found another haproxy cfg that worked for me

this is the link

https://sanjitcibm.medium.com/getting-started-with-openshift-local-f59c07cbfd4c

I've tested it on Azure cloud with Ubuntu 22.04.

@TheiLLeniumStudios

TheiLLeniumStudios commented Aug 7, 2023

I wrapped everything in an automated script that sets everything up. Feel free to check it out: https://github.com/iLLeniumStudios/remote-crc-setup

Aiming to make it more configurable via vars soon

@akosdudas

When SELinux is enabled on the host machine, haproxy cannot talk to crc by default. The symptom is that all HTTP requests are terminated without a response. After setting the following, it seems to work now. (I used a RHEL 8 host machine.)

sudo setsebool -P haproxy_connect_any=1
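
(For reference, the current value of that boolean can be checked with the standard getsebool utility:)

getsebool haproxy_connect_any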

@jmazzitelli

jmazzitelli commented Apr 2, 2024

tl;dr I need a firewalld guru to teach me the dark art of firewall rule making.

The Kiali team's crc-openshift.sh script has remote CRC access working. I now also have it where I can change the base domain name from crc.testing to <host-ip>.nip.io (got some hints from @TheiLLeniumStudios in an earlier comment). See this PR which enhances the crc-openshift.sh for that base domain stuff. This will allow me to run multiple clusters of OpenShift in my home network on different computers. That PR has test instructions to see it work.

HOWEVER! I cannot get it all to work with the firewall enabled. If the firewall rules are enabled, everything works from the OC CLI perspective (I can oc login, I can oc get, etc). However, the OpenShift Console UI fails to log the user in (it gives me the login screen but it can't log me in). The reason is the console cannot call out to the nip.io address. For example, to see this, go to any pod in the cluster and try curl <ip>.nip.io; it will fail. But as soon as I stop firewalld on the host, it all works - I can log into the Console and everything is fine.

More specifically, if you attempt to log into the Console UI and it fails, look at the console logs:

oc logs -n openshift-console -l component=ui

it shows this error:

failed to get latest auth source data: request to OAuth issuer endpoint
https://oauth-openshift.apps.192.168.1.20.nip.io/oauth/token failed:
Head "https://oauth-openshift.apps.192.168.1.20.nip.io":
dial tcp 192.168.1.20:443: connect: connection refused

Turn off the firewalld, and it is fine - I can log in and can go about my work.

Does anyone know the firewall black magic to get this to work?

In short, I need firewalld rule(s) that allow pods in the CRC cluster to talk to the host's HAProxy (which is where the nip.io hostname goes to).

This is what I got so far (this enables OC CLI to work, but Console UI is still blocked):

 sudo firewall-cmd --add-forward-port="port=443:proto=tcp:toport=443:toaddr=192.168.130.11"
 sudo firewall-cmd --add-forward-port="port=6443:proto=tcp:toport=6443:toaddr=192.168.130.11"
 sudo firewall-cmd --add-forward-port="port=80:proto=tcp:toport=80:toaddr=192.168.130.11"
 sudo firewall-cmd --direct --passthrough ipv4 -I FORWARD -i crc -j ACCEPT
 sudo firewall-cmd --direct --passthrough ipv4 -I FORWARD -o crc -j ACCEPT

I tried rich-rules - but... well, ChatGPT told me to try those but ChatGPT hates me and never tells me anything that works.

FWIW, CRC's IP is 192.168.130.11

sudo firewall-cmd --add-rich-rule='rule family=ipv4 source address=192.168.130.11 port protocol=tcp port=443 accept'
sudo firewall-cmd --add-rich-rule='rule family=ipv4 source address=192.168.130.11 port protocol=tcp port=6443 accept'
sudo firewall-cmd --add-rich-rule='rule family=ipv4 source address=192.168.130.11 port protocol=tcp port=80 accept'

(For what it's worth, that crc-openshift.sh works if you disable firewalld. So have at it if you want a way to install CRC that can be remotely accessed and where you can install multiple CRCs on different machines in your cluster, as long as you don't care if firewalld is disabled. It's configurable in a bunch of ways so you don't have to remember a lot of the CRC settings - see --help for details)

UPDATE: I don't get it, but I can log in sometimes once I removed these (see below; these passthroughs were needed when I was using another type of OpenShift cluster). firewalld is a mystery wrapped in a riddle. Maybe I fixed it, maybe I didn't. I have no idea. I'll keep playing with it. But now I can log in once these were removed, but only sometimes (might be due to the browser caching some cookies containing login tokens, I dunno).

firewall-cmd --direct --passthrough ipv4 -I FORWARD -i crc -j ACCEPT
firewall-cmd --direct --passthrough ipv4 -I FORWARD -o crc -j ACCEPT

but still seeing internal errors:

  • clusteroperators.config.openshift.io named authentication reporting: OuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.192.168.1.20.nip.io/healthz": dial tcp 192.168.1.20:443: connect: connection refused
  • clusteroperators.config.openshift.io named console reporting: RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.192.168.1.20.nip.io): Get "https://console-openshift-console.apps.192.168.1.20.nip.io": dial tcp 192.168.1.20:443: connect: connection refused
