This document shows how to deploy an OpenShift instance on a server using CodeReady Containers (crc) so that it can be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers. Deploying this way also lets a user create an instance that uses more CPU and memory than may be available on his or her laptop.
While there are benefits to this type of deployment, please note that the primary use case for crc is to deploy a local OpenShift instance on a workstation or laptop and access it directly from the same machine. The headless setup is configured entirely outside of crc itself, and supporting a headless setup is beyond the mission of the crc development team. Please do not ask for changes to crc to support this type of deployment; it will only cost the team time as they politely decline :)
The original gist by Roberto has instructions for configuring the firewall and haproxy, and for setting up NetworkManager on the client machine.
In this new gist:

- Link to resize CRC (I have not attempted this myself, but other users have had success)
- Removed NetworkManager configuration on the client machine (laptop)
- Added instructions to disable the built-in dnsmasq used by NetworkManager after CRC has been installed and set up
- Create our own dnsmasq entries and configure dnsmasq to also be a DNS server
- Add our new DNS server as an additional entry on the client machine (MacBook laptop)
The instructions here were tested with a CentOS 7 VM (where CRC is deployed) and a MacBook (to access the remote instance).
Thanks to Marcel Wysocki from Red Hat for the haproxy solution and the entire CodeReady Containers team for crc!
Thanks to Roberto Carratala for writing up the gist.
Red Hat blog article on CodeReady Containers
Download page on cloud.redhat.com
CRC documentation on github.io
Go to the download page and get crc for Linux. You’ll also need the pull secret listed there during the installation process. Make sure to copy the crc binary to /usr/local/bin or somewhere on your PATH.
The initial setup command only needs to be run once, and it creates a ~/.crc directory. Your user must have sudo privileges since crc will install dependencies for libvirt and modify the NetworkManager config:
$ crc setup
Note: occasionally on some systems this may fail with “Failed to restart NetworkManager”. Just rerun crc setup a few times until it works. Also, make sure nested virtualization is enabled.
$ crc start
You will be asked for the pull secret from the download page, paste it at the prompt.
Optionally, use the -m and -c flags to increase the VM size, for example a 32GiB with 8 cpus:
$ crc start -m 32768 -c 8
See the documentation or crc -h for other things you can do
If you want to just use crc locally on this machine, you can stop here, you’re all set!
$ sudo yum install epel-release -y
$ sudo yum install policycoreutils-python haproxy jq -y
$ sudo systemctl start firewalld
$ sudo firewall-cmd --add-port=80/tcp --permanent
$ sudo firewall-cmd --add-port=6443/tcp --permanent
$ sudo firewall-cmd --add-port=443/tcp --permanent
$ sudo systemctl restart firewalld
$ sudo semanage port -a -t http_port_t -p tcp 6443
The steps below will create an haproxy config file with placeholders, update the SERVER_IP and CRC_IP using sed, and copy the new file to the correct location. If you would like to edit the file manually, feel free :)
$ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
$ tee haproxy.cfg &>/dev/null <<EOF
global
    debug
defaults
    log global
    mode http
    timeout connect 5000
    timeout client 5000
    timeout server 5000
frontend apps
    bind SERVER_IP:80
    bind SERVER_IP:443
    option tcplog
    mode tcp
    default_backend apps
backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP check
frontend api
    bind SERVER_IP:6443
    option tcplog
    mode tcp
    default_backend api
backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check
EOF
$ export SERVER_IP=$(hostname --ip-address)
$ export CRC_IP=$(crc ip)
$ sed -i "s/SERVER_IP/$SERVER_IP/g" haproxy.cfg
$ sed -i "s/CRC_IP/$CRC_IP/g" haproxy.cfg
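Before copying the rendered file into place, it's worth confirming the substitution pattern behaves as expected. A minimal sketch on a throwaway file (the IPs here are hard-coded example values, not your real ones):

```shell
# Demo of the SERVER_IP/CRC_IP templating on a throwaway file
SERVER_IP=192.168.128.98   # example value; normally $(hostname --ip-address)
CRC_IP=192.168.130.11      # example value; normally $(crc ip)
demo=$(mktemp)
printf 'bind SERVER_IP:6443\nserver webserver1 CRC_IP:6443 check\n' > "$demo"
sed -i "s/SERVER_IP/$SERVER_IP/g; s/CRC_IP/$CRC_IP/g" "$demo"
cat "$demo"
```

On the real file, `grep -E 'SERVER_IP|CRC_IP' haproxy.cfg` should print nothing once both sed commands have run; if it still matches, rerun them before starting haproxy.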
$ sudo cp haproxy.cfg /etc/haproxy/haproxy.cfg
$ sudo systemctl start haproxy
CRC uses the built-in dnsmasq in NetworkManager. We're going to disable that and configure our own.
$ cd /etc/NetworkManager/conf.d/
$ sudo vi crc-nm-dnsmasq.conf
Change the dns variable to equal none
This is what your crc-nm-dnsmasq.conf should look like:
[main]
dns=none
Exit and save your new crc-nm-dnsmasq.conf file. Run the following command to restart the NetworkManager:
$ sudo systemctl restart NetworkManager
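The dns=none edit can also be made non-interactively with sed instead of vi. A minimal sketch of the pattern, demonstrated on a temp copy (on the real host, point the sed at /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf with sudo, then restart NetworkManager as above):

```shell
# Demo on a temp copy; the real file is /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf
conf=$(mktemp)
printf '[main]\ndns=dnsmasq\n' > "$conf"    # example starting content
sed -i 's/^dns=.*/dns=none/' "$conf"        # rewrite any dns= line to dns=none
cat "$conf"
```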
Verify that nothing is listening on 127.0.0.1 port 53 with the following command:
$ netstat -tulnp
Also, verify that dnsmasq is currently inactive:
$ systemctl status dnsmasq
Find out what your ip address is and interface type:
$ ip a
In my VM my interface type is ens192 and my VM's ip is 192.168.128.98. You should have a similar output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:f0:83:34 brd ff:ff:ff:ff:ff:ff
inet 192.168.128.98/24 brd 192.168.128.255 scope global noprefixroute dynamic ens192
valid_lft 378924sec preferred_lft 378924sec
inet6 fe80::370c:91b8:59b2:e707/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Navigate to /etc and make a copy of the current dnsmasq.conf:
$ cp /etc/dnsmasq.conf /etc/dnsmasq.conf.bak
Modify dnsmasq.conf, using your interface name and IP address, so it contains the following options:
interface=ens192
listen-address=127.0.0.1,192.168.128.98
bind-interfaces
# if needed, use your internal environment's DNS instead of Google's
server=8.8.8.8
server=8.8.4.4
If 01-crc.conf doesn't exist in your dnsmasq.d folder, create it:
$ cd /etc/dnsmasq.d/
$ touch 01-crc.conf
Then we have to add our CRC entries to dnsmasq with our host's IP address like the following:
$ vi /etc/dnsmasq.d/01-crc.conf
address=/apps-crc.testing/192.168.128.98
address=/api.crc.testing/192.168.128.98
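These entries can also be written with a heredoc instead of vi. A sketch, writing to a temp path for illustration (on the host the target is /etc/dnsmasq.d/01-crc.conf via sudo tee, and HOST_IP should be your server's address):

```shell
HOST_IP=192.168.128.98   # replace with your host's IP
outfile=$(mktemp)        # on the host: /etc/dnsmasq.d/01-crc.conf
tee "$outfile" >/dev/null <<EOF
address=/apps-crc.testing/$HOST_IP
address=/api.crc.testing/$HOST_IP
EOF
cat "$outfile"
```

The address=/domain/ip form makes dnsmasq answer for the domain and every subdomain, which is what lets wildcard routes under apps-crc.testing resolve.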
Modify /etc/resolv.conf so your host uses its own IP as a nameserver:
# Generated by NetworkManager
nameserver 192.168.128.98
You may have to restart NetworkManager.
Navigate to Network Preferences. Under your current network, click Advanced..., then go to the DNS tab. Add your CRC host's IP address as an ADDITIONAL entry. If your DNS Servers list is empty, add your preferred DNS (internal, Google, OpenDNS, etc.) as the first entry, then add your CRC host IP as the second.
If you don't already have it, you can get the oc client here:
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/
The password for the kubeadmin account is printed when crc starts, but if you don't have it handy you can do this as the user running crc on the server:
$ crc console --credentials
Now just log in to OpenShift from your client machine using the standard crc URL:
$ oc login -u kubeadmin -p <kubeadmin password> https://api.crc.testing:6443
The OpenShift console will be available at https://console-openshift-console.apps-crc.testing
See workaround instructions here: https://access.redhat.com/solutions/4969811. Currently being tracked at this GitHub issue: crc-org/crc#127
Beginning in version 1.2.0, CodeReady Containers will renew embedded certificates when they expire (prior to 1.2.0 it was necessary to download and install a new version). When the certificates need to be renewed, this will be noted in the CRC log output and may take up to 5 minutes.