Setup CodeReady Containers on a Remote Server and connect from a Laptop (OpenShift 4.2, Fedora 31)

Set up CodeReady Containers 1.4.0 on a remote server and connect from a laptop.
These steps work for Fedora 31.

On-the-remote-host

[root@node ~]# cat /etc/redhat-release
Fedora release 31 (Thirty One)

Install podman

[root@node ~]# podman
-bash: podman: command not found
[root@node ~]# yum install -y podman
[root@node ~]# podman info
host:
  BuildahVersion: 1.12.0
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.9-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.9, commit: 7d46f3e7711aa3578488284ae2f98b447658f086'
  Distribution:
    distribution: fedora
    version: "31"
  MemFree: 83159617536
  MemTotal: 84374663168
  OCIRuntime:
    name: crun
    package: crun-0.10.6-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.10.6
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 4294963200
  SwapTotal: 4294963200
  arch: amd64
  cpus: 16
  eventlogger: journald
  hostname: sde-ci-works06.3a2m.lab.eng.bos.redhat.com
  kernel: 5.4.12-200.fc31.x86_64
  os: linux
  rootless: false
  uptime: 49h 21m 52.61s (Approximately 2.04 days)
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions:
    overlay.mountopt: nodev,metacopy=on
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

Test podman

[root@node ~]# podman pull alpine
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob c9b1b535fdd9 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a
[root@node ~]# podman images
REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
docker.io/library/alpine   latest   e7d92cdc71fe   12 days ago   5.86 MB
[root@node ~]# podman run --rm ubuntu /bin/echo "Computing for Geeks"
Trying to pull docker.io/library/ubuntu...
Getting image source signatures
Copying blob 19a861ea6baf done
Copying blob 651c9d2d6c4f done
Copying blob c63719cdbe7a done
Copying blob 5c939e3a4d10 done
Copying config ccc6e87d48 done
Writing manifest to image destination
Storing signatures
Computing for Geeks

Install packages

[root@node ~]# dnf config-manager --set-enabled fedora
[root@node ~]# su -c 'dnf -y install git wget tar qemu-kvm libvirt NetworkManager jq tinyproxy'
[root@node ~]# sudo systemctl enable --now libvirtd
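
crc requires hardware virtualization, which crc setup also checks later. To confirm up front that the host exposes it, a quick sanity check (virt-host-validate comes with the libvirt packages installed above):

[root@node ~]# egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means VT-x/AMD-V is available
[root@node ~]# virt-host-validate qemu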

Create a new user

[root@node ~]# sudo adduser demouser
[root@node ~]# sudo passwd demouser
Changing password for user demouser.
New password:
[root@node ~]# sudo usermod -aG wheel demouser
[root@node ~]# sudo passwd -d demouser
Removing password for user demouser.
passwd: Success

Test that demouser no longer needs a password.

[root@node ~]# su demouser
[demouser@node root]$
[demouser@node root]$ exit
[root@node ~]#
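
crc setup and crc start need root access for a few steps and will call sudo. If sudo still prompts for a password despite the deleted one, a common workaround on a throwaway lab host (a sketch, not a security recommendation) is passwordless sudo:

[root@node ~]# echo 'demouser ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/demouser
[root@node ~]# chmod 440 /etc/sudoers.d/demouser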

Setup tinyproxy

[root@node ~]# vi /etc/tinyproxy/tinyproxy.conf

Edit the /etc/tinyproxy/tinyproxy.conf file. Comment out the Allow 127.0.0.1 line, since it restricts access to connections from the host itself. Then search for ConnectPort and add the following line:

ConnectPort 6443
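
For reference, the relevant lines of /etc/tinyproxy/tinyproxy.conf should end up looking roughly like this (assuming the default listen port 8888 and the stock ConnectPort entries are kept):

Port 8888
#Allow 127.0.0.1
ConnectPort 443
ConnectPort 563
ConnectPort 6443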

Save the file and start tinyproxy.

[root@node ~]# sudo systemctl enable --now tinyproxy

Setup CodeReady Containers

[root@node ~]# su demouser
[demouser@node root]$ cd /home/demouser

Go to https://cloud.redhat.com/openshift/install/crc/installer-provisioned and download the CodeReady Containers archive.

[demouser@node ~]$ wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/1.4.0/crc-linux-amd64.tar.xz
[demouser@node ~]$ tar -xvf crc-linux-amd64.tar.xz
[demouser@node ~]$ cd crc-linux-1.4.0-amd64/
[demouser@node crc-linux-1.4.0-amd64]$ sudo cp ./crc /usr/bin
[demouser@node crc-linux-1.4.0-amd64]$ cd ~
[demouser@node ~]$

Find the available memory of the system.

[demouser@node ~]$ vmstat -s
     82397136 K total memory
       375372 K used memory
      3165912 K active memory
[demouser@node ~]$ crc config set memory 71680
Changes to configuration property 'memory' are only applied when a new CRC instance is created.
If you already have a CRC instance, then for this configuration change to take effect, delete the CRC instance with 'crc delete' and start a new one with 'crc start'.
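
One way to arrive at a value like 71680 (crc takes the value in MiB) is to take the host's total memory and subtract headroom for the host OS. A hypothetical one-liner that leaves about 10 GiB free:

[demouser@node ~]$ awk '/MemTotal/ {printf "%d\n", $2/1024 - 10240}' /proc/meminfo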
[demouser@node ~]$ crc setup
INFO Checking if oc binary is cached
INFO Checking if CRC bundle is cached in '$HOME/.crc'
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Adding user to libvirt group
INFO Will use root access: add user to libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Starting libvirt service
INFO Will use root access: start libvirtd service
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Installing crc-driver-libvirt
INFO Checking for obsolete crc-driver-libvirt
INFO Checking if libvirt 'crc' network is available
INFO Setting up libvirt 'crc' network
INFO Checking if libvirt 'crc' network is active
INFO Starting libvirt 'crc' network
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Writing Network Manager config for crc
INFO Will use root access: write NetworkManager config in /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf
INFO Will use root access: execute systemctl daemon-reload command
INFO Will use root access: execute systemctl stop/start command
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Writing dnsmasq config for crc
INFO Will use root access: write dnsmasq configuration in /etc/NetworkManager/dnsmasq.d/crc.conf
INFO Will use root access: execute systemctl daemon-reload command
INFO Will use root access: execute systemctl stop/start command
Setup is complete, you can now run 'crc start' to start the OpenShift cluster

The pull secret is also available on https://cloud.redhat.com/openshift/install/crc/installer-provisioned
Copy and paste it when prompted by crc start.

[demouser@node ~]$ crc start
INFO Checking if oc binary is cached
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
? Image pull secret [? for help] ********************************
INFO Extracting bundle: crc_libvirt_4.2.13.crcbundle ...
INFO Creating CodeReady Containers VM for OpenShift 4.2.13...
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Check DNS query from host ...
INFO Copying kubeconfig file to instance dir ...
INFO Adding user's pull secret ...
INFO Updating cluster ID ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p cznQP-n4pBk-cnXTg-nkevH https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation

To print the credentials again, run the command below.

[demouser@node ~]$ crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p cznQP-n4pBk-cnXTg-nkevH https://api.crc.testing:6443'

Check if CodeReady Containers work

Run an app and look at the logs.
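
The oc binary that crc caches is not on the PATH by default; as the crc start output above suggests, set up the shell environment first:

[demouser@node ~]$ eval $(crc oc-env)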

[demouser@node ~]$ oc login -u kubeadmin -p cznQP-n4pBk-cnXTg-nkevH https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Login successful.

You have access to 51 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
Welcome! See 'oc help' to get started.

[demouser@node ~]$ oc get nodes
NAME                 STATUS   ROLES           AGE   VERSION
crc-k4zmd-master-0   Ready    master,worker   26d   v1.14.6+8e46c0036
[demouser@node ~]$ oc new-app https://github.com/sclorg/cakephp-ex
....
--> Creating resources ...
    imagestream.image.openshift.io "cakephp-ex" created
    buildconfig.build.openshift.io "cakephp-ex" created
    deploymentconfig.apps.openshift.io "cakephp-ex" created
    service "cakephp-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/cakephp-ex' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/cakephp-ex'
    Run 'oc status' to view your app.
[demouser@node ~]$ oc get pods
NAME                  READY   STATUS      RESTARTS   AGE
cakephp-ex-1-build    0/1     Completed   0          3m20s
cakephp-ex-1-deploy   0/1     Completed   0          52s
cakephp-ex-1-tx42f    1/1     Running     0          37s
[demouser@node ~]$ oc logs cakephp-ex-1-tx42f
=> sourcing 20-copy-config.sh ...
---> 18:53:59     Processing additional arbitrary httpd configuration provided by s2i ...
=> sourcing 00-documentroot.conf ...
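
The oc new-app output above suggests exposing the service to make it reachable via a route. For example (service name taken from the output above):

[demouser@node ~]$ oc expose svc/cakephp-ex
[demouser@node ~]$ oc get route cakephp-ex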


On-the-Laptop

Install oc client

mylaptop# wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.2.16/openshift-client-linux-4.2.16.tar.gz
mylaptop# tar -xvf openshift-client-linux-4.2.16.tar.gz
mylaptop# cp oc /usr/local/bin
mylaptop# oc version
Client Version: openshift-clients-4.2.2-201910250432-12-g72076900

BROWSER: Setup SSH tunneling to access the OpenShift console from a browser

Edit /etc/hosts on the laptop and make sure it contains the entry given below.

127.0.0.1 localhost console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing

mylaptop# sudo ssh root@<REMOTE_HOST> -L 443:console-openshift-console.apps-crc.testing:443

Enter the laptop's sudo password first, then the remote system's root password.
The OpenShift console is then available at https://console-openshift-console.apps-crc.testing
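
As an alternative to the tinyproxy approach in the next section, the API port can be forwarded over the same kind of tunnel (a sketch; this also requires adding api.crc.testing to the laptop's /etc/hosts entry above):

mylaptop# sudo ssh root@<REMOTE_HOST> -L 6443:api.crc.testing:6443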

TERMINAL: Setup for the oc client to work

# in a new terminal; tinyproxy listens on port 8888 on the remote host
mylaptop# export https_proxy=http://<REMOTE_HOST>:8888
# now test api endpoint
mylaptop# curl -k https://api.crc.testing:6443

Now you can log in using the oc client tool.

mylaptop# oc login -u kubeadmin -p e4FEb-9dxdF-9N2wH-Dj7B8 https://api.crc.testing:6443

To access routes

# get the route
mylaptop# oc get routes
NAME     HOST/PORT                                  PATH   SERVICES           PORT   TERMINATION   WILDCARD
tekton   myservice-mynamespace.apps-crc.testing          tekton-dashboard   http                 None

Add the route's hostname to the /etc/hosts file as given below.

127.0.0.1 localhost console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing myservice-mynamespace.apps-crc.testing

SSH tunnel in a new terminal

mylaptop# sudo ssh root@<REMOTE_HOST> -L 80:myservice-mynamespace.apps-crc.testing:80

The UI is then available at http://myservice-mynamespace.apps-crc.testing (the tunnel forwards port 80).

misc

# libvirtd issues FC31
virsh list --all
virsh undefine <id>
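
If a stale crc VM blocks a fresh start, a fuller cleanup looks roughly like this (the domain and the libvirt network are both named crc by a default install):

virsh destroy crc
virsh undefine crc
virsh net-destroy crc
virsh net-undefine crc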
@kerberos5 commented:

Hi,
I followed your guide but it does not work. I receive:

DEBU (crc) DBG | time="2023-05-01T11:22:26+02:00" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2023-05-01T11:22:26+02:00" level=debug msg="Waiting for machine to come up 0/60"
DEBU (crc) DBG | time="2023-05-01T11:22:29+02:00" level=debug msg="GetIP called for crc"
DEBU (crc) DBG | time="2023-05-01T11:22:29+02:00" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2023-05-01T11:22:29+02:00" level=debug msg="Waiting for machine to come up 1/60"
DEBU (crc) DBG | time="2023-05-01T11:22:32+02:00" level=debug msg="GetIP called for crc"
DEBU (crc) DBG | time="2023-05-01T11:22:32+02:00" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2023-05-01T11:22:32+02:00" level=debug msg="Waiting for machine to come up 2/60"
[...]
DEBU (crc) DBG | time="2023-05-01T11:25:20+02:00" level=debug msg="Waiting for machine to come up 58/60"
DEBU (crc) DBG | time="2023-05-01T11:25:23+02:00" level=debug msg="GetIP called for crc"
DEBU (crc) DBG | time="2023-05-01T11:25:23+02:00" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2023-05-01T11:25:23+02:00" level=debug msg="Waiting for machine to come up 59/60"
DEBU (crc) DBG | time="2023-05-01T11:25:26+02:00" level=warning msg="Unable to determine VM's IP address, did it fail to boot?"
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU (crc) DBG | time="2023-05-01T11:25:26+02:00" level=debug msg="Closing plugin on server side"
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
Error starting machine: Error in driver during machine start: Unable to determine VM's IP address, did it fail to boot?

[kerberos5@server-crc ~]$ crc status
CRC VM: Running
OpenShift: Unreachable (v4.12.9)
Disk Usage: 0B of 0B (Inside the CRC VM)
Cache Usage: 18.66GB
Cache Directory: /home/kerberos5/.crc/cache

Using Hyper-V for virtualization and Fedora 38, all on Windows 11.

Do you have any idea what it could be?
