@timroster
timroster / README.md
Last active Feb 8, 2021
Configuring merged ingress resources with the OpenShift HAProxy-based native ingress controller

Using a single host with multiple ingresses on OpenShift - Part 2 - native ingress support

Consider the following legacy application migration scenario with two macro components. A typical pre-containerization deployment may have hosted these on VMs, with a reverse proxy handling requests to a single logical host endpoint. Both macro components present REST interfaces and expect the simplest possible request URI. One component can be considered the primary monolithic application, and the other acts as a plug-in or supporting module.

The reverse proxy design sent almost all of the traffic, e.g. requests to /, to the primary application. For the supporting module, only requests matching /sub-module/(.*) were handled differently: the reverse proxy rewrote the URI path to contain only the matched subpath and added a request header so that fully qualified paths could be built into responses. In this example, for an incoming request like /sub-module/foo/bar the re…
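
A minimal sketch of how this could map onto the OpenShift HAProxy router is two path-based routes on the same host, with the rewrite-target annotation stripping the /sub-module prefix. This is an illustrative, untested example: the host name and the Services primary-app and sub-module on port 8080 are assumptions, it uses Route objects rather than Ingress resources, and the request-header injection from the original design is not shown.

cat <<'EOF' | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: primary-app
spec:
  host: legacy.apps.example.com
  path: /
  to:
    kind: Service
    name: primary-app
  port:
    targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: sub-module
  annotations:
    # rewrite /sub-module/foo/bar to /foo/bar before it reaches the backend
    haproxy.router.openshift.io/rewrite-target: /
spec:
  host: legacy.apps.example.com
  path: /sub-module
  to:
    kind: Service
    name: sub-module
  port:
    targetPort: 8080
EOF

With both routes admitted for the same host, the router sends /sub-module/* traffic to the sub-module Service with the prefix removed and everything else to the primary application.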

@timroster
timroster / README.md
Last active Feb 8, 2021
Configuring merged ingress resources with the NGINX Inc.-based NGINX Operator ingress controller

Using a single host with multiple ingresses on OpenShift - Part 1 - NGINX Operator

Consider the following legacy application migration scenario with two macro components. A typical pre-containerization deployment may have hosted these on VMs, with a reverse proxy handling requests to a single logical host endpoint. Both macro components present REST interfaces and expect the simplest possible request URI. One component can be considered the primary monolithic application, and the other acts as a plug-in or supporting module.

The reverse proxy design sent almost all of the traffic, e.g. requests to /, to the primary application. For the supporting module, only requests matching /sub-module/(.*) were handled differently: the reverse proxy rewrote the URI path to contain only the matched subpath and added a request header so that fully qualified paths could be built into responses. In this example, for an incoming request like /sub-module/foo/bar the reverse pr…
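
A minimal sketch of the merged-Ingress approach with the NGINX Inc. controller is to declare the shared host once in a master Ingress and attach each component as a minion Ingress, using the nginx.org/rewrites annotation to strip the /sub-module prefix. This is an illustrative, untested example: the host name, the default nginx ingress class, and the Services primary-app and sub-module on port 8080 are assumptions, and adding the extra request header (for example via an nginx.org/location-snippets annotation) is left out.

cat <<'EOF' | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-host-master
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "master"
spec:
  rules:
  - host: legacy.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: primary-app-minion
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "minion"
spec:
  rules:
  - host: legacy.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: primary-app
            port:
              number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sub-module-minion
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "minion"
    # forward /sub-module/foo/bar to the sub-module Service as /foo/bar
    nginx.org/rewrites: "serviceName=sub-module rewrite=/"
spec:
  rules:
  - host: legacy.example.com
    http:
      paths:
      - path: /sub-module
        pathType: Prefix
        backend:
          service:
            name: sub-module
            port:
              number: 8080
EOF

The master Ingress carries only the host; the minion Ingresses contribute their paths, so the two components can be managed as separate resources while sharing one endpoint.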

@timroster
timroster / fixcrc.sh
Last active Feb 2, 2021
Short script to recover etcd on crc when it gets misconfigured
#!/bin/sh
# address CRC issues like: https://github.com/code-ready/crc/issues/1888
# run this script on the crc VM - copy it over and run it from the host, something like (switch to id_rsa on crc <= 1.20):
# scp -i ~/.crc/machines/crc/id_ecdsa fixcrc.sh core@192.168.130.11:fixcrc.sh
# ssh -i ~/.crc/machines/crc/id_ecdsa core@192.168.130.11 "chmod +x ./fixcrc.sh ; sudo ./fixcrc.sh"
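# select the most recently modified etcd revision directory, then replace the
# misconfigured address in the live etcd static pod manifest and in that
# revision's copy of the etcd-pod configmap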
ETCD_POD_DIR=$(ls -rt /etc/kubernetes/static-pod-resources | grep etcd-pod | tail -1)
sed -i 's/192.168.130.11/192.168.126.11/g' /etc/kubernetes/manifests/etcd-pod.yaml
sed -i 's/192.168.130.11/192.168.126.11/g' /etc/kubernetes/static-pod-resources/$ETCD_POD_DIR/configmaps/etcd-pod/pod.yaml
@timroster
timroster / etcd-failed-063830.txt
Created Jan 30, 2021
Journalctl output around failure of crc etcd instance
etcd-failed-063830.txt
Jan 30 06:38:30 crc-lf65c-master-0 hyperkube[657366]: I0130 06:38:30.114113 657366 eviction_manager.go:243] eviction manager: synchronize housekeeping
Jan 30 06:38:30 crc-lf65c-master-0 hyperkube[657366]: I0130 06:38:30.209340 657366 helpers.go:814] eviction manager: observations: signal=imagefs.inodesFree, available: 36162602, capacity: 36429248, time: 2021-01-30 06:38:30.134626049 +0000 UTC m=+232920.743251214
Jan 30 06:38:30 crc-lf65c-master-0 hyperkube[657366]: I0130 06:38:30.209557 657366 helpers.go:814] eviction manager: observations: signal=pid.available, available: 4188797, capacity: 4Mi, time: 2021-01-30 06:38:30.196877064 +0000 UTC m=+232920.805499364
Jan 30 06:38:30 crc-lf65c-master-0 hyperkube[657366]: I0130 06:38:30.209594 657366 helpers.go:814] eviction manager: observations: signal=memory.available, available: 31850452Ki, capacity: 46043776Ki, time: 2021-01-30 06:38:30.134626049 +0000 UTC m=+232920.743251214
Jan 30 06:38:30 crc-lf65c-master-0 hyperkube[657366]: I0130 06:38:30.209606 657366
crictl_ps_recent.txt
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
7cc6c88d51e91 8721c7e1633883201ba3e3eaf4c15e547e95b23c480b75112db713016648ed06 1 second ago Running etcd-operator 95 895f8659064d2
e4ef01933fa4b d8b24c9466c02f17563b1cc4ee3433c9d97ac0469311d24ecea074af204a99eb About a minute ago Exited console-operator 438 d1f2b660959a2
42775b3ec0b25 802c37bc4003d650532a609799040a101804d3a855e8543323769b1c7e5277b0
ifconfig_route_output.txt
ifconfig
cni-podman0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.88.0.1 netmask 255.255.0.0 broadcast 10.88.255.255
inet6 fe80::31:c3ff:fede:7382 prefixlen 64 scopeid 0x20<link>
ether 02:31:c3:de:73:82 txqueuelen 1000 (Ethernet)
RX packets 7366558 bytes 543224172 (518.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7366522 bytes 589497844 (562.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qemu_crc.log
2021-01-20 00:14:18.187+0000: starting up libvirt version: 6.0.0, package: 28.module_el8.3.0+555+a55c8938 (CentOS Buildsys <bugs@centos.org>, 2020-11-04-01:04:00, ), qemu version: 4.2.0qemu-kvm-4.2.0-34.module_el8.3.0+613+9ec9f184.1, kernel: 4.18.0-193.14.2.el8_2.x86_64, hostname: tor-crc-schematics-instance
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
HOME=/var/lib/libvirt/qemu/domain-1-crc \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-crc/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-crc/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-crc/.config \
QEMU_AUDIO_DRV=none \
/usr/libexec/qemu-kvm \
-name guest=crc,debug-threads=on \
systemctl_kubelet_out.txt
kubelet.service - Kubernetes Kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-mco-default-env.conf
Active: active (running) since Tue 2021-01-26 14:46:46 UTC; 5h 46min ago
Process: 2989 ExecStartPre=/bin/rm -f /var/lib/kubelet/cpu_manager_state (code=exited, status=0/SUCCESS)
Process: 2987 ExecStartPre=/bin/mkdir --parents /etc/kubernetes/manifests (code=exited, status=0/SUCCESS)
Main PID: 2991 (kubelet)
Tasks: 146 (limit: 287245)
Memory: 463.1M
failing_vsi_hypervisor.txt
sudo dmidecode -t processor -q | tail -20
Socket Designation: CPU 1
Type: Central Processor
Family: Other
Manufacturer: QEMU
ID: D2 06 03 00 FF FB 8B 0F
Version: pc-i440fx-4.2
Voltage: Unknown
External Clock: Unknown
Max Speed: 2000 MHz
crclogs-debug.txt
crc start --log-level debug
DEBU CodeReady Containers version: 1.21.0+68a4cdd7
DEBU OpenShift version: 4.6.9 (embedded in executable)
DEBU Running 'crc start'
DEBU Total memory of system is 67277574144 bytes
DEBU Unable to find out if a new version is available: Get "https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/release-info.json": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
INFO Checking if running as non-root
INFO Checking if podman remote executable is cached
DEBU Currently podman remote is not supported
INFO Checking if admin-helper executable is cached