timroster / delete-sl-storage.sh
Last active July 20, 2023 22:23
Bash script to remove orphaned classic volumes from IBM Cloud Kubernetes service or, alternatively, remove all volumes from the account.
#!/bin/bash
###############################################################################
#                                                                             #
# delete-sl-storage.sh (c) 2021,2023 IBM Corporation and Tim Robinson        #
#                                                                             #
# IBM Cloud Kubernetes service can leave behind block and file volumes in    #
# some cases. Whether by using -retain storage classes or deleting clusters  #
# from the API without specifying to remove cluster volumes, these volumes   #
# can accumulate in an account and prevent creation of new volumes.          #
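
The listing preview stops at the header. As a rough, hedged sketch of the "remove all volumes" path only (not the author's full script, which also handles the orphan-only case), something along these lines could be built on the classic infrastructure plugin of the ibmcloud CLI; the column handling in awk is an assumption about the tabular output.

#!/bin/bash
# Hedged sketch only, not the gist itself. Assumes the ibmcloud CLI with the
# classic infrastructure (sl) plugin is installed and already logged in.

# Collect the IDs of every classic file and block volume in the account
# (keep only rows whose first column is a numeric volume id).
file_ids=$(ibmcloud sl file volume-list | awk '$1 ~ /^[0-9]+$/ {print $1}')
block_ids=$(ibmcloud sl block volume-list | awk '$1 ~ /^[0-9]+$/ {print $1}')

# Cancel each volume: -f skips the confirmation prompt and --immediate reclaims
# the volume right away instead of at the end of the billing cycle.
for id in $file_ids; do
  ibmcloud sl file volume-cancel "$id" --immediate -f
done
for id in $block_ids; do
  ibmcloud sl block volume-cancel "$id" --immediate -f
done
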
timroster / f5-mgmtNIC-IBMCLoud.md
Last active May 11, 2022 01:02
Notes on switching F5 management to NIC2 on IBM Cloud VPC

WIP - moving F5 management to eth2 on a 3 NIC VSI

Load balancing services on IBM Cloud VPC currently leverage access to the primary NIC associated with the instance. By default, F5 BIG-IP runs management traffic on the primary NIC (eth0) and the data plane on the other NICs. This can be switched manually as follows.

Note - applying a license to a BIG-IP creates a dependency on the MAC address of the management interface. Perform all of these steps before applying the BIG-IP license.

Starting point - deployment of F5 BIG-IP using the terraform code from https://github.com/f5devcentral/ibmcloud_schematics_bigip_multinic_declared . This can be performed from the command line or from Schematics. Since the intent is for the NIC that the code refers to as management to later be a public interface, it is convenient to assign a floating IP address to it by setting bigip_management_floating_ip to true.

When created by the terraform code, there will be a root and an admin user defined.
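
For reference (not from the gist), a command-line run of that terraform code could look roughly like the sketch below; apart from bigip_management_floating_ip, the required inputs and their names depend on the module version, so they are only gestured at here through a tfvars file.

# Hedged sketch of a command-line deployment of the f5devcentral module.
# Only bigip_management_floating_ip comes from the notes above; the remaining
# required inputs (region, VPC and subnet IDs, SSH key, etc.) are assumed to be
# collected in my-bigip.tfvars, whose variable names depend on the module version.
git clone https://github.com/f5devcentral/ibmcloud_schematics_bigip_multinic_declared.git
cd ibmcloud_schematics_bigip_multinic_declared
terraform init
terraform apply \
  -var-file=my-bigip.tfvars \
  -var="bigip_management_floating_ip=true"
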

timroster / delete-pvc-orphans.sh
Created March 21, 2022 17:55
Script to remove IBM Cloud classic volumes left behind by using Retain storageclasses
#!/bin/bash
###############################################################################
#                                                                             #
# delete-pvc-orphans.sh (c) 2022 IBM Corporation and Tim Robinson            #
#                                                                             #
# IBM Cloud Kubernetes service can leave behind block and file volumes in    #
# some cases. Whether by using -retain storage classes or deleting clusters  #
# from the API without specifying to remove cluster volumes, these volumes   #
# can accumulate in an account and prevent creation of new volumes.          #
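
The preview again cuts off before the logic. A minimal, hedged sketch of the general idea - find Released PersistentVolumes that came from -retain classic storage classes and clean them up - might look like this; cancel_classic_volume is a hypothetical placeholder for the step that maps a PV to its classic volume ID and cancels it, which is the part the real script actually implements.

#!/bin/bash
# Hedged sketch, not the gist's actual logic. Requires kubectl and jq.

# List PVs that are Released (their PVC is gone) and were provisioned from a
# storage class whose name contains "retain".
released_pvs=$(kubectl get pv -o json | jq -r '
  .items[]
  | select(.status.phase == "Released")
  | select((.spec.storageClassName // "") | test("retain"))
  | .metadata.name')

for pv in $released_pvs; do
  echo "Orphaned PV: $pv"
  # cancel_classic_volume "$pv"   # hypothetical: cancel the backing classic volume
  kubectl delete pv "$pv"
done
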
timroster / delete-orphans.sh
Last active July 20, 2023 22:23
Bash script to remove orphaned classic volumes from IBM Cloud Kubernetes service
#!/bin/bash
###############################################################################
#                                                                             #
# delete-orphans.sh (c) 2021,2023 IBM Corporation and Tim Robinson           #
#                                                                             #
# IBM Cloud Kubernetes service can leave behind block and file volumes in    #
# some cases. Whether by using -retain storage classes or deleting clusters  #
# from the API without specifying to remove cluster volumes, these volumes   #
# can accumulate in an account and prevent creation of new volumes.          #
timroster / cpdroks.md
Last active August 27, 2021 18:23
Scenarios to configure TLS on Cloud Pak for Data running on Red Hat OpenShift on IBM Cloud

Configuring TLS with Cloud Pak for Data on Red Hat OpenShift on IBM Cloud

Red Hat OpenShift provides three ways, through the route resource, to create TLS-secured connections to applications running on Kubernetes.

  1. edge - works like Kubernetes ingress: certificate data (certificate + key) is used to terminate TLS at the ingress controller pod, the openshift-router (haproxy-based), which then forwards traffic to a Service in the cluster.
  2. reencrypt - uses a public certificate, like edge, to terminate TLS, then uses a (potentially private) certificate to build a new TLS session to the pods behind the Service.
  3. passthrough - the ingress controller pod inspects the SNI of the TLS connection, uses it to determine the target host, and forwards the traffic directly to the pods behind the Service.

In both cases 2 and 3, the pod is expected to be listening on a port for secured HTTPS sessions using some certificate data. By default, when Cloud Pak for Data (CPD) installs,
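
For orientation only (the service name, hostname and certificate files below are placeholders, not taken from the gist), the three variants map onto the oc create route subcommands:

# Illustrative commands - names and certificate files are placeholders.

# edge: TLS terminates at the router, plain HTTP to the Service
oc create route edge cpd-edge --service=ibm-nginx-svc \
  --hostname=cpd.example.com --cert=tls.crt --key=tls.key

# reencrypt: TLS terminates at the router, then a new TLS session to the pods
oc create route reencrypt cpd-reencrypt --service=ibm-nginx-svc \
  --hostname=cpd.example.com --cert=tls.crt --key=tls.key \
  --dest-ca-cert=internal-ca.crt

# passthrough: the router routes on SNI and forwards the TLS stream untouched
oc create route passthrough cpd-passthrough --service=ibm-nginx-svc \
  --hostname=cpd.example.com
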

timroster / route
Last active August 10, 2021 14:52
Example of an edge route for an OpenShift application that does not have TLS at the service endpoint
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: myapplicationroute
  namespace: myproject
  labels:
    app: myapplication
    release: v1.0
spec:
  host: myhost.customdomain.com
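
The preview stops at spec.host; the full manifest in the gist presumably also carries the spec.to service reference and a spec.tls block with termination: edge. Once the manifest is saved locally (any filename works; route.yaml is assumed here), applying and checking it is just:

# apply the route and confirm the router admitted it (names as in the example above)
oc apply -f route.yaml
oc get route myapplicationroute -n myproject
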
timroster / lecerts-with-cert-manager.md
Last active May 25, 2022 14:33
Configure LE certificates with Cert Manager

Requesting TLS certificates on Red Hat OpenShift using the cert-manager operator

OpenShift Container Platform typically supports edge-terminated TLS applications in a simple way for application developers through the route resource. This is accomplished through a wildcard certificate, which will usually take a form like *.apps.cluster.domain.example.com. By default, when exposing a service in OpenShift, a hostname is created by combining the service name (such as console) with a project (like openshift-console) into an FQDN, resulting in a host name like console-openshift-console.apps.cluster.domain.example.com. This just "works" due to the cluster wildcard certificate.

However, it is possible to manage custom certificates for use with OpenShift routes or Kubernetes ingress resources. The cert-manager CNCF project provides a handy tool to request custom TLS certificates for OpenShift or any other Kubernetes platform. This gist will walk through setting
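
As background (not part of the preview, and the gist title suggests it may use the cert-manager operator instead), one generic way to get cert-manager onto OpenShift or any other Kubernetes cluster is the upstream release manifest; the version below is only an example and should be swapped for a current release.

# Example version only - check https://github.com/cert-manager/cert-manager/releases
oc apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml

# wait for the three cert-manager deployments to become ready
oc -n cert-manager rollout status deployment/cert-manager
oc -n cert-manager rollout status deployment/cert-manager-webhook
oc -n cert-manager rollout status deployment/cert-manager-cainjector
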

timroster / README.md
Last active February 8, 2021 01:45
Configuring merged ingress resources with OpenShift haproxy-based native ingress controller

Using a single host with multiple ingresses on OpenShift - Part 2 - native ingress support

Consider the following legacy application migration scenario involving two macro components. The typical pre-containerization deployment may have had these hosted on VMs with a reverse proxy handling requests to a single logical host endpoint. Both macro components present REST interfaces and expect to see the simplest request URI possible. One of the components can be considered the primary monolithic application and the other is like a plug-in or supporting module.

The reverse proxy design took almost all of the traffic, e.g. /, and sent it to the primary application. For the supporting module, only for requests to /sub-module/(.*) does the reverse proxy rewrite the path in the URI to contain just the matched subpath, adding a request header so that fully qualified paths can be built into responses. In this example, for an incoming request like /sub-module/foo/bar the re

timroster / README.md
Last active February 8, 2021 01:45
Configuring merged ingress resources with NGINX inc based NGINX Operator ingress controller

Using a single host with multiple ingresses on OpenShift - Part 1 - NGINX Operator

Consider the following legacy application migration scenario involving two macro components. The typical pre-containerization deployment may have had these hosted on VMs with a reverse proxy handling requests to a single logical host endpoint. Both macro components present REST interfaces and expect to see the simplest request URI possible. One of the components can be considered the primary monolithic application and the other is like a plug-in or supporting module.

The reverse proxy design took almost all of the traffic, e.g. /, and sent it to the primary application. For the supporting module, only for requests to /sub-module/(.*) does the reverse proxy rewrite the path in the URI to contain just the matched subpath, adding a request header so that fully qualified paths can be built into responses. In this example, for an incoming request like /sub-module/foo/bar the reverse pr

timroster / fixcrc.sh
Last active February 2, 2021 02:39
Short script to recover etcd on crc when it gets misconfigured
#!/bin/sh
# address CRC issues like: https://github.com/code-ready/crc/issues/1888
# run this script on the crc vm - copy it over and invoke it with something like (use id_rsa instead on crc <= 1.20):
# scp -i ~/.crc/machines/crc/id_ecdsa fixcrc.sh core@192.168.130.11:fixcrc.sh
# ssh -i ~/.crc/machines/crc/id_ecdsa core@192.168.130.11 "chmod +x ./fixcrc.sh ; sudo ./fixcrc.sh"
# pick the most recent etcd revision directory under static-pod-resources
ETCD_POD_DIR=$(ls -rt /etc/kubernetes/static-pod-resources | grep etc | tail -1)
# rewrite the stale node IP in the etcd static pod manifest and in the
# corresponding static-pod-resources configmap copy
sed -i 's/192.168.130.11/192.168.126.11/g' /etc/kubernetes/manifests/etcd-pod.yaml
sed -i 's/192.168.130.11/192.168.126.11/g' "/etc/kubernetes/static-pod-resources/$ETCD_POD_DIR/configmaps/etcd-pod/pod.yaml"