Andrew Sullivan (acsulli)

Container Image Registry

The Registry is managed by the Image Registry Operator, deployed to the openshift-image-registry project.

As of this writing (installer 0.12.0), the deployment follows the steps outlined in the manifest directory:

  1. Create the CRD
  2. Create the namespace
  3. Request credentials from the Cloud Credential Operator
  4. Create the RBAC resources and ServiceAccount(s)
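
Once those steps complete, the operator's state can be inspected directly. A quick sketch, assuming a running cluster and that the config CRD uses the conventional cluster singleton (resource names may differ in pre-release installers):

# list the operator and registry pods
oc get pods -n openshift-image-registry

# inspect the operator's config resource and its status
oc get configs.imageregistry.operator.openshift.io cluster -o yaml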

Removing a partially provisioned OpenShift 4.x AWS cluster

When an OCP cluster created with openshift-install fails before the metadata.json file is created, cleaning up can be difficult because the installer doesn't know what needs to be removed. Fortunately, there is a workaround:

  • Configure AWS CLI

    This assumes you have configured the AWS CLI using your credentials. If you have not done this, follow the instructions

  • Retrieve the cluster ID
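
A rough sketch of the general shape, assuming the installer's asset directory survived: recover the cluster's infra ID from the saved state, then enumerate the AWS resources tagged as owned by the cluster. The infra ID mycluster-x7k4f is a placeholder, and the state file's field name may vary by installer version:

# recover the infra ID from the installer's saved state
grep -o '"InfraID":"[^"]*"' .openshift_install_state.json

# enumerate everything AWS has tagged as owned by that cluster
aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=kubernetes.io/cluster/mycluster-x7k4f,Values=owned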

Prosody

Prosody is an XMPP server. My instance is currently hosted on a DigitalOcean VM, running in the smallest droplet available, which still provides far more resources than a server with just a few users needs.

At the end of these steps we will have a Prosody server which is isolated (no server-to-server comms), requires an administrator to create new user accounts, uses valid SSL certs issued by Let's Encrypt, and is capable of handling file transfers up to 10MiB in size via the HTTP upload service. This was written with CentOS 7.6.
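
Those properties map to just a few config settings. A minimal sketch of the relevant fragment, assuming the default CentOS config path and the community mod_http_upload module (values illustrative):

# append the isolation, registration, and upload settings
cat << 'EOF' >> /etc/prosody/prosody.cfg.lua
-- no server-to-server communication
modules_disabled = { "s2s" }
-- accounts must be created by an admin (e.g. prosodyctl adduser)
allow_registration = false
-- cap HTTP uploads at 10MiB
http_upload_file_size_limit = 10 * 1024 * 1024
EOF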

Install

Before beginning, make sure you have DNS entries for the following:

A cluster admin perspective on OpenShift Cluster Operators

OpenShift 4.x relies heavily upon operators for automating all aspects of its services. Discovering which operator controls which service can be difficult and confusing, but there are some steps we can take to make the process easier and to make educated guesses.
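
A good first stop is the ClusterOperator resources, which every core operator reports status through; their relatedObjects field is often the quickest way to map an operator to the things it manages. A sketch, assuming cluster-admin access:

# list every core operator and whether it's available, progressing, or degraded
oc get clusteroperators

# list the resources a given operator reports as related to it
oc get clusteroperator image-registry -o jsonpath='{.status.relatedObjects}'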

This document focuses on OpenShift core services, not add-on operators. While much, if not all, of the information still applies to those types of operators, they are much easier to see, manage, and inspect due to their nature. Be sure to read about the Operator Framework for the most up to date information about them.

Some prerequisites:

  1. It's important to understand the concept of operators and how they work
  2. First level operator(s), i.e. the [Cluster Version Operator](https://github.com/derekwaynecarr/openshift-the-easy-way/blob/master/docs
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/silverblue32: Fedora 31 or higher
  name: fedora
  labels:
    app: fedora
etcd.fio

[global]
rw=write
ioengine=sync
fdatasync=1

[etcdtest]
directory=.
size=22m
bs=2300
write_bw_log=etcdtest
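
To use the job file, run fio against it from a directory on the device under test (directory=. writes to the current working directory). Assuming fio is installed:

# run the etcd-like small, synchronous write test
fio etcd.fio
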
Deploying the Kubernetes NFS Client dynamic provisioner to OpenShift

Refer to the (now deprecated) project page for additional details.

# create the namespace
cat << EOF | oc apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: nfs-provisioner
EOF
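
On OpenShift, the provisioner pod also needs an SCC that permits mounting the NFS export. A sketch of the usual follow-on step, assuming the deployment runs under a service account named nfs-client-provisioner (the name used by the upstream manifests):

# let the provisioner's service account mount host volumes
oc adm policy add-scc-to-user hostmount-anyuid \
  system:serviceaccount:nfs-provisioner:nfs-client-provisioner
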
fio.md

This page represents a collection of fio performance tests, tuned for a Kubernetes etcd workload per this blog post, against various storage and platforms.

The goal is to execute the below fio command on as many different places as possible to gauge relative performance.

fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest

These tests are completely unscientific and only serve to provide a sampling for anecdotal comparisons.
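
For etcd specifically, the number to compare across runs is the fdatasync latency distribution; the upstream guidance is a 99th percentile under 10ms. A sketch for pulling that out of fio's JSON output (the jq path assumes fio's JSON schema for sync latencies):

# same test, captured as JSON
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data \
    --size=22m --bs=2300 --name=mytest \
    --output-format=json --output=result.json

# 99th percentile fdatasync latency, in nanoseconds
jq '.jobs[0].sync.lat_ns.percentile."99.000000"' result.json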

This gist provides some additional information referenced in the Ask an OpenShift Admin livestream on January 12th, 2022.

Updating OpenShift Clusters

Triggering an update to the cluster is done the same way, whether you're doing an update between z-streams (e.g. 4.9.8 -> 4.9.13) or an upgrade between y-releases (e.g. 4.8.z -> 4.9.z). There are three primary options:

  1. Use the web console. This is pretty straightforward: browse to the Administration panel, then click the update button. If you're upgrading between y-releases, you may need to change the update channel.

  2. Use the CLI
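
For the CLI, a quick sketch using oc adm upgrade (version numbers are illustrative; the channel subcommand needs a reasonably recent oc):

# show the current version, channel, and available updates
oc adm upgrade

# switch channels when moving between y-releases
oc adm upgrade channel stable-4.9

# trigger the update
oc adm upgrade --to=4.9.13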

RHV all-in-one

RHV AIO Install for Lab

This loosely documents installing RHV as an all-in-one server. This configuration is not supported and has some flakiness, particularly around updates. Additionally, because it's a lab, no "real" storage was used.

The Server

The physical server used for this has 8 cores, 32GB of RAM, and a 512GB NVMe drive, connected to the network using a single 1 GbE link. You'll need at least 200GiB of storage to comfortably host more than a couple of VMs.

Install and configure