
Andrew Sullivan acsulli

acsulli / etcd.fio
[global]
# sequential writes through the sync ioengine, with an fdatasync after every
# write, which mimics how etcd persists its write-ahead log (WAL)
rw=write
ioengine=sync
fdatasync=1

[etcdtest]
# write 22MiB of data in 2300-byte blocks (roughly the WAL write size observed
# from etcd) into the current directory and log the observed write bandwidth
directory=.
size=22m
bs=2300
write_bw_log=etcdtest
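
Assuming fio is installed and the job file above is saved as etcd.fio, a minimal run looks like this; because of the directory=. setting it writes its test file into the current working directory, so run it from the mount point backing etcd:

  fio etcd.fio

In the output, the fdatasync latency percentiles are the interesting numbers; the usual guidance for etcd is that the 99th percentile of fsync/fdatasync latency should stay below roughly 10ms.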
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/silverblue32: Fedora 31 or higher
  name: fedora
  labels:
    app: fedora
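
A hedged usage sketch, assuming the manifest is completed with a spec, saved as fedora-vm.yaml (a hypothetical filename), and that the KubeVirt virtctl client is installed:

  # create the VirtualMachine object, then power it on
  oc apply -f fedora-vm.yaml
  virtctl start fedora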

A cluster admin perspective on OpenShift Cluster Operators

OpenShift 4.x relies heavily upon operators to automate all aspects of its services. Discovering which operator controls which service can be difficult and confusing, but there are some steps we can take to make the process easier and to make educated guesses; a quick way to start exploring is shown after the prerequisites below.

This document focuses on OpenShift core services, not add-on operators. While much, if not all, of the information still applies to those types of operators, they are much easier to see, manage, and inspect due to their nature. Be sure to read about the Operator Framework for the most up-to-date information about them.

Some prerequisites:

  1. It's important to understand the concept of operators and how they work
  2. First level operator(s), i.e. the [Cluster Version Operator](https://github.com/derekwaynecarr/openshift-the-easy-way/blob/master/docs
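
As a starting point for that exploration, a minimal sketch, assuming the oc client is logged in with cluster-admin privileges:

  # list every cluster operator with its available/progressing/degraded status
  oc get clusteroperators

  # an operator's status.relatedObjects field lists the namespaces, CRDs, and
  # other resources it owns (image-registry is just an example name here)
  oc get clusteroperator image-registry -o yaml

  # the Cluster Version Operator reports the overall cluster version state here
  oc get clusterversion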

Prosody

Prosody is an XMPP server. My instance is currently hosted on a DigitalOcean VM, running in the smallest droplet available, which is still far more resources than a server with just a few users needs.

At the end of these steps we will have a Prosody server that is isolated (no server-to-server communication), requires an administrator to create new user accounts, uses valid SSL certificates issued by Let's Encrypt, and is capable of handling file transfers up to 10MiB in size via the HTTP upload service. This was written with CentOS 7.6.
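
Because in-band registration is disabled in this setup, new accounts are created by the administrator on the server itself. A minimal sketch, where user@xmpp.example.com is a hypothetical JID:

  # validate the configuration, then create an account (prompts for a password)
  sudo prosodyctl check config
  sudo prosodyctl adduser user@xmpp.example.com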

Install

Before beginning, make sure you have DNS entries for the following:

Removing a partially provisioned OpenShift 4.x AWS cluster

When an OCP cluster created with openshift-installer fails before the metadata.json file is written, cleaning up can be difficult because the installer doesn't know what needs to be removed. Fortunately, there is a workaround:

  • Configure AWS CLI

    This assumes you have configured the AWS CLI using your credentials. If you have not done this, follow the instructions

  • Retrieve the cluster ID
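
The installer tags everything it creates with a kubernetes.io/cluster/<infra ID>: owned tag, so one way to recover the cluster's infrastructure ID is to look for that tag pattern. A hedged sketch, assuming the AWS CLI is configured for the region the cluster was being installed into:

  # list the kubernetes.io/cluster/* tag keys on EC2 instances;
  # the part after the final slash is the cluster's infra ID
  aws ec2 describe-instances \
    --query 'Reservations[].Instances[].Tags[?starts_with(Key, `kubernetes.io/cluster/`)].Key' \
    --output text | tr '\t' '\n' | sort -u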

Container Image Registry

The Registry is managed by the Image Registry Operator, deployed to the openshift-image-registry project.

As of this writing (installer 0.12.0), the deployment follows the steps outlined in the manifest directory:

  1. Create the CRD
  2. Create the namespace
  3. Request credentials from the Cloud Credential Operator
  4. Create the RBAC resources and ServiceAccount(s)
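
A quick way to see the result of those steps on a running cluster (a hedged sketch; resource names reflect current releases and may have differed in installer 0.12.0):

  # the operator's cluster-scoped configuration resource
  oc get configs.imageregistry.operator.openshift.io cluster -o yaml

  # the operator deployment and the registry pods it manages
  oc get pods -n openshift-image-registry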