okd_libvirt.md

Deploying OKD using libvirt

For this environment, we'll be using these hostname/IP combinations:

  • helper = 192.168.110.39
  • bootstrap = 192.168.110.60
  • controlplane-0 = 192.168.110.61
  • controlplane-1 = 192.168.110.62
  • controlplane-2 = 192.168.110.63
  • worker-0 = 192.168.110.65
  • worker-1 = 192.168.110.66
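
Assuming static name resolution on the helper (an assumption; DNS records would work just as well), the list above maps to `/etc/hosts`-style entries like these. The sketch writes to a temp file rather than `/etc/hosts` so it is safe to run as-is:

```shell
# sketch only: write the lab's host entries to a temp file instead of
# /etc/hosts; adjust names and IPs to match your environment
hosts_file=$(mktemp)
cat > "$hosts_file" << 'EOF'
192.168.110.39  helper
192.168.110.60  bootstrap
192.168.110.61  controlplane-0
192.168.110.62  controlplane-1
192.168.110.63  controlplane-2
192.168.110.65  worker-0
192.168.110.66  worker-1
EOF
cat "$hosts_file"
```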
acsulli / k8s-nfs-client-provisioner.md
Created Mar 8, 2021
Deploying the Kubernetes NFS Client dynamic provisioner to OpenShift

Refer to the (now deprecated) project page here for additional details

# create the namespace
cat << EOF | oc apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: nfs-provisioner
EOF
OCP_iSCSI_for_Trident.sh
#!/usr/bin/env sh
#
# this script has not been tested or validated. it is not, in any way,
# supported by Red Hat or NetApp. use at your own risk.
#
#
# the purpose of this script is to create an OpenShift MachineConfig
# to apply the NetApp recommended OS configuration to RHCOS machines.
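# As a rough sketch of the kind of object such a script would emit -- not its
# actual output; the name, role, and single systemd unit here are assumptions,
# and the real NetApp-recommended config likely covers more (multipath, etc.) --
# a MachineConfig enabling the iSCSI daemon on workers might look like:
#
#   apiVersion: machineconfiguration.openshift.io/v1
#   kind: MachineConfig
#   metadata:
#     name: 99-worker-enable-iscsid
#     labels:
#       machineconfiguration.openshift.io/role: worker
#   spec:
#     config:
#       ignition:
#         version: 3.2.0
#       systemd:
#         units:
#           - name: iscsid.service
#             enabled: true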
fio.md

This page represents a collection of fio performance tests, tuned for a Kubernetes etcd workload per this blog post, run against various storage types and platforms.

The goal is to execute the fio command below in as many different places as possible to gauge relative performance.

fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest

These tests are completely unscientific and only serve to provide a sampling for anecdotal comparisons.
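
To put those parameters in perspective (this is just arithmetic on the flags above): 22MiB of 2300-byte sequential writes means each run issues roughly ten thousand write+fdatasync pairs, which is what makes this a latency test rather than a throughput test:

```shell
# number of 2300-byte blocks in 22MiB; fdatasync=1 syncs after each one
writes=$(( 22 * 1024 * 1024 / 2300 ))
echo "$writes writes, each followed by an fdatasync"
```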

etcd.fio
[global]
rw=write
ioengine=sync
fdatasync=1
[etcdtest]
directory=.
size=22m
bs=2300
write_bw_log=etcdtest
fedora-vm.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/silverblue32: Fedora 31 or higher
  name: fedora
  labels:
    app: fedora
acsulli / RHV-AIO.md
Last active Mar 13, 2021
RHV all-in-one

RHV AIO Install for Lab

This loosely documents installing RHV as an all-in-one server. This configuration is not supported and has some flakiness, particularly around updates. Additionally, because it's a lab, no "real" storage was used.

The Server

The physical server used for this has 8 cores, 32GB of RAM, and a 512GB NVMe drive, and is connected to the network with a single 1 GbE link. You'll need at least 200GiB of storage to comfortably host more than a couple of VMs.

Install and configure

prosody.md

Prosody

Prosody is an XMPP server. My instance is currently hosted on a DigitalOcean VM, running in the smallest droplet available, which still provides far more resources than a server with just a few users needs.

At the end of these steps we will have a Prosody server which is isolated (no server-to-server comms), requires an administrator to create new user accounts, uses valid SSL certs issued by Let's Encrypt, and is capable of handling file transfers up to 10MiB in size via the HTTPS upload service. This was written with CentOS 7.6.
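
A sketch of the `prosody.cfg.lua` fragments corresponding to that end state. The hostname is a placeholder, and `mod_http_upload` (the community upload module of that era) is assumed rather than confirmed from the original steps:

```
-- illustrative only; "example.com" is a placeholder domain
allow_registration = false       -- accounts are created by the admin
modules_disabled = { "s2s" }     -- isolated: no server-to-server comms

-- certificates from Let's Encrypt, e.g. imported with `prosodyctl cert import`
certificates = "certs"

-- HTTP upload component capped at 10MiB
Component "upload.example.com" "http_upload"
    http_upload_file_size_limit = 10 * 1024 * 1024
```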

Install

Before beginning, make sure you have DNS entries for the following:

master-0.ign
{"ignition":{"config":{},"security":{"tls":{}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDT8oVBte33bnCIT/32cyHiQ0MpFMHMg0e5umERvgG0ltGPZefR6EpcqZqeU5ZHpkk747yW8vzgcXbdXMfZ8jOroJOgGMy0CqtUIWguuL05XpKQS7Ds92McFPQsZScv8fmiuGLEb1z0PCHk8EDn4CGua9dnVeoxMHrtcsxdXYcRA+ZUZdvy8uWHGqLNgim9GIIC+yLiP/UzQvPstlrKIeievYxvw1hZ0g71repDnMdF0Kq95xh27J6mA4hqaggb35bGg2O6PpT6mV7phaMGR/FLSeEMBI4hhUck78sCuJ9bnKwEZfKRHTyFliwn999z4IVV6Xsy5cMV0SeKa5gIhXBX ansulliv@ovpn-125-169.rdu2.redhat.com"]}]},"storage":{"files":[{"filesystem":"root","path":"/etc/containers/registries.conf","user":{"name":"root"},"contents":{"source":"data:text/plain;charset=utf-8;base64,","verification":{}},"mode":384},{"filesystem":"root","path":"/etc/motd","user":{"name":"root"},"append":true,"contents":{"source":"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltY
openshift_4_0.12.0_operators_for_admins.md

A cluster admin perspective on OpenShift Cluster Operators

OpenShift 4.x relies heavily upon operators to automate all aspects of its services. Discovering which operator controls which service can be difficult and confusing, but there are some steps we can take to make the process easier and to make educated guesses.

This document focuses on OpenShift core services, not add-on operators. While much, if not all, of the information still applies to those types of operators, they are much easier to see, manage, and inspect due to their nature. Be sure to read about the Operator Framework for the most up-to-date information about them.
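
A starting point for that kind of discovery. These are standard `oc` calls, though the `ingress` example and the reliance on `status.relatedObjects` are illustrative, and the sketch skips itself when no `oc` is on the PATH:

```shell
# safe sketch: only talks to a cluster when oc is actually available
if command -v oc >/dev/null 2>&1; then
  # every core service surfaces a ClusterOperator resource
  oc get clusteroperators
  # an operator's status lists the objects it manages -- the quickest
  # way to map a service back to the operator that owns it
  oc get clusteroperator ingress -o jsonpath='{.status.relatedObjects}'
else
  echo "oc not found; skipping"
fi
```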

Some prerequisites:

  1. It's important to understand the concept of operators and how they work
  2. First level operator(s), i.e. the [Cluster Version Operator](https://github.com/derekwaynecarr/openshift-the-easy-way/blob/master/docs