Andrew Sullivan (acsulli)

fio.md

This page collects fio performance tests, tuned for a Kubernetes etcd workload per this blog post, run against various storage systems and platforms.

The goal is to execute the below fio command in as many different places as possible to gauge relative performance.

fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest

These tests are completely unscientific and only serve to provide a sampling for anecdotal comparisons.

etcd.fio
[global]
rw=write
ioengine=sync
fdatasync=1
[etcdtest]
directory=.
size=22m
bs=2300
write_bw_log=etcdtest
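Either the command or the job file reports fsync latency percentiles. As a rough check against etcd's guidance that the 99th percentile fdatasync latency stay under 10ms, the relevant line can be pulled out of fio's text output. The sample output below is made up for illustration; substitute the real output from a run:

```shell
# Hypothetical excerpt of the fsync percentile section from fio's text output;
# replace this with the actual output of the command above.
cat > /tmp/fio-sample.txt <<'EOF'
  fsync/fdatasync/sync_file_range:
    sync (usec): min=492, max=4961, avg=612.54, stdev=34.87
    sync percentiles (usec):
     | 99.00th=[  709], 99.50th=[  884], 99.90th=[ 1369]
EOF

# etcd guidance: p99 fdatasync latency should be under 10ms (10000 usec)
p99=$(grep -o '99\.00th=\[ *[0-9]*' /tmp/fio-sample.txt | grep -o '[0-9]*$')
echo "p99 fdatasync: ${p99} usec"
if [ "${p99}" -lt 10000 ]; then
  echo "within etcd guidance"
else
  echo "outside etcd guidance"
fi
```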
fedora-vm.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/silverblue32: Fedora 31 or higher
  name: fedora
  labels:
    app: fedora
RHV-AIO.md (last active Mar 28, 2020): RHV all-in-one

RHV AIO Install for Lab

This loosely documents installing RHV as an all-in-one server. This configuration is not supported and has some flakiness, particularly around updates. Additionally, because it's a lab, no "real" storage was used.

The Server

The physical server used for this has 8 cores, 32GB RAM, and a 512GB NVMe drive connected to the network with a single 1GbE link. You'll need at least 200GiB of storage to comfortably host more than a couple of VMs.

Install and configure

prosody.md

Prosody

Prosody is an XMPP server. My instance is currently hosted on a DigitalOcean VM, running in the smallest droplet available, which still provides far more resources than a server with just a few users needs.

At the end of these steps we will have a Prosody server which is isolated (no server-to-server comms), requires an administrator to create new user accounts, uses valid SSL certs issued by Let's Encrypt, and is capable of handling file transfers up to 10MiB in size via the HTTP upload service. This was written with CentOS 7.6.
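That end state maps to a handful of settings in prosody.cfg.lua. This is only a sketch, not a complete config: the hostnames are placeholders, and it assumes the mod_http_upload community module is installed.

```lua
-- no public registration; accounts are created by an admin with prosodyctl
allow_registration = false

-- keep the server isolated: leave "s2s" out of modules_enabled so no
-- server-to-server communication is loaded

-- Let's Encrypt certificates, imported with `prosodyctl cert import`,
-- are read from the configured certificates directory
certificates = "certs"

VirtualHost "xmpp.example.com"

-- file transfers via mod_http_upload, capped at 10MiB
Component "upload.xmpp.example.com" "http_upload"
    http_upload_file_size_limit = 10 * 1024 * 1024
```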

Install

Before beginning, make sure you have DNS entries for the following:

master-0.ign
{"ignition":{"config":{},"security":{"tls":{}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDT8oVBte33bnCIT/32cyHiQ0MpFMHMg0e5umERvgG0ltGPZefR6EpcqZqeU5ZHpkk747yW8vzgcXbdXMfZ8jOroJOgGMy0CqtUIWguuL05XpKQS7Ds92McFPQsZScv8fmiuGLEb1z0PCHk8EDn4CGua9dnVeoxMHrtcsxdXYcRA+ZUZdvy8uWHGqLNgim9GIIC+yLiP/UzQvPstlrKIeievYxvw1hZ0g71repDnMdF0Kq95xh27J6mA4hqaggb35bGg2O6PpT6mV7phaMGR/FLSeEMBI4hhUck78sCuJ9bnKwEZfKRHTyFliwn999z4IVV6Xsy5cMV0SeKa5gIhXBX ansulliv@ovpn-125-169.rdu2.redhat.com"]}]},"storage":{"files":[{"filesystem":"root","path":"/etc/containers/registries.conf","user":{"name":"root"},"contents":{"source":"data:text/plain;charset=utf-8;base64,","verification":{}},"mode":384},{"filesystem":"root","path":"/etc/motd","user":{"name":"root"},"append":true,"contents":{"source":"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltY
openshift_4_0.12.0_operators_for_admins.md

A cluster admin perspective on OpenShift Cluster Operators

OpenShift 4.x relies heavily upon operators for automating all aspects of the services. Discovering which operator controls which service can be difficult and confusing, but there are some steps we can take to help make the process easier and make educated guesses.
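As a starting point, `oc get clusteroperators` lists the first-level operators and their status, which narrows down which operator to look at when a service misbehaves. The sample output below is hypothetical (column names varied across early 4.x builds); the filtering works the same against real output:

```shell
# Hypothetical 'oc get clusteroperators' output; on a live cluster, use
#   oc get clusteroperators > /tmp/co.txt
cat > /tmp/co.txt <<'EOF'
NAME                 VERSION   AVAILABLE   PROGRESSING   DEGRADED
image-registry       4.0.12    True        False         False
kube-apiserver       4.0.12    True        False         False
monitoring           4.0.12    True        False         True
EOF

# list any operator reporting Degraded=True (column 5 in this sample)
awk 'NR > 1 && $5 == "True" { print $1 }' /tmp/co.txt
```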

This document focuses on OpenShift core services, not add-on operators. While much, if not all, of the information still applies to those types of operators, they are much easier to see, manage, and inspect due to their nature. Be sure to see and read about the Operator Framework for the most up to date information about them.

Some prerequisites:

  1. It's important to understand the concept of operators and how they work
  2. First level operator(s), i.e. the [Cluster Version Operator](https://github.com/derekwaynecarr/openshift-the-easy-way/blob/master/docs
failed_deployment_recovery_openshift_4_0.12.0.md

Removing a partially provisioned OpenShift 4.x AWS cluster

When an OCP cluster created with openshift-installer fails before the metadata.json file is written, cleaning up can be difficult because the installer doesn't know what needs to be removed. Fortunately, there is a workaround:

  • Configure AWS CLI

    This assumes you have configured the AWS CLI using your credentials. If you have not done this, follow the instructions

  • Retrieve the cluster ID

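Once the cluster ID is known, leftover resources can be found by the cluster tag the installer applies to everything it creates. The cluster ID below is a placeholder, and the AWS CLI call is left commented out since it needs live credentials:

```shell
# Placeholder for the cluster ID retrieved in the step above.
CLUSTER_ID=mycluster-xxxxx

# The installer tags every AWS resource it creates with this key.
TAG_KEY="kubernetes.io/cluster/${CLUSTER_ID}"
echo "${TAG_KEY}"

# With the AWS CLI configured, the tagged resources can then be listed
# (and worked through for deletion) with, for example:
#   aws resourcegroupstaggingapi get-resources \
#     --tag-filters "Key=${TAG_KEY},Values=owned" \
#     --query 'ResourceTagMappingList[].ResourceARN' --output text
```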
openshift_4_0.12.0_registry.md

Container Image Registry

The Registry is managed by the Image Registry Operator, deployed to the openshift-image-registry project.

As of this writing (installer 0.12.0), the deployment follows the steps outlined in the manifest directory:

  1. Create the CRD
  2. Create the namespace
  3. Request credentials from the Cloud Credential Operator
  4. RBAC and ServiceAccount(s)