janeczku / cpu-pin-test-deploy.yaml
Created July 27, 2022 16:55
K8s CPU Pinning Test Workload
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-stress
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cpu-stress
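The preview ends at the selector. For the kubelet's static CPU Manager policy to actually pin CPUs, the pod template must give each container equal integer CPU requests and limits (Guaranteed QoS). A minimal sketch of how the template might continue; the image, args, and resource figures are illustrative assumptions, not the gist's actual content:

  template:
    metadata:
      labels:
        app: cpu-stress
    spec:
      containers:
        - name: stress
          image: polinux/stress        # placeholder stress image
          command: ["stress"]
          args: ["--cpu", "2"]
          resources:
            # Equal integer CPU requests/limits give the pod Guaranteed QoS,
            # which the static CPU Manager policy requires for pinning.
            requests:
              cpu: "2"
              memory: 256Mi
            limits:
              cpu: "2"
              memory: 256Mi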
janeczku / app.yaml
Last active July 12, 2022 16:45
Configure multicast-compatible macvlan interfaces with Multus
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
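The preview cuts off at the pod template, which is where a Multus annotation would attach the macvlan interface. For context, a multicast-capable macvlan attachment is typically defined like this; the attachment name, master interface, and subnet below are assumptions, not taken from the gist:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-multicast
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }

The pod template would then reference it via the annotation k8s.v1.cni.cncf.io/networks: macvlan-multicast.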
janeczku / rancher-cluster-event-source.yaml
Created July 5, 2022 13:59
Argo Event: Trigger on Rancher Cluster Provisioning
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: resource
spec:
  template:
    serviceAccountName: your-service-account
  resource:
    capi-cluster:
      namespace: fleet-default
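The preview stops inside the capi-cluster entry. In the Argo Events resource EventSource schema, the remainder would select the Cluster API objects that Rancher provisioning creates; the group/version and event type below are an assumption about how the gist continues:

      # ...continuing under spec.resource.capi-cluster:
      group: cluster.x-k8s.io
      version: v1beta1
      resource: clusters
      eventTypes:
        - ADD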
janeczku / create-config.yaml
Last active June 8, 2022 14:41
Harvester: Adding a custom systemd unit using oem cloud-config
# Adding the following config stanza to all the Harvester create|join configs will create
# a custom cloud-config `/oem/95_user.yaml` during the (early) "initramfs" cloud-init stage
# of the initial Harvester boot.
# This cloud-config will be executed on each system (re-)boot during the (late) "boot" cloud-init
# stage and may contain any cloud-init directives supported by the cOS Toolkit:
# See https://rancher.github.io/elemental-toolkit/docs/reference/cloud_init/
# Additionally, any files added to the /oem folder as a day-2 operation are persistent and won't
# be overwritten during Harvester upgrades.
write_files:
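The stanza is truncated at write_files:. A sketch of how it plausibly continues, dropping the 95_user.yaml shown in the next gist into /oem (the permissions value is an assumption):

write_files:
  - path: /oem/95_user.yaml
    permissions: "0644"
    content: |
      name: "User Config"
      stages:
        initramfs:
          # ...unit file definition as in the 95_user.yaml gist below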
janeczku / 95_user.yaml
Created June 3, 2022 08:46
Harvester: Adding a custom systemd unit using oem cloud-config
# Filename: /oem/95_user.yaml
# Ref: https://rancher.github.io/elemental-toolkit/docs/customizing/stages/
name: "User Config"
stages:
  initramfs:
    - name: "Drop unit file"
      files:
        - path: /etc/systemd/system/update-ca.service
          content: |
            [Unit]
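The content block is cut off after [Unit]. Purely as an assumption from the service name, the unit might be a oneshot that refreshes the CA trust store, continuing the content: | block above:

            [Unit]
            Description=Update CA certificates
            [Service]
            Type=oneshot
            ExecStart=/usr/sbin/update-ca-certificates
            [Install]
            WantedBy=multi-user.target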
janeczku / registries.yaml
Created April 22, 2022 13:38
Configure Harvester to use private container registry
# Replace private.registry.com with the host name of your private container registry
# Save as /etc/rancher/rke2/registries.yaml on each Harvester server
mirrors:
  docker.io:
    endpoint:
      - "https://private.registry.com"
  private.registry.com:
    endpoint:
      - "https://private.registry.com"
configs:
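The file is truncated at configs:. In the registries.yaml format RKE2 uses, that section holds per-registry credentials and TLS settings, roughly as follows; the username, password, and CA path are placeholders:

configs:
  "private.registry.com":
    auth:
      username: myuser
      password: mypassword
    tls:
      # Omit if the registry's CA is already in the system trust store
      ca_file: /etc/ssl/certs/registry-ca.pem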
janeczku / volume-snapshot-rke.md
Last active March 9, 2022 09:03
How to use the K8s Volume Snapshot Subsystem with Rancher RKE

Here are the steps to enable and test the Kubernetes Volume Snapshot subsystem on Rancher RKE or RKE2.

Deploying the volume snapshot controller and CRDs is required both for managing snapshots of storage providers such as Longhorn or Rook/Ceph via the native K8s API and for integrating Rancher-managed clusters with backup solutions like Kasten K10.

1. Clone the repository

export SNAPSHOT_CTRL_VER=v5.0.1
git clone -b $SNAPSHOT_CTRL_VER https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter
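The preview ends after cloning. The remaining steps from the external-snapshotter README install the CRDs and the controller; verify the paths against the tag you checked out:

2. Install the volume snapshot CRDs and the snapshot controller

kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -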
janeczku / ngin.md
Created February 18, 2022 12:20
Example Rancher NGINX config
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
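The preview stops at the opening of the http block. Rancher's documented example continues with an upstream for the Rancher nodes and a TLS server block that forwards WebSocket upgrades; the node addresses, hostname, and certificate paths below are placeholders:

    upstream rancher {
        server IP_NODE_1:80;
        server IP_NODE_2:80;
        server IP_NODE_3:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl;
        server_name rancher.example.com;
        ssl_certificate /etc/nginx/certs/tls.crt;
        ssl_certificate_key /etc/nginx/certs/tls.key;

        location / {
            proxy_pass http://rancher;
            # Rancher relies on WebSockets, so the upgrade headers matter
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}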
janeczku / import.sh
Last active December 14, 2021 17:09
Import Cluster in Rancher 2.6.x
#!/usr/bin/env bash
set -xe
RANCHER_HOST="REDACTED.cloud"
BEARER_TOKEN="token-hcjwf:REDACTED"
if [[ $# -eq 0 ]] ; then
echo "please specify cluster name"
exit 1
fi
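The script is truncated after the argument check. A plausible continuation using the Rancher v3 API to register an imported cluster and apply the generated manifest; treat the endpoints and jq field names as a sketch of the usual flow, not the gist's actual code:

CLUSTER_NAME="$1"

# Create the imported cluster object in Rancher
CLUSTER_ID=$(curl -sk -H "Authorization: Bearer $BEARER_TOKEN" \
  -H 'Content-Type: application/json' \
  -d "{\"type\":\"cluster\",\"name\":\"$CLUSTER_NAME\"}" \
  "https://$RANCHER_HOST/v3/clusters" | jq -r '.id')

# Request a registration token and apply its manifest to the downstream cluster
MANIFEST_URL=$(curl -sk -H "Authorization: Bearer $BEARER_TOKEN" \
  -H 'Content-Type: application/json' \
  -d "{\"type\":\"clusterRegistrationToken\",\"clusterId\":\"$CLUSTER_ID\"}" \
  "https://$RANCHER_HOST/v3/clusterregistrationtokens" | jq -r '.manifestUrl')

curl -sk "$MANIFEST_URL" | kubectl apply -f -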
janeczku / docker-config.md
Created December 13, 2021 11:55
Example Docker CLI registry auth config

/home/USER/.docker/config.json

{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx="
        },
        "quay.io": {
 "xxxx": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"