Jan B janeczku

janeczku / gist:6e989b7852ee694cd4a15f22616e34c2
Created Jan 11, 2021
Fix RHEL8 firewall configuration for Rancher agent
sudo iptables -P FORWARD ACCEPT
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/50-docker-forward.conf
for mod in ip_tables ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr; do sudo modprobe $mod; echo $mod | sudo tee -a /etc/modules-load.d/iptables.conf; done
sudo dnf -y install network-scripts
sudo systemctl enable network
sudo systemctl disable NetworkManager
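The sysctl drop-in written above is only read at boot unless it is applied with `sudo sysctl --system`. A small verification step (an addition for illustration, not part of the gist):

```shell
# Verify IPv4 forwarding; expect "1" once the sysctl file above has been
# applied (via `sudo sysctl --system` or a reboot)
cat /proc/sys/net/ipv4/ip_forward
```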
janeczku / eks-launch-template-cloud-init.md
Last active Nov 17, 2020
EKS Launch Template w/ Cloud-Init Userdata

Terraform Example: Create EC2 Launch Template with Cloud-Init Userdata

Create Cloud-Init template

data "template_file" "cloud_init" {
  # Either reference a file: template = file("init.tpl")
  # ...or define the userdata inline:
  template = <<EOF
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
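A sketch of how the rendered template might feed into the launch template resource (the resource name and instance type below are illustrative assumptions, not from the gist); note that EC2 expects userdata base64-encoded:

```hcl
resource "aws_launch_template" "eks_nodes" {
  name_prefix   = "eks-node-"  # illustrative name
  instance_type = "t3.medium"  # illustrative type

  # EC2 user data must be base64-encoded
  user_data = base64encode(data.template_file.cloud_init.rendered)
}
```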
janeczku / 01-multus-k3s.md
Last active Dec 1, 2020
Multus CNI with k3s and RKE

Using Multus CNI in K3S

By default, K3S runs with flannel as the CNI and uses custom directories for the CNI plugin binaries and config files (you can inspect the kubelet args K3S uses via `journalctl -u k3s | grep cni-conf-dir`). You therefore need to configure these paths when deploying Multus CNI.

For example, given the official Multus manifest at https://github.com/intel/multus-cni/blob/36f2fd64e0965e639a0f1d17ab754f0130951aba/images/multus-daemonset.yml, the following changes are needed:

volumes:
  - name: cni
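For k3s, the daemonset's hostPath volumes would point at the k3s-specific directories rather than the manifest's defaults; a sketch (the `data/current/bin` path is an assumption — verify both paths on your node via the kubelet args mentioned above):

```yaml
volumes:
  - name: cni
    hostPath:
      # k3s CNI config dir (the stock manifest uses /etc/cni/net.d)
      path: /var/lib/rancher/k3s/agent/etc/cni/net.d
  - name: cnibin
    hostPath:
      # k3s CNI plugin binaries (the stock manifest uses /opt/cni/bin);
      # assumption: confirm this path on your node
      path: /var/lib/rancher/k3s/data/current/bin
```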
janeczku / import-airgapped-downstream-cluster.md
Last active Jan 12, 2021
How-to: Connect an air-gapped k3s cluster to Rancher via enterprise proxy


    +----------------+
    |  Rancher Mgmt  |
    +--------+-------+
             ^
             |
             |   Firewall
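Since the downstream agents must reach Rancher through the enterprise proxy, the cattle-cluster-agent typically needs proxy environment variables set on its container; a hedged sketch (proxy host/port and the NO_PROXY list are placeholders — adjust to your proxy and cluster CIDRs):

```yaml
# Env vars for the cattle-cluster-agent container (placeholder values)
env:
  - name: HTTP_PROXY
    value: "http://proxy.example.com:3128"
  - name: HTTPS_PROXY
    value: "http://proxy.example.com:3128"
  - name: NO_PROXY
    value: "localhost,127.0.0.1,10.0.0.0/8,cattle-system.svc"
```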
janeczku / values.yaml
Created Nov 2, 2020
Use Prometheus Operator with existing PV
```yaml
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          dataSource:
            kind: PersistentVolumeClaim
            name: existing-pvc # should exist in prometheus operator namespace
```
janeczku / clusterflow-archive.yaml
Last active Nov 2, 2020
Banzai Cluster Logging Elasticsearch Example
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: archive
spec:
  match:
    - select: {}
  outputRefs:
    - s3
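The `outputRefs` entry above references a ClusterOutput named `s3`; a sketch of what that might look like (bucket, region, and secret names are placeholders — check the Banzai Cloud logging operator docs for the exact S3 output fields):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: s3                       # must match the outputRefs entry above
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-credentials   # placeholder secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-credentials
          key: awsSecretAccessKey
    s3_bucket: my-archive-bucket # placeholder
    s3_region: eu-central-1      # placeholder
```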
janeczku / 01-keepalived-vip.yaml
Last active Oct 16, 2020
Easy peasy Failover/VIP solution for bare-metal k3s HA clusters - More info: https://github.com/janeczku/keepalived-ingress-vip
# Simply drop this file in `/var/lib/rancher/k3s/server/manifests/` on a k3s node
# Requires multicast capable network (won't work in cloud)
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: keepalived-vip
  namespace: kube-system
spec:
  chart: keepalived-ingress-vip
  version: v0.1.6
janeczku / cloud-config.yaml
Last active Sep 29, 2020
Cloud Init to use vSphere Network Protocol Profile for IP assignment on CentOS/RHEL
#cloud-config
write_files:
  - path: /network-init.sh
    content: |
      #!/bin/bash
      # Gateway 10.164.20.1
      # 10.164.20.x/24
      vmtoolsd --cmd 'info-get guestinfo.ovfEnv' > /tmp/ovfenv
      IPAddress=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
      SubnetMask=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.netmask" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
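The sed extraction above can be exercised against a fabricated ovfEnv sample (the XML below is an illustration with made-up values, not real vmtoolsd output):

```shell
# Fabricated sample of the ovfEnv properties the script parses
cat > /tmp/ovfenv.sample <<'EOF'
<Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="10.164.20.5"/>
<Property oe:key="guestinfo.interface.0.ip.0.netmask" oe:value="255.255.255.0"/>
EOF
# Same extraction pattern as in the script above
IPAddress=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv.sample)
echo "$IPAddress"   # prints 10.164.20.5
```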
janeczku / pv-storageclass-none.yaml
Created Sep 8, 2020
How to bind existing persistent volume to PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv-storageclass-none
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
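To bind to this PV, the PVC must also set `storageClassName: ""` and can pin the volume explicitly via `spec.volumeName`; a sketch (the PVC name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                          # illustrative name
spec:
  storageClassName: ""                    # empty string disables dynamic provisioning
  volumeName: test-pv-storageclass-none   # bind to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                       # must not exceed the PV's capacity
```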
janeczku / rancher-vs-service-account-tokens.md
Last active Aug 26, 2020
The different scopes of Rancher API tokens vs. Kubernetes Service Account tokens

This table helps to understand the nuances of using Rancher API tokens vs. Kubernetes Service Account tokens to authenticate external clients such as Helm or a CD solution.

| Scope & Features | Rancher API Token | Service Account Token |
| --- | --- | --- |
| Rancher CLI | x | - |
| K8s clients (e.g. Helm, ArgoCD) | x | x |
| Rancher Endpoint (auth proxy) | x | - |
| Authorized Cluster Endpoint | x | x |
| Single Cluster Token | x | x |
| Multi-Cluster Token | x | - |
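As an example of the Service Account column: a kubeconfig entry authenticating a K8s client (Helm, ArgoCD) with a Service Account token against the cluster's API server or Rancher's Authorized Cluster Endpoint (the server URL and token below are placeholders):

```yaml
# Minimal kubeconfig using a Service Account token (placeholder values)
apiVersion: v1
kind: Config
clusters:
  - name: downstream
    cluster:
      server: https://cluster.example.com:6443   # API server or ACE URL
contexts:
  - name: downstream
    context:
      cluster: downstream
      user: sa-user
current-context: downstream
users:
  - name: sa-user
    user:
      token: <service-account-token>   # placeholder
```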