Sam protosam
🎉 just building on k8s
  • Red Hat
  • San Antonio, TX
protosam / function-reflection-tricks.go
Created Oct 21, 2021
This example shows how to use reflect to work with functions
package main

import (
	"fmt"
	"log"
	"reflect"
)

// a map of functions to be dynamically called...
var funcMap = make(map[string]interface{})
protosam / Remapping YAML or Json to Structs.md

Sometimes JSON payloads don't map cleanly onto Go types, so you must transform the data as it is marshalled and unmarshalled.

This example implements MarshalJSON() and UnmarshalJSON() on a struct to remap JSON fields as needed to fit the struct.

protosam / Minikube for Mac Users.md
Last active Dec 4, 2021
Notes for switching from Docker Desktop to Minikube

With Docker Desktop becoming more restricted, I've decided to move on to just using minikube. In doing so, I've consolidated my notes as follows.

Installation

Use brew to install the docker CLI, minikube, kubectl, and hyperkit.

$ brew install minikube docker kubectl hyperkit

Running Minikube

The first time you start minikube, you should specify any settings you desire; for example:

$ minikube start --driver=hyperkit --cpus=4 --memory=8g

protosam / 0 - S3 Storage in K8S.md
Last active Aug 3, 2021
S3 Storage Goodness in K8S

In my storage quests, I finally decided I want to lazily use S3 for ReadWriteMany and to do some experiments with it.

There are a few options, but to save you some time if you just want what I landed on, I like csi-s3.

S3FS Mounted in Pod Containers

Well... this works great! The only problem was that it needed security privileges for mounting. That would be terrible if a container with this power got compromised, so I immediately moved on to getting this managed a layer away from the pod.

NFS Provisioner with Mounted S3

My initial plan was to just use the nfs-subdir-external-provisioner on top of a multi-replica S3 backed deployment of NFS Ganesha.

protosam / 0 - K8S Setup.md

This script just sets up a single Ubuntu 20.04 LTS server with Kubernetes for testing purposes.

curl -sL https://gist.githubusercontent.com/protosam/ecb00654023114d04f07c0d484d2d185/raw/4ec67760e1870da86674391ccb3d5455401a55fc/k8s-setup.sh | bash
protosam / ABOUT.md

I don't understand why livepony.py works while deadpony.py results in an error. This difference shouldn't matter.

$ python3 deadpony.py 
Traceback (most recent call last):
  File "deadpony.py", line 8, in <module>
    class AuthToken(db.Entity):
  File "deadpony.py", line 10, in AuthToken
    user_uuid     = orm.Required(uuid.UUID)
AttributeError: 'PrimaryKey' object has no attribute 'UUID'
protosam / audit - dind rootless.md

Docker in Docker (rootless) - Audit

This is just a quick test to see if we can jailbreak rootless Docker-in-Docker.

The pod we're using has two containers: one that runs the daemon, and one that keeps the user completely separated from it.

We have created deploy.yaml and we have dind-rootless running.

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
dind-rootless-5ddddf649b-sk9bv   2/2     Running   0          106s
protosam / namespace-cleaner.yaml
Last active Jul 7, 2021
Just a lazy way to use CronJobs in k8s
# the job will be using a service account to run kubectl commands
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: namespace-cleaner
automountServiceAccountToken: true
# These permissions permit the job to view and delete namespaces
---
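The RBAC objects that comment refers to are cut off in this capture. A hypothetical sketch of what they might look like — a ClusterRole that can view and delete namespaces, bound to the service account above (the object names and the default namespace are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-cleaner
rules:
# allow the cleaner job to view and delete namespaces
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-cleaner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-cleaner
subjects:
- kind: ServiceAccount
  name: namespace-cleaner
  namespace: default
```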
protosam / devspace.yaml
version: v1beta10
vars:
- name: REGISTRY_PASSWORD
  password: true
- name: IMAGE
  value: kube-makefile # Update this after deciding where the image will be
pullSecrets:
- registry: "null" # Update this after deciding where the image will be
  username: ${REGISTRY_USERNAME}
protosam / 0 - Kyverno Audit.md

Kyverno Testing

This was an audit of Kyverno. The test was run inside vcluster, but my assumption is that this is possible from any pod with an attached service account that can create resources.

Edit: Clarifications on what's happening here.

  • Kyverno is on the host
  • The vcluster creates a pod that shouldn't be allowed on the host
  • Kyverno doesn't prevent the pod from being created

The Process