Fruits of my torment.md

Deleting a custom resource definition that won't go away, even when describe states it's terminating: kubernetes/kubernetes#60538 (comment)

Deleting a namespace stuck in terminating state kubernetes/kubernetes#60807 (comment)

In a nutshell: kubectl edit <resourceType> <name> and remove the finalizers
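
If you'd rather not open an editor, a non-interactive sketch for the CRD case (<stuck-crd> is a placeholder; the same idea applies to other resource types):

kubectl patch crd <stuck-crd> --type=merge -p '{"metadata":{"finalizers":[]}}'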

Deleting all helm charts (and you'll probably want --purge 99% of the time):

helm ls --all --short | xargs -L1 helm delete --purge

https://stackoverflow.com/questions/47817818/helm-delete-all-releases

Exclude namespaces for velero restore operations: vmware-tanzu/velero#22

velero restore create --from-backup=<yourbackup> --exclude-namespaces="comma,sep,values,actually,dont,work,so,what,is,a,stringArray and is it standardized or is it just late?"
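
If the comma-separated form won't take, a stringArray flag can usually be repeated once per value, assuming velero's flag follows the usual cobra behavior (the namespaces below are placeholders):

velero restore create --from-backup=<yourbackup> \
  --exclude-namespaces=kube-system \
  --exclude-namespaces=velero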

Giving tiller permission to do what it needs to

kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
kubectl --namespace kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
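
If you're deploying tiller fresh, helm 2 can wire up the service account itself instead of patching the deployment afterwards:

helm init --service-account tiller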

Switching CNIs?

kubeadm reset
systemctl stop kubelet
systemctl stop crio
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig crio0 down
ip link delete cni0
ip link delete flannel.1
#src https://github.com/kubernetes/kubernetes/issues/39557
#swap crio with docker if you're using it

For your convenience

echo '1' > /proc/sys/net/ipv4/ip_forward
modprobe br_netfilter
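
Those don't survive a reboot; a sketch of the usual modules-load.d / sysctl.d persistence (the file names are arbitrary):

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system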

Remember: not all CRDs perform validation checks. That being said, TODO: add this example to the aws variant for a ceph/openstack s3 example (v11)

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  config:
    region: lax1 #this is required
    s3ForcePathStyle: "true" #you need this
    s3Url: https://swift.your.domain.net #no slash at the end or it breaks and you get sad :(
  objectStorage:
    bucket: velero
  provider: aws #but not really
---
apiVersion: v1
#don't forget your
kind: Secret
metadata:
  namespace: velero
  name: cloud-credentials
  labels:
    component: minio #hmm, don't think this needs to be here
stringData:
  cloud: |
    [default]
    aws_access_key_id = 0somenumbersandletters1
    aws_secret_access_key = 0morenumbersandletters1
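
The same Secret can also be created straight from an ini file on disk, which is roughly what the velero docs do (credentials-velero is just an example filename):

kubectl -n velero create secret generic cloud-credentials --from-file=cloud=credentials-velero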

Good news: --cri-socket is no longer required as long as you only have one runtime socket present
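
For the case where you do have more than one, the flag looks like this (CRI-O's default socket path assumed):

kubeadm init --cri-socket /var/run/crio/crio.sock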

Roses are red, violets are blue

--ignore-preflight-errors=NumCPU
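
In context, for a single-vCPU node (add whatever other init flags you normally use):

kubeadm init --ignore-preflight-errors=NumCPU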

Even if you

cloudflared tunnel --url https://kubernetes:6443 --hostname kluster.yourdomain.net:6443 --lb-pool k8spool --origin-ca-pool /etc/kubernetes/pki/ca.crt

if you're on a free plan it won't handle TCP, so you'll get everything but attach and exec (didn't test out logs but I imagine that too)
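
For what it's worth, pointing a kubeconfig at the tunneled hostname is the usual set-cluster dance; a sketch that assumes the apiserver cert actually has that hostname in its SANs (the cluster name is arbitrary):

kubectl config set-cluster kluster --server=https://kluster.yourdomain.net:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt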

Init containers are nice

spec:
  dnsPolicy: "None"        # dnsPolicy/dnsConfig belong on the pod spec, not the container
  dnsConfig:               # works around the Alpine DNS issue if you don't need cluster DNS
    nameservers:
      - 1.1.1.1
      - 1.0.0.1
  initContainers:
    - name: init-server
      image: alpine:latest # but there's kind of an ongoing issue with Alpine and DNS
      command: ['/bin/sh', '-c']
      args: ['some; commands && some more; before --the storm']

Weave args can be used like

&env.IPALLOC_RANGE=172.16.0.0/16&env.NO_MASQ_LOCAL=1

With the full example of

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.16.0.0/16&env.NO_MASQ_LOCAL=1"

Speaking of, Weave requires NO_MASQ_LOCAL=1 in order for externalTrafficPolicy: Local to properly forward the origin address to an application
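
For reference, flipping that on an existing Service (the service name is a placeholder):

kubectl patch svc <your-service> -p '{"spec":{"externalTrafficPolicy":"Local"}}'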

Deploying rook-ceph is easier than the docs make it out to be

kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/common.yaml
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
#                                                               ^but probably a stable release branch instead, given time-of-writing factors

but make sure to clean up properly or you'll be back reading the top of this gist
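
Roughly, cleanup mirrors the install in reverse; a sketch assuming the default dataDirHostPath (zap the OSD disks too if you gave rook raw devices):

kubectl delete -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
kubectl delete -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
kubectl delete -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/common.yaml
# then, on every node that ran a mon or OSD:
rm -rf /var/lib/rook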

Forcefully delete pods

--grace-period=0 --force
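
In full (names are placeholders):

kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force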

Rook >0.9 adds timeout flags to the operator when connecting to the mons, to better surface CNI issues instead of hanging after setting up the first mon

Switch crio to use cgroupfs (or configure the systemd driver properly, since crio likes it better) if using ppa:projectatomic/ppa

sudo sed -i 's/cgroup_manager.*/cgroup_manager = "cgroupfs"/' /etc/crio/crio.conf
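
The kubelet's cgroup driver has to match; a minimal sketch assuming kubeadm's deb packaging, where extra kubelet flags live in /etc/default/kubelet:

echo 'KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart crio kubelet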