Preventative Kubernetes Security demo

A single-node Kubernetes cluster is installed on a Vagrant VM using the Vagrantfile below (a variant of what's discussed [here](https://medium.com/@lizrice/kubernetes-in-vagrant-with-kubeadm-21979ded6c63)), which runs kubeadm to install Kubernetes.
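To bring up the demo environment, run vagrant up in the directory containing the Vagrantfile; the follow-up kubectl commands below are the ones echoed by the provisioning script and listed in the Vagrantfile comments:

# Create and provision the VM (the shell provisioner runs kubeadm init)
vagrant up
vagrant ssh

# Inside the VM: install a pod network and allow pods to schedule on the master
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master-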

I use v1.9.0, which doesn't include the patch for the critical Kubernetes vulnerability CVE-2018-1002105.

kubeadm sets up a number of manifest files in /etc/kubernetes/manifests. For the demo I edit the API server YAML file to set --anonymous-auth=true (which allows anonymous, unauthenticated access) or --anonymous-auth=false (which disallows it). The kube-apiserver.yaml file is included in this gist, but only line 16 (the --anonymous-auth flag) needs to be modified for the demo. Don't copy this file as-is, because your IP addresses will probably be different.
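One way to toggle the flag in place on the node, assuming an --anonymous-auth argument is already present in the manifest, is a quick sed; the kubelet watches /etc/kubernetes/manifests and restarts the API server static pod when the file changes:

# On the master node: switch anonymous access on for the demo...
sudo sed -i 's/--anonymous-auth=false/--anonymous-auth=true/' /etc/kubernetes/manifests/kube-apiserver.yaml

# ...and watch the kube-apiserver pod restart
kubectl get pods -n kube-system -w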

With --anonymous-auth enabled, run kube-hunter from outside the cluster to show unauthenticated access to the API. This is equivalent to running curl -k https://<IP address>:6443 or curl -k https://<IP address>:6443/api/v1.
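For example, from the host machine, where 172.28.128.3 is the VM's private-network address from the Vagrantfile (kube-hunter here is assumed to be the pip-installed CLI):

# Scan the VM's API server from outside the cluster
kube-hunter --remote 172.28.128.3

# The equivalent manual probes; with anonymous auth enabled these return
# API responses rather than an outright rejection
curl -k https://172.28.128.3:6443
curl -k https://172.28.128.3:6443/api/v1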

Run kube-bench master on the node to show the API server --anonymous-auth check.
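For example, assuming the kube-bench binary is installed on the node (this version takes the node type as a subcommand):

# Run the master checks and pick out the anonymous-auth result
sudo kube-bench master | grep -i -B1 -A1 anonymous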

Create a ClusterRoleBinding that grants the view role to all service accounts (the system:serviceaccounts group, which includes default):

kubectl create clusterrolebinding serviceaccounts-view --clusterrole=view --group=system:serviceaccounts
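To confirm what the binding grants, kubectl impersonation can be used from the host; system:serviceaccount:default:default is the default service account that the curl pod below will run as:

# Should now return "yes", because the view role covers namespaces
kubectl auth can-i list namespaces --as=system:serviceaccount:default:default

# Secrets are excluded from the view role, so this should still be "no"
kubectl auth can-i get secrets --as=system:serviceaccount:default:default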

Run a pod with curl installed, e.g.

kubectl run curl -it --image tutum/curl -- bash

Find the pod (kubectl get pods) and attach to it:

kubectl attach -it curl-74846499d6-gms9k

From inside the pod:

# Access the service account token
export TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`

# No credentials supplied
curl -k https://<IP address>:6443/api/v1/namespaces

# Use the token to act with the service account's permissions
curl -k -H "Authorization: Bearer $TOKEN" https://<IP address>:6443/api/v1/namespaces

From inside the pod you can simply use https://kubernetes instead of the IP address and port.
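For example, still inside the pod, using the CA bundle that is mounted alongside the token so that -k isn't needed:

# Verify the API server certificate with the in-pod CA bundle
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes/api/v1/namespaces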

vagrant@vagrant:~$ sudo more /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=10.0.2.15
    - --allow-privileged=true
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    # AlwaysPullImages admission plugin slows down demos!
    - --enable-admission-plugins=NodeRestriction,DenyEscalatingExec,NamespaceLifecycle # ,EventRateLimit,PodSecurityPolicy
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    # - --insecure-port=0
    - --insecure-bind-address=172.28.128.3
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --profiling=false
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --repair-malformed-updates=false
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.13.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.0.2.15
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-allowed-names=front-proxy-client
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --insecure-port=0
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --enable-bootstrap-token-auth=true
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-group-headers=X-Remote-Group
    - --advertise-address=10.0.2.15
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --allow-privileged=true
    - --requestheader-username-headers=X-Remote-User
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --secure-port=6443
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.9.11
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.0.2.15
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
status: {}
# -*- mode: ruby -*-
# vi: set ft=ruby :
# After bringing up this VM:
# Install a pod network
# $ kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')
# Allow pods to run on the master node
# $ kubectl taint nodes --all node-role.kubernetes.io/master-
$script = <<-SCRIPT
# Install kubernetes
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.9.0-00 kubeadm=1.9.0-00 kubectl=1.9.0-00
# kubelet requires swap off
swapoff -a
# This adds the line twice, but the second time doesn't matter
sed -i '/ExecStart=/a Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Get the IP address that VirtualBox has given this VM
IPADDR=`ifconfig eth1 | grep Mask | awk '{print $2}'| cut -f2 -d:`
echo This VM has IP address $IPADDR
# Set up Kubernetes
kubeadm init --apiserver-cert-extra-sans=$IPADDR --node-name kube
# Set up admin creds for the vagrant user
echo Copying credentials to /home/vagrant...
sudo --user=vagrant mkdir -p /home/vagrant/.kube
cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown $(id -u vagrant):$(id -g vagrant) /home/vagrant/.kube/config
# User will need to complete the setup:
echo ""
echo As well as adding pod networking you will probably want to allow pods to run on this master node:
echo ""
echo kubectl taint nodes --all node-role.kubernetes.io/master-
echo ""
SCRIPT
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-16.04"
  config.vm.network "private_network", ip: "172.28.128.3"
  config.vm.synced_folder ".", "/kube"
  config.vm.hostname = "kube"
  config.vm.define "kube"
  config.vm.provider :virtualbox do |vb|
    vb.name = "kube-1.9"
  end
  config.vm.provision "docker"
  config.vm.provision "shell", inline: $script
end