@dougbtv
Last active December 18, 2018 04:49

You had ONE JOB -- A Kubernetes job.

Let's take a look at how Kubernetes jobs are crafted. I had been jamming work-around shell scripts into the entrypoints* for some containers in the vnf-asterisk project that Leif and I have been working on. That's not ideal when we can use Kubernetes jobs, or in their new parlance, "run to completion finite workloads" (I'll stick to calling them "jobs"). They're one-shot containers that do one thing, and then end (sort of like a systemd "oneshot" unit, at least how we'll use them today). I like the idea of using them to complete some service discovery for me as other pods are coming up. Today we'll fire up a pod, then spin up a job to discover that pod (by querying the API for info about it) and put that info into etcd. Let's get the job done.

This post also exists as a gist on github where you can grab some files, which I'll probably reference a couple of times.

* Not everyone likes having a custom entrypoint shell script; some people consider it a bit... "jacked up". But personally, I don't mind it, depending on the circumstance -- sometimes I think it's a pragmatic solution. So it's not always bad; it depends on the case. But where we can break things up and into their particular places, it's a GoodThing(TM).

Let's try firing up a Kubernetes job and see how it goes. We'll use the k8s jobs documentation as a basis. But, as you'll see, we'll need a bit more help than the docs alone -- especially around authenticating against the API from inside a pod.

Some requirements.

You'll need a Kubernetes cluster up and running for this. If you don't have one, you can spin up k8s on CentOS with this blog article.

A bit of an editorial is that... y'know, OpenShift Origin kind of makes some of these things a little easier compared to vanilla K8s, especially managing permissions and all that good stuff for the different accounts. It's a little more cinched down in some ways (which you want in production), but there are some great conveniences in oc for handling some of what we have to look at in more fine-grained detail herein.

Running etcd.

You can pick up the YAML for this from the gist, and it should be easy to fire up etcd with:

kubectl create -f etcd.yaml

Assuming you've set up DNS correctly to resolve from the master (get the DNS pod IP, put it in your resolv.conf, and search cluster.local -- also see scratch.sh in the gist), you can check that it works...
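For reference, the master's /etc/resolv.conf winds up looking roughly like this (10.96.0.10 being the kube-dns service IP on my cluster -- check yours with kubectl get svc --all-namespaces | grep dns):

search cluster.local
nameserver 10.96.0.10
nameserver 192.168.122.1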

# Set the value of the key "message" to be "sup sup"
[centos@kube-master ~]$ curl -L -X PUT http://etcd-client.default.svc.cluster.local:2379/v2/keys/message -d value="sup sup"
{"action":"set","node":{"key":"/message","value":"sup sup","modifiedIndex":9,"createdIndex":9}}

# Now retrieve that value.
[centos@kube-master ~]$ curl -L http://etcd-client.default.svc.cluster.local:2379/v2/keys/message 
{"action":"get","node":{"key":"/message","value":"sup sup","modifiedIndex":9,"createdIndex":9}}

Authenticating against the API.

At least to me -- etcd is easy. Jobs are easy. The hardest part was authenticating against the API. So, let's step through how that works quickly. It's not difficult; I just didn't have all the pieces together at first.

The idea here is that Kubernetes puts the default service account's API token into a file in each container in the pod, at /var/run/secrets/kubernetes.io/serviceaccount/token (this being the default service account in the default namespace). We then present that token to the API in our curl command.

But, it did get me to read about the kubectl proxy command, service accounts, and accessing the cluster -- when really what I needed was just a bit of a tip from this stackoverflow answer.

First off, you can see that you have the API running with:

kubectl get svc --all-namespaces | grep -i kubernetes

Great -- if you see that line, it means you have the API running, and you'll also be able to access it with DNS, which makes things a little cleaner.
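On a kubeadm-deployed cluster like this one, that line will typically look something like the following (the ClusterIP and age will naturally vary):

default       kubernetes   10.96.0.1    <none>    443/TCP   14d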

Now that you can see that, we can go and access it... let's run a pod.

kubectl run demo --image=centos:centos7 --command -- /bin/bash -c 'while :; do sleep 10; done'

Alright, now that you've got this running (look for it with kubectl get pods), we can enter that container and query the API.

Let's do that, just to prove it.

[centos@kube-master ~]$ kubectl exec -it demo-1260169299-96zts -- /bin/bash

# Not enough columns for me...
[root@demo-1260169299-96zts /]# stty rows 50 cols 132

# Pull up the service account token.
[root@demo-1260169299-96zts /]# KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)

# Show it if you want.
[root@demo-1260169299-96zts /]#  echo $KUBE_TOKEN

# Now you can query the API
[root@demo-1260169299-96zts /]# curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://kubernetes.default.svc.cluster.local
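As an aside, instead of grep-ing the whole pod list (like we'll do in a moment), the API can filter server-side with a label selector. A sketch, assuming the demo deployment still carries the default run=demo label that kubectl run applies:

[root@demo-1260169299-96zts /]# curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" "https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods?labelSelector=run%3Ddemo"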

Now, we have one job to do...

And that's to create a job. So let's create a job that puts the IP address of this "demo pod" into etcd. In theory we'd use this with something else, so that it can discover where that pod lives.

We'll figure out the IP address of the pod by querying the API. If you'd like to dig in a little bit and get your feet wet with the Kube API, may I suggest this article from TheNewStack on taking the API for a spin.

Why not just always query the API? Well, you could do that too. But in my case we're going to generally standardize around using etcd, in part because in the full use-case we're also going to store other metadata there that's not "just the IP address".
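For instance, once there's more than the IP to track, the value could just as well be a small JSON blob. A hypothetical sketch (the key name and fields here are made up for illustration):

[centos@kube-master ~]$ curl -L -X PUT http://etcd-client.default.svc.cluster.local:2379/v2/keys/demo/meta -d value='{"podip": "10.244.2.11", "role": "demo"}'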

We can query the API directly to find out the fact we're looking for, so let's do that first, just to verify that our end results will be OK.

I'm going to cheat here and run this little test from the master (instead of inside a container); it should work if you're deploying using my playbooks.

# Figure out the pod name
[centos@kube-master ~]$ podname=$(curl -s http://localhost:8080/api/v1/namespaces/default/pods | jq ".items[] .metadata.name" | grep -i demo | sed -e 's/"//g')
[centos@kube-master ~]$ echo $podname
demo-1260169299-96zts

# Now using the podname, we can figure out the IP
[centos@kube-master ~]$ podip=$(curl -s http://localhost:8080/api/v1/namespaces/default/pods/$podname | jq '.status.podIP' | sed -s 's/"//g')
[centos@kube-master ~]$ echo $podip
10.244.2.11

Alright, that having been proven out, we can now create a job to do the same thing for us.

Let's go and define a job YAML definition. You'll find this one borrows heavily from the documentation, but mixes up a couple things -- notably, it uses a customized centos:centos7 image of mine that has jq installed; it's called dougbtv/jq and it's available on Dockerhub.
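For the curious, there's nothing fancy about that image -- here's a sketch of a Dockerfile that would build an equivalent (my approximation, not necessarily the exact Dockerfile behind dougbtv/jq):

FROM centos:centos7
RUN curl -L -o /usr/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 \
    && chmod +x /usr/bin/jq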

Also available in the gist, here's the job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: hoover
spec:
  template:
    metadata:
      name: hoover
    spec:
      containers:
      - name: hoover
        image: dougbtv/jq
        command: ["/bin/bash"]
        args:
          - "-c"
          - >
            KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token) &&
            podname=$(curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods | jq '.items[] .metadata.name' | grep -i demo | sed -e 's/\"//g') && 
            podip=$(curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/$podname | jq '.status.podIP' | sed -s 's/\"//g') &&
            echo "the pod is @ $podip" &&
            curl -L -X PUT http://etcd-client.default.svc.cluster.local:2379/v2/keys/podip -d value="$podip"
      restartPolicy: Never

Let's create it.

[centos@kube-master ~]$ kubectl create -f job.yaml 

It's named "hoover" as it's kinda sucking up some info from the API to do something with it.

So look for it in the list of ALL pods.

[centos@kube-master ~]$ kubectl get pods --show-all
NAME                    READY     STATUS      RESTARTS   AGE
demo-1260169299-96zts   1/1       Running     0          1h
etcd0                   1/1       Running     0          12d
etcd1                   1/1       Running     0          12d
etcd2                   1/1       Running     0          12d
hoover-fkbjj            0/1       Completed   0          20s
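You can also check on the job resource itself, not just the pod it spawned -- on this vintage of Kubernetes the output looks something like:

[centos@kube-master ~]$ kubectl get jobs
NAME      DESIRED   SUCCESSFUL   AGE
hoover    1         1            20s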

Now we can look at the logs from it. It'll tell us what the IP is.

[centos@kube-master ~]$ kubectl logs hoover-fkbjj
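Given the pod IP we verified by hand earlier, the output should be along the lines of:

the pod is @ 10.244.2.11
{"action":"set","node":{"key":"/podip","value":"10.244.2.11","modifiedIndex":10,"createdIndex":10}}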

That all said and done... we can complete this by seeing that the value made it to etcd.

[centos@kube-master ~]$ curl -L http://etcd-client.default.svc.cluster.local:2379/v2/keys/podip
{"action":"get","node":{"key":"/podip","value":"10.244.2.11","modifiedIndex":10,"createdIndex":10}}

Voila!
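One gotcha worth noting: a completed job (and its pod) sticks around until you delete it, and you can't create a new job with the same name until you do. So when you want to re-run the hoover, clean up first:

[centos@kube-master ~]$ kubectl delete -f job.yaml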

etcd.yaml (from the gist):

apiVersion: v1
kind: Service
metadata:
  name: etcd-client
spec:
  ports:
  - name: etcd-client-port
    port: 2379
    protocol: TCP
    targetPort: 2379
  selector:
    app: etcd
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: etcd
    etcd_node: etcd0
  name: etcd0
spec:
  containers:
  - command:
    - /usr/local/bin/etcd
    - --name
    - etcd0
    - --initial-advertise-peer-urls
    - http://etcd0:2380
    - --listen-peer-urls
    - http://0.0.0.0:2380
    - --listen-client-urls
    - http://0.0.0.0:2379
    - --advertise-client-urls
    - http://etcd0:2379
    - --initial-cluster
    - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
    - --initial-cluster-state
    - new
    image: quay.io/coreos/etcd:latest
    name: etcd0
    ports:
    - containerPort: 2379
      name: client
      protocol: TCP
    - containerPort: 2380
      name: server
      protocol: TCP
  restartPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  labels:
    etcd_node: etcd0
  name: etcd0
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: server
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    etcd_node: etcd0
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: etcd
    etcd_node: etcd1
  name: etcd1
spec:
  containers:
  - command:
    - /usr/local/bin/etcd
    - --name
    - etcd1
    - --initial-advertise-peer-urls
    - http://etcd1:2380
    - --listen-peer-urls
    - http://0.0.0.0:2380
    - --listen-client-urls
    - http://0.0.0.0:2379
    - --advertise-client-urls
    - http://etcd1:2379
    - --initial-cluster
    - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
    - --initial-cluster-state
    - new
    image: quay.io/coreos/etcd:latest
    name: etcd1
    ports:
    - containerPort: 2379
      name: client
      protocol: TCP
    - containerPort: 2380
      name: server
      protocol: TCP
  restartPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  labels:
    etcd_node: etcd1
  name: etcd1
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: server
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    etcd_node: etcd1
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: etcd
    etcd_node: etcd2
  name: etcd2
spec:
  containers:
  - command:
    - /usr/local/bin/etcd
    - --name
    - etcd2
    - --initial-advertise-peer-urls
    - http://etcd2:2380
    - --listen-peer-urls
    - http://0.0.0.0:2380
    - --listen-client-urls
    - http://0.0.0.0:2379
    - --advertise-client-urls
    - http://etcd2:2379
    - --initial-cluster
    - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
    - --initial-cluster-state
    - new
    image: quay.io/coreos/etcd:latest
    name: etcd2
    ports:
    - containerPort: 2379
      name: client
      protocol: TCP
    - containerPort: 2380
      name: server
      protocol: TCP
  restartPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  labels:
    etcd_node: etcd2
  name: etcd2
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: server
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    etcd_node: etcd2

scratch.sh (the scratch notes referenced earlier):

# Some results of what happened when trying to get it to rumble...
[centos@kube-master ~]$ sudo yum install -y bind-utils
[centos@kube-master ~]$ kubectl get svc --all-namespaces | grep dns
kube-system   kube-dns   10.96.0.10   <none>    53/UDP,53/TCP   14d
[centos@kube-master ~]$ nslookup etcd-client.default.svc.cluster.local 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: etcd-client.default.svc.cluster.local
Address: 10.98.147.1
# Setup resolv.conf to search cluster.local and use that nameserver
[centos@kube-master ~]$ cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search cluster.local
nameserver 10.96.0.10
nameserver 192.168.122.1
[centos@kube-master ~]$ nslookup etcd-client.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: etcd-client.default.svc.cluster.local
Address: 10.98.147.1
# Let's run some bogan pod... to make sure DNS resolves.
[centos@kube-master ~]$ kubectl run demo --image=centos:centos7 --command -- /bin/bash -c 'while :; do sleep 10; done'
deployment "demo" created
[centos@kube-master ~]$ kubectl help run
[centos@kube-master ~]$ watch -n1 kubectl get pods
[centos@kube-master ~]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-1260169299-t49m4   1/1       Running   0          11s
etcd0                   1/1       Running   0          11d
etcd1                   1/1       Running   0          11d
etcd2                   1/1       Running   0          11d
[centos@kube-master ~]$ kubectl exec -it demo-1260169299-t49m4 -- yum install -y bind-utils
[centos@kube-master ~]$ kubectl exec -it demo-1260169299-t49m4 -- nslookup etcd0.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: etcd0.default.svc.cluster.local
Address: 10.99.12.54
# And you can remove it by removing the deployment (deleting the pod is sisyphean)
[centos@kube-master ~]$ kubectl delete deployment demo
# Alright, now we should be able to query etcd from the host...
[centos@kube-master ~]$ curl -L -X PUT http://etcd-client.default.svc.cluster.local:2379/v2/keys/message -d value="sup sup"
{"action":"set","node":{"key":"/message","value":"sup sup","modifiedIndex":9,"createdIndex":9}}
[centos@kube-master ~]$ curl -L http://etcd-client.default.svc.cluster.local:2379/v2/keys/message
{"action":"get","node":{"key":"/message","value":"sup sup","modifiedIndex":9,"createdIndex":9}}
# Now, try a job, let's use the example one.
[centos@kube-master ~]$ cat job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(750)"]
      restartPolicy: Never
[centos@kube-master ~]$ kubectl create -f job.yaml
# You can see it for a short while.
[centos@kube-master ~]$ kubectl get pods
NAME       READY     STATUS              RESTARTS   AGE
etcd0      1/1       Running             0          11d
etcd1      1/1       Running             0          11d
etcd2      1/1       Running             0          11d
pi-q4q0d   0/1       ContainerCreating   0          10s
# But the logs are around for a while, at least.
[centos@kube-master ~]$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
etcd0     1/1       Running   0          11d
etcd1     1/1       Running   0          11d
etcd2     1/1       Running   0          11d
[centos@kube-master ~]$ watch -n1 kubectl describe pod pi
[centos@kube-master ~]$ kubectl logs pi-q4q0d
3.141592653589793238[...snip...]
# But not after you delete it.
[centos@kube-master ~]$ kubectl delete -f job.yaml
# So... The question is...
# Can I make this see properties of a companion pod so that it can do the etcd announcing?
# I should probably experiment, but, here's the thing...
# It's a different element, it's not going to see the same interfaces / IP addresses as
# the target pod where I want to put some info in etcd.
# So I think it's gonna need to query the k8s api.
# We'll take the API for a spin https://thenewstack.io/taking-kubernetes-api-spin/
[centos@kube-master ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# Oh, here it is.
[centos@kube-master ~]$ curl http://localhost:8080
{clipped}
# I don't have swagger-ui, who cares, I guess.
[centos@kube-master ~]$ curl -L http://localhost:8080/swagger-ui
# Also I couldn't ssh tunnel to the API
[doug@desktop ~]$ ssh -L 8080:192.168.122.65:8080 root@192.168.1.119
Last login: Mon Mar 27 17:33:46 2017 from 192.168.1.199
[root@droctagon2 ~]#
channel 3: open failed: connect failed: Connection refused
# Start up the demo pod, again, we'll try to find its IP address
# To make sure we're getting it right, check it out ourselves.
[centos@kube-master ~]$ kubectl run demo --image=centos:centos7 --command -- /bin/bash -c 'while :; do sleep 10; done'
[centos@kube-master ~]$ kubectl exec -it demo-1260169299-5nwj7 -- yum install -y iproute
[centos@kube-master ~]$ kubectl exec -it demo-1260169299-5nwj7 -- ip a | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 10.244.3.6/24 scope global eth0
    inet6 fe80::c78:32ff:fe04:4571/64 scope link tentative dadfailed
# sidebar install jq
[centos@kube-master ~]$ history | tail -n 5
232 wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
233 chmod +x ./jq
234 sudo mv jq /usr/bin
# And we can get pretty things like this...
[centos@kube-master ~]$ curl -s http://localhost:8080/api/v1/nodes | jq '.items[] .metadata.labels'
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/os": "linux",
  "kubeadm.alpha.kubernetes.io/role": "master",
  "kubernetes.io/hostname": "kube-master"
}
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/os": "linux",
  "kubernetes.io/hostname": "kube-minion-1"
}
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/os": "linux",
  "kubernetes.io/hostname": "kube-minion-2",
  "voiptype": "tandem"
}
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/os": "linux",
  "kubernetes.io/hostname": "kube-minion-3"
}
# Now we can get the pod name like so
[centos@kube-master ~]$ podname=$(curl -s http://localhost:8080/api/v1/namespaces/default/pods | jq ".items[] .metadata.name" | grep -i demo | sed -e 's/"//g')
[centos@kube-master ~]$ echo $podname
demo-1260169299-5nwj7
# And we can query the API given that pod name.
# and use it to get the Pods IP
[centos@kube-master ~]$ podip=$(curl -s http://localhost:8080/api/v1/namespaces/default/pods/$podname | jq '.status.podIP' | sed -s 's/"//g')
[centos@kube-master ~]$ echo $podip
10.244.3.6
# Alright, that's fairly good progress.
# But, seems like we're going to need to access that through.... well it's DNS name
# ...cause this job could come up anywhere.
# By default, that's just not working
[centos@kube-master ~]$ curl -k https://kubernetes.default.svc.cluster.local
Unauthorized
# So I think we need to open up the `kubectl proxy` to be a little bit more... ahem, promiscuous.
# https://kubernetes.io/docs/user-guide/kubectl/kubectl_proxy/
# http://stackoverflow.com/questions/42095142/kubectl-proxy-unauthorized-when-accessing-from-another-machine
[centos@kube-master ~]$ kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Starting to serve on [::]:8001
^C
# Wait, that's interactive? So yeah, only as long as it's up...
# So yeah, this might be about giving the service account access...
# https://kubernetes.io/docs/user-guide/service-accounts/
[centos@kube-master ~]$ kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         15d
# More information given "accessing the cluster"
# https://kubernetes.io/docs/concepts/cluster-administration/access-cluster/
# No kidding, it gets put into the containers in the pod, the secret that is.
[centos@kube-master ~]$ kubectl exec -it demo-1260169299-5nwj7 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token && echo
eyJhbGciOiJSUzI [... snip ...]
# So yeah, how in tarnation do you use that?
# Ahh, this is how!
[centos@kube-master ~]$ kubectl exec -it demo-1260169299-5nwj7 -- /bin/bash
[root@demo-1260169299-5nwj7 /]$ KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
[root@demo-1260169299-5nwj7 /]$ echo $KUBE_TOKEN && echo
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tMDM2bDAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNjOGUxNmNmLTA4YmItMTFlNy1iMGMyLTUyNTQwMGI3NzE1YSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.GhJKWMiLyXqVkQLUEMbS0JiPp6zEwkkNzGSYw9y1-pq6ZKgEpYx44KE7taB85NsnWrzcUxe2NajmhJeFwgB7Wu5bfYNcMqtvAPDTonsfAiDaVMYSUWDsNThIA_hez2NXsqZm5a8ddOcMKkc13-3oyHyft2WWR9avNmywadIPzNuJnzUzhME7ZxrP-3CQl1DpzDsK8FfVFMZM-WS7BxDF4N7lElqtbazp1d8nOt0gaFI3bkpE7ibrkmQ6StE-IpwW6VVlw6mRCwXHNPnCF-6zv8ooVTtK9ltw_Y-FasyM9Wap0kgNcdx9EkNwLh46vN1Ol-F1_XIfpCnMQyG0MkPA6A
[root@demo-1260169299-5nwj7 /]$ curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://kubernetes.default.svc.cluster.local
{
  "paths": [
    "/api",
[... snip! worked...]