The Linux `kubectl` binary can be fetched with a command like:
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.4.5/bin/linux/amd64/kubectl
On an OS X workstation, replace `linux` in the URL above with `darwin`:
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.4.5/bin/darwin/amd64/kubectl
After downloading the binary, ensure it is executable and move it into your PATH:
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
export KUBECONFIG=$(pwd)/kubeconfig
Pro tip: alias k="kubectl"
The first step is to create an environment (namespace) with your name:

- Edit `namespace/namespace-with-my-name.yml` with your name, then create it with `kubectl create -f namespace/namespace-with-my-name.yml`
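For reference, a minimal Namespace manifest looks like the sketch below (the actual contents of `namespace/namespace-with-my-name.yml` are not reproduced in these notes; `your-name` is a placeholder):

```yaml
# Sketch of namespace/namespace-with-my-name.yml — replace "your-name"
apiVersion: v1
kind: Namespace
metadata:
  name: your-name
```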
- Edit `kubeconfig` with the new namespace
- Explain the `get` command (`kubectl get namespaces`)
- Show `images/whoami` and test it: `docker run -p 8000:8000 guiocavalcanti/whoami:0.0.1`
- Open `deployments/whoami.yml`, see the labels and port, then run `kubectl create -f deployments/whoami.yml`
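The file itself is not reproduced here, but a Deployment for this image plausibly looks like the sketch below; the parts to call out are the pod `labels` and the `containerPort`:

```yaml
# Hypothetical sketch of deployments/whoami.yml (not the actual file)
apiVersion: extensions/v1beta1  # Deployment API group as of Kubernetes 1.4
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: whoami   # the Service selector will match on this label
    spec:
      containers:
      - name: whoami
        image: guiocavalcanti/whoami:0.0.1
        ports:
        - containerPort: 8000
```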
- Explain the `describe` command (`kubectl describe deployments`)
- Show pods with their labels: `kubectl get pods --show-labels`
- Show how to introspect one pod with `kubectl port-forward pod-name 8000:8000`
- Show how to get a shell in one pod with `kubectl exec -it pod-name sh`
- Scale whoami with `kubectl scale deployment whoami --replicas=3`
- Open `service/whoami.yml` and show the selectors
- Create the service with `kubectl create -f service/whoami.yml`
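A Service selecting those pods would look roughly like this sketch (assuming an `app: whoami` pod label; the real `service/whoami.yml` may differ):

```yaml
# Hypothetical sketch of service/whoami.yml
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami      # must match the labels on the Deployment's pods
  ports:
  - port: 80         # service port — lets "curl whoami" work with no port
    targetPort: 8000 # the container's port
```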
- Create a new pod from which to inspect the service:
kubectl run -i -t inspector --image=guiocavalcanti/inspector:0.0.1 --restart=Never --rm bash
nslookup whoami
curl whoami
while true; do curl whoami; echo ; done
- Open `service/whoami-external.yml` and show the selectors
kubectl create -f service/whoami-external.yml
kubectl get svc
kubectl describe svc whoami-external
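The external variant differs mainly in its type; a hedged sketch (the real file may differ):

```yaml
# Hypothetical sketch of service/whoami-external.yml
apiVersion: v1
kind: Service
metadata:
  name: whoami-external
spec:
  type: LoadBalancer  # on AWS this provisions an ELB
  selector:
    app: whoami
  ports:
  - port: 80
    targetPort: 8000
```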
- Modify your paulo-marlon application container to return "marlon (replica: HOSTNAME)", where `HOSTNAME` is an environment variable
- Deploy a two-tier application with:
  - an HTTP server as the front-end
  - the modified paulo-marlon application as the backend
Requirements:
- The HTTP server front-end and the backend should scale independently
- The HTTP server should be accessible from the Web using an ELB (`type: LoadBalancer`)
- Both tiers should be fault tolerant (i.e. you should be able to delete pods without the application failing)
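One way to make sure `HOSTNAME` carries the pod name is the downward API; the fragment below is only a sketch of the backend pod template (the image name is an assumption), and note that most containers already get `HOSTNAME` set to the pod name by default:

```yaml
# Fragment of the backend Deployment's pod template (hypothetical)
spec:
  containers:
  - name: backend
    image: your-registry/paulo-marlon:latest  # assumed image name
    env:
    - name: HOSTNAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name  # injects the pod name
```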
Tips:
- If you didn't finish the last assignment, you can use `images/whoami` (40 MB) as the base image and `nginx:stable-alpine` (17 MB)
- If you can't create an external service (`type: LoadBalancer`), use `kubectl proxy` to reach your service.