@ryanj
Last active June 8, 2017 17:16
Kubernetes Zone "Hands-On Intro to Kubernetes" Workshop 4/17 in Austin, TX http://bit.ly/k8s-zone
<section>
<section id="kubernetes-hands-on">
<h1>Kubernetes Zone</h1>
<h1>Hands-On Workshop</h1>
<br/>
<p><a href="https://www.eventbrite.com/e/kubernetes-zone-workshop-andor-reception-tickets-32538282880">Kubernetes Zone - April 17, 2017</a></p>
<p><a href="http://bit.ly/k8s-zone">bit.ly/k8s-zone</a></p>
</section>
<section data-state='blackout' data-background-color="#000000" id='presented-by'>
<p>presented by&hellip;</p>
<div class='fragment' style='width:45%; float:left;'>
<p><a href="http://twitter.com/ryanj/"><img alt="ryanj" src="http://ryanjarvinen.com/images/ryanj-mestrefungo-com.gif" style="width:70%" /></a></p>
<p><a href="http://twitter.com/ryanj/">@ryanj</a>, Open Source Activist at CoreOS</p>
</div>
<div class='fragment' style='width:10%;float:left;margin-top:23%'>&amp;</div>
<div class='fragment' style='width:45%; float:left;'>
<p><a href="http://twitter.com/elsiephilly/"><img alt="Elsie" src="https://pbs.twimg.com/profile_images/794561267855802368/ht3C6MWT.jpg" style="width:70%"/></a></p>
<p><a href="http://twitter.com/elsiephilly/">Elsie Phillips</a>, Community Lead at CoreOS</p>
</div>
</section>
<section id='coreos' data-markdown>
![CoreOS Logo](http://i.imgur.com/DRm4KEq.png "")
Helping *Secure the Internet* by keeping your Container Linux hosts secure, up-to-date, and ready for the challenges of a modern world
</section>
</section>
<section>
<section id='introduction'>
<h1>Introduction</h1>
</section>
<section id='overview'>
<h2>Workshop Overview</h2>
<ol>
<li class='fragment'><a href="#/introduction">Introduction</a>
<ul>
<li><a href="#/workshop-setup">Workshop Setup</a></li>
</ul>
</li>
<li class='fragment'><a href="#/kubernetes-basics">Kubernetes Basics</a>
<ul>
<li><a href="#/why-k8s">Why Kubernetes?</a></li>
<li><a href="#/terminology">Learn five K8s Primitives</a></li>
</ul>
</li>
<li class='fragment'><a href="#/kubernetes-arch">Kubernetes Architecture</a>
<ul>
<li><a href="#/firedrills">Architecture Experiments</a></li>
</ul>
</li>
<li class='fragment'><a href="#/kubernetes-extensibility">Kubernetes Extensibility</a>
<ul>
<li><a href="#/what-are-operators">The Operator pattern</a></li>
<li><a href="#/operator-examples">Common Operators</a></li>
</ul>
</li>
<li class='fragment'><a href="#/wrap-up">Wrap-up</a></li>
</ol>
</section>
<section id='survey'>
<h3>Intro Survey / Who are you?</h3>
<ol>
<li class='fragment'>doing anything with containers today?</li>
<li class='fragment'>have you tried Container Linux?</li>
<li class='fragment'>do you have any experience using Kubernetes?</li>
<li class='fragment'>do you consider yourself to be proficient with the <code>kubectl</code> cli tool?</li>
<li class='fragment'>can you name five basic primitives or resource types?</li>
<li class='fragment'>can you name five pieces of k8s architecture?</li>
<li class='fragment'>can you confidently define the term "K8s operator"?</li>
<li class='fragment'>do you have any hands-on experience using operators?</li>
</ol>
</section>
</section>
<section>
<section id='workshop-setup' data-markdown>
## Workshop Setup
bring a laptop with the following:
1. [kubectl](#/kubectl)
2. [minikube](#/minikube)
3. [docker](#/docker)
4. [Optional tooling for advanced users](#/go)
Or, [use GKE for a managed Kubernetes environment](http://cloud.google.com):
[http://cloud.google.com](http://cloud.google.com)
</section>
<section id='kubectl'>
<h3>install kubectl</h3>
<p>linux amd64:</p>
<pre><code contenteditable>curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
</code></pre>
<p>osx amd64:</p>
<pre><code contenteditable>curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
</code></pre>
<p>To verify <code>kubectl</code> availability:</p>
<pre><code contenteditable>kubectl version</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/kubectl/install/">official <code>kubectl</code> setup notes</a></p>
</section>
<section id='minikube'>
<h3>install minikube</h3>
<p>linux/amd64:</p>
<pre><code contenteditable>curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/</code></pre>
<p>osx:</p>
<pre><code contenteditable>curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/</code></pre>
<p>to verify <code>minikube</code> availability:</p>
<pre><code contenteditable>minikube start</code></pre>
<p><a href="https://github.com/kubernetes/minikube/releases">official <code>minikube</code> setup notes</a></p>
</section>
<section id='minikube-virt'>
<h4>minikube troubleshooting</h4>
<p>If your minikube environment does not boot correctly:</p>
<ol>
<li>Minikube requires an OS virtualization back-end</li>
<li>Most OSes include some support for virtualization</li>
<li>You can use the <a href="https://github.com/kubernetes/minikube#quickstart"><code>--vm-driver</code></a> flag to select a specific virt provider</li>
</ol>
<pre><code contenteditable>minikube start --vm-driver=virtualbox</code></pre>
<p>Check the project <a href="https://github.com/kubernetes/minikube#requirements"><code>README</code></a> for more information about <a href="https://github.com/kubernetes/minikube#requirements">supported virtualization options</a></p>
</section>
<section id='minikube-rkt'>
<h5><b>ADVANCED CHALLENGE OPTION</b></h5>
<h3>rkt-powered minikube (optional)</h3>
<p>To start <code>minikube</code> with <code>rkt</code> enabled, try:</p>
<pre><code contenteditable>minikube start --network-plugin=cni --container-runtime=rkt</code></pre>
<p>to verify:</p>
<pre><code contenteditable>minikube ssh
docker ps # expect no containers here
rkt list # list running containers</code></pre>
</section>
<section id='docker'>
<h3>install the docker cli</h3>
<p>Download and install binary from <a href="https://store.docker.com/search?offering=community&type=edition">"the docker store"</a></p>
<p>Or, use a package manager to install:</p>
<pre><code contenteditable>brew install docker</code></pre>
<p>To verify <code>docker</code> availability:</p>
<pre><code contenteditable>docker version</code></pre>
<p>To <a href="https://github.com/kubernetes/minikube#reusing-the-docker-daemon">reference minikube's docker daemon from your host</a>, run:</p>
<pre><code contenteditable>eval $(minikube docker-env)</code></pre>
</section>
<section id='go'>
<h5><b>ADVANCED CHALLENGE OPTION</b></h5>
<h3>install go (optional)</h3>
<p>Download and install binary from <a href="https://golang.org/doc/install">golang.org</a></p>
<p>Or, use a package manager to install:</p>
<pre><code contenteditable>brew install go
export GOPATH=$HOME/src/go
export GOROOT=/usr/local/opt/go/libexec
export PATH=$PATH:$GOPATH/bin
export PATH=$PATH:$GOROOT/bin</code></pre>
<p>To verify <code>go</code> availability:</p>
<pre><code contenteditable>go version</code></pre>
</section>
<section id='ready' data-markdown>
# *Ready?*
</section>
</section>
<section>
<section id='kubernetes-basics' data-markdown>
# Kubernetes Basics
</section>
<section id='why-k8s'>
<h3>Why Kubernetes?</h3>
<p><img src="https://pbs.twimg.com/profile_images/511909265720614913/21_d3cvM.png" alt="kubernetes" style='width:30%;'></p>
</section>
<section id='k8s-is'>
<h3>Kubernetes is...</h3>
<ol>
<li class='fragment'>The best way to manage distributed solutions at scale, built on years of industry expertise (Google-scale experience)</li>
<li class='fragment'>An agreement on an open source basis for container-driven distributed solution delivery, featuring a modular, HA architecture</li>
<li class='fragment'>An extensible modeling language with a huge community following</li>
</ol>
</section>
<section id='an-api' data-markdown>
## An API
API object primitives include the following attributes:
1. kind
2. apiVersion
3. metadata
4. spec
5. status
*mostly true
</section>
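<section data-markdown>
The five attributes above appear on every object the API serves. A minimal sketch (a hypothetical pod; the field values are assumptions, not a runnable spec):

```yaml
kind: Pod              # what type of object this is
apiVersion: v1         # which API schema version it follows
metadata:              # identifying data: name, namespace, labels
  name: example
spec:                  # desired state, supplied by you
  containers: []
status: {}             # observed state, written by the system
```
</section>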
<section data-transition="linear" id='terminology' data-markdown>
### Basic K8s Terminology
1. [node](#/node)
2. [pod](#/po)
3. [service](#/svc)
4. [deployment](#/deploy)
5. [replicaSet](#/rs)
</section>
</section>
<section>
<section data-transition="linear" id='node' data-markdown>
### Node
A node is a host machine (physical or virtual) where containerized processes run.
Node activity is managed via one or more Master instances.
</section>
<section>
<p>Try using <code>kubectl</code> to list resources by type:</p>
<pre><code contenteditable>kubectl get nodes</code></pre>
<p>Request the same info, but output the results as structured yaml:</p>
<pre><code contenteditable>kubectl get nodes -o yaml</code></pre>
<p>Fetch an individual resource by <code>type/id</code>, output as <code>json</code>:</p>
<pre><code contenteditable>kubectl get node/minikube -o json</code></pre>
<p>View human-readable API output:</p>
<pre><code contenteditable>kubectl describe node/minikube</code></pre>
</section>
<section data-markdown>
### Observations:
* Designed to exist on multiple machines (distributed system)
  * high availability of nodes
  * platform scale out
* The API supports both json and yaml
</section>
</section>
<section>
<section data-transition="linear" id='po' data-markdown>
### Pod
A group of one or more co-located containers. Pods represent your minimum increment of scale.
> "Pods Scale together, and they Fail together" @theSteve0
</section>
<section>
<p>List resources by type:</p>
<pre><code contenteditable>kubectl get pods</code></pre>
<p>Create a new resource based on a json object specification:</p>
<pre><code contenteditable>curl https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json</code></pre>
<pre><code contenteditable>kubectl create -f https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json</code></pre>
<p>List resources by type:</p>
<pre><code contenteditable>kubectl get pods</code></pre>
<p>Fetch a resource by type and id, output the results as <code>yaml</code>:</p>
<pre><code contenteditable>kubectl get pod metrics-k8s -o yaml</code></pre>
<p>Notice any changes?</p>
</section>
<section data-markdown>
### Observations:
* pods are scheduled to be run on nodes
* asynchronous fulfillment of requests
* declarative specifications
* automatic health checks, lifecycle management for containers (processes)
</section>
</section>
<section>
<section data-transition="linear" id='svc' data-markdown>
### Service
Services (svc) establish a single endpoint for a collection of replicated pods, distributing inbound traffic based on label selectors.
In our K8s modeling language, a service represents a load balancer. Its implementation often varies per cloud provider.
</section>
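<section data-markdown>
As a sketch, a service spec ties a stable endpoint to a label selector. This is roughly what `kubectl expose` generates for the pod used later in this workshop (the selector label here is an assumption):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: metrics-k8s
spec:
  type: NodePort        # implementation varies: NodePort, LoadBalancer, ...
  selector:
    app: metrics-k8s    # inbound traffic is distributed across matching pods
  ports:
  - port: 2015
```
</section>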
<section id='services'>
<h3>Contacting your App</h3>
<p>Expose the pod by creating a new <code>service</code> (or "loadbalancer"):</p>
<pre><code contenteditable>kubectl expose pod/metrics-k8s --port 2015 --type=NodePort</code></pre>
<p>Contact your newly-exposed pod using the associated service id:</p>
<pre><code contenteditable>minikube service metrics-k8s</code></pre>
<p>Schedule a pod to be deleted:</p>
<pre><code contenteditable>kubectl delete pod metrics-k8s</code></pre>
<p>Contact the related service. What happens?:</p>
<pre><code contenteditable>minikube service metrics-k8s</code></pre>
<p>Delete the service:</p>
<pre><code contenteditable>kubectl delete service metrics-k8s</code></pre>
</section>
<section data-markdown>
### Observations:
* *"service"* basically means *"loadbalancer"*
* Pods and Services exist independently, have disjoint lifecycles
</section>
</section>
<section>
<section data-transition="linear" id='deploy' data-markdown>
### Deployment
A `deployment` helps you specify container runtime requirements (in terms of pods)
</section>
<section>
<p>Create a specification for your <code>deployment</code>:</p>
<pre><code contenteditable>kubectl run metrics-k8s --image=quay.io/ryanj/metrics-k8s \
--expose --port=2015 --service-overrides='{ "spec": { "type": "NodePort" } }' \
--dry-run -o yaml > deployment.yaml</code></pre>
<p>View the generated deployment spec file:</p>
<pre><code contenteditable>cat deployment.yaml</code></pre>
<p><i><b>Bug!:</b></i> Edit the file, adding "<code>---</code>" (on its own line) between resource 1 and resource 2 as a workaround.</p>
<p>Can you think of another way to fix this issue? Is there a json-compatible approach?</p>
</section>
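<section data-markdown>
For reference, a multi-resource yaml file separates documents with `---`. A sketch of the structure the workaround produces (most fields elided; the Deployment apiVersion reflects the 1.6-era API):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: metrics-k8s
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-k8s
```

A json-compatible alternative is to wrap both resources in a single `kind: List` object, since json has no document separator.
</section>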
<section>
<p>Create a new resource based on your yaml specification:</p>
<pre><code contenteditable>kubectl create -f deployment.yaml</code></pre>
<p>List resources by type:</p>
<pre><code contenteditable>kubectl get po,svc</code></pre>
<p>Connect to your new deployment via the associated service id:</p>
<pre><code contenteditable>minikube service metrics-k8s</code></pre>
</section>
<section id='replication'>
<h2>Replication</h2>
<p>Scale up the <code>metrics-k8s</code> deployment to 3 replicas:</p>
<pre><code contenteditable>kubectl scale deploy/metrics-k8s --replicas=3</code></pre>
<p>List pods:</p>
<pre><code contenteditable>kubectl get po</code></pre>
</section>
<section>
<p>Edit <code>deploy/metrics-k8s</code>, setting <code>spec.replicas</code> to <code>5</code>:</p>
<pre><code contenteditable>kubectl edit deploy/metrics-k8s -o json</code></pre>
<p>Save and quit. What happens?</p>
<pre><code contenteditable>kubectl get pods</code></pre>
</section>
<section id='autorecovery'>
<h2>AutoRecovery</h2>
<p>Watch for changes to <code>pod</code> resources:</p>
<pre><code contenteditable>kubectl get pods --watch</code></pre>
<p>In another terminal, delete several pods by id:</p>
<pre><code contenteditable>kubectl delete pod $(kubectl get pods | grep ^metrics-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')</code></pre>
<p>What happened? How many pods remain?</p>
<pre><code contenteditable>kubectl get pods</code></pre>
</section>
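<section data-markdown>
To see what that deletion pipeline selects before pointing it at a live cluster, here is the same pipeline run over simulated `kubectl get pods` output (the pod names are hypothetical):

```shell
# Stand-in for `kubectl get pods` output: NAME and STATUS columns
pods='metrics-k8s-1-abcde Running
metrics-k8s-1-fghij Running
metrics-k8s-1-klmno Running
metrics-k8s-1-pqrst Running'

# Keep matching rows, take the NAME column, limit to 3, join with spaces
targets=$(echo "$pods" | grep ^metrics-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')
echo "$targets"
```

These are the ids that would be handed to `kubectl delete pod`.
</section>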
<section data-markdown>
### Observations:
* Use the `--dry-run` flag to generate new resource specifications
* A deployment spec contains a pod spec
</section>
</section>
<section>
<section data-transition="linear" id='rs' data-markdown>
### ReplicaSet
A `replicaset` provides replication and lifecycle management for a specific image release
</section>
<section>
<p>Watch deployments (leave this running until the 'cleanup' section):</p>
<pre><code contenteditable>kubectl get deploy --watch</code></pre>
<p>View the current state of your deployment:</p>
<pre><code contenteditable>minikube service metrics-k8s</code></pre>
</section>
<section>
<h3>Rollouts</h3>
<p>Update your deployment's image spec to rollout a new release:</p>
<pre><code contenteditable>kubectl set image deploy/metrics-k8s metrics-k8s=quay.io/ryanj/metrics-k8s:v1</code></pre>
<p>Reload your browser to view the state of your deployment</p>
<pre><code contenteditable>kubectl get rs,deploy</code></pre>
</section>
<section>
<h3>Rollbacks</h3>
<p>View the list of previous rollouts:</p>
<pre><code contenteditable>kubectl rollout history deploy/metrics-k8s</code></pre>
<p>Rollback to the previous state:</p>
<pre><code contenteditable>kubectl rollout undo deployment metrics-k8s</code></pre>
<p>Reload your browser to view the state of your deployment</p>
</section>
<section>
<h3>Cleanup</h3>
<p>Cleanup old resources if you don't plan to use them:</p>
<pre><code contenteditable>kubectl delete service,deployment metrics-k8s</code></pre>
<p>Close any remaining <code>--watch</code> listeners</p>
</section>
<section data-markdown>
### Observations:
* The API allows for watch operations (in addition to get, set, list)
* ReplicaSets provide lifecycle management for pod resources
* Deployments create ReplicaSets to manage pod replication per rollout (per change in podspec: image:tag, environment vars)
</section>
</section>
<section>
<section id='kubernetes-arch' data-markdown>
# Kubernetes Architecture
</section>
<section data-markdown>
## etcd
![etcd logo](https://raw.githubusercontent.com/coreos/etcd/master/logos/etcd-glyph-color.png)
* distributed key-value store
* implements the RAFT consensus protocol
</section>
<section data-markdown>
### CAP theorem
1. Consistency
2. Availability
3. Partition tolerance
[etcd chooses Consistency and Partition tolerance ("CP")](https://coreos.com/etcd/docs/latest/learning/api_guarantees.html)
</section>
<section data-markdown>
## Degraded Performance
Fault tolerance sizing chart:
![etcd cluster sizing chart](http://cloudgeekz.com/wp-content/uploads/2016/10/etcd-fault-tolerance-table.png)
</section>
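<section data-markdown>
The chart follows from Raft's majority-quorum rule: a cluster of N members needs floor(N/2)+1 members to agree, so it survives N minus quorum failures. A quick sketch of the arithmetic:

```shell
# Majority quorum per cluster size, and the failures each size tolerates
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "cluster=$n quorum=$quorum failures_tolerated=$tolerated"
done
```

This is also why etcd clusters use odd member counts: growing from 3 to 4 members raises the quorum without tolerating any additional failures.
</section>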
<section data-markdown>
### play.etcd.io
[play.etcd.io/play](http://play.etcd.io/play)
</section>
<section data-markdown>
## Kubernetes API
* gatekeeper for etcd (the only way to access the db)
* not required for pod uptime
</section>
<section data-markdown>
### API outage simulation
Example borrowed from [Brandon Philips' "Fire Drills" from OSCON 2016](https://github.com/philips/2016-OSCON-containers-at-scale-with-Kubernetes#fire-drills):
https://github.com/philips/2016-OSCON-containers-at-scale-with-Kubernetes#fire-drills
</section>
<section data-markdown>
Create a pod and a service (repeat our deployment drill). Verify that the service is responding.
ssh into minikube, kill the control plane:
```
minikube ssh
ps aux | grep "localkube"
sudo killall localkube
logout
```
Use kubectl to list pods:
```
kubectl get pods
The connection to the server mycluster.example.com was refused - did you specify the right host or port?
```
The API server is down!
Reload your service. Are your pods still available?
</section>
<section data-markdown>
## Kubelet
Runs on each node, listens to the API for new items with a matching `NodeName`
</section>
<section data-markdown>
## Kubernetes Scheduler
Assigns workloads to Node machines
</section>
<section data-markdown>
## Bypass the Scheduler
Create two pods:
```
kubectl create -f https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json
kubectl create -f https://gist.githubusercontent.com/ryanj/893e0ac5b3887674f883858299cb8b93/raw/0cf16fd5b1c4d2bb1fed115165807ce41a3b7e20/pod-scheduled.json
```
View events:
```
kubectl get events
```
Did both pods get scheduled? Did both run?
</section>
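<section data-markdown>
The trick behind the second pod: a pod spec may name its node directly, so the kubelet on that node picks the pod up without the scheduler ever assigning it. A sketch (the `nodeName` value assumes a minikube cluster; container details are illustrative):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: pre-scheduled
spec:
  nodeName: minikube    # when set, the scheduler is bypassed entirely
  containers:
  - name: metrics-k8s
    image: quay.io/ryanj/metrics-k8s
```
</section>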
<section data-markdown>
## Kube DNS
Provides DNS names for services, enabling in-cluster service discovery
</section>
<section data-markdown>
## Kube Proxy
Runs on each node, routing service traffic to pod endpoints
</section>
<section data-markdown>
## CNI
* flannel
* canal
</section>
<section data-markdown>
## CRI
* containerd
* rkt
* oci
[https://coreos.com/blog/rkt-accepted-into-the-cncf.html](https://coreos.com/blog/rkt-accepted-into-the-cncf.html)
</section>
<section id='k8s-controllers' data-markdown>
### K8s Controllers
Controllers regulate the platform's declarative state, reconciling imbalances via a basic control loop
https://kubernetes.io/docs/admin/kube-controller-manager/
Kubernetes allows you to introduce your own custom controllers!
</section>
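<section data-markdown>
The control loop itself is simple: observe actual state, compare to desired state, act to close the gap, repeat. A toy sketch (purely illustrative, no real API calls):

```shell
desired=3   # declared replica count
actual=0    # observed replica count

# Reconcile: keep acting until observed state matches declared state
while [ "$actual" -lt "$desired" ]; do
  actual=$(( actual + 1 ))   # "create" a missing pod
  echo "reconciled: actual=$actual desired=$desired"
done
```
</section>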
<section data-markdown>
### Architecture Diagram
![arch diagram](https://cdn.thenewstack.io/media/2016/08/Kubernetes-Architecture-1024x637.png)
</section>
<section data-markdown>
### Interaction Diagram
![interaction diagram](https://i1.wp.com/blog.docker.com/wp-content/uploads/swarm_kubernetes2.png?resize=1024)
[(copied from blog.docker.com)](https://blog.docker.com/2016/03/swarmweek-docker-swarm-exceeds-kubernetes-scale/)
</section>
</section>
<section>
<section id='kubernetes-extensibility' data-markdown>
# Kubernetes Extensibility
</section>
<section id="what-is-an-SRE">
<h3>What is an SRE?</h3>
<p><a href="https://landing.google.com/sre/book.html"><img src="https://landing.google.com/sre/images/book-2x.png" alt="Site Reliability Engineering" style="width: 25%;"></a></p>
<p><i>"how Google runs production systems"</i></p>
<ol>
<li><a href="https://landing.google.com/sre/book.html">Google's SRE book - free to read online</a></li>
<li><a href="https://medium.com/@jerub/googles-approach-4bcdc0533c0a">SRE blog post series on Medium</a></li>
</ol>
</section>
<section id='what-are-operators'>
<h3>What are Operators?</h3>
<p class='fragment'>Kube Operators establish a pattern for introducing higher-order interfaces that represent the logical domain expertise (and perhaps the ideal product output) of a Kubernetes SRE</p>
<p class='fragment'><a href="https://coreos.com/blog/introducing-operators.html">blog post: "Introducing Operators"</a></p>
</section>
<section id='k8s-tpr' data-markdown>
### Third Party Resources (TPRs)
TPRs let you define new k8s primitives, extending the capabilities of the platform by adding your own terminology to the modeling language
https://kubernetes.io/docs/user-guide/thirdpartyresources/
</section>
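<section data-markdown>
A sketch of a TPR definition from the 1.5/1.6-era API (the name and group mirror the etcd operator's registration, but treat the exact fields as assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cluster.etcd.coreos.com   # registers `kind: Cluster` in group etcd.coreos.com
description: "Managed etcd clusters"
versions:
- name: v1beta1
```

Once registered, the new type works with `kubectl get`/`create` like any built-in resource.
</section>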
<section id='best-practices' data-markdown>
### Best Practices for Writing Operators
https://coreos.com/blog/introducing-operators.html#how-can-you-create-an-operator
</section>
</section>
<section>
<section id='operator-examples' data-markdown>
## Operator Examples
</section>
<section id='etcd-operator' data-markdown>
### Etcd
blog post: https://coreos.com/blog/introducing-the-etcd-operator.html
sources: https://github.com/coreos/etcd-operator
demo video: https://www.youtube.com/watch?v=n4GYyo1V3wY
</section>
<section id='prometheus' data-markdown>
### Prometheus
blog post: https://coreos.com/blog/the-prometheus-operator.html
sources: https://github.com/coreos/prometheus-operator
demo video: https://www.youtube.com/watch?v=GYSKEd9FePk
</section>
<section id='kube-cert-manager' data-markdown>
### Kube Cert Manager
https://github.com/kelseyhightower/kube-cert-manager
</section>
<section id='rook' data-markdown>
### Rook (Storage)
https://rook.io/
</section>
<section id='elasticsearch' data-markdown>
### Elastic Search
https://github.com/upmc-enterprises/elasticsearch-operator
</section>
<section id='postgres' data-markdown>
### PostgreSQL
Postgres Operator from CrunchyData
https://github.com/CrunchyData/postgres-operator
</section>
<section id='tectonic' data-markdown>
### Tectonic
Tectonic uses operators to manage "self-hosted" Kubernetes
[k8s cluster upgrades made easy](https://twitter.com/ryanj/status/846866079792062464)
</section>
</section>
<section>
<section id='workshop-challenges' data-markdown>
## Operator Challenges
</section>
<section id='basic-challenge' data-markdown>
### Basic Challenge
1. Try the etcd operator
2. Identify new primitives and interfaces
3. Create a new etcd cluster
4. Test autorecovery, leader election
5. Clean up
</section>
<section id='try'>
<h3>Use an Operator</h3>
<p>Try installing <a href="https://github.com/coreos/etcd-operator">the etcd operator</a></p>
<pre class='fragment'><code contenteditable>kubectl create -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml</code></pre>
</section>
<section id='observe'>
<h3>Observations?</h3>
<p>List TPRs to see if any new primitives have become available</p>
<pre class='fragment'><code contenteditable>kubectl get thirdpartyresources</code></pre>
</section>
<section id='create-new'>
<h3>Run etcd</h3>
<p>Use the new TPR endpoint to create an etcd cluster</p>
<pre class='fragment'><code contenteditable>kubectl create -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/example-etcd-cluster.yaml</code></pre>
</section>
<section id='test'>
<h3>Test Autorecovery, Leader Election</h3>
<ol>
<li>use kubectl to delete etcd members (pods)<br/>
<pre class='fragment'><code contenteditable>kubectl get pods</code></pre>
<pre class='fragment'><code contenteditable>kubectl delete pod pod-id-1 pod-id-2</code></pre>
</li>
<li>list pods to see if the cluster was able to recover automatically<br/>
<pre class='fragment'><code contenteditable>kubectl get pods</code></pre>
</li>
<li class='fragment'>experiment with other SRE-focused features provided by this operator</li>
</ol>
</section>
<section id='clean-up'>
<h3>Clean Up</h3>
<p>Clean up your work, remove the DB cluster and the new API primitives (TPR endpoints)</p>
<pre class='fragment'><code contenteditable>kubectl delete -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml</code></pre>
<pre class='fragment'><code contenteditable>kubectl delete endpoints etcd-operator</code></pre>
</section>
</section>
<section>
<section id='advanced-challenge' data-markdown>
### Advanced Challenge
1. Check out and run [Eric's custom rollback-controller code](https://github.com/coreos/rollback-controller#example)
2. [Make a small change and test your work](https://github.com/coreos/rollback-controller#exercises)
3. Consider how a TPR might be used to expose similar functionality, extending the basic collection of primitives
4. Share your results with the CoreOS Community (email us at community at coreos.com)
</section>
</section>
<section>
<section id='wrap-up' data-markdown>
## Wrap Up
</section>
<section id='follow-up' data-markdown>
### follow-up topics and links
1. [BrandonPhilips' TPR list](https://gist.github.com/philips/a97a143546c87b86b870a82a753db14c)
2. [Eric's "custom go controllers" presentation](https://github.com/ericchiang/go-1.8-release-party)
3. [Eric's rollback controller example](https://github.com/ericchiang/kube-rollback-controller)
4. [Josh's Operator talk from FOSDEM](https://docs.google.com/presentation/d/1MV029sDifRV2c33JW_83k1tjWDczCfVkFpKvIWuxT6E/edit#slide=id.g1c65fcd8a9_0_54 )
5. [Video of Josh's talk from KubeCon EU](https://www.youtube.com/watch?v=cj5uk1uje_Y)
6. [etcd autorecovery demo from brandon](https://www.youtube.com/watch?v=9sD3mYCPSjc)
7. [Brandon Philips' "Admin Fire Drills" from OSCON 2016](https://github.com/philips/2016-OSCON-containers-at-scale-with-Kubernetes#fire-drills)
8. [helm support added to quay.io](https://coreos.com/blog/quay-application-registry-for-kubernetes.html)
9. [Sign up to receive the CoreOS Community Newsletter](http://coreos.com/newsletter)
</section>
<section id='exit-interview'>
<h3>Exit Interview</h3>
<ol>
<li class='fragment'>can you name five Kubernetes primitives?</li>
<li class='fragment'>do you consider yourself to be proficient with kubernetes and the kubectl cli tool?</li>
<li class='fragment'>did this workshop provide enough hands-on experience with Kubernetes?</li>
<li class='fragment'>can you name five architectural components?</li>
<li class='fragment'>are you confident in your explanation of what a Kubernetes operator is?</li>
<li class='fragment'>do you feel like you know what it takes to build an operator, and where to look for follow-up info?</li>
<li class='fragment'>are you ready to sign up to demo your new Kube operator at next month's meetup?</li>
</ol>
</section>
<section id='coreos-training' data-markdown>
### CoreOS Training
Want to learn more?
Check out the lineup of pro training courses from CoreOS!
[coreos.com/training](http://coreos.com/training)
</section>
<section id='coreos-fest' data-markdown>
### CoreOS Fest
Tickets are on sale now!
[coreos.com/fest](http://coreos.com/fest)
</section>
<section id='tectonic-free-teir' data-markdown>
### Tectonic Free Tier
Try CoreOS Tectonic today
[coreos.com/tectonic](http://coreos.com/tectonic)
Your first ten Enterprise-grade Kubernetes nodes are free!
</section>
<section id='coreos-jobs' data-markdown>
### CoreOS is hiring!
Join us in our mission to *Secure the Internet!*
[coreos.com/careers](https://coreos.com/careers)
</section>
</section>
<section>
<section id='thank-you'>
<h1>Thank You!</h1>
<p>for joining us at the</p>
<h1>Kubernetes Zone Workshop</h1>
<p>in Austin, TX</p>
<br/>
<a href="http://bit.ly/k8s-zone"><h5 class='fragment grow'>bit.ly/k8s-zone</h5></a>
</section>
</section>