Kubernetes Basics (on Minishift!) http://bit.ly/k8s-minishift
<section data-transition='concave'>
<section id='local-kubernetes-environments-with-minishift'>
<p><img style='border:none;background:none;' src="http://hexb.in/vector/openshift.svg" /></p>
<h2>Local Access to Kubernetes</h2>
<h3>with <a href="https://github.com/minishift/minishift"><code>minishift</code></a></h3>
<br/>
<h4 class='fragment grow'><a href="http://bit.ly/k8s-minishift"><code>bit.ly/k8s-minishift</code></a></h4>
</section>
<section data-background-transition='fade' data-background='black' id='presented-by-ryanj'>
<p>presented by <a href="http://twitter.com/ryanj/">@ryanj</a>, Developer Advocate at <a href='http://redhat.com' style='color:red;'>Red Hat</a></p>
<p><a href="http://twitter.com/ryanj/"><img alt="ryanj" src="http://ryanjarvinen.com/images/ryanj-mestrefungo-com.gif" style="width:50%" /></p>
</section>
</section>
<section data-transition='concave' id='minishift-start' data-markdown>
### minishift start
To follow along with these examples, you'll need to run minishift locally:

```
minishift start
```

If `minishift` fails to start, check the [official README](https://github.com/minishift/minishift/), or try [bit.ly/k8s-minishift-setup](http://bit.ly/k8s-minishift-setup)
</section>
<section data-transition='zoom-in convex-out' id='ready'>
<h1><i>Ready?</i></h1>
<br/>
<div class='fragment fade-up'>
<p>Verify that your local OpenShift environment is ready by running:</p>
<pre><code contenteditable>oc version</code></pre>
<p>The output should include your <code>oc</code> version info and, when available, the release version of the Kubernetes API server</p>
</div>
</section>
<section data-background-transition="zoom">
<h1><i>Let's Go!</i></h1>
</section>
<section>
<section id='kubernetes-basics'>
<h1>Kubernetes Basics</h1>
<p>↓</p>
</section>
<section data-markdown>
Kubernetes uses
## etcd
to keep track of the cluster's state
![etcd logo](https://raw.githubusercontent.com/coreos/etcd/master/logos/etcd-glyph-color.png)
* distributed key-value store
* implements the [RAFT](https://raft.github.io/raft.pdf) consensus protocol
* CAP theorem: [CAP twelve years later](https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed)
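
A quick sketch of the key-value model, assuming an `etcdctl` client built against the v3 API (nothing in the exercises below talks to etcd directly):

```
# store a value under a key, then read it back
etcdctl put /example/greeting hello
etcdctl get /example/greeting

# Kubernetes stores its own objects under keys like /registry/pods/NAMESPACE/NAME
```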
</section>
<section data-markdown>
## Etcd cluster sizes
Fault tolerance sizing chart:
![etcd cluster sizing chart](http://cloudgeekz.com/wp-content/uploads/2016/10/etcd-fault-tolerance-table.png)
</section>
<section id='an-api' data-markdown>
Kubernetes provides&hellip;
# An API
API object primitives include the following attributes:
```
kind
apiVersion
metadata
spec
status
```
*mostly true
Extended Kubernetes API Reference:
http://k8s.io/docs/reference/generated/kubernetes-api/v1.12/
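
One way to explore these attributes from the command line is `kubectl explain`, which asks the API server to describe an object's schema (a side exercise, not required for the steps that follow):

```
# describe the top-level structure of the Pod kind
kubectl explain pod

# drill into individual attributes
kubectl explain pod.spec
kubectl explain pod.status
```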
</section>
<section data-transition="linear" id='terminology' data-markdown>
### Basic K8s Terminology
1. [node](#/node)
2. [pod](#/po)
3. [service](#/svc)
4. [deployment](#/deployment)
5. [replicaSet](#/rs)
Introduction borrowed from: [bit.ly/k8s-kubectl](http://bit.ly/k8s-kubectl)
</section>
</section>
<section>
<section data-transition="linear" id='node' data-markdown>
### Nodes
A node is a host machine (physical or virtual) where containerized processes run.
Node activity is managed via one or more Master instances.
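
For a closer look at a node (a sketch only; substitute the node name reported by `kubectl get nodes`, and note that node access may require the admin login shown later in this section):

```
# list nodes with extra detail: addresses, OS image, container runtime
kubectl get nodes -o wide

# inspect one node's capacity, conditions, and the pods scheduled onto it
kubectl describe node NODE_NAME
```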
</section>
<section>
<p>Try using <code>kubectl</code> to list resources by type:</p>
<pre><code contenteditable>kubectl get nodes</code></pre>
</section>
<section>
<p>Log in as an admin user (password "openshift")</p>
<pre><code contenteditable>minishift addons apply admin-user
oc login -u admin</code></pre>
<p>Try to list nodes using admin credentials:</p>
<pre><code contenteditable>kubectl get nodes</code></pre>
<p>Now try using <code>curl</code> to make the same request:</p>
<pre><code contenteditable>curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/nodes</code></pre>
<p>We won't need admin privileges for the remaining content, so let's swap back to the "developer" user:</p>
<pre><code contenteditable>oc login -u developer</code></pre>
</section>
<section data-markdown>
### Observations:
* Designed to exist on multiple machines (distributed system)
* built for high availability
* platform scale-out
* The Kubernetes API checks auth credentials and restricts access to etcd, our platform's distributed consensus store
* Your JS runs on nodes!
</section>
</section>
<section>
<section data-transition="linear" id='po' data-markdown>
### Pods
A group of one or more co-located containers. Pods represent your minimum increment of scale.
&gt; "Pods Scale together, and they Fail together" @theSteve0
</section>
<section>
<p>Try using <code>kubectl</code> to list resources by type:</p>
<pre><code contenteditable>kubectl get pods</code></pre>
<p>Download a json pod specification, then create a new resource by POSTing it to the API:</p>
<pre><code contenteditable>curl -O https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json
curl -k -H"Authorization: Bearer $(oc whoami -t)" -H'Content-Type: application/json' https://$(minishift ip):8443/api/v1/namespaces/myproject/pods -X POST --data-binary @pod.json</code></pre>
<p>Attempt the same using <code>kubectl</code>:</p>
<pre><code contenteditable>kubectl create -f https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json</code></pre>
</section>
<section>
<!--
<p>Request the same info using <code>curl</code>:</p>
<pre><code contenteditable>curl -k -H'Authorization: Bearer $(oc whoami -t)' $(minishift ip):8443/api/v1/namespaces/$(oc whoami)/pods/hello-k8s</code></pre>
-->
<p>List pods by type using <code>curl</code>:</p>
<pre><code contenteditable>curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/namespaces/myproject/pods</code></pre>
<p>Fetch an individual resource by <code>type/id</code>; output as <code>json</code>:</p>
<pre><code contenteditable>kubectl get pod hello-k8s -o json</code></pre>
<p>Attempt the same using <code>curl</code>:</p>
<pre><code contenteditable>curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/namespaces/myproject/pods/hello-k8s</code></pre>
<p class='fragment'>Notice any changes between the initial json podspec and the API response?</p>
</section>
<section>
<p>Request the same info, but output the results as structured yaml:</p>
<pre><code contenteditable>kubectl get pod hello-k8s -o yaml</code></pre>
<p>Print human-readable API output:</p>
<pre><code contenteditable>kubectl describe pod/hello-k8s</code></pre>
</section>
<section data-markdown>
### Observations:
* API resources provide declarative specifications with asynchronous fulfilment of requests
* you set the `spec`, the platform will populate the `status` (see the example below)
* automated health checking for PID 1 in each container
* Pods are scheduled to be run on nodes
* The API supports both JSON and YAML
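
For example, the fields under `status` were written entirely by the platform; you can read them back with a JSONpath query (assuming the `hello-k8s` pod from the earlier steps is still running):

```
kubectl get pod hello-k8s -o jsonpath={.status.phase}
kubectl get pod hello-k8s -o jsonpath={.status.podIP}
```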
</section>
</section>
<section>
<section data-transition="linear" id='svc' data-markdown>
### Services
Services (svc) establish a single endpoint for a collection of replicated pods, distributing traffic based on label selectors.
In our K8s modeling language they represent a load balancer. Their implementation may vary per cloud provider.
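
A sketch of a Service manifest, showing how the label selector picks the pods that receive traffic (illustrative only; the next slide generates a similar object with `kubectl expose`, and the exact selector mirrors the pod's labels):

```
kind: Service
apiVersion: v1
metadata:
  name: hello-k8s
spec:
  type: NodePort
  selector:
    run: hello-k8s        # traffic is routed to pods carrying this label
  ports:
  - port: 8080
    targetPort: 8080
```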
</section>
<section id='connections'>
<h3>Contacting your App</h3>
<p>Expose the pod by creating a new <code>service</code> (or "loadbalancer"):</p>
<pre><code contenteditable>kubectl expose pod/hello-k8s --port 8080 --type=NodePort</code></pre>
<p>Take a look at the resulting <code>{.spec.selector}</code> attribute:</p>
<pre><code contenteditable>kubectl get svc/hello-k8s -o json</code></pre>
<p>Try using a <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/">JSONpath</a> selector to find the assigned port number:</p>
<pre><code contenteditable>kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort}</code></pre>
<p>Contact your newly-exposed pod via the exposed nodePort:</p>
<pre><code contenteditable>echo http://$(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})</code></pre>
<pre><code contenteditable>curl http://$(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})</code></pre>
</section>
<section>
<p>List the pods that carry the <code>run=hello-k8s</code> label, then schedule their deletion:</p>
<pre><code contenteditable>kubectl get pods -l run=hello-k8s</code></pre>
<pre><code contenteditable>kubectl delete pods -l run=hello-k8s</code></pre>
<p>Contact the related service. What happens?:</p>
<pre><code contenteditable>curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})</code></pre>
<p>Delete the service:</p>
<pre><code contenteditable>kubectl delete service hello-k8s</code></pre>
</section>
<section data-markdown>
### Observations:
* *"service"* basically means *"loadbalancer"*
* Label selectors can be used to organize workloads and manage groups of related resources
* The Service resource uses label selectors to discover where traffic should be directed
* Pods and Services exist independently and have separate lifecycles
</section>
</section>
<section>
<section data-transition="linear" id='deployment' data-markdown>
### Deployments
A `deployment` helps you specify container runtime requirements (in terms of pods)
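
A sketch of the shape of a Deployment, showing the pod spec nested inside its `template` element (illustrative only; the next slide generates a real spec with `--dry-run`):

```
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      run: hello-k8s
  template:                 # an embedded pod spec
    metadata:
      labels:
        run: hello-k8s
    spec:
      containers:
      - name: hello-k8s
        image: jkleinert/nodejsint-workshop
```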
</section>
<section>
<p>Create a specification for your <code>deployment</code>:</p>
<pre><code contenteditable>kubectl run hello-k8s --image=jkleinert/nodejsint-workshop \
--dry-run -o json &gt; deployment.json</code></pre>
<p>View the generated deployment spec file:</p>
<pre><code contenteditable>cat deployment.json</code></pre>
<p>Create a new deployment from your local spec file:</p>
<pre><code contenteditable>kubectl create -f deployment.json</code></pre>
</section>
<section>
<p>Create a <code>Service</code> spec to direct traffic:</p>
<pre><code contenteditable>kubectl expose deploy/hello-k8s --type=NodePort --port=8080 --dry-run -o json &gt; service.json</code></pre>
<p>View the resulting spec file:</p>
<pre><code contenteditable>cat service.json</code></pre>
<p>Create a new service from your local spec file:</p>
<pre><code contenteditable>kubectl create -f service.json</code></pre>
<p>List multiple resources by type:</p>
<pre><code contenteditable>kubectl get po,svc,deploy</code></pre>
<p>Connect to your new deployment via the associated service port:</p>
<pre><code contenteditable>curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})</code></pre>
</section>
<section id='replication'>
<h2>Replication</h2>
<p>Scale up the <code>hello-k8s</code> deployment to 3 replicas:</p>
<pre><code contenteditable>kubectl scale deploy/hello-k8s --replicas=3</code></pre>
<p>List pods:</p>
<pre><code contenteditable>kubectl get po</code></pre>
</section>
<section>
<p>Edit <code>deploy/hello-k8s</code>, setting <code>spec.replicas</code> to <code>5</code>:</p>
<pre><code contenteditable>kubectl edit deploy/hello-k8s -o json</code></pre>
<p>Save and quit. What happens?</p>
<pre><code contenteditable>kubectl get pods</code></pre>
</section>
<section id='autorecovery'>
<h2>AutoRecovery</h2>
<p>Watch for changes to <code>pod</code> resources:</p>
<pre><code contenteditable>kubectl get pods --watch &amp;</code></pre>
<p>In another terminal, delete several pods by id:</p>
<pre><code contenteditable>kubectl delete pod $(kubectl get pods | grep ^hello-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')</code></pre>
<p class='fragment'>What happened? How many pods remain?</p>
<pre class='fragment'><code contenteditable>kubectl get pods</code></pre>
<p class='fragment'>Close your backgrounded <code>--watch</code> processes by running <code>fg</code>, then sending a break signal (<code>CTRL-c</code>)</p>
</section>
<section data-markdown>
### Observations:
* Use the `--dry-run` flag to generate new resource specifications
* A deployment spec contains a pod spec in its "template" element
* The API provides `edit` and `watch` operations (in addition to `get`, `set`, and `list`)
</section>
</section>
<section>
<section data-transition="linear" id='rs' data-markdown>
### ReplicaSets
A `replicaset` provides replication and lifecycle management for a specific image release
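
To see which image release each replicaset manages (assuming the `hello-k8s` deployment from the previous section still exists):

```
# -o wide adds the container names and images each replicaset manages
kubectl get rs -o wide

# describe adds events and ownership details for each replicaset
kubectl describe rs
```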
</section>
<section>
<p>View the current state of your deployment:</p>
<pre><code contenteditable>curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})</code></pre>
<p>Watch deployments:</p>
<pre><code contenteditable>kubectl get deploy -w &amp;</code></pre>
</section>
<section>
<h3>Rollouts</h3>
<p>Update your deployment's image spec to rollout a new release:</p>
<pre><code contenteditable>kubectl set image deploy/hello-k8s hello-k8s=jkleinert/nodejsint-workshop:v1</code></pre>
<p>View the current state of your deployment</p>
<pre><code contenteditable>curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})</code></pre>
<p>Ask the API to list <code>replicaSets</code></p>
<pre><code contenteditable>kubectl get rs</code></pre>
</section>
<section>
<h3>Rollbacks</h3>
<p>View the list of previous rollouts:</p>
<pre><code contenteditable>kubectl rollout history deploy/hello-k8s</code></pre>
<p>Rollback to the previous state:</p>
<pre><code contenteditable>kubectl rollout undo deployment hello-k8s</code></pre>
<p>Reload your browser to view the state of your deployment</p>
</section>
<section>
<h3>Cleanup</h3>
<p>Cleanup all resources:</p>
<pre><code contenteditable>kubectl delete service,deployment hello-k8s</code></pre>
<p>Close your remaining <code>--watch</code> listeners by running <code>fg</code> before sending a break signal (<code>CTRL-c</code>)</p>
<br/>
<p>Verify that your namespace is clean:</p>
<pre><code contenteditable>kubectl get all</code></pre>
</section>
<section data-markdown>
### Observations:
* ReplicaSets provide lifecycle management for pod resources
* Deployments create ReplicaSets to manage pod replication per rollout (per change in podspec: image:tag, environment vars)
* `Deployments` &gt; `ReplicaSets` &gt; `Pods`
</section>
</section>
<section data-transition='concave' id='next-steps'>
<h3>Congratulations on completing:</h3>
<p>
<a href="http://bit.ly/k8s-minishift">
<b>Local OpenShift / Kubernetes Environments with <code>minishift</code></b>
<h5 class='fragment grow'><code>bit.ly/k8s-minishift</code></h5>
</a>
</p>
<br/>
<h4><i>Next Steps</i></h4>
<p>Try the <a href="http://learn.openshift.com">OpenShift learning portal</a> at:</p>
<p><a href="http://learn.openshift.com">learn.openshift.com</a></p>
<p>Or, continue learning with other <a href="http://bit.ly/k8s-workshops"><code>k8s-workshops</code></a>:</p>
<ol>
<!--<li><a href="http://bit.ly/realtime-odo"><b>Realtime Front-End Web Development with odo</b><br/>bit.ly/realtime-odo</a></li> -->
<li><a href="http://bit.ly/k8s-kubectl"><b>Kubernetes Command-Line Basics with <code>kubectl</code></b><br/>bit.ly/k8s-kubectl</a></li>
<!-- <li><a href="http://bit.ly/operatorpattern"><b>Extending Kubernetes with the Operator Pattern</b><br/><span style='font-size:smaller;'>bit.ly/operatorpattern</span></a></li> -->
</ol>
</section>