KUBERNETES / OPENSHIFT TRAINING NOTES
-------------------------------------
https://www.openshift.com/
https://kubernetes.io/
GENERAL
Orchestrate containers
OpenShift is built on top of Kubernetes and is maintained by Red Hat.
A container is an instance of an image
A container runtime is an environment within which containers are run
The container runtime generally does not limit which types of container can be used
OpenShift now uses the CRI-O runtime, most Kubernetes implementations use the Docker runtime
Image names take the format [repo (assumes Docker.io if omitted)]/project/name
(Almost) everything is a resource, stored as an object in etcd and defined by YAML (or JSON) config files / representations - see the example spec after the list of types below
Types:
namespace (ns)
container
node
deployment (deploy)
service
pod
configmap (cm)
secret
daemonset (ds)
?? Also types? More of them?
resource quotas
ingress
event
controller
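A minimal example spec (a pod with a single container) - most resource types follow the same apiVersion / kind / metadata / spec shape (names and image here are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # placeholder name
  namespace: default
  labels:
    app: example
spec:
  containers:
    - name: example            # container name within the pod
      image: docker.io/library/nginx:1.21   # [repo]/project/name format
      ports:
        - containerPort: 80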
NAMESPACES
Namespaces are namespaces :P
Pods live within namespaces to give logical boundaries
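A quick sketch of working with namespaces (the namespace name is a placeholder):

kubectl create namespace dev                            # create a namespace
kubectl get pods -n dev                                  # list pods in that namespace only
kubectl config set-context --current --namespace=dev     # make it the default for later commands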
NODES
Nodes contain (one or more) pods and pods contain (one or more) containers
Master nodes manage the cluster, generally consist of:
apiserver
etcd (database for objects)
controller-manager
scheduler
Worker nodes run pods and contain:
Kubelet
A container runtime must run on the nodes, e.g.:
Docker
cri-containerd
CRI-O
kube-proxy / ip-tables
When a worker node dies, the pods on the node (and their data) are lost
PODS
Pods are an abstraction layer which decouple containers from how they are deployed
In kubernetes you deploy pods (not containers!) - only the pod has an ip address, containers have exposed ports within the pod
Each pod acts like a logical host and has a unique IP address
Every pod within a cluster can communicate with every other pod
A Pod can contain multiple containers (they will share network and storage)
Best practice is one process per container and one container per Pod (but often tightly coupled containers are included together)
Sidecar containers can be used for adapters etc, e.g.
Logging
Proxy requests
3rd party API adapters
AppD Analytics Agent (when not using java 4.5+)
Init containers are short lived and run at the start of the pod's lifecycle - they prepare the pod for the application container / sidecar
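A sketch of a pod with an init container and a logging sidecar sharing a volume (names, images and commands are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: shared-logs             # shared between all containers in the pod
      emptyDir: {}
  initContainers:
    - name: prepare                 # short lived, runs before the app container starts
      image: busybox:1.35
      command: ["sh", "-c", "echo prepared > /logs/init.txt"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
  containers:
    - name: app                     # main application container
      image: docker.io/library/nginx:1.21
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
    - name: log-forwarder           # sidecar reading from the shared volume
      image: busybox:1.35
      command: ["sh", "-c", "tail -f /logs/init.txt"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs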
SERVICE
A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them.
Uses a selector for app, tier and version - this is used against the pod labels with the same values
Traffic is distributed using round robin
Create a service BEFORE creating the back-end pods / deployments that serve it
Uses a label selector to define group of pods which implement service
Services can be exposed in different formats:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
Services continuously monitor the running Pods using endpoints, to ensure traffic is sent only to available Pods.
Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment.
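A minimal Service sketch using a label selector (label values, names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP            # default; use NodePort / LoadBalancer to expose outside the cluster
  selector:                  # traffic is sent to pods carrying these labels
    app: web
    tier: frontend
  ports:
    - port: 80               # port the service listens on
      targetPort: 8080       # port exposed by the container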
DEPLOYMENT
A deployment is exposed via a service, and a service is exposed externally via an ingress - these must be created for traffic to reach the deployment
Load Balancers can be setup to go straight to back ends however this will become difficult to manage as deployments change
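A minimal Deployment sketch whose pod template carries the labels the service sketch above selects on (names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # the deployment creates a replicaset which maintains this count
  selector:
    matchLabels:
      app: web
      tier: frontend
  template:                        # pod template
    metadata:
      labels:                      # must match the selector above (and the service selector)
        app: web
        tier: frontend
    spec:
      containers:
        - name: web
          image: docker.io/example/web:1.0   # placeholder image
          ports:
            - containerPort: 8080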
REPLICASET
Created by a deployment - maintains the desired number of pod replicas
DAEMONSET
A special type of deployment, also produces pods - but one pod per Kubernetes node
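A DaemonSet sketch - same pod template shape as a deployment, but one pod is scheduled per node (names and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: docker.io/example/agent:1.0   # placeholder image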
DNS
kube-dns is used for cluster DNS
CONFIGMAP
Values must be strings! (quote bools, numbers) - can define filenames rather than objects in configs
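A ConfigMap sketch - note every value is quoted since data entries must be strings (names and keys are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log-level: "debug"
  max-connections: "100"      # numbers must be quoted
  feature-enabled: "true"     # booleans must be quoted

Keys can then be consumed in a pod either as env vars (envFrom / configMapKeyRef) or mounted as a volume, where each key becomes a filename.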
CONTROLLERS
Generally run in a loop that verifies actual state against the desired config and fixes any drift
Deployment Controller
Detects if the number of running pod replicas < the configured count
Replaces failed / missing pod instances
LABEL
Services and pods can be given labels (identifiers)
Services can define a label selector to define which pods implement a service
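A sketch of labelling and selecting (resource and label names are placeholders):

kubectl label pod mypod tier=frontend          # add a label to an existing pod
kubectl get pods -l tier=frontend              # list only pods matching the label
kubectl get pods -l 'app=web,tier=frontend'    # multiple labels must all match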
KUBERNETES COMMANDS - all of these work with oc instead
-------------------------------------------------------
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Remember to use -n <name> to use namespaces with most commands (else they will apply to the default namespace)
kubectl <action> <resource> <name> [--help]
General form
<resource> <name> can also be in the form <resource>/<name>
kubectl <action> <resource>s
General form for fetching all resources of a type
kubectl apply [<resource> <name>] [-f <file/dir>]
Create or update resources from arguments or a spec file / dir, or use -l to define a label selector
(Same as create, but can be re-applied to update existing resources - no delete needed first)
kubectl cluster-info
Output cluster info
kubectl create deployment <name> --image=<image>
Create deployment based on defined image
kubectl create [<resource> <name>] [-f <file/dir>]
Create a named resource or use -l to define a label selector to get resources to create
(Use apply instead)
kubectl create
Deploys objects defined in the yaml spec(s)
kubectl create secret generic <secretname> --from-literal=<identifier>=<value>
kubectl config set-context --current --namespace=<name>
Set current namespace
kubectl config is generally stored in ~/.kube/config
kubectl delete [<resource> <name>] [-f <file/dir>]
Delete a named resource or use -l to define a label selector to get resource to delete
kubectl delete -f <file/dir>
Deletes objects in spec / specs
kubectl describe <resource> <name>
Get detailed information from the db for a given object
Pod output will include running container info
kubectl exec -it <pod_name> -- <sh|/bin/bash>
Execute a command in a pod (sh / /bin/bash for an interactive shell)
kubectl expose <type/name> --type="<service_expose_type>" --port <port>
Create and expose a service
kubectl get <resource(s)> [<name>] [-l <label>]
List resources of the given type (or a single named resource)
Use -o yaml to give full yaml form of object
Use -o wide to get additional columns in normal output
Use -l to get pods / services matching a label
kubectl label <resource> <name> <label>
Apply a new label to a resource
kubectl logs <pod_name>
Print logs from a pod's container(s)
kubectl proxy
Create a proxy for external access to pods
(Can then communicate with the API on port 8001 - e.g. http://localhost:8001/version)
kubectl rollout status <deployment>
Get current status of a rolling update
kubectl rollout undo <deployment>
Rollback a rolling update
kubectl set image <deployment> <container>=<image>
Update image used for a deployment
Starts a rolling update
kubectl scale <deployment> --replicas=<n>
Scale a deployment (can scale to 0)
kubectl version
List version
Use events (kubectl get events) to check for errors
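A sketch of a typical rolling update workflow using the commands above (deployment, container and image names are placeholders):

kubectl set image deployment/web web=docker.io/example/web:2.0   # triggers a rolling update
kubectl rollout status deployment/web                            # watch the rollout progress
kubectl rollout undo deployment/web                              # roll back if it goes wrong
kubectl get events                                               # check for errors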
OPENSHIFT ORIGIN - Built on top of kubernetes (above commands will also work here)
-------------------------------------------------------------------------------------------
https://docs.openshift.com/container-platform/3.10/cli_reference/index.html
https://docs.openshift.com/container-platform/3.10/cli_reference/differences_oc_kubectl.html
oc adm policy add-scc-to-user anyuid -z <service-account>
To allow a service account to deploy images that run as root, add the service account to the anyuid security context constraint (SCC)
oc cluster up --public-hostname="<dns>" [--base-dir="<persistence_directory>"]
Must use base-dir on every restart to reuse persisted state
oc delete namespace <name>
Delete a namespace
oc describe pods <name>
Details of a running pod
oc edit <resource> <name>
Edit a resource
Resources: deploy
oc login -u developer
Sign in as developer
oc login -u system:admin
Sign in as system admin
oc project
Find out the project you are in
oc policy add-role-to-user <permission> <user>
Add permission to user (in current project)
Permissions: view, edit
oc scale deploy <deployment> --replicas=<number>
Scale a deployment
oc whoami
Check what user is logged in
IMAGESTREAM
-----------
https://www.openshift.com/blog/image-streams-faq
An Image Stream contains all of the metadata information about any given image that is specified in the Image Stream specification. It is important to note that an Image Stream does not contain the actual image data.
Ultimately it points either to an external registry, like registry.access.redhat.com, hub.docker.com, etc., or to OpenShift's internal registry (if one is deployed in your cluster).
#Example
docker login
docker push <your-docker-account>/cars:v1
# Tag the image in OpenShift. The tag will reflect the project name
oc tag --source=docker <your-docker-account>/cars:v1 appd/cars:v1
# Import the image. An image stream will get created
oc import-image --from="<your-docker-account>/cars:v1" appd/cars:v1 --insecure --confirm
# Find out the new image name
oc get imagestream
# in this example the image stream name is 'cars'. Use the stream name to get the image name
oc describe imagestream <name>
!!!!! Below goes elsewhere
APPDYNAMICS INSTRUMENTATION
---------------------------
Init container used to inject agent into pod storage
Application within the container uses an env var to reference the agent in storage (e.g. $JAVA_OPTS or any other option used by the application runtime)
In public clouds you are generally not able to instrument the master nodes
kubectl -n dev create secret generic appd-secret --from-literal=appd-key=<controller-key>
Several different instrumentation options depending on the language
For Java, .NET and Node.js an init container is generally used
For PHP and Python the agent runs inside the application container
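A sketch of the init-container injection pattern described above - image names, paths and env var names are illustrative assumptions, not the exact AppDynamics artefacts:

apiVersion: v1
kind: Pod
metadata:
  name: instrumented-app
  namespace: dev
spec:
  volumes:
    - name: appd-agent                       # shared volume the init container fills
      emptyDir: {}
  initContainers:
    - name: copy-agent
      image: example/appd-java-agent:latest  # placeholder agent image
      command: ["cp", "-r", "/opt/appdynamics/.", "/agent/"]   # copy agent into the shared volume
      volumeMounts:
        - name: appd-agent
          mountPath: /agent
  containers:
    - name: app
      image: example/myapp:1.0               # placeholder application image
      env:
        - name: JAVA_OPTS                    # app runtime picks the agent up from the shared volume
          value: "-javaagent:/opt/agent/javaagent.jar"
        - name: APPD_ACCESS_KEY              # illustrative name - read from the secret created above
          valueFrom:
            secretKeyRef:
              name: appd-secret
              key: appd-key
      volumeMounts:
        - name: appd-agent
          mountPath: /opt/agent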
Machine agents are deployed standalone (with the bundled analytics agent) - using a daemonset
/sys, /proc and /etc must be accessible (read only) to the machine agent
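A sketch of the relevant part of a machine-agent daemonset pod template - the read-only hostPath mounts are the point, image and mount paths are placeholders:

spec:                                             # pod template spec fragment only
  containers:
    - name: machine-agent
      image: example/appd-machine-agent:latest    # placeholder image
      volumeMounts:
        - name: proc
          mountPath: /hostroot/proc
          readOnly: true
        - name: sys
          mountPath: /hostroot/sys
          readOnly: true
        - name: etc
          mountPath: /hostroot/etc
          readOnly: true
  volumes:
    - name: proc
      hostPath:
        path: /proc
    - name: sys
      hostPath:
        path: /sys
    - name: etc
      hostPath:
        path: /etc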
Needs these bits!
✔ ~/ContainerLabs/Labs/Lab-2.5-MachineAgent [master|✚ 2]
13:00 $ kubectl -n appdynamics get secret
NAME                               TYPE                                  DATA   AGE
appd-secret                        Opaque                                1      28m
appdynamics-infraviz-token-dvmgh   kubernetes.io/service-account-token   3      33s
default-token-6784p                kubernetes.io/service-account-token   3      114m
✔ ~/ContainerLabs/Labs/Lab-2.5-MachineAgent [master|✚ 2]
13:01 $ kubectl -n appdynamics get cm
NAME            DATA   AGE
ma-config       15     4m15s
ma-log-config   1      25m
sim.docker.monitorAPMContainersOnly=false
Needs to be set (but why?)
Cluster agents docs?
Network visibility needs deployment as separate containers in a daemonset (like the standalone machine agent)
The machine agent needs access to the network agent; the machine agent forwards the metrics to the controller