
Cluster

A set of machines used to run the apps.

Node

A physical or virtual machine where the pods are deployed.
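
For example, to list the nodes of the cluster:

kubectl get nodes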

Pod

A group of containers that are always deployed together and started at the same time. They share the same network and communicate with each other via localhost.

They can share volumes and can be replicated.

Create a Pod

kubectl create -f mypod.yaml
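
A minimal sketch of what mypod.yaml could look like (nginx is used here as a placeholder image):

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx:1.19   # any container image
    ports:
    - containerPort: 80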

List all pods

kubectl get pods

Delete a pod

kubectl delete pod mydeploy-24332

You can attach a volume to the pod to persist data.
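
A sketch of a pod with a volume attached (names are placeholders; emptyDir only survives pod restarts, a PersistentVolumeClaim would be used for data that must outlive the pod):

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: redis:5
    volumeMounts:
    - name: data
      mountPath: /data    # Redis writes its dump here
  volumes:
  - name: data
    emptyDir: {}          # swap for a persistentVolumeClaim for real persistence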

Service

Pods are ephemeral and their IP can change over time... How can "front-end" pods reach "back-end" pods if their IPs can change?

The Service is the solution.

It provides access to a set of pods through a single, stable IP, even when the pods are replaced.

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379 # Expose port 6379 on the service
    targetPort: 6379 # The port the containers in the pods listen on
  selector:
    app: redis    # Targets the pods that have the label app=redis
    role: master
    tier: backend
    

Probe

A probe checks that the pod is working and that its containers are healthy.

It can send an HTTP GET request, run a command inside the container... Kubernetes then knows whether the pod needs to be restarted.
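
For example, a liveness probe that checks an HTTP endpoint (the path, port, and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx:1.19
    livenessProbe:
      httpGet:
        path: /          # endpoint to probe; a real app would expose e.g. /healthz
        port: 80
      initialDelaySeconds: 5   # wait before the first probe
      periodSeconds: 10        # probe every 10 seconds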

Ingress

Ingress manages the rules applied to external connections, a bit like a vHost on Apache acting as a reverse proxy.

It is defined in a YAML file.

E.g. if I request /testpath, to which service and which port should the traffic be forwarded?

Ingress also handles the SSL connection.
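
A sketch of such a YAML file (hostnames and service names are hypothetical; on Kubernetes >= 1.19 the API is networking.k8s.io/v1 with a slightly different backend syntax):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls      # TLS certificate stored in a Secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /testpath
        backend:
          serviceName: my-service   # traffic on /testpath goes to this service
          servicePort: 80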

GKE

Set up a Redis master

Set up gcloud and kubectl credentials

gcloud container clusters get-credentials antoine-cluster  --zone europe-west4-c

Returns

Fetching cluster endpoint and auth data.
kubeconfig entry generated for antoine-cluster.

In the example folder I have a redis-master-deployment.yaml file.

This file contains the configuration to deploy a Redis master. The spec field defines the pod specification that the Deployment uses to create the Redis pod. The image tag refers to a Docker image to be pulled from a registry.
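
A sketch of what this deployment manifest typically looks like in the guestbook sample (the image tag and resource requests may differ in your copy of the file):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # sample image; check your copy of the file
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379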

Deploy the master controller

kubectl create -f redis-master-deployment.yaml

View the running pod

kubectl get pods

Returns

NAME                           READY   STATUS    RESTARTS   AGE
redis-master-596696dd4-qqxg6   1/1     Running   0          49s

Create the redis-master service

In this section, you create a service to proxy the traffic to the Redis master pod.

View your service configuration:

cat redis-master-service.yaml

Returns:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
    

This manifest file defines a service named redis-master with a set of label selectors. These labels match the set of labels that are deployed in the previous step.

Create the service

kubectl create -f redis-master-service.yaml

Returns:

service/redis-master created

Verify that the service has been created:

kubectl get service

Returns:

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.105.0.1      <none>        443/TCP    31m
redis-master   ClusterIP   10.105.10.157   <none>        6379/TCP   51s
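
To check connectivity, you can (for example) start a throwaway Redis client pod and point it at the service by name:

kubectl run -it --rm redis-client --image=redis -- redis-cli -h redis-master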

Set up Redis worker

Although the Redis master is a single pod, you can make it more highly available to meet traffic demands by adding a few Redis worker replicas.

View the manifest file, which defines two replicas for the Redis workers:

apiVersion: apps/v1 #  for k8s versions before 1.9.0 use apps/v1beta2  and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Start the two replicas on your container cluster:

kubectl create -f \
    redis-slave-deployment.yaml

Verify that the two Redis worker replicas are running by querying the list of pods:

kubectl get pods

Returns:

NAME                           READY   STATUS    RESTARTS   AGE
redis-master-596696dd4-qqxg6   1/1     Running   0          81m
redis-slave-96685cfdb-5c6bb    1/1     Running   0          43s
redis-slave-96685cfdb-gdh9n    1/1     Running   0          42s

Create the Redis worker service

The guestbook application needs to communicate to Redis workers to read data. To make the Redis workers discoverable, you need to set up a service. A service provides transparent load balancing to a set of pods.

View the configuration file that defines the worker service:

cat redis-slave-service.yaml

Returns:

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

This file defines a service named redis-slave running on port 6379. Note that the selector field of the service matches the Redis worker pods created in the previous step.

kubectl create -f \
    redis-slave-service.yaml

Returns:

service/redis-slave created

Verify that the service has been created:

kubectl get service

Returns:

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.105.0.1      <none>        443/TCP    107m
redis-master   ClusterIP   10.105.10.157   <none>        6379/TCP   76m
redis-slave    ClusterIP   10.105.4.250    <none>        6379/TCP   4s

Set up the guestbook web frontend

Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis workers, this is a replicated application managed by a deployment.

This tutorial uses a simple PHP frontend. It is configured to talk to either the Redis worker or master services, depending on whether the request is a read or a write. It exposes a simple JSON interface and serves a user experience based on jQuery and Ajax.

Create the frontend deployment

frontend-deployment.yaml

apiVersion: apps/v1 #  for k8s versions before 1.9.0 use apps/v1beta2  and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Create the deployment:

kubectl create -f \
    frontend-deployment.yaml

Expose the frontend on an external IP address

The services that you created in the previous steps are only accessible within the container cluster, because the default type for a service does not expose it to the internet.

To make the guestbook web frontend service externally visible, you need to specify the type LoadBalancer in the service configuration.

Use the following command to replace NodePort with LoadBalancer in the type specification in the frontend-service.yaml configuration file:

sed -i -e \
    's/NodePort/LoadBalancer/g' \
    frontend-service.yaml
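
After the replacement, frontend-service.yaml should look roughly like this (a sketch based on the guestbook sample):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer  # was NodePort before the sed command
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend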

Create the service

kubectl create -f \
    frontend-service.yaml

Find your external IP address

kubectl get services --watch

Returns

NAME           TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
frontend       LoadBalancer   10.105.6.148    34.91.181.167   80:31574/TCP   89s
kubernetes     ClusterIP      10.105.0.1      <none>          443/TCP        120m
redis-master   ClusterIP      10.105.10.157   <none>          6379/TCP       89m
redis-slave    ClusterIP      10.105.4.250    <none>          6379/TCP       13m
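
Once the EXTERNAL-IP is no longer pending, you can open it in a browser or curl it (using the IP from the output above):

curl http://34.91.181.167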