@gustavohenrique
Created May 18, 2022
Kubernetes on AWS with Kops [2018]

Kubernetes on AWS

1. Creating the cluster with Kops

  1. Create a kops IAM group with the policies below, then create a kops user and add it to the group (a sketch using the AWS CLI follows the policy list):
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
AmazonEC2ContainerRegistryFullAccess
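
One possible way to do this with the AWS CLI (a sketch; kops is just the group and user name used here, and the policy list is the one above):

aws iam create-group --group-name kops

# attach each required managed policy to the group
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess AmazonEC2ContainerRegistryFullAccess; do
  aws iam attach-group-policy --group-name kops --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# create the kops user, add it to the group and generate its access keys
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops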
  2. Install the awscli and set the environment variables:
export AWS_ACCESS_KEY_ID="AKIAUMNY5JYFE6ZRL4KA"
export AWS_SECRET_ACCESS_KEY="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"
export AWS_DEFAULT_REGION="sa-east-1"
export AWS_ACCOUNT_ID="311572378121"
pip install awscli
  3. Create the bucket that will store the cluster state generated by kops:
export DOMAIN=meudominio.dev
export BUCKET=s3://staging.${DOMAIN}
export KOPS_STATE_STORE=${BUCKET}
aws s3 mb ${BUCKET} --region ${AWS_DEFAULT_REGION}
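
Optionally, enable versioning on the state bucket so previous cluster states can be recovered (note that aws s3api expects the bucket name, not the s3:// URI):

aws s3api put-bucket-versioning --bucket staging.${DOMAIN} --versioning-configuration Status=Enabled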
  4. Generate the SSH key pair that will be used to access the machines:
ssh-keygen -t rsa -N "" -f ~/.ssh/${DOMAIN}
cat >> ~/.ssh/config <<EOF
Host master.${DOMAIN}
  Hostname master.${DOMAIN}
  IdentityFile ${HOME}/.ssh/${DOMAIN}
  User admin
  IdentitiesOnly yes
EOF
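
Once the cluster is up (step 6 below), the config entry above lets you connect to the master with just:

ssh master.${DOMAIN}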
  5. Import the SSL certificate into AWS Certificate Manager:
export AWS_CERTIFICATE="arn:aws:acm:sa-east-1:301572378122:certificate/80fe3b24-64cd-457f-93e0-9d32a465dfca"
  6. Create the cluster with kops:
kops create cluster ${DOMAIN} --cloud=aws --master-size="t2.small" --node-count=2 --node-size="t2.small" --zones="sa-east-1a" --cloud-labels="env=staging,kops=true" --ssh-public-key="${HOME}/.ssh/${DOMAIN}.pub" --networking=kube-router --master-public-name="master.${DOMAIN}" --api-ssl-certificate=${AWS_CERTIFICATE} --node-volume-size=64

kops update cluster ${DOMAIN} --yes

Wait a few minutes for the cluster to come online. Check the progress with:

kops validate cluster ${DOMAIN}
kubectl cluster-info

Change the security group to allow access to port 19123, used by the Deployer:

SGID=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=master-sa-east-1a.masters.meudominio.dev" | jq -r ".Reservations[0].Instances[0].SecurityGroups[0].GroupId")
aws ec2 authorize-security-group-ingress --group-id ${SGID} --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 19123, "ToPort": 19123, "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Deployer"}]}]'

2. Generating the TLS certificate with Let's Encrypt

The TLS certificate lets the exposed services be reached over HTTPS using HTTP/2. Let's Encrypt offers free certificates. To generate one, install the Python certbot library and the certbot-dns-route53 plugin as shown below:

mkdir letsencrypt
pip install certbot-dns-route53
certbot certonly -d ${DOMAIN} -d "*.${DOMAIN}" --dns-route53 --logs-dir ./letsencrypt/log/ --config-dir ./letsencrypt/config/ --work-dir ./letsencrypt/work/ -m gustavo.henrique@meudominioconcursos.com.br --agree-tos --non-interactive --server https://acme-v02.api.letsencrypt.org/directory

After a few minutes the certificates will be generated in $PWD/letsencrypt/config/archive/${DOMAIN}.
Every namespace that needs to expose a service over HTTPS must have a secret containing a valid certificate.

To renew the certificate:

certbot renew --cert-name meudominio.ws --logs-dir ./letsencrypt/log/ --config-dir ./letsencrypt/config/ --work-dir ./letsencrypt/work/

export AWS_ACCESS_KEY_ID=<key>
export AWS_SECRET_ACCESS_KEY=<key>

# Copy the contents of the generated files and paste them into the corresponding fields in Certificate Manager, via the AWS console
cat letsencrypt/config/live/meudominio.ws/cert.pem|pbcopy
cat letsencrypt/config/live/meudominio.ws/privkey.pem|pbcopy
cat letsencrypt/config/live/meudominio.ws/fullchain.pem|pbcopy
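
Alternatively, a sketch of re-importing the renewed certificate with the AWS CLI instead of pasting it in the console (passing --certificate-arn overwrites the existing certificate, so the ARN in AWS_CERTIFICATE stays valid; chain.pem is generated by certbot next to the other files):

aws acm import-certificate \
  --certificate fileb://letsencrypt/config/live/meudominio.ws/cert.pem \
  --private-key fileb://letsencrypt/config/live/meudominio.ws/privkey.pem \
  --certificate-chain fileb://letsencrypt/config/live/meudominio.ws/chain.pem \
  --certificate-arn ${AWS_CERTIFICATE}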

3. Configuring the cluster

NGINX Ingress

There are two popular Kubernetes Ingress controllers based on NGINX: ingress-nginx (maintained by the open source community) and kubernetes-ingress (maintained by NGINX Inc). Both are shown below.

Create a secret containing the TLS certificate in the namespace where the NGINX Ingress Controller will run:

DOMAIN=meudominio.ws
FULLCHAIN_PATH=letsencrypt/config/archive/${DOMAIN}/fullchain2.pem
PRIVKEY_PATH=letsencrypt/config/archive/${DOMAIN}/privkey2.pem
kubectl create ns nginx-ingress
kubectl -n nginx-ingress create secret tls default-server-secret --key ${PRIVKEY_PATH} --cert ${FULLCHAIN_PATH}

ingress-nginx (open source community)

cat > ingress-nginx.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --configmap=\$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=\$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=\$(POD_NAMESPACE)/udp-services
            - --publish-service=\$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "${AWS_CERTIFICATE}"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "false"
  use-forwarded-headers: "true"
  proxy-real-ip-cidr: "0.0.0.0/0"
EOF

kubectl apply -f ingress-nginx.yaml

kubernetes-ingress (NGINX Inc)

cat > kubernetes-ingress.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress 
  namespace: nginx-ingress

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
  client-max-body-size: "2m"
  http2: "true"

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:edge
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=\$(POD_NAMESPACE)/nginx-config
          - -external-service=nginx-ingress
          - -default-server-tls-secret=\$(POD_NAMESPACE)/default-server-secret

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    custom.nginx.org/rate-limiting: "on"
    custom.nginx.org/rate-limiting-rate: "5r/s"
    custom.nginx.org/rate-limiting-burst: "1"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "${AWS_CERTIFICATE}"
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
EOF

kubectl apply -f kubernetes-ingress.yaml

LoadBalancer

The LoadBalancer takes a few minutes to become active. You can follow its status in the web console or with the command shown below. Finally, you need to create an Alias record in Route53 pointing to the LoadBalancer: a single *.<domain> alias pointing at it is enough.
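
A sketch of checking the state from the CLI (assuming the NLB created by the kubernetes-ingress Service above; "active" means it is ready):

aws elbv2 describe-load-balancers --query "LoadBalancers[*].{Name:LoadBalancerName,State:State.Code}"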

The commands below retrieve the LoadBalancer DNS name and the HostedZone IDs needed to create the alias from a JSON file:

DOMAIN=meudominio.dev
DNSNAME=$(aws elbv2 describe-load-balancers | grep DNSName | awk -F '"' '{print $4}')
NLB_ZONE_ID=$(aws elbv2 describe-load-balancers | grep CanonicalHostedZoneId | awk -F '"' '{print $4}')
R53_ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name $DOMAIN | jq '.HostedZones[0].Id' | cut -d "/" -f 3 | sed 's/"//')

cat > alias.json <<EOF
{
  "Comment": "Creating Alias resource record sets in Route 53",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "\\\052.${DOMAIN}",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "${NLB_ZONE_ID}",
          "DNSName": "${DNSNAME}",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets --hosted-zone-id ${R53_ZONE_ID} --change-batch file://alias.json

Project environment

Create the staging namespace to group all of the application's pods, plus a secret with the TLS certificate that the publicly exposed services need in order to serve HTTPS:

DOMAIN=meudominio.dev
FULLCHAIN_PATH=letsencrypt/config/archive/${DOMAIN}/fullchain1.pem
PRIVKEY_PATH=letsencrypt/config/archive/${DOMAIN}/privkey1.pem
kubectl create ns staging
kubectl -n staging create secret tls certs --key ${PRIVKEY_PATH} --cert ${FULLCHAIN_PATH}

Environment variables

ECR_SECRET_NAME=ecr-secret

POSTGRES_USER=root
POSTGRES_PASSWORD=root
POSTGRES_DB=elearning
POSTGRES_PORT=5432

ELEARNING_PORT=3001
MEUSERVICO1_PORT=4001

ECR authentication

The authentication performed by the aws ecr get-login command is valid for 12 hours. Because of that, a job is needed to refresh the login every 10 hours and store the authentication token in a secret.

cat > ecr-secret-refresh.yaml <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  annotations:
  name: ecr-secret-helper
  namespace: staging
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 0
  schedule: "0 */10 * * *"
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - /bin/sh
            - -c
            - |-
              TOKEN=\$(aws ecr get-login --region ${AWS_DEFAULT_REGION} --registry-ids ${AWS_ACCOUNT_ID} --no-include-email | cut -d' ' -f6)
              echo "ENV variables setup done."
              kubectl delete secret --ignore-not-found ${ECR_SECRET_NAME}
              kubectl create secret docker-registry ${ECR_SECRET_NAME} \\
              --docker-server=https://${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com \\
              --docker-username=AWS \\
              --docker-password="\$TOKEN"
              echo "Secret created by name. $ECR_SECRET_NAME"
              kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$ECR_SECRET_NAME'"}]}'
              echo "All done."
            env:
            - name: AWS_DEFAULT_REGION
              value: ${AWS_DEFAULT_REGION}
            - name: AWS_SECRET_ACCESS_KEY
              value: ${AWS_SECRET_ACCESS_KEY}
            - name: AWS_ACCESS_KEY_ID
              value: ${AWS_ACCESS_KEY_ID}
            image: odaniait/aws-kubectl:latest
            imagePullPolicy: IfNotPresent
            name: ecr-secret-helper
            resources: {}
            securityContext:
              capabilities: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: Default
          hostNetwork: true
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
EOF

kubectl apply -f ecr-secret-refresh.yaml
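
The first scheduled run may take up to 10 hours to happen; to create the secret right away you can trigger the CronJob once by hand (kubectl can create a Job from a CronJob):

kubectl -n staging create job ecr-secret-helper-manual --from=cronjob/ecr-secret-helper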

Postgres

Create the database used by elearning:

cat > db-elearning.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: db-elearning
  namespace: staging
spec:
  selector:
    app: db-elearning
  ports:
    - port: ${POSTGRES_PORT}
      targetPort: ${POSTGRES_PORT}
      protocol: TCP

---

kind: PersistentVolume
apiVersion: v1
metadata:
  name: db-elearning-storage
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/db_elearning_storage"

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-elearning
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-elearning
  template:
    metadata:
      labels:
        app: db-elearning
    spec:
      volumes:
      - name: db-elearning-storage
        emptyDir: {}
      containers:
      - name: elearningdb
        image: postgres:alpine
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: db-elearning-storage
        env:
        - name: POSTGRES_USER
          value: "${POSTGRES_USER}"
        - name: POSTGRES_PASSWORD
          value: "${POSTGRES_PASSWORD}"
        - name: POSTGRES_DB
          value: "${POSTGRES_DB}"
        ports:
        - containerPort: ${POSTGRES_PORT}
        resources:
          requests:
            memory: 128M
          limits:
            memory: 256M
      nodeSelector:
        kubernetes.io/role: node
EOF

kubectl apply -f db-elearning.yaml

To access the database:

pod=$(kubectl -n staging get pods | grep db- | cut -d ' ' -f 1)
kubectl -n staging port-forward $pod 5432:5432

To access it from the terminal:

psql -U root -h 127.0.0.1 elearning

If you use another Postgres client, just set the host to 127.0.0.1, not localhost.

Elearning

The elearning service is reachable only by other pods; it uses Postgres and the secret containing the ECR authentication token.

cat > elearning.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: elearning
  namespace: staging
spec:
  selector:
    app: elearning
  ports:
    - port: ${ELEARNING_PORT}
      targetPort: ${ELEARNING_PORT}

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elearning
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elearning
  template:
    metadata:
      labels:
        app: elearning
    spec:
      containers:
      - name: elearning
        image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/elearning:develop
        imagePullPolicy: "Always"
        env:
        - name: PORT
          value: "${ELEARNING_PORT}"
        - name: DATABASE_URL
          value: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db-elearning:${POSTGRES_PORT}/${POSTGRES_DB}?sslmode=disable"
        - name: USE_TLS
          value: "true"
        - name: APP_NAME
          value: elearning
        ports:
        - containerPort: ${ELEARNING_PORT}
        resources:
          requests:
            memory: 128M
          limits:
            memory: 512M
      nodeSelector:
        kubernetes.io/role: node
      imagePullSecrets:
      - name: ecr-secret
EOF

kubectl apply -f elearning.yaml

MEUSERVICO1

The MEUSERVICO1 service is publicly reachable through the ingress and calls elearning. (Replace MEUSERVICO1 with the real service name in lowercase; Kubernetes resource names and ingress hosts must be lowercase DNS labels.)

cat > MEUSERVICO1.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: MEUSERVICO1
  namespace: staging
spec:
  selector:
    app: MEUSERVICO1
  ports:
    - port: ${MEUSERVICO1_PORT}
      targetPort: ${MEUSERVICO1_PORT}

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: MEUSERVICO1
  namespace: staging
spec:
  tls:
  - hosts:
    - MEUSERVICO1.${DOMAIN}
    secretName: certs
  rules:
  - host: MEUSERVICO1.${DOMAIN}
    http:
      paths:
      - path: /
        backend:
          serviceName: MEUSERVICO1
          servicePort: ${MEUSERVICO1_PORT}

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: MEUSERVICO1
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: MEUSERVICO1
  template:
    metadata:
      labels:
        app: MEUSERVICO1
    spec:
      containers:
      - name: MEUSERVICO1
        image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/MEUSERVICO1:develop
        imagePullPolicy: "Always"
        env:
        - name: PORT
          value: "${MEUSERVICO1_PORT}"
        - name: ELEARNING_URL
          value: "elearning:${ELEARNING_PORT}"
        - name: USE_TLS
          value: "true"
        - name: APP_NAME
          value: "MEUSERVICO1"
        ports:
        - containerPort: ${MEUSERVICO1_PORT}
        resources:
          requests:
            memory: 128M
          limits:
            memory: 512M
      nodeSelector:
        kubernetes.io/role: node
      imagePullSecrets:
      - name: ecr-secret
EOF

kubectl apply -f MEUSERVICO1.yaml

BO

The BO pod just proxies requests to the S3 website URL, because exposing it directly through ingress-nginx was not working.

cat > bo.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: bo
  namespace: staging
spec:
  selector:
    app: bo
  ports:
    - port: 2015
      targetPort: 2015

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bo
spec:
  tls:
  - hosts:
    - bo.meudominio.dev
    secretName: certs
  rules:
  - host: bo.meudominio.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: bo
          servicePort: 2015

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: caddyfile-bo
  namespace: staging
data:
  Caddyfile: |
    :2015 {
      proxy / http://meudominio-alexandria.s3-website-sa-east-1.amazonaws.com/
    }

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bo
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bo
  template:
    metadata:
      labels:
        app: bo
    spec:
      containers:
      - image: abiosoft/caddy
        name: elastic-proxy
        command: ["caddy"]
        args: ["-conf", "/etc/caddy/Caddyfile"]
        volumeMounts:
        - name: config-caddy
          mountPath: "/etc/caddy"
        resources:
          requests:
            memory: 10M
          limits:
            memory: 30M
      volumes:
      - name: config-caddy
        configMap:
          name: caddyfile-bo
EOF

kubectl -n staging apply -f bo.yaml

4. User permissions

Create a config file to be used with kubectl, granting permissions only to the staging namespace.

key=$(kops get secret ca | grep '[0-9]' | awk -F " " '{print $3}')
aws s3 cp $BUCKET/$DOMAIN/pki/private/ca/$key.key ca.key
aws s3 cp $BUCKET/$DOMAIN/pki/issued/ca/$key.crt ca.crt

openssl genrsa -out backend.key 2048
openssl req -new -key backend.key -out backend.csr -subj "/CN=backend/O=devops"
openssl x509 -req -in backend.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out backend.crt -days 3000

cat > backend-devops-credentials.yaml <<EOF
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: deployment-manager
  namespace: staging
rules:
- apiGroups: ["", "extensions", "apps", "batch"]
  resources:
  - deployments
  - replicasets
  - pods
  - pods/log
  - pods/portforward
  - ingresses
  - ingresses/status
  - services
  - daemonsets
  - cronjobs
  - configmaps
  - replicationcontrollers
  - statefulsets
  - jobs
  verbs: ["*"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: deployment-manager-binding
  namespace: staging
subjects:
- kind: User
  name: backend
  apiGroup: ""
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: ""
EOF

kubectl apply -f backend-devops-credentials.yaml

Create a zip containing the files ca.crt, backend.key, and backend.crt. Share this zip with the team and ask them to run:

brew install kubectl

unzip backend-devops-credentials.zip
cd backend-devops-credentials

DOMAIN=meudominio.dev
kubectl config set-context devops --cluster=$DOMAIN --namespace=staging --user=backend
kubectl config set-cluster $DOMAIN --server=https://master.$DOMAIN
kubectl config set-cluster $DOMAIN --certificate-authority=ca.crt
kubectl config set-credentials backend --client-certificate=backend.crt --client-key=backend.key
kubectl config set-context $DOMAIN --user=backend --cluster $DOMAIN
kubectl config use-context $DOMAIN
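
To confirm the restricted credentials behave as expected:

kubectl auth can-i list pods --namespace staging       # expected: yes
kubectl auth can-i list pods --namespace kube-system   # expected: no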

5. Application logs with Fluent Bit and Elasticsearch

Using AWS Elasticsearch

Create an AWS Elasticsearch domain inside the security groups created by kops, with the following access policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:sa-east-1:301572378122:domain/elastic/*"
    }
  ]
}

Since the ES domain lives on a private network with no public access, you need a pod running Caddy as a reverse proxy so that Kibana can be reached from the internet in an authenticated way. By default Kibana uses Cognito for authentication, but that service is not available in the sa-east-1 region.
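
The manifest below relies on two variables that are not in the environment variables list above: ELASTIC_PROXY_PORT, which must match the port Caddy listens on (2015, per the Caddyfile), and ELASTICSEARCH_HOST, the endpoint of the AWS Elasticsearch domain (the value below is illustrative):

export ELASTIC_PROXY_PORT=2015
export ELASTICSEARCH_HOST="vpc-elastic-xxxxxxxxxx.sa-east-1.es.amazonaws.com"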

cat > elastic-proxy.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: elastic-proxy
  namespace: staging
spec:
  selector:
    app: elastic
  ports:
    - port: ${ELASTIC_PROXY_PORT}
      targetPort: ${ELASTIC_PROXY_PORT}

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: elastic-proxy
spec:
  tls:
  - hosts:
    - elastic.${DOMAIN}
    secretName: certs
  rules:
  - host: elastic.${DOMAIN}
    http:
      paths:
      - path: /
        backend:
          serviceName: elastic-proxy
          servicePort: ${ELASTIC_PROXY_PORT}

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: caddyfile
  namespace: staging
data:
  Caddyfile: |
    :2015 {
      proxy / https://${ELASTICSEARCH_HOST}/
      log stdout
      # basicauth /kibana ec Senha@2018-stg  # disabled: when auth is enabled, the AWS-hosted Kibana expects a different header
    }

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-proxy
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic-proxy
  template:
    metadata:
      labels:
        app: elastic-proxy
    spec:
      containers:
      - image: abiosoft/caddy
        name: elastic-proxy
        command: ["caddy"]
        args: ["-conf", "/etc/caddy/Caddyfile"]
        volumeMounts:
        - name: config-caddy
          mountPath: "/etc/caddy"
        resources:
          requests:
            memory: 10M
          limits:
            memory: 30M
      volumes:
      - name: config-caddy
        configMap:
          name: caddyfile
EOF

kubectl apply -f elastic-proxy.yaml

To access Kibana:

open http://elastic.${DOMAIN}/_plugin/kibana

Each Go application container ships with a Fluent Bit agent running in the background, sending properly formatted logs to AWS Elasticsearch.
Previously, a DaemonSet was configured to run a Fluent Bit container collecting logs from every pod. That flooded ES with logs from all pods, even ones that are not part of the application.
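
A minimal sketch of what such an in-container Fluent Bit configuration could look like, assuming the Go application writes its logs to /var/log/app.log; the path and index name are illustrative:

cat > fluent-bit.conf <<EOF
[SERVICE]
    Flush  5
    Daemon Off

[INPUT]
    Name   tail
    Path   /var/log/app.log
    Tag    app

[OUTPUT]
    Name   es
    Match  app
    Host   ${ELASTICSEARCH_HOST}
    Port   443
    tls    On
    Index  staging-logs
EOF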

6. Deploy

Just delete the pod and Kubernetes will create a new one using the updated image.

namespace=staging
appName=MEUSERVICO1
image=311572378112.dkr.ecr.sa-east-1.amazonaws.com/${appName}:develop
# kubectl -n ${namespace} set image deployment ${appName} ${appName}=${image}
pod=$(kubectl -n ${namespace} get pods -l app=${appName} -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
kubectl -n ${namespace} delete pod/${pod} --grace-period=10 --force
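
To follow the replacement (rollout status also works for the commented-out set image variant):

kubectl -n ${namespace} rollout status deployment/${appName}
kubectl -n ${namespace} get pods -l app=${appName}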

7. Monitoring

?

8. Links
