Minikube on macOS
  1. Apply the emissary-ingress Deployment with debug logging enabled
kubectl apply -f - <<EOF
---
# Source: emissary-ingress/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emissary-ingress
  namespace: emissary
  labels:
    app.kubernetes.io/name: emissary-ingress

    app.kubernetes.io/instance: emissary-ingress
    app.kubernetes.io/part-of: emissary-ingress
    app.kubernetes.io/managed-by: getambassador.io
    product: aes
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: emissary-ingress
      app.kubernetes.io/instance: emissary-ingress
  strategy:
    type: RollingUpdate


  progressDeadlineSeconds: 600
  template:
    metadata:
      labels:
        app.kubernetes.io/name: emissary-ingress

        app.kubernetes.io/instance: emissary-ingress
        app.kubernetes.io/part-of: emissary-ingress
        app.kubernetes.io/managed-by: getambassador.io
        product: aes
        profile: main
      annotations:
        consul.hashicorp.com/connect-inject: 'false'
        sidecar.istio.io/inject: 'false'
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 8888
      restartPolicy: Always
      serviceAccountName: emissary-ingress
      volumes:
      - name: ambassador-pod-info
        downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
      containers:
      - name: ambassador
        image: docker.io/emissaryingress/emissary:2.1.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        - name: admin
          containerPort: 8877
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: AMBASSADOR_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: AES_LOG_LEVEL
          value: "debug"
        - name: AMBASSADOR_DEBUG
          value: "diagd"
        securityContext:
          allowPrivilegeEscalation: false
        livenessProbe:
          httpGet:
            path: /ambassador/v0/check_alive
            port: admin
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /ambassador/v0/check_ready
            port: admin
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 3
        volumeMounts:
        - name: ambassador-pod-info
          mountPath: /tmp/ambassador-pod-info
          readOnly: true
        resources:
          limits:
            cpu: 1
            memory: 400Mi
          requests:
            cpu: 200m
            memory: 100Mi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  service: ambassador
              topologyKey: kubernetes.io/hostname
            weight: 100
      imagePullSecrets: []
      dnsPolicy: ClusterFirst
      hostNetwork: false
EOF
  1. Apply the emissary-apiext Deployment with DEBUG logging
kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emissary-apiext
  namespace: emissary-system
  labels:
    app.kubernetes.io/instance: emissary-apiext
    app.kubernetes.io/managed-by: kubectl_apply_-f_emissary-apiext.yaml
    app.kubernetes.io/name: emissary-apiext
    app.kubernetes.io/part-of: emissary-apiext
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: emissary-apiext
      app.kubernetes.io/name: emissary-apiext
      app.kubernetes.io/part-of: emissary-apiext
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: emissary-apiext
        app.kubernetes.io/managed-by: kubectl_apply_-f_emissary-apiext.yaml
        app.kubernetes.io/name: emissary-apiext
        app.kubernetes.io/part-of: emissary-apiext
    spec:
      serviceAccountName: emissary-apiext
      containers:
        - name: emissary-apiext
          image: docker.io/emissaryingress/emissary:2.1.0
          imagePullPolicy: IfNotPresent
          command: [ "apiext", "emissary-apiext" ]
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /probes/live
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 3
            failureThreshold: 3
          env:
            - name: APIEXT_LOGLEVEL
              value: "DEBUG"
EOF
  1. Install Minikube
brew install minikube  # minikube version 1.24.0
  1. Create cluster
# Make sure Docker is configured with at least 4096 MB of memory and 2 CPUs
minikube start
# or specify memory and CPUs explicitly
minikube start --memory 4096 --cpus 2
  • You may see a "TLS handshake timeout" error; this usually means Minikube is running out of memory. Try stopping unused pods, or recreate the cluster with more resources (see below).
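One heavy-handed but reliable fix is to recreate the cluster with more memory and CPUs (this wipes the cluster state):
minikube delete
minikube start --memory 4096 --cpus 2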

Emissary Ingress

  1. Install Emissary-ingress (namespace, CRDs, and deployment)
kubectl create namespace emissary && \
kubectl apply -f https://app.getambassador.io/yaml/emissary/2.1.2/emissary-crds.yaml && \
kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
kubectl apply -f https://app.getambassador.io/yaml/emissary/2.1.2/emissary-emissaryns.yaml && \
kubectl -n emissary wait --for condition=available --timeout=90s deploy -lproduct=aes
  1. Start by creating a Listener resource for HTTP on port 8080:
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-listener-8080
  namespace: emissary
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF
  1. Apply the YAML for the "Quote of the Moment" service.
kubectl apply -f https://app.getambassador.io/yaml/v2-docs/latest/quickstart/qotm.yaml
  1. This Mapping tells Emissary-ingress to route all traffic inbound to the /backend/ path to the quote Service.
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
  docs:
    path: "/.ambassador-internal/openapi-docs"
EOF
  1. Expose the load balancer (see https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel)
minikube tunnel
  1. Store the Emissary-ingress load balancer IP address
export LB_ENDPOINT=$(kubectl -n emissary get svc  emissary-ingress \
  -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}")
  1. Test access
curl -i http://$LB_ENDPOINT/backend/

MySQL

Ref: https://linoxide.com/deploy-mysql-on-kubernetes/

  1. Create the MySQL password Secret
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: MTIzNA==
EOF
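The password value MTIzNA== is simply base64 for 1234; a different password can be encoded the same way:
echo -n 'your-password' | base64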
  1. Create the MySQL Pod (TODO: use a Deployment)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: k8s-mysql
  labels:
    name: lbl-k8s-mysql
spec:
  containers:
  - name: mysql
    image: mysql@sha256:9415bfb9a83752d30b6395c84dde03573eeba7b5b9c937c0e09c3e7b32c76c93
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-pass
          key: password
    - name: MYSQL_DATABASE
      value: db_data
    ports:
    - name: mysql
      containerPort: 3306
      protocol: TCP
    volumeMounts:
    - name: k8s-mysql-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: k8s-mysql-storage
    emptyDir: {}
EOF
  1. Create the MySQL Service
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    name: lbl-k8s-mysql
spec:
  ports:
  - port: 3306
  selector:
    name: lbl-k8s-mysql
  type: ClusterIP
EOF
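As a quick sanity check, a throwaway MySQL client pod can connect through the Service (the mysql:8.0 image and pod name here are only for illustration):
kubectl run mysql-client --rm -it --restart=Never --image=mysql:8.0 -- mysql -h mysql-service -u root -p
# enter 1234 (the decoded mysql-pass secret) at the prompt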

Redis

Ref: https://phoenixnap.com/kb/kubernetes-redis

  1. Create the Redis ConfigMap
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru 
EOF
  1. Create the Redis Pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
  - name: redis
    image: redis:5.0.4
    command:
      - redis-server
      - "/redis-master/redis.conf"
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: test-redis-config
        items:
        - key: redis-config
          path: redis.conf
EOF
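To confirm the mounted config took effect, query Redis inside the pod (redis-cli ships in the image); it should report 2097152 bytes, i.e. the 2mb limit:
kubectl exec -it redis -- redis-cli CONFIG GET maxmemory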
  1. Create the Redis Service
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    name: redis
spec:
  ports:
  - port: 6379
  selector:
    name: redis
  type: ClusterIP
EOF

RabbitMQ

Ref: https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html

  1. Install Krew plugin on kubectl https://krew.sigs.k8s.io/docs/user-guide/setup/install/

  2. Install kubectl RabbitMQ plugin https://www.rabbitmq.com/kubernetes/operator/kubectl-plugin.html#install

  3. Install the Cluster Operator

kubectl rabbitmq install-cluster-operator
  1. Install the Local Path Provisioner (non-production)
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl annotate storageclass local-path storageclass.kubernetes.io/is-default-class=true
  1. Create the simplest RabbitMQ cluster
kubectl apply -f https://raw.githubusercontent.com/rabbitmq/cluster-operator/main/docs/examples/hello-world/rabbitmq.yaml
  1. Access the management UI
username="$(kubectl get secret hello-world-default-user -o jsonpath='{.data.username}' | base64 --decode)"
echo "username: $username"
password="$(kubectl get secret hello-world-default-user -o jsonpath='{.data.password}' | base64 --decode)"
echo "password: $password"

kubectl port-forward "service/hello-world" 15672

# or using plugin

kubectl rabbitmq manage hello-world
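To confirm the operator created the cluster from the hello-world example:
kubectl get rabbitmqclusters
kubectl get pods | grep hello-world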
  1. Deploy Zipkin
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: TracingService
metadata:
  name: tracing
spec:
  service: "zipkin.default:9411"
  driver: zipkin
  config: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
        - name: zipkin
          image: openzipkin/zipkin
          env:
            # note: in-memory storage holds all data in memory, purging older data upon a span limit.
            #       you should use a proper storage in production environments
            - name: STORAGE_TYPE
              value: mem
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: zipkin
  name: zipkin
spec:
  ports:
    - port: 9411
      targetPort: 9411
  selector:
    app: zipkin
EOF
  1. Restart Emissary so the TracingService is picked up
kubectl -n emissary rollout restart deploy
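To see traces, send a few requests through Emissary and then open the Zipkin UI via a port-forward (port 9411, matching the TracingService above):
curl -i http://$LB_ENDPOINT/backend/
kubectl port-forward service/zipkin 9411:9411
# then browse to http://localhost:9411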
  1. Create the Listener (same as earlier; skip if it already exists)
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-listener-8080
  namespace: emissary
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF
  1. Create the hello-world Deployment and Service
kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: backend
        image: docker.io/ibmcom/hello:latest
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: hello-world
EOF
  1. Map the hello-world service (with circuit breakers)
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: hello-world-backend
spec:
  hostname: "*"
  prefix: /hello-world/
  service: hello-world
  circuit_breakers:
  - max_connections: 0
    max_pending_requests: 0
EOF
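With minikube tunnel running and LB_ENDPOINT set as in the Emissary section, the new route can be tested with:
curl -i http://$LB_ENDPOINT/hello-world/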

Prometheus

Ref: https://www.getambassador.io/docs/emissary/latest/howtos/prometheus/. To forward the internal admin port:

minikube service -n emissary emissary-ingress-admin --url
  1. Deploy the Prometheus Operator
kubectl create -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/bundle.yaml
  1. Create the ServiceAccount and RBAC resources for Prometheus
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default
EOF
  1. Deploy Prometheus and its Service
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: ClusterIP
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    prometheus: prometheus
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  ruleSelector:
    matchLabels:
      app: prometheus-operator
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      app: emissary
  resources:
    requests:
      memory: 400Mi
EOF
  1. Create a ServiceMonitor
kubectl apply -f - <<EOF
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: emissary-monitor
  labels:
    app: emissary
spec:
  namespaceSelector:
    matchNames:
    - emissary
  selector:
    matchLabels:
      service: ambassador-admin
  endpoints:
  - port: ambassador-admin
EOF
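To verify Prometheus is scraping the Emissary admin endpoint, port-forward the Service created above and check the targets page:
kubectl port-forward service/prometheus 9090
# then browse to http://localhost:9090/targets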

Grafana

https://www.getambassador.io/docs/emissary/latest/howtos/prometheus/#grafana

  1. Deploy Grafana
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:7.5.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          env:
            - name: GF_SERVER_ROOT_URL
              value: http://127.0.0.1/grafana
            - name: GRAFANA_PORT
              value: '3000'
            - name: GF_AUTH_BASIC_ENABLED
              value: 'false'
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: 'true'
            - name: GF_AUTH_ANONYMOUS_ORG_ROLE
              value: Admin
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer
EOF
  1. Create a Mapping for Grafana
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: grafana
spec:
  hostname: "*"
  prefix: /grafana/
  service: grafana
EOF
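With minikube tunnel running, Grafana should then be reachable through Emissary at the /grafana/ prefix (matching GF_SERVER_ROOT_URL above):
curl -i http://$LB_ENDPOINT/grafana/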

Prerequisites

Run eval $(minikube -p minikube docker-env) so that docker build publishes images directly into Minikube's Docker daemon. Ref: https://medium.com/swlh/how-to-run-locally-built-docker-images-in-kubernetes-b28fbc32cc1d

  1. Create a Dockerfile
FROM openjdk:11-jre-slim

ARG JAR_FILE

COPY ./app.jar app.jar

EXPOSE 8080

ENTRYPOINT ["java","-jar","/app.jar", "--spring.datasource.url=jdbc:mysql://mysql-service:3306/db_data"]
  1. Build the Spring Boot app Docker image (the app listens on port 8080)
docker build -t local/my-app .
  1. Create the Listener (same as earlier; skip if it already exists)
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-listener-8080
  namespace: emissary
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF
  1. Create the my-app Deployment and Service
kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: backend
        image: local/my-app
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: my-app
EOF
  1. Map the my-app service
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: my-app
spec:
  hostname: "*"
  prefix: /my-app/
  service: my-app
EOF
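Assuming the tunnel and LB_ENDPOINT from the Emissary section, the app is reachable at:
curl -i http://$LB_ENDPOINT/my-app/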
  1. Create the Node-RED Deployment
kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: node-red
  name: node-red
spec:
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: nodered/node-red
        ports:
        - containerPort: 1880
        volumeMounts:
        - mountPath: /data
          name: node-red-data
      volumes:
        - name: node-red-data
          emptyDir: {}
EOF
  1. Create the Node-RED Service
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: node-red
spec:
  ports:
    - port: 1880
      targetPort: 1880
  selector:
    app: node-red
EOF
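No Emissary Mapping is created for Node-RED above; if it should also be exposed through the ingress, a Mapping along the lines of the earlier ones could be added (the /node-red/ prefix is only an illustration):
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: node-red
spec:
  hostname: "*"
  prefix: /node-red/
  service: node-red:1880
EOF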

Rate Limiting

  1. Apply a RateLimitService backed by the Datawire test ratelimit image
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  domain: ambassador
  service: "example-rate-limit.default:5000"
  protocol_version: v3
---
apiVersion: v1
kind: Service
metadata:
  name: example-rate-limit
spec:
  type: ClusterIP
  selector:
    app: example-rate-limit
  ports:
  - port: 5000
    name: http-example-rate-limit
    targetPort: http-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-rate-limit
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: example-rate-limit
  template:
    metadata:
      labels:
        app: example-rate-limit
    spec:
      containers:
      - name: example-rate-limit
        image: datawiredev/test-ratelimit:0f13a77b4497
        imagePullPolicy: Always
        ports:
        - name: http-api
          containerPort: 5000
EOF
  1. Add a header-based rate-limit label group to the quote-backend Mapping
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
  labels:
    ambassador:
      - request_label_group:
        - request_headers:
            key: x-ambassador-test-allow
            header_name: "x-ambassador-test-allow"
            omit_if_not_present: true
EOF
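To exercise this label group, compare requests with and without the x-ambassador-test-allow header (the exact allow/deny behavior depends on the test-ratelimit image):
curl -i http://$LB_ENDPOINT/backend/
curl -i -H "x-ambassador-test-allow: probably" http://$LB_ENDPOINT/backend/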
  1. Create the rate limit ConfigMap
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config
data:
  ratelimit-config: |
    ---
    domain: ambassador
    descriptors:
      - key: remote_address
        rate_limit:
          unit: second
          requests_per_unit: 1
      # - key: x-ambassador-test-allow
      #   value: probably
      #   rate_limit:
      #     unit: second
      #     requests_per_unit: 1
EOF
  1. Create and apply the RateLimitService backed by envoyproxy/ratelimit
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  service: "example-rate-limit.default:8081"
  protocol_version: v3
---
apiVersion: v1
kind: Service
metadata:
  name: example-rate-limit
spec:
  type: ClusterIP
  selector:
    app: example-rate-limit
  ports:
  # - port: 8080
  #   name: http
  #   targetPort: 8080
  - port: 8081
    name: grpc
    targetPort: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-rate-limit
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: example-rate-limit
  template:
    metadata:
      labels:
        app: example-rate-limit
    spec:
      containers:
      - name: example-rate-limit
        image: envoyproxy/ratelimit:master
        imagePullPolicy: Always
        command: ["/bin/ratelimit"]
        ports:
        # - name: http
        #   containerPort: 8080
        - name: grpc
          containerPort: 8081
        volumeMounts:
        - mountPath: /data/ratelimit/config/config.yaml
          name: config
          subPath: config.yaml
        env:
        - name: USE_STATSD
          value: "false"
        - name: LOG_LEVEL
          value: "DEBUG"
        - name: REDIS_SOCKET_TYPE
          value: "tcp"
        - name: REDIS_URL
          value: "redis-service:6379"
        - name: RUNTIME_ROOT
          value: "/data"
        - name: RUNTIME_SUBDIRECTORY
          value: "ratelimit"
        - name: RUNTIME_WATCH_ROOT
          value: "false"
      volumes:
      - name: config
        configMap:
          name: ratelimit-config
          items:
          - key: ratelimit-config
            path: config.yaml
EOF
  1. Add a remote_address label to the quote-backend Mapping
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
  labels:
    ambassador:
      - request_label_group:
        - remote_address:
            key: remote_address
EOF
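With the remote_address descriptor limiting each client to 1 request per second, a short burst should start returning HTTP 429 once the limit kicks in:
for i in $(seq 1 5); do curl -s -o /dev/null -w "%{http_code}\n" http://$LB_ENDPOINT/backend/; done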

Authentication

  1. Start the example auth service
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: example-auth
spec:
  type: ClusterIP
  selector:
    app: example-auth
  ports:
  - port: 3000
    name: http-example-auth
    targetPort: http-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-auth
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: example-auth
  template:
    metadata:
      labels:
        app: example-auth
    spec:
      containers:
      - name: example-auth
        image: docker.io/datawire/ambassador-auth-service:2.0.0
        imagePullPolicy: Always
        ports:
        - name: http-api
          containerPort: 3000
EOF
  1. Apply the AuthService
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: authentication
spec:
  auth_service: "example-auth.default:3000"
  path_prefix: "/extauth"
  allowed_request_headers:
  - "x-qotm-session"
  allowed_authorization_headers:
  - "x-qotm-session"
  # failure_mode_allow: true
EOF
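To watch the auth service receive /extauth callouts while testing, tail its logs in one terminal and send requests in another; which routes return 401 depends on the rules built into the example auth image:
kubectl logs deploy/example-auth -f
curl -i http://$LB_ENDPOINT/backend/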