
@djackyn
Last active August 2, 2019 09:16
processing file "helmfile.yaml" in directory "helmfile.d"
changing working directory to "/deployment/kubernetes/helm/helmfile.d"
first-pass rendering starting for "helmfile.yaml.part.0": inherited=&{default map[] map[]}, overrode=<nil>
first-pass uses: &{default map[] map[]}
first-pass produced: &{default map[] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[] map[]}
vals:
map[]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
0: repositories:
1: - name: stable
2:   url: https://kubernetes-charts.storage.googleapis.com
3: - name: helm
4:   url: <url>
5:   username: <username>
6:   password: <password>
7:
8: helmDefaults:
9:   tillerNamespace: kube-system
10:   verify: true
11:   wait: true
12:   timeout: 600
13:   recreatePods: true
14:   force: true
15:   tls: false
16:
17: # The desired states of Helm releases.
18: #
19: # Helmfile runs various helm commands to converge the current state in the live cluster to the desired state defined here.
20: releases:
21: - name: consul
22:   namespace: default
23:   chart: helm/consul
24:   version: 0.8.1
25:   values:
26:   - charts/chart-consul/values.yaml
27:
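For reference, the helmDefaults section above is what adds the extra flags to the helm upgrade call that helmfile issues during the apply phase further down in this log; the command below is copied from that step:

helm upgrade --install --reset-values consul helm/consul --version 0.8.1 --verify --wait --timeout 600 --force --recreate-pods --tiller-namespace kube-system --namespace default --values /tmp/values839267820

In particular, verify: true becomes --verify, which is relevant to the failure discussed at the end of this log.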
merged environment: &{default map[] map[]}
Adding repo stable https://kubernetes-charts.storage.googleapis.com
exec: helm repo add stable https://kubernetes-charts.storage.googleapis.com
exec: helm repo add stable https://kubernetes-charts.storage.googleapis.com: "stable" has been added to your repositories
"stable" has been added to your repositories
Adding repo helm <url>
exec: helm repo add helm <url> --username <username> --password <password>
exec: helm repo add helm <url> --username <username> --password <password>: "helm" has been added to your repositories
"helm" has been added to your repositories
Updating repo
exec: helm repo update
exec: helm repo update: Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "helm" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "helm" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
worker 1/1 started
successfully generated the value file at charts/chart-consul/values.yaml. produced:
global:
  enabled: true
  datacenter: gcp-poc
  gossipEncryption:
    secretName: gossip-encryption
    secretKey: gossip-key
    secretValue: "mUg0dfYHu+IKENlLu+s3mQ=="
server:
  enabled: true
  replicas: 1
  bootstrapExpect: 1
  storage: 1Gi
client:
  enabled: true
  image: null
  join: null
dns:
  enabled: true
ui:
  enabled: true
  service:
    enabled: true
worker 1/1 finished
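A note on the gossipEncryption block in the generated values above: the chart writes secretValue into the consul-consul-gossip-encryption Secret base64-encoded, which is the string that appears in the diff below; a key of this shape is what consul keygen normally produces. The encoding can be checked with:

echo -n 'mUg0dfYHu+IKENlLu+s3mQ==' | base64
# bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09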
worker 1/1 started
Comparing consul helm/consul
exec: helm diff upgrade --reset-values --allow-unreleased consul helm/consul --version 0.8.1 --tiller-namespace kube-system --namespace default --values /tmp/values046725249 --detailed-exitcode
exec: helm diff upgrade --reset-values --allow-unreleased consul helm/consul --version 0.8.1 --tiller-namespace kube-system --namespace default --values /tmp/values046725249 --detailed-exitcode: ********************
Release was not present in Helm. Diff will show entire contents as new.
********************
default, consul-consul-client-config, ConfigMap (v1) has been added:
- 
+ # Source: consul/templates/client-config-configmap.yaml
+ # ConfigMap with extra configuration specified directly to the chart
+ # for client agents only.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: consul-consul-client-config
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ data:
+ extra-from-values.json: |-
+ {}
default, consul-consul, DaemonSet (apps) has been added:
+ # Source: consul/templates/client-daemonset.yaml
+ # DaemonSet to run the Consul clients on every node.
+ apiVersion: apps/v1
+ kind: DaemonSet
+ metadata:
+ name: consul-consul
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ selector:
+ matchLabels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: client
+ hasDNS: "true"
+ template:
+ metadata:
+ labels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: client
+ hasDNS: "true"
+ annotations:
+ "consul.hashicorp.com/connect-inject": "false"
+ spec:
+ terminationGracePeriodSeconds: 10
+ serviceAccountName: consul-consul-client
+ 
+ # Consul agents require a directory for data, even clients. The data
+ # is okay to be wiped though if the Pod is removed, so just use an
+ # emptyDir volume.
+ volumes:
+ - name: data
+ emptyDir: {}
+ - name: config
+ configMap:
+ name: consul-consul-client-config
+ 
+ containers:
+ - name: consul
+ image: "consul:1.5.0"
+ env:
+ - name: POD_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: NODE
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: GOSSIP_KEY
+ valueFrom:
+ secretKeyRef:
+ name: consul-consul-gossip-encryption
+ key: gossip-key
+ 
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ CONSUL_FULLNAME="consul-consul"
+ exec /bin/consul agent \
+ -node="${NODE}" \
+ -advertise="${POD_IP}" \
+ -bind=0.0.0.0 \
+ -client=0.0.0.0 \
+ -config-dir=/consul/config \
+ -datacenter=gcp-poc \
+ -data-dir=/consul/data \
+ -encrypt="${GOSSIP_KEY}" \
+ -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
+ -domain=consul
+ volumeMounts:
+ - name: data
+ mountPath: /consul/data
+ - name: config
+ mountPath: /consul/config
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - /bin/sh
+ - -c
+ - consul leave
+ ports:
+ - containerPort: 8500
+ hostPort: 8500
+ name: http
+ - containerPort: 8502
+ hostPort: 8502
+ name: grpc
+ - containerPort: 8301
+ name: serflan
+ - containerPort: 8302
+ name: serfwan
+ - containerPort: 8300
+ name: server
+ - containerPort: 8600
+ name: dns-tcp
+ protocol: "TCP"
+ - containerPort: 8600
+ name: dns-udp
+ protocol: "UDP"
+ readinessProbe:
+ # NOTE(mitchellh): when our HTTP status endpoints support the
+ # proper status codes, we should switch to that. This is temporary.
+ exec:
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
+ grep -E '".+"'
default, consul-consul-server, StatefulSet (apps) has been added:
+ # Source: consul/templates/server-statefulset.yaml
+ # StatefulSet to run the actual Consul server cluster.
+ apiVersion: apps/v1
+ kind: StatefulSet
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ serviceName: consul-consul-server
+ podManagementPolicy: Parallel
+ replicas: 1
+ selector:
+ matchLabels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: server
+ hasDNS: "true"
+ template:
+ metadata:
+ labels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: server
+ hasDNS: "true"
+ annotations:
+ "consul.hashicorp.com/connect-inject": "false"
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app: consul
+ release: "consul"
+ component: server
+ topologyKey: kubernetes.io/hostname
+ terminationGracePeriodSeconds: 10
+ serviceAccountName: consul-consul-server
+ securityContext:
+ fsGroup: 1000
+ volumes:
+ - name: config
+ configMap:
+ name: consul-consul-server-config
+ containers:
+ - name: consul
+ image: "consul:1.5.0"
+ env:
+ - name: POD_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: GOSSIP_KEY
+ valueFrom:
+ secretKeyRef:
+ name: consul-consul-gossip-encryption
+ key: gossip-key
+ 
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ CONSUL_FULLNAME="consul-consul"
+ exec /bin/consul agent \
+ -advertise="${POD_IP}" \
+ -bind=0.0.0.0 \
+ -bootstrap-expect=1 \
+ -client=0.0.0.0 \
+ -config-dir=/consul/config \
+ -datacenter=gcp-poc \
+ -data-dir=/consul/data \
+ -domain=consul \
+ -encrypt="${GOSSIP_KEY}" \
+ -ui \
+ -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
+ -server
+ volumeMounts:
+ - name: data-default
+ mountPath: /consul/data
+ - name: config
+ mountPath: /consul/config
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - /bin/sh
+ - -c
+ - consul leave
+ ports:
+ - containerPort: 8500
+ name: http
+ - containerPort: 8301
+ name: serflan
+ - containerPort: 8302
+ name: serfwan
+ - containerPort: 8300
+ name: server
+ - containerPort: 8600
+ name: dns-tcp
+ protocol: "TCP"
+ - containerPort: 8600
+ name: dns-udp
+ protocol: "UDP"
+ readinessProbe:
+ # NOTE(mitchellh): when our HTTP status endpoints support the
+ # proper status codes, we should switch to that. This is temporary.
+ exec:
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
+ grep -E '".+"'
+ failureThreshold: 2
+ initialDelaySeconds: 5
+ periodSeconds: 3
+ successThreshold: 1
+ timeoutSeconds: 5 
+ volumeClaimTemplates:
+ - metadata:
+ name: data-default
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
default, consul-consul-client, PodSecurityPolicy (policy) has been added:
- 
+ # Source: consul/templates/client-podsecuritypolicy.yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+ name: consul-consul-client
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ privileged: false
+ # Required to prevent escalations to root.
+ allowPrivilegeEscalation: false
+ # This is redundant with non-root + disallow privilege escalation,
+ # but we can provide it for defense in depth.
+ requiredDropCapabilities:
+ - ALL
+ # Allow core volume types.
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'projected'
+ - 'secret'
+ - 'downwardAPI'
+ hostNetwork: false
+ hostPorts:
+ - min: 8500
+ max: 8502
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ # Require the container to run without root privileges.
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+ readOnlyRootFilesystem: false
default, consul-consul-server, PodSecurityPolicy (policy) has been added:
- 
+ # Source: consul/templates/server-podsecuritypolicy.yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+ name: consul-consul-server
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ privileged: false
+ # Required to prevent escalations to root.
+ allowPrivilegeEscalation: false
+ # This is redundant with non-root + disallow privilege escalation,
+ # but we can provide it for defense in depth.
+ requiredDropCapabilities:
+ - ALL
+ # Allow core volume types.
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'projected'
+ - 'secret'
+ - 'downwardAPI'
+ - 'persistentVolumeClaim'
+ hostNetwork: false
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ # Require the container to run without root privileges.
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+ readOnlyRootFilesystem: false
default, consul-consul-gossip-encryption, Secret (v1) has been added:
- 
+ # Source: consul/templates/client-secret.yaml
+ # Secret with extra configuration specified directly to the chart
+ # for client agents only.
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: consul-consul-gossip-encryption
+ labels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ heritage: Tiller
+ type: Opaque
+ data:
+ gossip-key: "bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09"
default, consul-consul-server-config, ConfigMap (v1) has been added:
- 
+ # Source: consul/templates/server-config-configmap.yaml
+ # StatefulSet to run the actual Consul server cluster.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: consul-consul-server-config
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ data:
+ extra-from-values.json: |-
+ {}
default, consul-consul-server, ServiceAccount (v1) has been added:
- 
+ # Source: consul/templates/server-serviceaccount.yaml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
default, consul-consul-client, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/client-clusterrole.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: consul-consul-client
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ rules:
+ - apiGroups: ["policy"]
+ resources: ["podsecuritypolicies"]
+ resourceNames:
+ - consul-consul-client
+ verbs:
+ - use
default, consul-consul-server, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/server-clusterrole.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: consul-consul-server
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ rules:
+ - apiGroups: ["policy"]
+ resources: ["podsecuritypolicies"]
+ resourceNames:
+ - consul-consul-server
+ verbs:
+ - use
default, consul-consul-server, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/server-clusterrolebinding.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: consul-consul-server
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: consul-consul-server
+ subjects:
+ - kind: ServiceAccount
+ name: consul-consul-server
+ namespace: default
default, consul-consul-ui, Service (v1) has been added:
- 
+ # Source: consul/templates/ui-service.yaml
+ # UI Service for Consul Server
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: consul-consul-ui
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ selector:
+ app: consul
+ release: "consul"
+ component: server
+ ports:
+ - name: http
+ port: 80
+ targetPort: 8500
default, consul-consul-server, PodDisruptionBudget (policy) has been added:
- 
+ # Source: consul/templates/server-disruptionbudget.yaml
+ # PodDisruptionBudget to prevent degrading the server cluster through
+ # voluntary cluster changes.
+ apiVersion: policy/v1beta1
+ kind: PodDisruptionBudget
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ maxUnavailable: 0
+ selector:
+ matchLabels:
+ app: consul
+ release: "consul"
+ component: server
default, consul-consul-client, ServiceAccount (v1) has been added:
- 
+ # Source: consul/templates/client-serviceaccount.yaml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: consul-consul-client
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
default, consul-consul-client, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/client-clusterrolebinding.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: consul-consul-client
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: consul-consul-client
+ subjects:
+ - kind: ServiceAccount
+ name: consul-consul-client
+ namespace: default
default, consul-consul-dns, Service (v1) has been added:
- 
+ # Source: consul/templates/dns-service.yaml
+ # Service for Consul DNS.
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: consul-consul-dns
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ ports:
+ - name: dns-tcp
+ port: 53
+ protocol: "TCP"
+ targetPort: dns-tcp
+ - name: dns-udp
+ port: 53
+ protocol: "UDP"
+ targetPort: dns-udp
+ selector:
+ app: consul
+ release: "consul"
+ hasDNS: "true"
default, consul-consul-server, Service (v1) has been added:
- 
+ # Source: consul/templates/server-service.yaml
+ # Headless service for Consul server DNS entries. This service should only
+ # point to Consul servers. For access to an agent, one should assume that
+ # the agent is installed locally on the node and the NODE_IP should be used.
+ # If the node can't run a Consul agent, then this service can be used to
+ # communicate directly to a server agent.
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ annotations:
+ # This must be set in addition to publishNotReadyAddresses due
+ # to an open issue where it may not work:
+ # https://github.com/kubernetes/kubernetes/issues/58662
+ service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
+ spec:
+ clusterIP: None
+ # We want the servers to become available even if they're not ready
+ # since this DNS is also used for join operations.
+ publishNotReadyAddresses: true
+ ports:
+ - name: http
+ port: 8500
+ targetPort: 8500
+ - name: serflan-tcp
+ protocol: "TCP"
+ port: 8301
+ targetPort: 8301
+ - name: serflan-udp
+ protocol: "UDP"
+ port: 8301
+ targetPort: 8301
+ - name: serfwan-tcp
+ protocol: "TCP"
+ port: 8302
+ targetPort: 8302
+ - name: serfwan-udp
+ protocol: "UDP"
+ port: 8302
+ targetPort: 8302
+ - name: server
+ port: 8300
+ targetPort: 8300
+ - name: dns-tcp
+ protocol: "TCP"
+ port: 8600
+ targetPort: dns-tcp
+ - name: dns-udp
+ protocol: "UDP"
+ port: 8600
+ targetPort: dns-udp
+ selector:
+ app: consul
+ release: "consul"
+ component: server
identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
********************
Release was not present in Helm. Diff will show entire contents as new.
********************
default, consul-consul-client-config, ConfigMap (v1) has been added:
- 
+ # Source: consul/templates/client-config-configmap.yaml
+ # ConfigMap with extra configuration specified directly to the chart
+ # for client agents only.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: consul-consul-client-config
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ data:
+ extra-from-values.json: |-
+ {}
default, consul-consul, DaemonSet (apps) has been added:
+ # Source: consul/templates/client-daemonset.yaml
+ # DaemonSet to run the Consul clients on every node.
+ apiVersion: apps/v1
+ kind: DaemonSet
+ metadata:
+ name: consul-consul
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ selector:
+ matchLabels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: client
+ hasDNS: "true"
+ template:
+ metadata:
+ labels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: client
+ hasDNS: "true"
+ annotations:
+ "consul.hashicorp.com/connect-inject": "false"
+ spec:
+ terminationGracePeriodSeconds: 10
+ serviceAccountName: consul-consul-client
+ 
+ # Consul agents require a directory for data, even clients. The data
+ # is okay to be wiped though if the Pod is removed, so just use an
+ # emptyDir volume.
+ volumes:
+ - name: data
+ emptyDir: {}
+ - name: config
+ configMap:
+ name: consul-consul-client-config
+ 
+ containers:
+ - name: consul
+ image: "consul:1.5.0"
+ env:
+ - name: POD_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: NODE
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: GOSSIP_KEY
+ valueFrom:
+ secretKeyRef:
+ name: consul-consul-gossip-encryption
+ key: gossip-key
+ 
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ CONSUL_FULLNAME="consul-consul"
+ exec /bin/consul agent \
+ -node="${NODE}" \
+ -advertise="${POD_IP}" \
+ -bind=0.0.0.0 \
+ -client=0.0.0.0 \
+ -config-dir=/consul/config \
+ -datacenter=gcp-poc \
+ -data-dir=/consul/data \
+ -encrypt="${GOSSIP_KEY}" \
+ -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
+ -domain=consul
+ volumeMounts:
+ - name: data
+ mountPath: /consul/data
+ - name: config
+ mountPath: /consul/config
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - /bin/sh
+ - -c
+ - consul leave
+ ports:
+ - containerPort: 8500
+ hostPort: 8500
+ name: http
+ - containerPort: 8502
+ hostPort: 8502
+ name: grpc
+ - containerPort: 8301
+ name: serflan
+ - containerPort: 8302
+ name: serfwan
+ - containerPort: 8300
+ name: server
+ - containerPort: 8600
+ name: dns-tcp
+ protocol: "TCP"
+ - containerPort: 8600
+ name: dns-udp
+ protocol: "UDP"
+ readinessProbe:
+ # NOTE(mitchellh): when our HTTP status endpoints support the
+ # proper status codes, we should switch to that. This is temporary.
+ exec:
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
+ grep -E '".+"'
default, consul-consul-server, StatefulSet (apps) has been added:
+ # Source: consul/templates/server-statefulset.yaml
+ # StatefulSet to run the actual Consul server cluster.
+ apiVersion: apps/v1
+ kind: StatefulSet
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ serviceName: consul-consul-server
+ podManagementPolicy: Parallel
+ replicas: 1
+ selector:
+ matchLabels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: server
+ hasDNS: "true"
+ template:
+ metadata:
+ labels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ component: server
+ hasDNS: "true"
+ annotations:
+ "consul.hashicorp.com/connect-inject": "false"
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app: consul
+ release: "consul"
+ component: server
+ topologyKey: kubernetes.io/hostname
+ terminationGracePeriodSeconds: 10
+ serviceAccountName: consul-consul-server
+ securityContext:
+ fsGroup: 1000
+ volumes:
+ - name: config
+ configMap:
+ name: consul-consul-server-config
+ containers:
+ - name: consul
+ image: "consul:1.5.0"
+ env:
+ - name: POD_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: GOSSIP_KEY
+ valueFrom:
+ secretKeyRef:
+ name: consul-consul-gossip-encryption
+ key: gossip-key
+ 
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ CONSUL_FULLNAME="consul-consul"
+ exec /bin/consul agent \
+ -advertise="${POD_IP}" \
+ -bind=0.0.0.0 \
+ -bootstrap-expect=1 \
+ -client=0.0.0.0 \
+ -config-dir=/consul/config \
+ -datacenter=gcp-poc \
+ -data-dir=/consul/data \
+ -domain=consul \
+ -encrypt="${GOSSIP_KEY}" \
+ -ui \
+ -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
+ -server
+ volumeMounts:
+ - name: data-default
+ mountPath: /consul/data
+ - name: config
+ mountPath: /consul/config
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - /bin/sh
+ - -c
+ - consul leave
+ ports:
+ - containerPort: 8500
+ name: http
+ - containerPort: 8301
+ name: serflan
+ - containerPort: 8302
+ name: serfwan
+ - containerPort: 8300
+ name: server
+ - containerPort: 8600
+ name: dns-tcp
+ protocol: "TCP"
+ - containerPort: 8600
+ name: dns-udp
+ protocol: "UDP"
+ readinessProbe:
+ # NOTE(mitchellh): when our HTTP status endpoints support the
+ # proper status codes, we should switch to that. This is temporary.
+ exec:
+ command:
+ - "/bin/sh"
+ - "-ec"
+ - |
+ curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
+ grep -E '".+"'
+ failureThreshold: 2
+ initialDelaySeconds: 5
+ periodSeconds: 3
+ successThreshold: 1
+ timeoutSeconds: 5 
+ volumeClaimTemplates:
+ - metadata:
+ name: data-default
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
default, consul-consul-client, PodSecurityPolicy (policy) has been added:
- 
+ # Source: consul/templates/client-podsecuritypolicy.yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+ name: consul-consul-client
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ privileged: false
+ # Required to prevent escalations to root.
+ allowPrivilegeEscalation: false
+ # This is redundant with non-root + disallow privilege escalation,
+ # but we can provide it for defense in depth.
+ requiredDropCapabilities:
+ - ALL
+ # Allow core volume types.
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'projected'
+ - 'secret'
+ - 'downwardAPI'
+ hostNetwork: false
+ hostPorts:
+ - min: 8500
+ max: 8502
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ # Require the container to run without root privileges.
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+ readOnlyRootFilesystem: false
default, consul-consul-server, PodSecurityPolicy (policy) has been added:
- 
+ # Source: consul/templates/server-podsecuritypolicy.yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+ name: consul-consul-server
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ privileged: false
+ # Required to prevent escalations to root.
+ allowPrivilegeEscalation: false
+ # This is redundant with non-root + disallow privilege escalation,
+ # but we can provide it for defense in depth.
+ requiredDropCapabilities:
+ - ALL
+ # Allow core volume types.
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'projected'
+ - 'secret'
+ - 'downwardAPI'
+ - 'persistentVolumeClaim'
+ hostNetwork: false
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ # Require the container to run without root privileges.
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+ readOnlyRootFilesystem: false
default, consul-consul-gossip-encryption, Secret (v1) has been added:
- 
+ # Source: consul/templates/client-secret.yaml
+ # Secret with extra configuration specified directly to the chart
+ # for client agents only.
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: consul-consul-gossip-encryption
+ labels:
+ app: consul
+ chart: consul-helm
+ release: consul
+ heritage: Tiller
+ type: Opaque
+ data:
+ gossip-key: "bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09"
default, consul-consul-server-config, ConfigMap (v1) has been added:
- 
+ # Source: consul/templates/server-config-configmap.yaml
+ # StatefulSet to run the actual Consul server cluster.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: consul-consul-server-config
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ data:
+ extra-from-values.json: |-
+ {}
default, consul-consul-server, ServiceAccount (v1) has been added:
- 
+ # Source: consul/templates/server-serviceaccount.yaml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
default, consul-consul-client, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/client-clusterrole.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: consul-consul-client
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ rules:
+ - apiGroups: ["policy"]
+ resources: ["podsecuritypolicies"]
+ resourceNames:
+ - consul-consul-client
+ verbs:
+ - use
default, consul-consul-server, ClusterRole (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/server-clusterrole.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: consul-consul-server
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ rules:
+ - apiGroups: ["policy"]
+ resources: ["podsecuritypolicies"]
+ resourceNames:
+ - consul-consul-server
+ verbs:
+ - use
default, consul-consul-server, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/server-clusterrolebinding.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: consul-consul-server
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: consul-consul-server
+ subjects:
+ - kind: ServiceAccount
+ name: consul-consul-server
+ namespace: default
default, consul-consul-ui, Service (v1) has been added:
- 
+ # Source: consul/templates/ui-service.yaml
+ # UI Service for Consul Server
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: consul-consul-ui
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ selector:
+ app: consul
+ release: "consul"
+ component: server
+ ports:
+ - name: http
+ port: 80
+ targetPort: 8500
default, consul-consul-server, PodDisruptionBudget (policy) has been added:
- 
+ # Source: consul/templates/server-disruptionbudget.yaml
+ # PodDisruptionBudget to prevent degrading the server cluster through
+ # voluntary cluster changes.
+ apiVersion: policy/v1beta1
+ kind: PodDisruptionBudget
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ maxUnavailable: 0
+ selector:
+ matchLabels:
+ app: consul
+ release: "consul"
+ component: server
default, consul-consul-client, ServiceAccount (v1) has been added:
- 
+ # Source: consul/templates/client-serviceaccount.yaml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: consul-consul-client
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
default, consul-consul-client, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
- 
+ # Source: consul/templates/client-clusterrolebinding.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: consul-consul-client
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: consul-consul-client
+ subjects:
+ - kind: ServiceAccount
+ name: consul-consul-client
+ namespace: default
default, consul-consul-dns, Service (v1) has been added:
- 
+ # Source: consul/templates/dns-service.yaml
+ # Service for Consul DNS.
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: consul-consul-dns
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ spec:
+ ports:
+ - name: dns-tcp
+ port: 53
+ protocol: "TCP"
+ targetPort: dns-tcp
+ - name: dns-udp
+ port: 53
+ protocol: "UDP"
+ targetPort: dns-udp
+ selector:
+ app: consul
+ release: "consul"
+ hasDNS: "true"
default, consul-consul-server, Service (v1) has been added:
- 
+ # Source: consul/templates/server-service.yaml
+ # Headless service for Consul server DNS entries. This service should only
+ # point to Consul servers. For access to an agent, one should assume that
+ # the agent is installed locally on the node and the NODE_IP should be used.
+ # If the node can't run a Consul agent, then this service can be used to
+ # communicate directly to a server agent.
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: consul-consul-server
+ namespace: default
+ labels:
+ app: consul
+ chart: consul-helm
+ heritage: Tiller
+ release: consul
+ annotations:
+ # This must be set in addition to publishNotReadyAddresses due
+ # to an open issue where it may not work:
+ # https://github.com/kubernetes/kubernetes/issues/58662
+ service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
+ spec:
+ clusterIP: None
+ # We want the servers to become available even if they're not ready
+ # since this DNS is also used for join operations.
+ publishNotReadyAddresses: true
+ ports:
+ - name: http
+ port: 8500
+ targetPort: 8500
+ - name: serflan-tcp
+ protocol: "TCP"
+ port: 8301
+ targetPort: 8301
+ - name: serflan-udp
+ protocol: "UDP"
+ port: 8301
+ targetPort: 8301
+ - name: serfwan-tcp
+ protocol: "TCP"
+ port: 8302
+ targetPort: 8302
+ - name: serfwan-udp
+ protocol: "UDP"
+ port: 8302
+ targetPort: 8302
+ - name: server
+ port: 8300
+ targetPort: 8300
+ - name: dns-tcp
+ protocol: "TCP"
+ port: 8600
+ targetPort: dns-tcp
+ - name: dns-udp
+ protocol: "UDP"
+ port: 8600
+ targetPort: dns-udp
+ selector:
+ app: consul
+ release: "consul"
+ component: server
identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
worker 1/1 finished
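The comparison above is performed by the helm-diff plugin; with --detailed-exitcode it exits non-zero whenever it finds differences, so the "identified at least one change" lines are the expected result for a release that has never been installed, not an error. Assuming the helm-diff plugin is installed, the same check can be reproduced by hand, substituting the generated values file for helmfile's temporary copy:

helm diff upgrade --reset-values --allow-unreleased consul helm/consul --version 0.8.1 --tiller-namespace kube-system --namespace default --values charts/chart-consul/values.yaml --detailed-exitcode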
worker 1/1 started
successfully generated the value file at charts/chart-consul/values.yaml. produced:
global:
  enabled: true
  datacenter: gcp-poc
  gossipEncryption:
    secretName: gossip-encryption
    secretKey: gossip-key
    secretValue: "mUg0dfYHu+IKENlLu+s3mQ=="
server:
  enabled: true
  replicas: 1
  bootstrapExpect: 1
  storage: 1Gi
client:
  enabled: true
  image: null
  join: null
dns:
  enabled: true
ui:
  enabled: true
  service:
    enabled: true
worker 1/1 finished
worker 1/1 started
Upgrading helm/consul
exec: helm upgrade --install --reset-values consul helm/consul --version 0.8.1 --verify --wait --timeout 600 --force --recreate-pods --tiller-namespace kube-system --namespace default --values /tmp/values839267820
exec: helm upgrade --install --reset-values consul helm/consul --version 0.8.1 --verify --wait --timeout 600 --force --recreate-pods --tiller-namespace kube-system --namespace default --values /tmp/values839267820:
worker 1/1 finished
err: release "consul" in "helmfile.yaml" failed: failed processing release consul: helm exited with status 1:
Error: failed to download "helm/consul" (hint: running `helm repo update` may help)
changing working directory back to "/deployment/kubernetes/helm"
in helmfile.d/helmfile.yaml: failed processing release consul: helm exited with status 1:
Error: failed to download "helm/consul" (hint: running `helm repo update` may help)
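A likely explanation for this failure (an assumption, not something the log states): helmDefaults sets verify: true, so helmfile adds --verify to the helm upgrade call above, and when a chart has no provenance (.prov) file to verify against, Helm surfaces this as the generic "failed to download" error. The plain helm install below, run without --verify, pulls the same chart from the same repo without trouble. A minimal sketch of the corresponding helmfile.yaml change, assuming the chart in the private repo is unsigned:

helmDefaults:
  tillerNamespace: kube-system
  verify: false   # was true; --verify requires a signed chart with a published .prov file
  wait: true
  timeout: 600
  recreatePods: true
  force: true
  tls: false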
bash-4.4# helm install --name consul helm/consul -f ./helmfile.d/charts/chart-consul/values.yaml --debug
[debug] Created tunnel using local port: '43631'
[debug] SERVER: "127.0.0.1:43631"
[debug] Original chart version: ""
[debug] Fetched helm/consul to /root/.helm/cache/archive/consul-0.8.1.tgz
[debug] CHART PATH: /root/.helm/cache/archive/consul-0.8.1.tgz
NAME: consul
REVISION: 1
RELEASED: Fri Aug 2 09:07:09 2019
CHART: consul-0.8.1
USER-SUPPLIED VALUES:
client:
enabled: true
image: null
join: null
dns:
enabled: true
global:
datacenter: gcp-poc
enabled: true
gossipEncryption:
secretKey: gossip-key
secretName: gossip-encryption
secretValue: mUg0dfYHu+IKENlLu+s3mQ==
server:
bootstrapExpect: 1
enabled: true
replicas: 1
storage: 1Gi
ui:
enabled: true
service:
enabled: true
COMPUTED VALUES:
client:
annotations: null
enabled: true
extraConfig: |
{}
extraEnvironmentVars: {}
extraVolumes: []
grpc: false
image: null
join: null
nodeSelector: null
priorityClassName: ""
resources: null
tolerations: ""
cloud:
azure:
secretName: az-cloud-config
consul_agent:
join_tag_name: consul_join
join_tag_value: server
enabled: false
connectInject:
aclBindingRuleSelector: serviceaccount.name!=default
centralConfig:
defaultProtocol: null
enabled: false
proxyDefaults: |
{}
certs:
caBundle: ""
certName: tls.crt
keyName: tls.key
secretName: null
default: false
enabled: false
image: null
imageConsul: null
imageEnvoy: null
namespaceSelector: null
nodeSelector: null
dns:
enabled: true
global:
bootstrapACLs: false
datacenter: gcp-poc
domain: consul
enablePodSecurityPolicies: true
enabled: true
gossipEncryption:
secretKey: gossip-key
secretName: gossip-encryption
secretValue: mUg0dfYHu+IKENlLu+s3mQ==
image: consul:1.5.0
imageK8S: hashicorp/consul-k8s:0.8.1
server:
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "consul.name" . }}
release: "{{ .Release.Name }}"
component: server
topologyKey: kubernetes.io/hostname
annotations: null
bootstrapExpect: 1
connect: false
disruptionBudget:
enabled: true
maxUnavailable: null
enabled: true
enterpriseLicense:
secretKey: null
secretName: null
extraConfig: |
{}
extraEnvironmentVars: {}
extraVolumes: []
image: null
nodeSelector: null
priorityClassName: ""
replicas: 1
resources: null
storage: 1Gi
storageClass: null
tolerations: ""
updatePartition: 0
syncCatalog:
aclSyncToken:
secretKey: null
secretName: null
consulPrefix: null
default: true
enabled: false
image: null
k8sPrefix: null
k8sTag: null
nodePortSyncType: ExternalFirst
nodeSelector: null
syncClusterIPServices: true
toConsul: true
toK8S: true
ui:
enabled: true
service:
additionalSpec: null
annotations: null
enabled: true
type: null
HOOKS:
MANIFEST:
---
# Source: consul/templates/client-podsecuritypolicy.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: consul-consul-client
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
hostNetwork: false
hostPorts:
- min: 8500
max: 8502
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: false
---
# Source: consul/templates/server-podsecuritypolicy.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: consul-consul-server
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: false
---
# Source: consul/templates/server-disruptionbudget.yaml
# PodDisruptionBudget to prevent degrading the server cluster through
# voluntary cluster changes.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: consul-consul-server
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
spec:
maxUnavailable: 0
selector:
matchLabels:
app: consul
release: "consul"
component: server
---
# Source: consul/templates/client-secret.yaml
# Secret with extra configuration specified directly to the chart
# for client agents only.
apiVersion: v1
kind: Secret
metadata:
name: consul-consul-gossip-encryption
labels:
app: consul
chart: consul-helm
release: consul
heritage: Tiller
type: Opaque
data:
gossip-key: "bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09"
---
# Source: consul/templates/client-config-configmap.yaml
# ConfigMap with extra configuration specified directly to the chart
# for client agents only.
apiVersion: v1
kind: ConfigMap
metadata:
name: consul-consul-client-config
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
data:
extra-from-values.json: |-
{}
---
# Source: consul/templates/server-config-configmap.yaml
# StatefulSet to run the actual Consul server cluster.
apiVersion: v1
kind: ConfigMap
metadata:
name: consul-consul-server-config
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
data:
extra-from-values.json: |-
{}
---
# Source: consul/templates/client-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul-consul-client
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
---
# Source: consul/templates/server-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul-consul-server
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
---
# Source: consul/templates/client-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul-consul-client
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
rules:
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
resourceNames:
- consul-consul-client
verbs:
- use
---
# Source: consul/templates/server-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul-consul-server
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
rules:
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
resourceNames:
- consul-consul-server
verbs:
- use
---
# Source: consul/templates/client-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul-consul-client
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul-consul-client
subjects:
- kind: ServiceAccount
name: consul-consul-client
namespace: default
---
# Source: consul/templates/server-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul-consul-server
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul-consul-server
subjects:
- kind: ServiceAccount
name: consul-consul-server
namespace: default
---
# Source: consul/templates/dns-service.yaml
# Service for Consul DNS.
apiVersion: v1
kind: Service
metadata:
name: consul-consul-dns
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
spec:
ports:
- name: dns-tcp
port: 53
protocol: "TCP"
targetPort: dns-tcp
- name: dns-udp
port: 53
protocol: "UDP"
targetPort: dns-udp
selector:
app: consul
release: "consul"
hasDNS: "true"
---
# Source: consul/templates/server-service.yaml
# Headless service for Consul server DNS entries. This service should only
# point to Consul servers. For access to an agent, one should assume that
# the agent is installed locally on the node and the NODE_IP should be used.
# If the node can't run a Consul agent, then this service can be used to
# communicate directly to a server agent.
apiVersion: v1
kind: Service
metadata:
name: consul-consul-server
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
annotations:
# This must be set in addition to publishNotReadyAddresses due
# to an open issue where it may not work:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None
# We want the servers to become available even if they're not ready
# since this DNS is also used for join operations.
publishNotReadyAddresses: true
ports:
- name: http
port: 8500
targetPort: 8500
- name: serflan-tcp
protocol: "TCP"
port: 8301
targetPort: 8301
- name: serflan-udp
protocol: "UDP"
port: 8301
targetPort: 8301
- name: serfwan-tcp
protocol: "TCP"
port: 8302
targetPort: 8302
- name: serfwan-udp
protocol: "UDP"
port: 8302
targetPort: 8302
- name: server
port: 8300
targetPort: 8300
- name: dns-tcp
protocol: "TCP"
port: 8600
targetPort: dns-tcp
- name: dns-udp
protocol: "UDP"
port: 8600
targetPort: dns-udp
selector:
app: consul
release: "consul"
component: server
---
# Source: consul/templates/ui-service.yaml
# UI Service for Consul Server
apiVersion: v1
kind: Service
metadata:
name: consul-consul-ui
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
spec:
selector:
app: consul
release: "consul"
component: server
ports:
- name: http
port: 80
targetPort: 8500
---
# Source: consul/templates/client-daemonset.yaml
# DaemonSet to run the Consul clients on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: consul-consul
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
spec:
selector:
matchLabels:
app: consul
chart: consul-helm
release: consul
component: client
hasDNS: "true"
template:
metadata:
labels:
app: consul
chart: consul-helm
release: consul
component: client
hasDNS: "true"
annotations:
"consul.hashicorp.com/connect-inject": "false"
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: consul-consul-client
# Consul agents require a directory for data, even clients. The data
# is okay to be wiped though if the Pod is removed, so just use an
# emptyDir volume.
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: consul-consul-client-config
containers:
- name: consul
image: "consul:1.5.0"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: GOSSIP_KEY
valueFrom:
secretKeyRef:
name: consul-consul-gossip-encryption
key: gossip-key
command:
- "/bin/sh"
- "-ec"
- |
CONSUL_FULLNAME="consul-consul"
exec /bin/consul agent \
-node="${NODE}" \
-advertise="${POD_IP}" \
-bind=0.0.0.0 \
-client=0.0.0.0 \
-config-dir=/consul/config \
-datacenter=gcp-poc \
-data-dir=/consul/data \
-encrypt="${GOSSIP_KEY}" \
-retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-domain=consul
volumeMounts:
- name: data
mountPath: /consul/data
- name: config
mountPath: /consul/config
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
ports:
- containerPort: 8500
hostPort: 8500
name: http
- containerPort: 8502
hostPort: 8502
name: grpc
- containerPort: 8301
name: serflan
- containerPort: 8302
name: serfwan
- containerPort: 8300
name: server
- containerPort: 8600
name: dns-tcp
protocol: "TCP"
- containerPort: 8600
name: dns-udp
protocol: "UDP"
readinessProbe:
# NOTE(mitchellh): when our HTTP status endpoints support the
# proper status codes, we should switch to that. This is temporary.
exec:
command:
- "/bin/sh"
- "-ec"
- |
curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
---
# Source: consul/templates/server-statefulset.yaml
# StatefulSet to run the actual Consul server cluster.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul-consul-server
namespace: default
labels:
app: consul
chart: consul-helm
heritage: Tiller
release: consul
spec:
serviceName: consul-consul-server
podManagementPolicy: Parallel
replicas: 1
selector:
matchLabels:
app: consul
chart: consul-helm
release: consul
component: server
hasDNS: "true"
template:
metadata:
labels:
app: consul
chart: consul-helm
release: consul
component: server
hasDNS: "true"
annotations:
"consul.hashicorp.com/connect-inject": "false"
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: consul
release: "consul"
component: server
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
serviceAccountName: consul-consul-server
securityContext:
fsGroup: 1000
volumes:
- name: config
configMap:
name: consul-consul-server-config
containers:
- name: consul
image: "consul:1.5.0"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: GOSSIP_KEY
valueFrom:
secretKeyRef:
name: consul-consul-gossip-encryption
key: gossip-key
command:
- "/bin/sh"
- "-ec"
- |
CONSUL_FULLNAME="consul-consul"
exec /bin/consul agent \
-advertise="${POD_IP}" \
-bind=0.0.0.0 \
-bootstrap-expect=1 \
-client=0.0.0.0 \
-config-dir=/consul/config \
-datacenter=gcp-poc \
-data-dir=/consul/data \
-domain=consul \
-encrypt="${GOSSIP_KEY}" \
-ui \
-retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-server
volumeMounts:
- name: data-default
mountPath: /consul/data
- name: config
mountPath: /consul/config
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
ports:
- containerPort: 8500
name: http
- containerPort: 8301
name: serflan
- containerPort: 8302
name: serfwan
- containerPort: 8300
name: server
- containerPort: 8600
name: dns-tcp
protocol: "TCP"
- containerPort: 8600
name: dns-udp
protocol: "UDP"
readinessProbe:
# NOTE(mitchellh): when our HTTP status endpoints support the
# proper status codes, we should switch to that. This is temporary.
exec:
command:
- "/bin/sh"
- "-ec"
- |
curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
failureThreshold: 2
initialDelaySeconds: 5
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 5
volumeClaimTemplates:
- metadata:
name: data-default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
# Source: consul/templates/client-azure-secret.yaml
# Cloud Azure Secret with extra configuration specified directly to the chart
# for client agents only.
---
# Source: consul/templates/connect-inject-clusterrole.yaml
# The ClusterRole to enable the Connect injector to get, list, watch and patch MutatingWebhookConfiguration.
---
# Source: consul/templates/connect-inject-deployment.yaml
# The deployment for running the Connect sidecar injector
---
# Source: consul/templates/connect-inject-mutatingwebhook.yaml
# The MutatingWebhookConfiguration to enable the Connect injector.
---
# Source: consul/templates/connect-inject-service.yaml
# The service for the Connect sidecar injector
---
# Source: consul/templates/sync-catalog-deployment.yaml
# The deployment for running the sync-catalog pod
LAST DEPLOYED: Fri Aug 2 09:07:09 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRole
NAME AGE
consul-consul-client 2s
consul-consul-server 2s
==> v1/ClusterRoleBinding
NAME AGE
consul-consul-client 2s
consul-consul-server 2s
==> v1/ConfigMap
NAME DATA AGE
consul-consul-client-config 1 2s
consul-consul-server-config 1 2s
==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
consul-consul 1 1 0 1 0 <none> 2s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
consul-consul-lcxbn 0/1 Running 0 2s
consul-consul-server-0 0/1 Pending 0 2s
==> v1/Secret
NAME TYPE DATA AGE
consul-consul-gossip-encryption Opaque 1 2s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-consul-dns ClusterIP 10.23.245.149 <none> 53/TCP,53/UDP 2s
consul-consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 2s
consul-consul-ui ClusterIP 10.23.244.61 <none> 80/TCP 2s
==> v1/ServiceAccount
NAME SECRETS AGE
consul-consul-client 1 2s
consul-consul-server 1 2s
==> v1/StatefulSet
NAME READY AGE
consul-consul-server 0/1 2s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
consul-consul-server N/A 0 0 2s
==> v1beta1/PodSecurityPolicy
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
consul-consul-client false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,downwardAPI
consul-consul-server false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
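The two PodSecurityPolicy rows above are rendered because the computed values show global.enablePodSecurityPolicies: true; assuming that flag is what gates the PSP templates in this chart build, they can be dropped with a values override along these lines:

global:
  enablePodSecurityPolicies: false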
NOTES:
Thank you for installing HashiCorp Consul!
Now that you have deployed Consul, you should look over the docs on using
Consul with Kubernetes available here:
https://www.consul.io/docs/platform/k8s/index.html
Your release is named consul. To learn more about the release, try:
$ helm status consul
$ helm get consul
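Since this manual install succeeds without --verify while the helmfile run with --verify failed, the verification hypothesis can be checked directly by fetching the chart both ways outside of helmfile:

helm fetch helm/consul --version 0.8.1             # expected to succeed, as the debug install above did
helm fetch helm/consul --version 0.8.1 --verify    # expected to fail if no provenance (.prov) file is published for the chart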