processing file "helmfile.yaml" in directory "helmfile.d"
changing working directory to "/deployment/kubernetes/helm/helmfile.d"
first-pass rendering starting for "helmfile.yaml.part.0": inherited=&{default map[] map[]}, overrode=<nil>
first-pass uses: &{default map[] map[]}
first-pass produced: &{default map[] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[] map[]}
vals:
map[]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
repositories:
- name: stable
  url: https://kubernetes-charts.storage.googleapis.com
- name: helm
  url: <url>
  username: <username>
  password: <password>

helmDefaults:
  tillerNamespace: kube-system
  verify: true
  wait: true
  timeout: 600
  recreatePods: true
  force: true
  tls: false

# The desired states of Helm releases.
#
# Helmfile runs various helm commands to converge the current state in the live cluster to the desired state defined here.
releases:
- name: consul
  namespace: default
  chart: helm/consul
  version: 0.8.1
  values:
  - charts/chart-consul/values.yaml
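The repository credentials above are redacted placeholders. As a sketch (not part of the original file), helmfile's template functions can resolve them from the environment instead of hard-coding them in the committed file; the variable names here are illustrative:

```yaml
# Hypothetical helmfile.yaml fragment: resolve private-repo credentials at
# render time via helmfile's `requiredEnv` template function, which fails
# fast when the variable is unset. Variable names are illustrative.
repositories:
- name: helm
  url: {{ requiredEnv "HELM_REPO_URL" }}
  username: {{ requiredEnv "HELM_REPO_USERNAME" }}
  password: {{ requiredEnv "HELM_REPO_PASSWORD" }}
```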
merged environment: &{default map[] map[]}
Adding repo stable https://kubernetes-charts.storage.googleapis.com
exec: helm repo add stable https://kubernetes-charts.storage.googleapis.com
exec: helm repo add stable https://kubernetes-charts.storage.googleapis.com: "stable" has been added to your repositories
"stable" has been added to your repositories
Adding repo helm <url>
exec: helm repo add helm <url> --username <username> --password <password>
exec: helm repo add helm <url> --username <username> --password <password>: "helm" has been added to your repositories
"helm" has been added to your repositories
Updating repo
exec: helm repo update
exec: helm repo update: Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "helm" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
worker 1/1 started
successfully generated the value file at charts/chart-consul/values.yaml. produced:
global:
  enabled: true
  datacenter: gcp-poc
  gossipEncryption:
    secretName: gossip-encryption
    secretKey: gossip-key
    secretValue: "mUg0dfYHu+IKENlLu+s3mQ=="
server:
  enabled: true
  replicas: 1
  bootstrapExpect: 1
  storage: 1Gi
client:
  enabled: true
  image: null
  join: null
dns:
  enabled: true
ui:
  enabled: true
  service:
    enabled: true
worker 1/1 finished
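The `secretValue` above is a 16-byte, base64-encoded gossip key — the shape `consul keygen` produces. A minimal sketch of generating one without the consul binary, using `openssl` as a stand-in (an assumption, not taken from this log):

```shell
# Generate 16 random bytes and base64-encode them, matching the format of
# `consul keygen` and of the secretValue in the generated values file above.
key="$(openssl rand -base64 16)"
echo "$key"   # 24 base64 characters, same shape as "mUg0dfYHu+IKENlLu+s3mQ=="
```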
worker 1/1 started
Comparing consul helm/consul
exec: helm diff upgrade --reset-values --allow-unreleased consul helm/consul --version 0.8.1 --tiller-namespace kube-system --namespace default --values /tmp/values046725249 --detailed-exitcode
exec: helm diff upgrade --reset-values --allow-unreleased consul helm/consul --version 0.8.1 --tiller-namespace kube-system --namespace default --values /tmp/values046725249 --detailed-exitcode: ********************
Release was not present in Helm. Diff will show entire contents as new.
********************
default, consul-consul-client-config, ConfigMap (v1) has been added:
-
+ # Source: consul/templates/client-config-configmap.yaml
+ # ConfigMap with extra configuration specified directly to the chart
+ # for client agents only.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: consul-consul-client-config
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ data:
+   extra-from-values.json: |-
+     {}
default, consul-consul, DaemonSet (apps) has been added:
+ # Source: consul/templates/client-daemonset.yaml
+ # DaemonSet to run the Consul clients on every node.
+ apiVersion: apps/v1
+ kind: DaemonSet
+ metadata:
+   name: consul-consul
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ spec:
+   selector:
+     matchLabels:
+       app: consul
+       chart: consul-helm
+       release: consul
+       component: client
+       hasDNS: "true"
+   template:
+     metadata:
+       labels:
+         app: consul
+         chart: consul-helm
+         release: consul
+         component: client
+         hasDNS: "true"
+       annotations:
+         "consul.hashicorp.com/connect-inject": "false"
+     spec:
+       terminationGracePeriodSeconds: 10
+       serviceAccountName: consul-consul-client
+
+       # Consul agents require a directory for data, even clients. The data
+       # is okay to be wiped though if the Pod is removed, so just use an
+       # emptyDir volume.
+       volumes:
+       - name: data
+         emptyDir: {}
+       - name: config
+         configMap:
+           name: consul-consul-client-config
+
+       containers:
+       - name: consul
+         image: "consul:1.5.0"
+         env:
+         - name: POD_IP
+           valueFrom:
+             fieldRef:
+               fieldPath: status.podIP
+         - name: NAMESPACE
+           valueFrom:
+             fieldRef:
+               fieldPath: metadata.namespace
+         - name: NODE
+           valueFrom:
+             fieldRef:
+               fieldPath: spec.nodeName
+         - name: GOSSIP_KEY
+           valueFrom:
+             secretKeyRef:
+               name: consul-consul-gossip-encryption
+               key: gossip-key
+
+         command:
+         - "/bin/sh"
+         - "-ec"
+         - |
+           CONSUL_FULLNAME="consul-consul"
+           exec /bin/consul agent \
+             -node="${NODE}" \
+             -advertise="${POD_IP}" \
+             -bind=0.0.0.0 \
+             -client=0.0.0.0 \
+             -config-dir=/consul/config \
+             -datacenter=gcp-poc \
+             -data-dir=/consul/data \
+             -encrypt="${GOSSIP_KEY}" \
+             -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
+             -domain=consul
+         volumeMounts:
+         - name: data
+           mountPath: /consul/data
+         - name: config
+           mountPath: /consul/config
+         lifecycle:
+           preStop:
+             exec:
+               command:
+               - /bin/sh
+               - -c
+               - consul leave
+         ports:
+         - containerPort: 8500
+           hostPort: 8500
+           name: http
+         - containerPort: 8502
+           hostPort: 8502
+           name: grpc
+         - containerPort: 8301
+           name: serflan
+         - containerPort: 8302
+           name: serfwan
+         - containerPort: 8300
+           name: server
+         - containerPort: 8600
+           name: dns-tcp
+           protocol: "TCP"
+         - containerPort: 8600
+           name: dns-udp
+           protocol: "UDP"
+         readinessProbe:
+           # NOTE(mitchellh): when our HTTP status endpoints support the
+           # proper status codes, we should switch to that. This is temporary.
+           exec:
+             command:
+             - "/bin/sh"
+             - "-ec"
+             - |
+               curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
+               grep -E '".+"'
default, consul-consul-server, StatefulSet (apps) has been added:
+ # Source: consul/templates/server-statefulset.yaml
+ # StatefulSet to run the actual Consul server cluster.
+ apiVersion: apps/v1
+ kind: StatefulSet
+ metadata:
+   name: consul-consul-server
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ spec:
+   serviceName: consul-consul-server
+   podManagementPolicy: Parallel
+   replicas: 1
+   selector:
+     matchLabels:
+       app: consul
+       chart: consul-helm
+       release: consul
+       component: server
+       hasDNS: "true"
+   template:
+     metadata:
+       labels:
+         app: consul
+         chart: consul-helm
+         release: consul
+         component: server
+         hasDNS: "true"
+       annotations:
+         "consul.hashicorp.com/connect-inject": "false"
+     spec:
+       affinity:
+         podAntiAffinity:
+           requiredDuringSchedulingIgnoredDuringExecution:
+           - labelSelector:
+               matchLabels:
+                 app: consul
+                 release: "consul"
+                 component: server
+             topologyKey: kubernetes.io/hostname
+       terminationGracePeriodSeconds: 10
+       serviceAccountName: consul-consul-server
+       securityContext:
+         fsGroup: 1000
+       volumes:
+       - name: config
+         configMap:
+           name: consul-consul-server-config
+       containers:
+       - name: consul
+         image: "consul:1.5.0"
+         env:
+         - name: POD_IP
+           valueFrom:
+             fieldRef:
+               fieldPath: status.podIP
+         - name: NAMESPACE
+           valueFrom:
+             fieldRef:
+               fieldPath: metadata.namespace
+         - name: GOSSIP_KEY
+           valueFrom:
+             secretKeyRef:
+               name: consul-consul-gossip-encryption
+               key: gossip-key
+
+         command:
+         - "/bin/sh"
+         - "-ec"
+         - |
+           CONSUL_FULLNAME="consul-consul"
+           exec /bin/consul agent \
+             -advertise="${POD_IP}" \
+             -bind=0.0.0.0 \
+             -bootstrap-expect=1 \
+             -client=0.0.0.0 \
+             -config-dir=/consul/config \
+             -datacenter=gcp-poc \
+             -data-dir=/consul/data \
+             -domain=consul \
+             -encrypt="${GOSSIP_KEY}" \
+             -ui \
+             -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
+             -server
+         volumeMounts:
+         - name: data-default
+           mountPath: /consul/data
+         - name: config
+           mountPath: /consul/config
+         lifecycle:
+           preStop:
+             exec:
+               command:
+               - /bin/sh
+               - -c
+               - consul leave
+         ports:
+         - containerPort: 8500
+           name: http
+         - containerPort: 8301
+           name: serflan
+         - containerPort: 8302
+           name: serfwan
+         - containerPort: 8300
+           name: server
+         - containerPort: 8600
+           name: dns-tcp
+           protocol: "TCP"
+         - containerPort: 8600
+           name: dns-udp
+           protocol: "UDP"
+         readinessProbe:
+           # NOTE(mitchellh): when our HTTP status endpoints support the
+           # proper status codes, we should switch to that. This is temporary.
+           exec:
+             command:
+             - "/bin/sh"
+             - "-ec"
+             - |
+               curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
+               grep -E '".+"'
+           failureThreshold: 2
+           initialDelaySeconds: 5
+           periodSeconds: 3
+           successThreshold: 1
+           timeoutSeconds: 5
+   volumeClaimTemplates:
+   - metadata:
+       name: data-default
+     spec:
+       accessModes:
+       - ReadWriteOnce
+       resources:
+         requests:
+           storage: 1Gi
default, consul-consul-client, PodSecurityPolicy (policy) has been added:
-
+ # Source: consul/templates/client-podsecuritypolicy.yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+   name: consul-consul-client
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ spec:
+   privileged: false
+   # Required to prevent escalations to root.
+   allowPrivilegeEscalation: false
+   # This is redundant with non-root + disallow privilege escalation,
+   # but we can provide it for defense in depth.
+   requiredDropCapabilities:
+   - ALL
+   # Allow core volume types.
+   volumes:
+   - 'configMap'
+   - 'emptyDir'
+   - 'projected'
+   - 'secret'
+   - 'downwardAPI'
+   hostNetwork: false
+   hostPorts:
+   - min: 8500
+     max: 8502
+   hostIPC: false
+   hostPID: false
+   runAsUser:
+     # Require the container to run without root privileges.
+     rule: 'RunAsAny'
+   seLinux:
+     rule: 'RunAsAny'
+   supplementalGroups:
+     rule: 'RunAsAny'
+   fsGroup:
+     rule: 'RunAsAny'
+   readOnlyRootFilesystem: false
default, consul-consul-server, PodSecurityPolicy (policy) has been added:
-
+ # Source: consul/templates/server-podsecuritypolicy.yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+   name: consul-consul-server
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ spec:
+   privileged: false
+   # Required to prevent escalations to root.
+   allowPrivilegeEscalation: false
+   # This is redundant with non-root + disallow privilege escalation,
+   # but we can provide it for defense in depth.
+   requiredDropCapabilities:
+   - ALL
+   # Allow core volume types.
+   volumes:
+   - 'configMap'
+   - 'emptyDir'
+   - 'projected'
+   - 'secret'
+   - 'downwardAPI'
+   - 'persistentVolumeClaim'
+   hostNetwork: false
+   hostIPC: false
+   hostPID: false
+   runAsUser:
+     # Require the container to run without root privileges.
+     rule: 'RunAsAny'
+   seLinux:
+     rule: 'RunAsAny'
+   supplementalGroups:
+     rule: 'RunAsAny'
+   fsGroup:
+     rule: 'RunAsAny'
+   readOnlyRootFilesystem: false
default, consul-consul-gossip-encryption, Secret (v1) has been added:
-
+ # Source: consul/templates/client-secret.yaml
+ # Secret with extra configuration specified directly to the chart
+ # for client agents only.
+ apiVersion: v1
+ kind: Secret
+ metadata:
+   name: consul-consul-gossip-encryption
+   labels:
+     app: consul
+     chart: consul-helm
+     release: consul
+     heritage: Tiller
+ type: Opaque
+ data:
+   gossip-key: "bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09"
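The Secret's `gossip-key` is just the `secretValue` from the generated values file run through the base64 encoding Kubernetes applies to all Secret `data:` fields, which can be checked by hand:

```shell
# Kubernetes stores Secret `data:` values base64-encoded, so the
# already-base64 gossip key from values.yaml is encoded a second time
# for the manifest.
printf '%s' 'mUg0dfYHu+IKENlLu+s3mQ==' | base64
# → bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09
```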
default, consul-consul-server-config, ConfigMap (v1) has been added:
-
+ # Source: consul/templates/server-config-configmap.yaml
+ # StatefulSet to run the actual Consul server cluster.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: consul-consul-server-config
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ data:
+   extra-from-values.json: |-
+     {}
default, consul-consul-server, ServiceAccount (v1) has been added:
-
+ # Source: consul/templates/server-serviceaccount.yaml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+   name: consul-consul-server
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
default, consul-consul-client, ClusterRole (rbac.authorization.k8s.io) has been added:
-
+ # Source: consul/templates/client-clusterrole.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+   name: consul-consul-client
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ rules:
+ - apiGroups: ["policy"]
+   resources: ["podsecuritypolicies"]
+   resourceNames:
+   - consul-consul-client
+   verbs:
+   - use
default, consul-consul-server, ClusterRole (rbac.authorization.k8s.io) has been added:
-
+ # Source: consul/templates/server-clusterrole.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+   name: consul-consul-server
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ rules:
+ - apiGroups: ["policy"]
+   resources: ["podsecuritypolicies"]
+   resourceNames:
+   - consul-consul-server
+   verbs:
+   - use
default, consul-consul-server, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
-
+ # Source: consul/templates/server-clusterrolebinding.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+   name: consul-consul-server
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: consul-consul-server
+ subjects:
+ - kind: ServiceAccount
+   name: consul-consul-server
+   namespace: default
default, consul-consul-ui, Service (v1) has been added:
-
+ # Source: consul/templates/ui-service.yaml
+ # UI Service for Consul Server
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: consul-consul-ui
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ spec:
+   selector:
+     app: consul
+     release: "consul"
+     component: server
+   ports:
+   - name: http
+     port: 80
+     targetPort: 8500
default, consul-consul-server, PodDisruptionBudget (policy) has been added:
-
+ # Source: consul/templates/server-disruptionbudget.yaml
+ # PodDisruptionBudget to prevent degrading the server cluster through
+ # voluntary cluster changes.
+ apiVersion: policy/v1beta1
+ kind: PodDisruptionBudget
+ metadata:
+   name: consul-consul-server
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ spec:
+   maxUnavailable: 0
+   selector:
+     matchLabels:
+       app: consul
+       release: "consul"
+       component: server
default, consul-consul-client, ServiceAccount (v1) has been added:
-
+ # Source: consul/templates/client-serviceaccount.yaml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+   name: consul-consul-client
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
default, consul-consul-client, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:
-
+ # Source: consul/templates/client-clusterrolebinding.yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+   name: consul-consul-client
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: consul-consul-client
+ subjects:
+ - kind: ServiceAccount
+   name: consul-consul-client
+   namespace: default
default, consul-consul-dns, Service (v1) has been added:
-
+ # Source: consul/templates/dns-service.yaml
+ # Service for Consul DNS.
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: consul-consul-dns
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+ spec:
+   ports:
+   - name: dns-tcp
+     port: 53
+     protocol: "TCP"
+     targetPort: dns-tcp
+   - name: dns-udp
+     port: 53
+     protocol: "UDP"
+     targetPort: dns-udp
+   selector:
+     app: consul
+     release: "consul"
+     hasDNS: "true"
default, consul-consul-server, Service (v1) has been added:
-
+ # Source: consul/templates/server-service.yaml
+ # Headless service for Consul server DNS entries. This service should only
+ # point to Consul servers. For access to an agent, one should assume that
+ # the agent is installed locally on the node and the NODE_IP should be used.
+ # If the node can't run a Consul agent, then this service can be used to
+ # communicate directly to a server agent.
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: consul-consul-server
+   namespace: default
+   labels:
+     app: consul
+     chart: consul-helm
+     heritage: Tiller
+     release: consul
+   annotations:
+     # This must be set in addition to publishNotReadyAddresses due
+     # to an open issue where it may not work:
+     # https://github.com/kubernetes/kubernetes/issues/58662
+     service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
+ spec:
+   clusterIP: None
+   # We want the servers to become available even if they're not ready
+   # since this DNS is also used for join operations.
+   publishNotReadyAddresses: true
+   ports:
+   - name: http
+     port: 8500
+     targetPort: 8500
+   - name: serflan-tcp
+     protocol: "TCP"
+     port: 8301
+     targetPort: 8301
+   - name: serflan-udp
+     protocol: "UDP"
+     port: 8301
+     targetPort: 8301
+   - name: serfwan-tcp
+     protocol: "TCP"
+     port: 8302
+     targetPort: 8302
+   - name: serfwan-udp
+     protocol: "UDP"
+     port: 8302
+     targetPort: 8302
+   - name: server
+     port: 8300
+     targetPort: 8300
+   - name: dns-tcp
+     protocol: "TCP"
+     port: 8600
+     targetPort: dns-tcp
+   - name: dns-udp
+     protocol: "UDP"
+     port: 8600
+     targetPort: dns-udp
+   selector:
+     app: consul
+     release: "consul"
+     component: server
identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
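That non-zero exit is what makes `--detailed-exitcode` useful as a CI gate: zero means the release matches the cluster, non-zero means it would change. A sketch of the branching pattern, with stub commands standing in for the real `helm diff upgrade ... --detailed-exitcode` invocation from this log:

```shell
# Gate an action on a diff-style exit code. `report` wraps any command and
# maps exit 0 to "no changes" and non-zero to "changes detected" -- the
# convention helm-diff's --detailed-exitcode follows.
report() {
  if "$@"; then
    echo "no changes - skipping apply"
  else
    echo "changes detected - would run helmfile apply"
  fi
}

report true             # stand-in for a diff that found no differences
report sh -c 'exit 2'   # stand-in for this log's diff (changes found)
```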
[32m+ - containerPort: 8600[0m | |
[32m+ name: dns-tcp[0m | |
[32m+ protocol: "TCP"[0m | |
[32m+ - containerPort: 8600[0m | |
[32m+ name: dns-udp[0m | |
[32m+ protocol: "UDP"[0m | |
[32m+ readinessProbe:[0m | |
[32m+ # NOTE(mitchellh): when our HTTP status endpoints support the[0m | |
[32m+ # proper status codes, we should switch to that. This is temporary.[0m | |
[32m+ exec:[0m | |
[32m+ command:[0m | |
[32m+ - "/bin/sh"[0m | |
[32m+ - "-ec"[0m | |
[32m+ - |[0m | |
[32m+ curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \[0m | |
[32m+ grep -E '".+"'[0m | |
[32m+ failureThreshold: 2[0m | |
[32m+ initialDelaySeconds: 5[0m | |
[32m+ periodSeconds: 3[0m | |
[32m+ successThreshold: 1[0m | |
[32m+ timeoutSeconds: 5 [0m | |
[32m+ volumeClaimTemplates:[0m | |
[32m+ - metadata:[0m | |
[32m+ name: data-default[0m | |
[32m+ spec:[0m | |
[32m+ accessModes:[0m | |
[32m+ - ReadWriteOnce[0m | |
[32m+ resources:[0m | |
[32m+ requests:[0m | |
[32m+ storage: 1Gi[0m | |
[33mdefault, consul-consul-client, PodSecurityPolicy (policy) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/client-podsecuritypolicy.yaml[0m | |
[32m+ apiVersion: policy/v1beta1[0m | |
[32m+ kind: PodSecurityPolicy[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-client[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ spec:[0m | |
[32m+ privileged: false[0m | |
[32m+ # Required to prevent escalations to root.[0m | |
[32m+ allowPrivilegeEscalation: false[0m | |
[32m+ # This is redundant with non-root + disallow privilege escalation,[0m | |
[32m+ # but we can provide it for defense in depth.[0m | |
[32m+ requiredDropCapabilities:[0m | |
[32m+ - ALL[0m | |
[32m+ # Allow core volume types.[0m | |
[32m+ volumes:[0m | |
[32m+ - 'configMap'[0m | |
[32m+ - 'emptyDir'[0m | |
[32m+ - 'projected'[0m | |
[32m+ - 'secret'[0m | |
[32m+ - 'downwardAPI'[0m | |
[32m+ hostNetwork: false[0m | |
[32m+ hostPorts:[0m | |
[32m+ - min: 8500[0m | |
[32m+ max: 8502[0m | |
[32m+ hostIPC: false[0m | |
[32m+ hostPID: false[0m | |
[32m+ runAsUser:[0m | |
[32m+ # Require the container to run without root privileges.[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ seLinux:[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ supplementalGroups:[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ fsGroup:[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ readOnlyRootFilesystem: false[0m | |
[33mdefault, consul-consul-server, PodSecurityPolicy (policy) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/server-podsecuritypolicy.yaml[0m | |
[32m+ apiVersion: policy/v1beta1[0m | |
[32m+ kind: PodSecurityPolicy[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ spec:[0m | |
[32m+ privileged: false[0m | |
[32m+ # Required to prevent escalations to root.[0m | |
[32m+ allowPrivilegeEscalation: false[0m | |
[32m+ # This is redundant with non-root + disallow privilege escalation,[0m | |
[32m+ # but we can provide it for defense in depth.[0m | |
[32m+ requiredDropCapabilities:[0m | |
[32m+ - ALL[0m | |
[32m+ # Allow core volume types.[0m | |
[32m+ volumes:[0m | |
[32m+ - 'configMap'[0m | |
[32m+ - 'emptyDir'[0m | |
[32m+ - 'projected'[0m | |
[32m+ - 'secret'[0m | |
[32m+ - 'downwardAPI'[0m | |
[32m+ - 'persistentVolumeClaim'[0m | |
[32m+ hostNetwork: false[0m | |
[32m+ hostIPC: false[0m | |
[32m+ hostPID: false[0m | |
[32m+ runAsUser:[0m | |
[32m+ # Require the container to run without root privileges.[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ seLinux:[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ supplementalGroups:[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ fsGroup:[0m | |
[32m+ rule: 'RunAsAny'[0m | |
[32m+ readOnlyRootFilesystem: false[0m | |
[33mdefault, consul-consul-gossip-encryption, Secret (v1) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/client-secret.yaml[0m | |
[32m+ # Secret with extra configuration specified directly to the chart[0m | |
[32m+ # for client agents only.[0m | |
[32m+ apiVersion: v1[0m | |
[32m+ kind: Secret[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-gossip-encryption[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ release: consul[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ type: Opaque[0m | |
[32m+ data:[0m | |
[32m+ gossip-key: "bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09"[0m | |
[33mdefault, consul-consul-server-config, ConfigMap (v1) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/server-config-configmap.yaml[0m | |
[32m+ # StatefulSet to run the actual Consul server cluster.[0m | |
[32m+ apiVersion: v1[0m | |
[32m+ kind: ConfigMap[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-server-config[0m | |
[32m+ namespace: default[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ data:[0m | |
[32m+ extra-from-values.json: |-[0m | |
[32m+ {}[0m | |
[33mdefault, consul-consul-server, ServiceAccount (v1) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/server-serviceaccount.yaml[0m | |
[32m+ apiVersion: v1[0m | |
[32m+ kind: ServiceAccount[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ namespace: default[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[33mdefault, consul-consul-client, ClusterRole (rbac.authorization.k8s.io) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/client-clusterrole.yaml[0m | |
[32m+ apiVersion: rbac.authorization.k8s.io/v1[0m | |
[32m+ kind: ClusterRole[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-client[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ rules:[0m | |
[32m+ - apiGroups: ["policy"][0m | |
[32m+ resources: ["podsecuritypolicies"][0m | |
[32m+ resourceNames:[0m | |
[32m+ - consul-consul-client[0m | |
[32m+ verbs:[0m | |
[32m+ - use[0m | |
[33mdefault, consul-consul-server, ClusterRole (rbac.authorization.k8s.io) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/server-clusterrole.yaml[0m | |
[32m+ apiVersion: rbac.authorization.k8s.io/v1[0m | |
[32m+ kind: ClusterRole[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ rules:[0m | |
[32m+ - apiGroups: ["policy"][0m | |
[32m+ resources: ["podsecuritypolicies"][0m | |
[32m+ resourceNames:[0m | |
[32m+ - consul-consul-server[0m | |
[32m+ verbs:[0m | |
[32m+ - use[0m | |
[33mdefault, consul-consul-server, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/server-clusterrolebinding.yaml[0m | |
[32m+ apiVersion: rbac.authorization.k8s.io/v1[0m | |
[32m+ kind: ClusterRoleBinding[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ roleRef:[0m | |
[32m+ apiGroup: rbac.authorization.k8s.io[0m | |
[32m+ kind: ClusterRole[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ subjects:[0m | |
[32m+ - kind: ServiceAccount[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ namespace: default[0m | |
[33mdefault, consul-consul-ui, Service (v1) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/ui-service.yaml[0m | |
[32m+ # UI Service for Consul Server[0m | |
[32m+ apiVersion: v1[0m | |
[32m+ kind: Service[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-ui[0m | |
[32m+ namespace: default[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ spec:[0m | |
[32m+ selector:[0m | |
[32m+ app: consul[0m | |
[32m+ release: "consul"[0m | |
[32m+ component: server[0m | |
[32m+ ports:[0m | |
[32m+ - name: http[0m | |
[32m+ port: 80[0m | |
[32m+ targetPort: 8500[0m | |
[33mdefault, consul-consul-server, PodDisruptionBudget (policy) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/server-disruptionbudget.yaml[0m | |
[32m+ # PodDisruptionBudget to prevent degrading the server cluster through[0m | |
[32m+ # voluntary cluster changes.[0m | |
[32m+ apiVersion: policy/v1beta1[0m | |
[32m+ kind: PodDisruptionBudget[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ namespace: default[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ spec:[0m | |
[32m+ maxUnavailable: 0[0m | |
[32m+ selector:[0m | |
[32m+ matchLabels:[0m | |
[32m+ app: consul[0m | |
[32m+ release: "consul"[0m | |
[32m+ component: server[0m | |
[33mdefault, consul-consul-client, ServiceAccount (v1) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/client-serviceaccount.yaml[0m | |
[32m+ apiVersion: v1[0m | |
[32m+ kind: ServiceAccount[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-client[0m | |
[32m+ namespace: default[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[33mdefault, consul-consul-client, ClusterRoleBinding (rbac.authorization.k8s.io) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/client-clusterrolebinding.yaml[0m | |
[32m+ apiVersion: rbac.authorization.k8s.io/v1[0m | |
[32m+ kind: ClusterRoleBinding[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-client[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ roleRef:[0m | |
[32m+ apiGroup: rbac.authorization.k8s.io[0m | |
[32m+ kind: ClusterRole[0m | |
[32m+ name: consul-consul-client[0m | |
[32m+ subjects:[0m | |
[32m+ - kind: ServiceAccount[0m | |
[32m+ name: consul-consul-client[0m | |
[32m+ namespace: default[0m | |
[33mdefault, consul-consul-dns, Service (v1) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/dns-service.yaml[0m | |
[32m+ # Service for Consul DNS.[0m | |
[32m+ apiVersion: v1[0m | |
[32m+ kind: Service[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-dns[0m | |
[32m+ namespace: default[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ spec:[0m | |
[32m+ ports:[0m | |
[32m+ - name: dns-tcp[0m | |
[32m+ port: 53[0m | |
[32m+ protocol: "TCP"[0m | |
[32m+ targetPort: dns-tcp[0m | |
[32m+ - name: dns-udp[0m | |
[32m+ port: 53[0m | |
[32m+ protocol: "UDP"[0m | |
[32m+ targetPort: dns-udp[0m | |
[32m+ selector:[0m | |
[32m+ app: consul[0m | |
[32m+ release: "consul"[0m | |
[32m+ hasDNS: "true"[0m | |
[33mdefault, consul-consul-server, Service (v1) has been added:[0m | |
[31m- [0m | |
[32m+ # Source: consul/templates/server-service.yaml[0m | |
[32m+ # Headless service for Consul server DNS entries. This service should only[0m | |
[32m+ # point to Consul servers. For access to an agent, one should assume that[0m | |
[32m+ # the agent is installed locally on the node and the NODE_IP should be used.[0m | |
[32m+ # If the node can't run a Consul agent, then this service can be used to[0m | |
[32m+ # communicate directly to a server agent.[0m | |
[32m+ apiVersion: v1[0m | |
[32m+ kind: Service[0m | |
[32m+ metadata:[0m | |
[32m+ name: consul-consul-server[0m | |
[32m+ namespace: default[0m | |
[32m+ labels:[0m | |
[32m+ app: consul[0m | |
[32m+ chart: consul-helm[0m | |
[32m+ heritage: Tiller[0m | |
[32m+ release: consul[0m | |
[32m+ annotations:[0m | |
[32m+ # This must be set in addition to publishNotReadyAddresses due[0m | |
[32m+ # to an open issue where it may not work:[0m | |
[32m+ # https://github.com/kubernetes/kubernetes/issues/58662[0m | |
[32m+ service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"[0m | |
[32m+ spec:[0m | |
[32m+ clusterIP: None[0m | |
[32m+ # We want the servers to become available even if they're not ready[0m | |
[32m+ # since this DNS is also used for join operations.[0m | |
[32m+ publishNotReadyAddresses: true[0m | |
[32m+ ports:[0m | |
[32m+ - name: http[0m | |
[32m+ port: 8500[0m | |
[32m+ targetPort: 8500[0m | |
[32m+ - name: serflan-tcp[0m | |
[32m+ protocol: "TCP"[0m | |
[32m+ port: 8301[0m | |
[32m+ targetPort: 8301[0m | |
[32m+ - name: serflan-udp[0m | |
[32m+ protocol: "UDP"[0m | |
[32m+ port: 8301[0m | |
[32m+ targetPort: 8301[0m | |
[32m+ - name: serfwan-tcp[0m | |
[32m+ protocol: "TCP"[0m | |
[32m+ port: 8302[0m | |
[32m+ targetPort: 8302[0m | |
[32m+ - name: serfwan-udp[0m | |
[32m+ protocol: "UDP"[0m | |
[32m+ port: 8302[0m | |
[32m+ targetPort: 8302[0m | |
[32m+ - name: server[0m | |
[32m+ port: 8300[0m | |
[32m+ targetPort: 8300[0m | |
[32m+ - name: dns-tcp[0m | |
[32m+ protocol: "TCP"[0m | |
[32m+ port: 8600[0m | |
[32m+ targetPort: dns-tcp[0m | |
[32m+ - name: dns-udp[0m | |
[32m+ protocol: "UDP"[0m | |
[32m+ port: 8600[0m | |
[32m+ targetPort: dns-udp[0m | |
[32m+ selector:[0m | |
[32m+ app: consul[0m | |
[32m+ release: "consul"[0m | |
[32m+ component: server[0m | |
identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled) | |
worker 1/1 finished | |
worker 1/1 started | |
successfully generated the value file at charts/chart-consul/values.yaml. produced: | |
global: | |
enabled: true | |
datacenter: gcp-poc | |
gossipEncryption: | |
secretName: gossip-encryption | |
secretKey: gossip-key | |
secretValue: "mUg0dfYHu+IKENlLu+s3mQ==" | |
server: | |
enabled: true | |
replicas: 1 | |
bootstrapExpect: 1 | |
storage: 1Gi | |
client: | |
enabled: true | |
image: null | |
join: null | |
dns: | |
enabled: true | |
ui: | |
enabled: true | |
service: | |
enabled: true | |
worker 1/1 finished | |
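As a sanity check (not part of the original log): the `gossip-key` rendered into the `consul-consul-gossip-encryption` Secret earlier in the diff is simply the base64 encoding of the `secretValue` from this generated values file, since Kubernetes Secret `data` fields are base64-encoded. A quick verification using the two literals from this gist:

```python
import base64

# Plain-text gossip key from the generated values.yaml above.
secret_value = "mUg0dfYHu+IKENlLu+s3mQ=="

# Secret `data` fields are base64-encoded, so the chart renders
# the key in its base64 form.
encoded = base64.b64encode(secret_value.encode("ascii")).decode("ascii")

print(encoded)  # → bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09
assert encoded == "bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09"
```

This matches the `gossip-key` value shown in the Secret manifest, confirming the values file and the rendered Secret agree.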
worker 1/1 started | |
Upgrading helm/consul | |
exec: helm upgrade --install --reset-values consul helm/consul --version 0.8.1 --verify --wait --timeout 600 --force --recreate-pods --tiller-namespace kube-system --namespace default --values /tmp/values839267820 | |
exec: helm upgrade --install --reset-values consul helm/consul --version 0.8.1 --verify --wait --timeout 600 --force --recreate-pods --tiller-namespace kube-system --namespace default --values /tmp/values839267820: | |
worker 1/1 finished | |
err: release "consul" in "helmfile.yaml" failed: failed processing release consul: helm exited with status 1: | |
Error: failed to download "helm/consul" (hint: running `helm repo update` may help) | |
changing working directory back to "/deployment/kubernetes/helm" | |
in helmfile.d/helmfile.yaml: failed processing release consul: helm exited with status 1: | |
Error: failed to download "helm/consul" (hint: running `helm repo update` may help) |
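The download failure above is consistent with the repository setup at the top of this log: only the `stable` repo was ever added, while the `helm` repo that the `helm/consul` chart reference points at is declared with placeholder `<url>` and credentials. In addition, `helmDefaults.verify: true` makes helm require a signed provenance (`.prov`) file alongside the chart, which fails for unsigned charts even when the repo is reachable. A sketch of the helmfile.yaml fragment that would need real values (the placeholders below are kept as placeholders, not filled in):

```yaml
repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com
  - name: helm            # the repo that the `helm/consul` release refers to
    url: <url>            # placeholder in this log; must point at a real chart repo
    username: <username>
    password: <password>

helmDefaults:
  # With verify: true, helm demands a signed provenance file next to the
  # chart archive; unsigned charts fail to download/install.
  verify: true
```

After fixing the repo entry, `helm repo update` (as the error hint suggests) refreshes the local index. Notably, the manual `helm install --debug` session later in this gist fetches `helm/consul` successfully, so that shell evidently had the `helm` repo properly configured.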
bash-4.4# helm install --name consul helm/consul -f ./helmfile.d/charts/chart-consul/values.yaml --debug | |
[debug] Created tunnel using local port: '43631' | |
[debug] SERVER: "127.0.0.1:43631" | |
[debug] Original chart version: "" | |
[debug] Fetched helm/consul to /root/.helm/cache/archive/consul-0.8.1.tgz | |
[debug] CHART PATH: /root/.helm/cache/archive/consul-0.8.1.tgz | |
NAME: consul | |
REVISION: 1 | |
RELEASED: Fri Aug 2 09:07:09 2019 | |
CHART: consul-0.8.1 | |
USER-SUPPLIED VALUES: | |
client: | |
enabled: true | |
image: null | |
join: null | |
dns: | |
enabled: true | |
global: | |
datacenter: gcp-poc | |
enabled: true | |
gossipEncryption: | |
secretKey: gossip-key | |
secretName: gossip-encryption | |
secretValue: mUg0dfYHu+IKENlLu+s3mQ== | |
server: | |
bootstrapExpect: 1 | |
enabled: true | |
replicas: 1 | |
storage: 1Gi | |
ui: | |
enabled: true | |
service: | |
enabled: true | |
COMPUTED VALUES: | |
client: | |
annotations: null | |
enabled: true | |
extraConfig: | | |
{} | |
extraEnvironmentVars: {} | |
extraVolumes: [] | |
grpc: false | |
image: null | |
join: null | |
nodeSelector: null | |
priorityClassName: "" | |
resources: null | |
tolerations: "" | |
cloud: | |
azure: | |
secretName: az-cloud-config | |
consul_agent: | |
join_tag_name: consul_join | |
join_tag_value: server | |
enabled: false | |
connectInject: | |
aclBindingRuleSelector: serviceaccount.name!=default | |
centralConfig: | |
defaultProtocol: null | |
enabled: false | |
proxyDefaults: | | |
{} | |
certs: | |
caBundle: "" | |
certName: tls.crt | |
keyName: tls.key | |
secretName: null | |
default: false | |
enabled: false | |
image: null | |
imageConsul: null | |
imageEnvoy: null | |
namespaceSelector: null | |
nodeSelector: null | |
dns: | |
enabled: true | |
global: | |
bootstrapACLs: false | |
datacenter: gcp-poc | |
domain: consul | |
enablePodSecurityPolicies: true | |
enabled: true | |
gossipEncryption: | |
secretKey: gossip-key | |
secretName: gossip-encryption | |
secretValue: mUg0dfYHu+IKENlLu+s3mQ== | |
image: consul:1.5.0 | |
imageK8S: hashicorp/consul-k8s:0.8.1 | |
server: | |
affinity: | | |
podAntiAffinity: | |
requiredDuringSchedulingIgnoredDuringExecution: | |
- labelSelector: | |
matchLabels: | |
app: {{ template "consul.name" . }} | |
release: "{{ .Release.Name }}" | |
component: server | |
topologyKey: kubernetes.io/hostname | |
annotations: null | |
bootstrapExpect: 1 | |
connect: false | |
disruptionBudget: | |
enabled: true | |
maxUnavailable: null | |
enabled: true | |
enterpriseLicense: | |
secretKey: null | |
secretName: null | |
extraConfig: | | |
{} | |
extraEnvironmentVars: {} | |
extraVolumes: [] | |
image: null | |
nodeSelector: null | |
priorityClassName: "" | |
replicas: 1 | |
resources: null | |
storage: 1Gi | |
storageClass: null | |
tolerations: "" | |
updatePartition: 0 | |
syncCatalog: | |
aclSyncToken: | |
secretKey: null | |
secretName: null | |
consulPrefix: null | |
default: true | |
enabled: false | |
image: null | |
k8sPrefix: null | |
k8sTag: null | |
nodePortSyncType: ExternalFirst | |
nodeSelector: null | |
syncClusterIPServices: true | |
toConsul: true | |
toK8S: true | |
ui: | |
enabled: true | |
service: | |
additionalSpec: null | |
annotations: null | |
enabled: true | |
type: null | |
HOOKS: | |
MANIFEST: | |
--- | |
# Source: consul/templates/client-podsecuritypolicy.yaml | |
apiVersion: policy/v1beta1 | |
kind: PodSecurityPolicy | |
metadata: | |
name: consul-consul-client | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
spec: | |
privileged: false | |
# Required to prevent escalations to root. | |
allowPrivilegeEscalation: false | |
# This is redundant with non-root + disallow privilege escalation, | |
# but we can provide it for defense in depth. | |
requiredDropCapabilities: | |
- ALL | |
# Allow core volume types. | |
volumes: | |
- 'configMap' | |
- 'emptyDir' | |
- 'projected' | |
- 'secret' | |
- 'downwardAPI' | |
hostNetwork: false | |
hostPorts: | |
- min: 8500 | |
max: 8502 | |
hostIPC: false | |
hostPID: false | |
runAsUser: | |
# Require the container to run without root privileges. | |
rule: 'RunAsAny' | |
seLinux: | |
rule: 'RunAsAny' | |
supplementalGroups: | |
rule: 'RunAsAny' | |
fsGroup: | |
rule: 'RunAsAny' | |
readOnlyRootFilesystem: false | |
--- | |
# Source: consul/templates/server-podsecuritypolicy.yaml | |
apiVersion: policy/v1beta1 | |
kind: PodSecurityPolicy | |
metadata: | |
name: consul-consul-server | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
spec: | |
privileged: false | |
# Required to prevent escalations to root. | |
allowPrivilegeEscalation: false | |
# This is redundant with non-root + disallow privilege escalation, | |
# but we can provide it for defense in depth. | |
requiredDropCapabilities: | |
- ALL | |
# Allow core volume types. | |
volumes: | |
- 'configMap' | |
- 'emptyDir' | |
- 'projected' | |
- 'secret' | |
- 'downwardAPI' | |
- 'persistentVolumeClaim' | |
hostNetwork: false | |
hostIPC: false | |
hostPID: false | |
runAsUser: | |
# Require the container to run without root privileges. | |
rule: 'RunAsAny' | |
seLinux: | |
rule: 'RunAsAny' | |
supplementalGroups: | |
rule: 'RunAsAny' | |
fsGroup: | |
rule: 'RunAsAny' | |
readOnlyRootFilesystem: false | |
--- | |
# Source: consul/templates/server-disruptionbudget.yaml | |
# PodDisruptionBudget to prevent degrading the server cluster through | |
# voluntary cluster changes. | |
apiVersion: policy/v1beta1 | |
kind: PodDisruptionBudget | |
metadata: | |
name: consul-consul-server | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
spec: | |
maxUnavailable: 0 | |
selector: | |
matchLabels: | |
app: consul | |
release: "consul" | |
component: server | |
--- | |
# Source: consul/templates/client-secret.yaml | |
# Secret with extra configuration specified directly to the chart | |
# for client agents only. | |
apiVersion: v1 | |
kind: Secret | |
metadata: | |
name: consul-consul-gossip-encryption | |
labels: | |
app: consul | |
chart: consul-helm | |
release: consul | |
heritage: Tiller | |
type: Opaque | |
data: | |
gossip-key: "bVVnMGRmWUh1K0lLRU5sTHUrczNtUT09" | |
--- | |
# Source: consul/templates/client-config-configmap.yaml | |
# ConfigMap with extra configuration specified directly to the chart | |
# for client agents only. | |
apiVersion: v1 | |
kind: ConfigMap | |
metadata: | |
name: consul-consul-client-config | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
data: | |
extra-from-values.json: |- | |
{} | |
--- | |
# Source: consul/templates/server-config-configmap.yaml | |
# StatefulSet to run the actual Consul server cluster. | |
apiVersion: v1 | |
kind: ConfigMap | |
metadata: | |
name: consul-consul-server-config | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
data: | |
extra-from-values.json: |- | |
{} | |
--- | |
# Source: consul/templates/client-serviceaccount.yaml | |
apiVersion: v1 | |
kind: ServiceAccount | |
metadata: | |
name: consul-consul-client | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
--- | |
# Source: consul/templates/server-serviceaccount.yaml | |
apiVersion: v1 | |
kind: ServiceAccount | |
metadata: | |
name: consul-consul-server | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
--- | |
# Source: consul/templates/client-clusterrole.yaml | |
apiVersion: rbac.authorization.k8s.io/v1 | |
kind: ClusterRole | |
metadata: | |
name: consul-consul-client | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
rules: | |
- apiGroups: ["policy"] | |
resources: ["podsecuritypolicies"] | |
resourceNames: | |
- consul-consul-client | |
verbs: | |
- use | |
--- | |
# Source: consul/templates/server-clusterrole.yaml | |
apiVersion: rbac.authorization.k8s.io/v1 | |
kind: ClusterRole | |
metadata: | |
name: consul-consul-server | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
rules: | |
- apiGroups: ["policy"] | |
resources: ["podsecuritypolicies"] | |
resourceNames: | |
- consul-consul-server | |
verbs: | |
- use | |
--- | |
# Source: consul/templates/client-clusterrolebinding.yaml | |
apiVersion: rbac.authorization.k8s.io/v1 | |
kind: ClusterRoleBinding | |
metadata: | |
name: consul-consul-client | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
roleRef: | |
apiGroup: rbac.authorization.k8s.io | |
kind: ClusterRole | |
name: consul-consul-client | |
subjects: | |
- kind: ServiceAccount | |
name: consul-consul-client | |
namespace: default | |
--- | |
# Source: consul/templates/server-clusterrolebinding.yaml | |
apiVersion: rbac.authorization.k8s.io/v1 | |
kind: ClusterRoleBinding | |
metadata: | |
name: consul-consul-server | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
roleRef: | |
apiGroup: rbac.authorization.k8s.io | |
kind: ClusterRole | |
name: consul-consul-server | |
subjects: | |
- kind: ServiceAccount | |
name: consul-consul-server | |
namespace: default | |
--- | |
# Source: consul/templates/dns-service.yaml | |
# Service for Consul DNS. | |
apiVersion: v1 | |
kind: Service | |
metadata: | |
name: consul-consul-dns | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
spec: | |
ports: | |
- name: dns-tcp | |
port: 53 | |
protocol: "TCP" | |
targetPort: dns-tcp | |
- name: dns-udp | |
port: 53 | |
protocol: "UDP" | |
targetPort: dns-udp | |
selector: | |
app: consul | |
release: "consul" | |
hasDNS: "true" | |
--- | |
# Source: consul/templates/server-service.yaml | |
# Headless service for Consul server DNS entries. This service should only | |
# point to Consul servers. For access to an agent, one should assume that | |
# the agent is installed locally on the node and the NODE_IP should be used. | |
# If the node can't run a Consul agent, then this service can be used to | |
# communicate directly to a server agent. | |
apiVersion: v1 | |
kind: Service | |
metadata: | |
name: consul-consul-server | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
annotations: | |
# This must be set in addition to publishNotReadyAddresses due | |
# to an open issue where it may not work: | |
# https://github.com/kubernetes/kubernetes/issues/58662 | |
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" | |
spec: | |
clusterIP: None | |
# We want the servers to become available even if they're not ready | |
# since this DNS is also used for join operations. | |
publishNotReadyAddresses: true | |
ports: | |
- name: http | |
port: 8500 | |
targetPort: 8500 | |
- name: serflan-tcp | |
protocol: "TCP" | |
port: 8301 | |
targetPort: 8301 | |
- name: serflan-udp | |
protocol: "UDP" | |
port: 8301 | |
targetPort: 8301 | |
- name: serfwan-tcp | |
protocol: "TCP" | |
port: 8302 | |
targetPort: 8302 | |
- name: serfwan-udp | |
protocol: "UDP" | |
port: 8302 | |
targetPort: 8302 | |
- name: server | |
port: 8300 | |
targetPort: 8300 | |
- name: dns-tcp | |
protocol: "TCP" | |
port: 8600 | |
targetPort: dns-tcp | |
- name: dns-udp | |
protocol: "UDP" | |
port: 8600 | |
targetPort: dns-udp | |
selector: | |
app: consul | |
release: "consul" | |
component: server | |
--- | |
# Source: consul/templates/ui-service.yaml | |
# UI Service for Consul Server | |
apiVersion: v1 | |
kind: Service | |
metadata: | |
name: consul-consul-ui | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
spec: | |
selector: | |
app: consul | |
release: "consul" | |
component: server | |
ports: | |
- name: http | |
port: 80 | |
targetPort: 8500 | |
--- | |
# Source: consul/templates/client-daemonset.yaml | |
# DaemonSet to run the Consul clients on every node. | |
apiVersion: apps/v1 | |
kind: DaemonSet | |
metadata: | |
name: consul-consul | |
namespace: default | |
labels: | |
app: consul | |
chart: consul-helm | |
heritage: Tiller | |
release: consul | |
spec: | |
selector: | |
matchLabels: | |
app: consul | |
chart: consul-helm | |
release: consul | |
      component: client
      hasDNS: "true"
  template:
    metadata:
      labels:
        app: consul
        chart: consul-helm
        release: consul
        component: client
        hasDNS: "true"
      annotations:
        "consul.hashicorp.com/connect-inject": "false"
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: consul-consul-client
      # Consul agents require a directory for data, even clients. The data
      # is okay to be wiped though if the Pod is removed, so just use an
      # emptyDir volume.
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: consul-consul-client-config
      containers:
        - name: consul
          image: "consul:1.5.0"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: NODE
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: GOSSIP_KEY
              valueFrom:
                secretKeyRef:
                  name: consul-consul-gossip-encryption
                  key: gossip-key
          command:
            - "/bin/sh"
            - "-ec"
            - |
              CONSUL_FULLNAME="consul-consul"
              exec /bin/consul agent \
                -node="${NODE}" \
                -advertise="${POD_IP}" \
                -bind=0.0.0.0 \
                -client=0.0.0.0 \
                -config-dir=/consul/config \
                -datacenter=gcp-poc \
                -data-dir=/consul/data \
                -encrypt="${GOSSIP_KEY}" \
                -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
                -domain=consul
          volumeMounts:
            - name: data
              mountPath: /consul/data
            - name: config
              mountPath: /consul/config
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - consul leave
          ports:
            - containerPort: 8500
              hostPort: 8500
              name: http
            - containerPort: 8502
              hostPort: 8502
              name: grpc
            - containerPort: 8301
              name: serflan
            - containerPort: 8302
              name: serfwan
            - containerPort: 8300
              name: server
            - containerPort: 8600
              name: dns-tcp
              protocol: "TCP"
            - containerPort: 8600
              name: dns-udp
              protocol: "UDP"
          readinessProbe:
            # NOTE(mitchellh): when our HTTP status endpoints support the
            # proper status codes, we should switch to that. This is temporary.
            exec:
              command:
                - "/bin/sh"
                - "-ec"
                - |
                  curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
                  grep -E '".+"'
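The readiness probe above passes only while the agent can see a Raft leader: Consul's `/v1/status/leader` endpoint returns a quoted address when a leader exists and an empty string (`""`) otherwise, so `grep -E '".+"'` exits non-zero in the leaderless case. A minimal sketch of that check, using two hypothetical response bodies in place of the live endpoint:

```shell
# Simulate the probe's grep against two hypothetical /v1/status/leader bodies.
leader='"10.5.0.3:8300"'   # shape of the response once a leader is elected
no_leader='""'             # response while the cluster has no leader

echo "$leader" | grep -E '".+"' >/dev/null && echo "ready"
echo "$no_leader" | grep -E '".+"' >/dev/null || echo "not ready"
```

The first pipeline prints `ready`, the second prints `not ready`, mirroring how the probe flips the Pod's Ready condition.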
---
# Source: consul/templates/server-statefulset.yaml
# StatefulSet to run the actual Consul server cluster.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul-consul-server
  namespace: default
  labels:
    app: consul
    chart: consul-helm
    heritage: Tiller
    release: consul
spec:
  serviceName: consul-consul-server
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app: consul
      chart: consul-helm
      release: consul
      component: server
      hasDNS: "true"
  template:
    metadata:
      labels:
        app: consul
        chart: consul-helm
        release: consul
        component: server
        hasDNS: "true"
      annotations:
        "consul.hashicorp.com/connect-inject": "false"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: consul
                  release: "consul"
                  component: server
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      serviceAccountName: consul-consul-server
      securityContext:
        fsGroup: 1000
      volumes:
        - name: config
          configMap:
            name: consul-consul-server-config
      containers:
        - name: consul
          image: "consul:1.5.0"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: GOSSIP_KEY
              valueFrom:
                secretKeyRef:
                  name: consul-consul-gossip-encryption
                  key: gossip-key
          command:
            - "/bin/sh"
            - "-ec"
            - |
              CONSUL_FULLNAME="consul-consul"
              exec /bin/consul agent \
                -advertise="${POD_IP}" \
                -bind=0.0.0.0 \
                -bootstrap-expect=1 \
                -client=0.0.0.0 \
                -config-dir=/consul/config \
                -datacenter=gcp-poc \
                -data-dir=/consul/data \
                -domain=consul \
                -encrypt="${GOSSIP_KEY}" \
                -ui \
                -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
                -server
          volumeMounts:
            - name: data-default
              mountPath: /consul/data
            - name: config
              mountPath: /consul/config
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - consul leave
          ports:
            - containerPort: 8500
              name: http
            - containerPort: 8301
              name: serflan
            - containerPort: 8302
              name: serfwan
            - containerPort: 8300
              name: server
            - containerPort: 8600
              name: dns-tcp
              protocol: "TCP"
            - containerPort: 8600
              name: dns-udp
              protocol: "UDP"
          readinessProbe:
            # NOTE(mitchellh): when our HTTP status endpoints support the
            # proper status codes, we should switch to that. This is temporary.
            exec:
              command:
                - "/bin/sh"
                - "-ec"
                - |
                  curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
                  grep -E '".+"'
            failureThreshold: 2
            initialDelaySeconds: 5
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 5
  volumeClaimTemplates:
    - metadata:
        name: data-default
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
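The `volumeClaimTemplates` section above means each server replica gets its own PersistentVolumeClaim; by standard StatefulSet behavior the claim is named `<volumeClaimTemplate name>-<pod name>`. A small sketch composing that name from the values in this manifest:

```shell
# Compose the PVC name Kubernetes derives for a StatefulSet replica:
# <volumeClaimTemplate name>-<pod name>. The names come from the
# manifest above; the naming rule is standard StatefulSet behavior.
claim="data-default"
pod="consul-consul-server-0"
echo "${claim}-${pod}"
# -> data-default-consul-consul-server-0
# Inspect it with: kubectl get pvc data-default-consul-consul-server-0 -n default
```

This is why the consul-consul-server-0 Pod shows as Pending in the status below until that claim is bound.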
---
# Source: consul/templates/client-azure-secret.yaml
# Cloud Azure Secret with extra configuration specified directly to the chart
# for client agents only.
---
# Source: consul/templates/connect-inject-clusterrole.yaml
# The ClusterRole to enable the Connect injector to get, list, watch and patch MutatingWebhookConfiguration.
---
# Source: consul/templates/connect-inject-deployment.yaml
# The deployment for running the Connect sidecar injector
---
# Source: consul/templates/connect-inject-mutatingwebhook.yaml
# The MutatingWebhookConfiguration to enable the Connect injector.
---
# Source: consul/templates/connect-inject-service.yaml
# The service for the Connect sidecar injector
---
# Source: consul/templates/sync-catalog-deployment.yaml
# The deployment for running the sync-catalog pod
LAST DEPLOYED: Fri Aug 2 09:07:09 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                  AGE
consul-consul-client  2s
consul-consul-server  2s

==> v1/ClusterRoleBinding
NAME                  AGE
consul-consul-client  2s
consul-consul-server  2s

==> v1/ConfigMap
NAME                         DATA  AGE
consul-consul-client-config  1     2s
consul-consul-server-config  1     2s

==> v1/DaemonSet
NAME           DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
consul-consul  1        1        0      1           0          <none>         2s

==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
consul-consul-lcxbn     0/1    Running  0         2s
consul-consul-server-0  0/1    Pending  0         2s

==> v1/Secret
NAME                             TYPE    DATA  AGE
consul-consul-gossip-encryption  Opaque  1     2s

==> v1/Service
NAME                  TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                                                                   AGE
consul-consul-dns     ClusterIP  10.23.245.149  <none>       53/TCP,53/UDP                                                             2s
consul-consul-server  ClusterIP  None           <none>       8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   2s
consul-consul-ui      ClusterIP  10.23.244.61   <none>       80/TCP                                                                    2s

==> v1/ServiceAccount
NAME                  SECRETS  AGE
consul-consul-client  1        2s
consul-consul-server  1        2s

==> v1/StatefulSet
NAME                  READY  AGE
consul-consul-server  0/1    2s

==> v1beta1/PodDisruptionBudget
NAME                  MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
consul-consul-server  N/A            0                0                    2s

==> v1beta1/PodSecurityPolicy
NAME                  PRIV   CAPS  SELINUX   RUNASUSER  FSGROUP   SUPGROUP  READONLYROOTFS  VOLUMES
consul-consul-client  false        RunAsAny  RunAsAny   RunAsAny  RunAsAny  false           configMap,emptyDir,projected,secret,downwardAPI
consul-consul-server  false        RunAsAny  RunAsAny   RunAsAny  RunAsAny  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

NOTES:
Thank you for installing HashiCorp Consul!

Now that you have deployed Consul, you should look over the docs on using
Consul with Kubernetes available here:

https://www.consul.io/docs/platform/k8s/index.html

Your release is named consul. To learn more about the release, try:

  $ helm status consul
  $ helm get consul
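The Service table above lists consul-consul-ui as a ClusterIP service on port 80, so the UI is not reachable from outside the cluster directly; one way to reach it from a workstation (assuming kubectl is pointed at this cluster) is a port-forward. A sketch that only builds and prints the command to run:

```shell
# Build the port-forward command for the consul-consul-ui Service
# (ClusterIP, port 80 per the helm status output above); run it manually.
svc="consul-consul-ui"
echo "kubectl port-forward svc/${svc} 8500:80 -n default"
# then browse http://localhost:8500 while the forward is running
```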