# How to monitor a secure external etcd service with Prometheus Operator
This guide will help you monitor an external etcd cluster, that is, one not hosted inside Kubernetes.
This is often the case: many Kubernetes setups run etcd outside the cluster. The steps have been tested with kube-aws, but the same principles apply to other tools.
# Step 1 - Make the etcd certificates available to Prometheus pod
Prometheus Operator (and Prometheus) allow us to specify a tlsConfig. This is required because your etcd metrics endpoint is most likely secured with TLS.
## a - Create the secrets in the namespace
Prometheus Operator allows us to mount secrets in the pod. Loaded as files, the secrets become available inside the Prometheus pod.
`kubectl -n monitoring create secret generic etcd-certs --from-file=CREDENTIAL_PATH/etcd-client.pem --from-file=CREDENTIAL_PATH/etcd-client-key.pem --from-file=CREDENTIAL_PATH/ca.pem`
where CREDENTIAL_PATH is the path to the etcd client credentials on your workstation
(kube-aws stores them inside its credentials folder).
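To double-check that all three files made it into the secret, describe it (the secret and file names below come from the command above):
```
kubectl -n monitoring describe secret etcd-certs
# The Data section should list ca.pem, etcd-client.pem and etcd-client-key.pem
```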
## b - Get Prometheus Operator to load the secret
In the previous step we named the secret `etcd-certs`.
Edit `prometheus-operator/contrib/kube-prometheus/manifests/prometheus/prometheus-k8s.yaml` and add the secret under the `spec` of the Prometheus object manifest:
```
secrets:
- etcd-certs
```
The manifest will look like this:
```
apiVersion: monitoring.coreos.com/v1alpha1
kind: Prometheus
metadata:
  name: k8s
  labels:
    prometheus: k8s
spec:
  replicas: 2
  secrets:
  - etcd-certs
  version: v1.7.0
```
If your Prometheus Operator is already in place, update it:
`kubectl -n monitoring replace -f contrib/kube-prometheus/manifests/prometheus/prometheus-k8s.yaml`
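Once the Prometheus pods have restarted, you can verify that the secret is mounted where the ServiceMonitor below will expect it. A quick sanity check, assuming the default kube-prometheus pod name prometheus-k8s-0:
```
kubectl -n monitoring exec prometheus-k8s-0 -c prometheus -- ls /etc/prometheus/secrets/etcd-certs/
# Should list: ca.pem  etcd-client-key.pem  etcd-client.pem
```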
# Step 2 - Create the Service, endpoints and ServiceMonitor
The manifest below creates a Service to expose the etcd metrics (port 2379), an Endpoints object pointing at the etcd nodes, and a ServiceMonitor.
Replace IP_OF_YOUR_ETCD_NODE with the IP of your etcd node. If you have more than one node, add each of them to the same addresses list.
In this example we use insecureSkipVerify: true because the default kube-aws certificates are not valid for the node IPs; they were issued for the DNS names.
```
apiVersion: v1
kind: Service
metadata:
  name: etcd-k8s
  labels:
    k8s-app: etcd
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: api
    port: 2379
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd-k8s
  labels:
    k8s-app: etcd
subsets:
- addresses:
  - ip: IP_OF_YOUR_ETCD_NODE
    nodeName: IP_OF_YOUR_ETCD_NODE
  ports:
  - name: api
    port: 2379
    protocol: TCP
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: ServiceMonitor
metadata:
  name: etcd-k8s
  labels:
    k8s-app: etcd-k8s
spec:
  jobLabel: k8s-app
  endpoints:
  - port: api
    interval: 30s
    scheme: https
    tlsConfig:
      caFile: /etc/prometheus/secrets/etcd-certs/ca.pem
      certFile: /etc/prometheus/secrets/etcd-certs/etcd-client.pem
      keyFile: /etc/prometheus/secrets/etcd-certs/etcd-client-key.pem
      insecureSkipVerify: true
  selector:
    matchLabels:
      k8s-app: etcd
  namespaceSelector:
    matchNames:
    - monitoring
```
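Save the manifest above to a file and create the objects in the monitoring namespace (the file name is just an example):
```
kubectl -n monitoring create -f etcd-service-monitor.yaml
```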
# Step 3 - Open the port
You now need to allow the nodes Prometheus runs on to reach etcd on port 2379 (or whichever port etcd uses to expose its metrics).
If you are using kube-aws, edit the etcd security group's inbound rules, specifying the security group of your worker nodes as the source.
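To confirm the rule works, you can curl the metrics endpoint from a worker node. A minimal check, assuming the etcd client credentials have been copied to that node (`-k` mirrors the insecureSkipVerify setting used above):
```
curl -k --cert etcd-client.pem --key etcd-client-key.pem \
  https://IP_OF_YOUR_ETCD_NODE:2379/metrics | head
# A page of etcd_* metric lines means the port is open and the certs work
```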
# Step 4 - Verify
Go to the Prometheus UI on :9090/config and check that an etcd job entry is present:
```
- job_name: monitoring/etcd-k8s/0
  scrape_interval: 30s
  scrape_timeout: 10s
  ...
```
On the :9090/targets page, you should see "etcd" with the UP state. If not, check the Error column for more information.
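You can also query the scraped data through the Prometheus HTTP API. A quick sketch, assuming the default pod name prometheus-k8s-0 (with jobLabel: k8s-app, the job should be named etcd):
```
kubectl -n monitoring port-forward prometheus-k8s-0 9090:9090 &
# -g stops curl from globbing the braces in the PromQL expression
curl -sg 'http://localhost:9090/api/v1/query?query=up{job="etcd"}'
# A result with "value": [..., "1"] means the etcd endpoint is scraped successfully
```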