OpenShift 3.6 Development Environment w/RBAC
# cluster-role.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: datadog
rules:
- nonResourceURLs:
  - "/version"   # Used to get apiserver version metadata
  - "/healthz"   # Healthcheck
  verbs: ["get"]
- apiGroups: [""]
  resources:
  - "nodes"
  - "namespaces"
  - "events"     # Cluster events + kube_service cache invalidation
  - "services"   # kube_service tag
  verbs: ["get", "list"]
- apiGroups: [""]
  resources:
  - "configmaps"
  resourceNames: ["datadog-leader-elector"]
  verbs: ["get", "delete", "update"]
- apiGroups: [""]
  resources:
  - "configmaps"
  verbs: ["create"]
# Your admin user needs the same permissions to be able to grant them.
# The easiest way is to bind your user to the cluster-admin role.
# See https://cloud.google.com/container-engine/docs/role-based-access-control#setting_up_role-based_access_control
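As the comment above notes, the user creating this ClusterRole must already hold the permissions it grants. On Minishift, a minimal sketch of one way to do that (the built-in system:admin account and the 'developer' username are assumptions about your local setup) is:

    # Log in as the cluster administrator bundled with Minishift
    oc login -u system:admin

    # Or grant cluster-admin to your own user ('developer' is a placeholder)
    oc adm policy add-cluster-role-to-user cluster-admin developer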
# clusterrole-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: datadog
subjects:
- kind: ServiceAccount
  name: datadog
  namespace: default
roleRef:
  kind: ClusterRole
  name: datadog
  apiGroup: rbac.authorization.k8s.io
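Once both RBAC objects have been created (steps 4 and 5 of the instructions below), a quick sanity check is to describe them and confirm the rules and the ServiceAccount subject look as expected; exact output varies by cluster version:

    oc describe clusterrole datadog
    oc describe clusterrolebinding datadog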
# dd-agent.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: datadog-agent
spec:
  selector:
    matchLabels:
      name: datadog-agent
  template:
    metadata:
      labels:
        app: datadog-agent
        name: datadog-agent
      name: datadog-agent
    spec:
      nodeSelector:
        label: local
      serviceAccountName: datadog
      containers:
      - image: datadog/agent:latest
        imagePullPolicy: Always
        name: datadog-agent
        ports:
        - containerPort: 8125
          name: dogstatsdport
          protocol: UDP
        - containerPort: 8126
          name: traceport
          protocol: TCP
        env:
        - name: DD_API_KEY
          value: <YOUR_API_KEY>
        - name: DD_COLLECT_KUBERNETES_EVENTS
          value: "true"
        - name: DD_LEADER_ELECTION
          value: "true"
        - name: KUBERNETES
          value: "yes"
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "250m"
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
        - name: procdir
          mountPath: /host/proc
          readOnly: true
        - name: cgroups
          mountPath: /host/sys/fs/cgroup
          readOnly: true
        livenessProbe:
          exec:
            command:
            - ./probe.sh
          initialDelaySeconds: 15
          periodSeconds: 5
      volumes:
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
      - hostPath:
          path: /proc
        name: procdir
      - hostPath:
          path: /sys/fs/cgroup
        name: cgroups
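Before creating the DaemonSet, the <YOUR_API_KEY> placeholder must be replaced with a real Datadog API key. One possible way to do this from the shell, assuming the manifest is saved as dd-agent.yaml, the key is exported in a DD_API_KEY shell variable, and GNU sed is available (all assumptions about your environment), is:

    # Substitute the placeholder in place before running 'oc create -f dd-agent.yaml'
    sed -i "s/<YOUR_API_KEY>/${DD_API_KEY}/" dd-agent.yaml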
# service-account.yaml
# You need to use this ServiceAccount for the dd-agent DaemonSet
apiVersion: v1
kind: ServiceAccount
metadata:
  name: datadog
automountServiceAccountToken: true
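After the ServiceAccount exists and the privileged SCC has been granted to it (step 3 of the instructions below), both can be confirmed with standard oc queries run as a cluster admin; the grep target is simply the ServiceAccount name used in these manifests:

    oc get serviceaccount datadog -n default
    oc get scc privileged -o yaml | grep datadog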
DD-ScottBeamish commented Mar 2, 2018

Minishift-specific instructions

  1. Start Minishift with the --metrics flag

    minishift start --vm-driver=virtualbox --metrics

  2. Label the default node with label=local so it matches the DaemonSet's nodeSelector

    oc label node localhost label=local
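To confirm the label from step 2 was applied (the node name localhost is specific to Minishift), list the nodes with their labels and look for label=local:

    oc get nodes --show-labels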

Installing the Datadog Agent container via DaemonSet

  1. Ensure that the current namespace is 'default'

    oc project default

  2. Create the Datadog ServiceAccount

    oc create -f service-account.yaml

  3. Apply the privileged SCC (security context constraint) to the Datadog ServiceAccount

    oc adm policy add-scc-to-user privileged -n default -z datadog

  4. Create the Datadog ClusterRole, which grants read access to the cluster objects required to gather metrics

    oc create -f cluster-role.yaml

  5. Create the ClusterRoleBinding to map the ClusterRole to the ServiceAccount

    oc create -f clusterrole-binding.yaml

  6. Create the DaemonSet, which instructs the scheduler to run one Datadog Agent pod on each node matching the nodeSelector

    oc create -f dd-agent.yaml
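Once the DaemonSet is created, a reasonable way to confirm the Agent is running is to check the DaemonSet and its pod, then inspect the Agent itself. The label selector matches the manifest above, <datadog-agent-pod-name> is a placeholder for the pod name returned by the get command, and the 'agent status' command assumes the Agent 6 image (datadog/agent) used here:

    # On a single-node Minishift this should report 1 desired / 1 ready
    oc get daemonset datadog-agent

    # Find the Agent pod, then check its logs and internal status
    oc get pods -l name=datadog-agent
    oc logs <datadog-agent-pod-name>
    oc exec <datadog-agent-pod-name> -- agent status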
