@omerlh
Last active April 20, 2023 08:50
A DaemonSet that prints the heaviest files on each node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-checker
  labels:
    app: disk-checker
spec:
  selector:
    matchLabels:
      app: disk-checker
  template:
    metadata:
      labels:
        app: disk-checker
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        image: busybox
        imagePullPolicy: IfNotPresent
        name: disk-checked
        command: ["/bin/sh"]
        args: ["-c", "du -a /host | sort -n -r | head -n 20"]
        volumeMounts:
        - name: host
          mountPath: "/host"
      volumes:
        - name: host
          hostPath:
            path: "/"
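The container's command is a plain shell pipeline over the host filesystem mounted at `/host`. The same pipeline can be tried without a cluster by pointing it at a throwaway directory; the sketch below uses a temporary directory and hypothetical file names purely for illustration.

```shell
# Sketch of the pipeline the DaemonSet runs against /host, pointed at a
# throwaway directory so it can be tried locally (no cluster needed).
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.bin"   bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$demo/small.bin" bs=1024 count=1  2>/dev/null
# du -a lists every file with its disk usage; sort -n -r puts the
# largest entries first; head keeps only the top 20.
du -a "$demo" | sort -n -r | head -n 20
```

On a node, replacing `"$demo"` with `/host` gives exactly the per-node "top 20 heaviest paths" report the DaemonSet prints to its logs.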
@manosnoam

Hi Omer,
Thanks for this interesting app!

I'm getting on OpenShift this error:
no matches for kind "DaemonSet" in version "extensions/v1beta1"

Any suggestion ?

@omerlh

omerlh commented Aug 4, 2020

Yep, it should be `apps/v1` instead, in the `apiVersion` field

@manosnoam

Tried it as we speak ;)
But now it fails on:

The DaemonSet "disk-checker" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"disk-checker"}: `selector` does not match template `labels`

@omerlh

omerlh commented Aug 4, 2020

The selector was missing; I added it now

@manosnoam

bash-4.2$ cat disk.yaml 

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-checker
  labels:
    name: disk-checker
spec:
  selector:
    matchLabels:
      name: disk-checker
  template:
    metadata:
      labels:
        app: disk-checker
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        image: busybox
        imagePullPolicy: IfNotPresent
        name: disk-checked
        command: ["/bin/sh"]
        args: ["-c", "du -a /host | sort -n -r | head -n 20"]
        volumeMounts:
        - name: host
          mountPath: "/host"
      volumes:
        - name: host
          hostPath:
            path: "/"

bash-4.2$ oc apply -f disk.yaml 
The DaemonSet "disk-checker" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"disk-checker"}: `selector` does not match template `labels`

@omerlh

omerlh commented Aug 4, 2020

Fixed :) sorry!
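For context on the fix: in `apps/v1`, `spec.selector.matchLabels` must exactly match `spec.template.metadata.labels`. The YAML above uses `name: disk-checker` in the selector but `app: disk-checker` on the template, which is what triggers the invalid-value error. A minimal aligned fragment looks like:

```yaml
spec:
  selector:
    matchLabels:
      app: disk-checker   # must match the template labels below
  template:
    metadata:
      labels:
        app: disk-checker
```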

@manosnoam

bash-4.2$ oc apply -f disk.yaml
daemonset.apps/disk-checker created

Many thanks 👍

@manosnoam

However, no pod is running:

bash-4.2$ oc get pods  -l app=disk-checker
No resources found.
bash-4.2$ oc get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/nginx-cl-b-55668cf7cd-h96dz   1/1     Running   0          6h13m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/nginx-cl-b   ClusterIP   100.96.109.206   <none>        8080/TCP   6h13m

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/disk-checker   0         0         0       0            0           <none>          3m24s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-cl-b   1/1     1            1           6h13m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-cl-b-55668cf7cd   1         1         1       6h13m
bash-4.2$ 
bash-4.2$ 
bash-4.2$ 
bash-4.2$ oc describe daemonset.apps/disk-checker
Name:           disk-checker
Selector:       app=disk-checker
Node-Selector:  <none>
Labels:         app=disk-checker
Annotations:    deprecated.daemonset.template.generation: 1
                kubectl.kubernetes.io/last-applied-configuration:
                  {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"app":"disk-checker"},"name":"disk-checker","namespace":...
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=disk-checker
  Containers:
   disk-checked:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
    Args:
      -c
      du -a /host | sort -n -r | head -n 20
    Requests:
      cpu:        150m
    Environment:  <none>
    Mounts:
      /host from host (rw)
  Volumes:
   host:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
Events:
  Type     Reason        Age                   From                  Message
  ----     ------        ----                  ----                  -------
  Warning  FailedCreate  53s (x17 over 3m37s)  daemonset-controller  Error creating: pods "disk-checker-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used provider restricted: .spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used provider restricted: .spec.securityContext.hostIPC: Invalid value: true: Host IPC is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[0].securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.hostIPC: Invalid value: true: Host IPC is not allowed to be used]
bash-4.2$ 

@omerlh

omerlh commented Aug 4, 2020

Looks like some security configuration on your cluster is blocking it

@manosnoam

This seems to solve the privileges issue when running it on an OpenShift cluster:

$ oc create serviceaccount privilegeduser
$ oc adm policy add-scc-to-user privileged -z privilegeduser

# Add "serviceAccountName: privilegeduser" to the yaml spec, and then apply the file
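To spell out that last step: the two `oc` commands create a service account and grant it the `privileged` SCC, and the DaemonSet then has to reference that account in its pod template. A fragment of the resulting spec, assuming the `privilegeduser` account from the commands above, might look like:

```yaml
spec:
  template:
    spec:
      serviceAccountName: privilegeduser  # account granted the privileged SCC
      hostPID: true
      hostIPC: true
      hostNetwork: true
```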

@omerlh

omerlh commented Aug 4, 2020

Happy to hear this was solved!

@voyager123bg

Just passing by, wanted to say thanks for the clever solution.

@omerlh

omerlh commented May 16, 2021

Thank you :)
