This document describes the steps for a basic demonstration of Pod Security Admission (PSA) and Security Context Constraints (SCCs) in an OpenShift cluster (OCP 4.15).
- Create a new project:
oc new-project demo
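A newly created namespace already carries PSA labels managed by OpenShift. As a quick sanity check (the exact values depend on your cluster's configuration; typically only warn/audit levels are set, not enforce):

```shell
# Show the PSA-related labels on the new namespace
oc get ns demo -o yaml | grep pod-security
```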
- Create a role and a service account, and bind it to the role:
oc apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: demo-role
  namespace: demo
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  resourceNames:
  - restricted-v2
  verbs:
  - use
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-rolebinding
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: demo-role
subjects:
- kind: ServiceAccount
  name: demo-sa
  namespace: demo
EOF
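Before going further, you can verify that the service account is actually allowed to use the SCC, using impersonation (a sketch; the command prints yes or no):

```shell
# Check whether demo-sa may "use" the restricted-v2 SCC via the new role binding
oc auth can-i use securitycontextconstraints/restricted-v2 \
  --as=system:serviceaccount:demo:demo-sa -n demo
```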
- Create a deployment using demo-sa:
oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      serviceAccountName: demo-sa
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
EOF
Check that deployment pods are ready:
oc get pods
Then, inspect the pod's manifest:
oc get pod hello-openshift-5549d985b9-pxzqp -o yaml | less
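If you only want the two fields discussed below, grepping the manifest is enough (a sketch; it selects the pod by label instead of by its generated name):

```shell
# The SCC that admitted the pod
oc get pod -n demo -l app=hello-openshift -o yaml | grep 'openshift.io/scc'
# The mutated security contexts (pod- and container-level)
oc get pod -n demo -l app=hello-openshift -o yaml | grep -A 6 'securityContext:'
```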
In particular, look for:
- the annotation openshift.io/scc in the pod's metadata
- the field securityContext in the pod's spec
The pod has been admitted with the restricted-v2 SCC, and its security context has been mutated to match the requirements of the chosen SCC.
Let's say that you need to change the security context of your pods and add the SYS_PTRACE capability, so that the container can debug other processes.
First, delete the deployment:
oc delete deployment hello-openshift
Then, create it again as shown below, adding the required capability:
oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      serviceAccountName: demo-sa
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
        securityContext:
          capabilities:
            add:
            - SYS_PTRACE
EOF
We can now observe two things:
- There is a warning that this change would violate the restricted:v1.24 pod security level (however, this is just a warning for now).
- The deployment does not proceed, because the pods are unable to validate against any SCC that the service account has access to (visible with oc get deployment hello-openshift -o yaml).
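The underlying error is reported on the deployment's ReplicaSet rather than on the deployment itself; one way to surface it (a sketch, assuming default event reasons):

```shell
# The ReplicaSet fails to create pods; look for FailedCreate events
# mentioning "unable to validate against any security context constraint"
oc get events -n demo --field-selector reason=FailedCreate
```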
Currently, demo-sa can only use the restricted-v2 SCC, therefore it can only create pods that comply with that SCC. To be able to use the additional capability (SYS_PTRACE), we need to grant it permission to use the privileged SCC.
To perform this change, we must add the privileged SCC to the demo-role role, by editing it:
oc edit role demo-role
...
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - restricted-v2
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
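If you prefer a non-interactive change over oc edit, the same result can be sketched with a JSON patch (this assumes the SCC rule is the first entry in the role's rules list):

```shell
# Append "privileged" to the resourceNames of the role's first rule
oc patch role demo-role -n demo --type=json \
  -p '[{"op": "add", "path": "/rules/0/resourceNames/-", "value": "privileged"}]'
```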
Now, inspect the labels of the demo namespace:
oc get ns demo -o yaml
Note the change in the PSA level: it is now set to privileged instead of restricted. This is because an internal controller (the label syncer) observed a change in the namespace's service account permissions, and automatically updated the PSA labels.
Finally, delete and recreate the deployment, and observe that the pods are now admitted with the privileged SCC.
Now let's say that an admin decides that the demo namespace should not have the privileged PSA level, but rather restricted. Edit the namespace:
oc edit ns demo
Change pod-security.kubernetes.io/audit and pod-security.kubernetes.io/warn to restricted:
...
  labels:
    kubernetes.io/metadata.name: demo
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.24
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.24
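The same edit can be done non-interactively (a sketch; --overwrite is needed because the labels already exist):

```shell
oc label ns demo --overwrite \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
```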
Now, delete and recreate the deployment. The service account still has access to the privileged SCC, but PSA now objects: the deployment shows the following warning, meaning that the restricted PSA level does not accept the security context that the deployment's pods have:
Warning: would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "hello-openshift" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "hello-openshift" must set securityContext.capabilities.drop=["ALL"]; container "hello-openshift" must not include "SYS_PTRACE" in securityContext.capabilities.add), runAsNonRoot != true (pod or container "hello-openshift" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "hello-openshift" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Note that despite the warning, the deployment proceeds and the pods are created. The reason is that we have not set pod-security.kubernetes.io/enforce in the namespace's PSA labels; we have only set audit and warn to restricted, therefore PSA is not enforcing this level, but only warning and creating audit log entries for it.
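For completeness, enforcement would be switched on by setting that third label; with it in place, pods that violate the level are rejected at admission time instead of merely warned about (a sketch, not a step of this demo):

```shell
# With enforce set, non-compliant pods are rejected outright
oc label ns demo --overwrite \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v1.24
```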
To enforce the admission of a pod under a specific SCC, set the annotation openshift.io/required-scc to the desired SCC in the pod's metadata.
First, edit the demo-role role to give access to restricted-v2 and anyuid only:
...
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  resourceNames:
  - restricted-v2
  - anyuid
  verbs:
  - use
Observe that now the PSA level for the namespace is baseline:
oc get ns demo -o yaml
Second, delete the deployment and recreate the original one (without the added capability):
oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      serviceAccountName: demo-sa
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
EOF
Check out the pod's manifest, and observe that the pod has now been admitted with the anyuid SCC (instead of the original restricted-v2). anyuid has a higher priority than restricted-v2 and therefore gets picked.
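You can confirm the ordering by comparing the SCCs' priority fields (a sketch; on a stock cluster anyuid carries priority 10 while restricted-v2 has none set):

```shell
oc get scc anyuid restricted-v2 \
  -o custom-columns='NAME:.metadata.name,PRIORITY:.priority'
```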
You might not want that, in order to avoid granting unnecessary permissions to your pod. You can require a specific SCC by setting the openshift.io/required-scc annotation on the pod. Delete and recreate the deployment as shown below:
oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
      annotations:
        openshift.io/required-scc: restricted-v2
    spec:
      serviceAccountName: demo-sa
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
EOF
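To double-check the result, inspect the admission annotation again (a sketch; it should now report restricted-v2 rather than anyuid):

```shell
oc get pod -n demo -l app=hello-openshift -o yaml | grep 'openshift.io/scc'
```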