This workshop runs phase by phase, moving on after each activity is complete. The markdown has a corresponding repo in which each phase has its own branch; if you ever get stuck, you can peek at the repo.
- Background
- Phase 1 - Get to know Hello Pepr
- Activity 1 - Run Hello Pepr
- Phase 2 - Validating security posture
- Activity 2 - No privileged pods
- Phase 3 - Mutating security posture
- Activity 3 - Standardized security contexts
- Phase 4 - Programming organizational knowledge
- Activity 4 - When there is static, slap the TV
- Phase 5 - Operator for repeatable deployments
- Activity 5 - Deploying a webapp
- Phase 6 - Building Kubernetes manifests
- Activity 6 - Deploy from kubectl
- Phase 7 - What's next?
- Mac or Linux
- Node.js v18.0.0+ (even-numbered releases only)
- npm v10.1.0+
- Visual Studio Code for inline debugging and Pepr Capabilities creation.
- k3d - A Kubernetes cluster for npx pepr dev. Pepr modules include npm run k3d-setup if you want to test locally with K3d and Docker.
- kubectl
- jq
You are the Chief Architect at Big Enterprise Co which maintains over 200 production apps from different teams. You are overseeing an effort to migrate all apps to a new multi-cloud HA Kubernetes cluster.
It is your job to ensure:
- All 200 Apps are migrated
- Apps meet security requirements
- The teams are able to quickly release patches and updates
Big Enterprise Co maintains strict standards across the board and does not make exceptions for any team. The teams have different levels of Kubernetes experience. In order to enforce standardization, you decide to create an Admission Controller so that all resources entering the cluster are validated and mutated to meet the standards.
After researching potential Admission Controllers, you decide to use Pepr because:
- It is fully open source
- It allows the creation of policies to dictate what can enter a Kubernetes cluster, similar to Kyverno and OPA Gatekeeper
- It has a Kubernetes Watch Mechanism, similar to Operator-SDK and Kube-Builder, allowing you to write full Kubernetes native applications to simplify advanced configuration
- It is lightweight and developer friendly, with a simple, easy-to-use API, and comes with IntelliSense out of the box
- It comes with an intuitive Kubernetes Client that uses Server Side Apply as a more efficient means to speak to the Kube-APIServer.
The first order of business is to create the scaffolding for your Admission Controller; call it enterprise-admission-controller.
Initialize a new Pepr module. "Module" is what we call a project in Pepr.
npx pepr init
You will be asked for a description for the module and what to do in the event of a failure. The description becomes an annotation on the controller's deployments, and the failure behavior is used in the webhook's failurePolicy.
✔ Enter a name for the new Pepr module. This will create a new directory based on the name.
… enterprise-admission-controller
✔ (Recommended) Enter a description for the new Pepr module.
… DevopsDaysRaleigh 2024
✔ How do you want Pepr to handle errors encountered during K8s operations? › Reject the operation
To be generated:
enterprise-admission-controller
├── .eslintrc.json
├── .gitignore
├── .prettierrc
├── capabilities
│   ├── hello-pepr.samples.yaml
│   └── hello-pepr.ts
├── package.json
│   {
│     name: 'enterprise-admission-controller',
│     version: '0.0.1',
│     description: 'DevopsDaysRaleigh 2024',
│     keywords: [ 'pepr', 'k8s', 'policy-engine', 'pepr-module', 'security' ],
│     engines: { node: '>=18.0.0' },
│     pepr: {
│       uuid: 'c8219d66-6901-5ef5-bcd5-6bb66f6afbb7',
│       onError: 'reject',
│       webhookTimeout: 10,
│       customLabels: { namespace: { 'pepr.dev': '' } },
│       alwaysIgnore: { namespaces: [] },
│       includedFiles: [],
│       env: {}
│     },
│     scripts: {
│       'k3d-setup': "k3d cluster delete pepr-dev && k3d cluster create pepr-dev --k3s-arg '--debug@server:0' --wait && kubectl rollout status deployment -n kube-system"
│     },
│     dependencies: { pepr: '0.28.7' },
│     devDependencies: { typescript: '5.3.3' }
│   }
├── pepr.ts
├── README.md
└── tsconfig.json
? Create the new Pepr module? › (y/N)
A new VSCode project will pop up. Run npm i to install the modules, then spend a few moments looking over capabilities/hello-pepr.ts to get an idea of how Pepr works.
Notice how you are able to Mutate, Validate, and Watch a Kubernetes Object.
The general format for each "binding" is:
When(a.<KubernetesObject>)
  .<event>(IsCreated/IsUpdated/IsCreatedOrUpdated/IsDeleted)
  .<filters>(WithName/WithLabel/WithAnnotation/InNamespace)
  .<callback>(Mutate/Validate/Watch/Reconcile)
For example:
When(a.Namespace)
  .IsCreated()
  .WithAnnotation("DevopsDaysRaleigh 2024")
  .Mutate(ns => ns.RemoveLabel("remove-me"));
Next, create our dev cluster by running npm run k3d-setup.
Open a JavaScript Debug Terminal in VSCode. Inside the debug terminal, run:
npx pepr dev --confirm
The debug terminal will be used to look at logs during Activity 1.
Wait until you see a log like [xx:xx:xx.xxx] INFO (xxxxx): ✅ Scheduling processed. This indicates that the module is ready.
Next, open a second terminal beside the debug terminal. We'll use this new terminal to create the namespace pepr-demo.
On line 38 of capabilities/hello-pepr.ts we see that inside the Mutate callback there is a RemoveLabel("remove-me"). Create a namespace with that label and test that Pepr properly removes the remove-me label:
kubectl apply -f -<<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: pepr-demo
  labels:
    keep-me: please
    remove-me: right-now
EOF
By checking the labels we see that Pepr mutated the remove-me label:
kubectl get ns pepr-demo --show-labels
Output confirming that the remove-me label has been removed and only the keep-me label remains:
NAME STATUS AGE LABELS
pepr-demo Active 6s keep-me=please,kubernetes.io/metadata.name=pepr-demo
Next, on line 157 of capabilities/hello-pepr.ts there is a Validate that should reject any ConfigMap created with the annotation evil. To test this, create a ConfigMap in pepr-demo with an evil annotation:
kubectl apply -f -<<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-evil-cm
  namespace: pepr-demo
  annotations:
    evil: 'true'
data:
  key: ex-evil-cm-val
EOF
In the output we see that the validating webhook rejected the ConfigMap:
Error from server: error when creating "STDIN": admission webhook "pepr-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.pepr.dev" denied the request: No evil CM annotations allowed.
Finally, let's see how Watch works. On line 51 of capabilities/hello-pepr.ts, change .WithName("pepr-demo-2") to .WithName("pepr-demo"), and on line 53 change Log.info("Namespace pepr-demo-2 was created.") to Log.info("Namespace pepr-demo was updated again."). On line 60, change pepr-demo-2 to pepr-demo and save.
Lines 49-73 should look like:
When(a.Namespace)
  .IsCreatedOrUpdated()
  .WithName("pepr-demo")
  .Watch(async ns => {
    Log.info("Namespace pepr-demo was updated again.");
    try {
      // Apply the ConfigMap using K8s server-side apply
      await K8s(kind.ConfigMap).Apply({
        metadata: {
          name: "pepr-ssa-demo",
          namespace: "pepr-demo",
        },
        data: {
          "ns-uid": ns.metadata.uid,
        },
      });
    } catch (error) {
      // You can use the Log object to log messages to the Pepr controller pod
      Log.error(error, "Failed to apply ConfigMap using server-side apply.");
    }

    // You can share data between actions using the Store, including between different types of actions
    Store.setItem("watch-data", "This data was stored by a Watch Action.");
  });
Now update the pepr-demo namespace while watching the logs in the debug terminal:
kubectl label ns pepr-demo hello=devopsdaysraleigh
You should see a log with level INFO that looks like:
[xx:xx:xx.xxx] INFO (xxxxx): Namespace pepr-demo was updated again
There should also be a new ConfigMap named pepr-ssa-demo created in pepr-demo. You can confirm this with:
kubectl get cm -n pepr-demo pepr-ssa-demo -oyaml
The output will be similar to:
apiVersion: v1
data:
  ns-uid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
kind: ConfigMap
metadata:
  name: pepr-ssa-demo
  namespace: pepr-demo
  uid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
This confirms that the Kubernetes Fluent Client created a resource when Pepr received the Watch event from the Kubernetes API Server.
Watch is sometimes a better option than Mutate or Validate because it is not subject to webhook timeouts: a Watch is a call to the Kube-APIServer rather than part of an admission phase. However, Watch does require additional RBAC on the watched resources; Mutate and Validate, which run during the admission phases, do not.
Before moving on, play with the IntelliSense, starting with:
When(a.<notice_intellisense>).<notice_intellisense>.<notice_intellisense>
Developer experience is a first class citizen in Pepr and it will help you move more quickly from prototype to MVP.
Anytime you make changes or want to format your code use:
npx pepr format
This will tree-shake your code, ensuring your module is as small as possible.
Big Enterprise Co was exploited when a hacker reverse-shelled out of an over-privileged container and stole sensitive data from the nodes. In response, the company has created a zero-tolerance policy on privileged containers.
Create an action to enforce the new security standard that containers running as privileged are not allowed to enter the cluster.
Hints:
- Copy capabilities/hello-pepr.ts to capabilities/admission.ts
- Rename the HelloPepr Capability to the Admission Capability
- Remove the HelloPepr actions and create a new action for when a Pod is created or updated
- Set the capability namespaces to namespaces: [] so that the policy applies to all namespaces
- Use the helper function containers() from pepr/sdk to return all containers on a pod
- Update pepr.ts to point to admission.ts
- Look back at hello-pepr.ts to see how the validations work, and review the Validate Docs
You can compare your new Admission Capability to our solution:
import { Capability, a } from "pepr";
import { sdk } from "pepr/sdk";

export const Admission = new Capability({
  name: "Admission",
  description: "Global admission controller.",
  namespaces: [],
});

const { containers } = sdk;
const { When } = Admission;

When(a.Pod)
  .IsCreatedOrUpdated()
  .Validate(po => {
    // Traverse the list of containers to check the securityContexts
    const podContainers = containers(po);
    const offenders = podContainers.filter(
      c =>
        c.securityContext?.privileged === true ||
        c.securityContext?.allowPrivilegeEscalation === true,
    );
    return offenders.length > 0
      ? po.Deny("Privileged containers are not allowed")
      : po.Approve();
  });
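The decision the Validate callback makes can also be distilled into a plain, unit-testable predicate over container securityContexts, independent of Pepr's API (the types and function names below are illustrative, not Pepr's):

```typescript
// Minimal shapes for the fields the policy inspects (illustrative subset of
// the Kubernetes container type, not Pepr's own types).
interface SecurityContext {
  privileged?: boolean;
  allowPrivilegeEscalation?: boolean;
}

interface Container {
  name: string;
  securityContext?: SecurityContext;
}

// A container violates the zero-tolerance policy if it is privileged or
// allows privilege escalation.
function isPrivileged(c: Container): boolean {
  const sc = c.securityContext ?? {};
  return sc.privileged === true || sc.allowPrivilegeEscalation === true;
}

// Names of all offending containers in a pod; a non-empty result means Deny.
function violations(podContainers: Container[]): string[] {
  return podContainers.filter(isPrivileged).map(c => c.name);
}
```

Applied to the test manifests in the correctness check, double-privileged-po, privileged-po, and escalated-privileged-po each produce a non-empty result, while unprivileged-po and root-user-pod pass.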
Confirm Correctness
To confirm that your module correctly rejects privileged pods:
- Refresh your cluster:
npm run k3d-setup
- Format the module using
npx pepr format
- Run the module using
npx pepr dev --confirm
- Apply test resources using:
kubectl apply -f -<<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: phase-2
spec: {}
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: double-privileged-po
  name: double-privileged-po
  namespace: phase-2
spec:
  containers:
  - image: nginx
    name: double-privileged-po
    resources: {}
    securityContext:
      privileged: true
      allowPrivilegeEscalation: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: privileged-po
  name: privileged-po
  namespace: phase-2
spec:
  containers:
  - image: nginx
    name: privileged-po
    resources: {}
    securityContext:
      privileged: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: escalated-privileged-po
  name: escalated-privileged-po
  namespace: phase-2
spec:
  containers:
  - image: nginx
    name: escalated-privileged-po
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: unprivileged-po
  name: unprivileged-po
  namespace: phase-2
spec:
  containers:
  - image: nginx
    name: unprivileged-po
    resources: {}
    securityContext: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: root-user-pod
  name: root-user-pod
  namespace: phase-2
spec:
  containers:
  - image: nginx
    name: root-user-pod
    resources: {}
    securityContext:
      runAsUser: 0
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
- Check the correctness of the pods admitted into the cluster:
kubectl get po -n phase-2 --no-headers -o custom-columns=NAME:.metadata.name
The expected output is:
unprivileged-po
root-user-pod
If you got the expected output, you win! Go on to Phase 3. DO NOT DELETE YOUR CLUSTER
Big Enterprise Co has started to enhance its security posture. Your bosses are pleased, but you can't help noticing that the Pod and Container securityContexts are all over the map. Some pods are running as user 0 (which is a problem), some are running as user 655532, and some are running as user 1000. There is no standardization.
Our goal is to standardize the runAsUser securityContext for pods and containers. We do NOT want to override a reasonable runAsUser securityContext that is already set, but we do want to supply one when it is not defined. The only case where we should override an existing securityContext is when the pod or container is running as a dangerously low user ID, such as user 0.
Create a new action to Mutate pods (and the containers of said pods) to have a default runAsUser value:
- If no runAsUser value exists - assign 655532
- If a runAsUser value exists:
- override the value to set it to 1000 if the runAsUser value is set to a number less than 10
- except if a pod has the label ignore-me; in that case, do not override runAsUser even if it is less than 10
IMPORTANT When the container user changes, it can lead to problems creating files and mounting volumes, which breaks some applications. We need an escape clause; you will need it for the operator activity later in this workshop.
Hints:
- Add a new Mutate action to the capabilities/admission.ts you created in Activity 2
- Remember that pods and containers both have securityContext.runAsUser
- Consider that a pod can have containers, initContainers, and ephemeralContainers
- Use the containers function from pepr/sdk to see if you need to update any containers
- Create a containerHelper function to re-use code
- You may find it helpful to review the Mutate Docs
You can compare your updated Admission Capability to our solution:
import { Capability, a } from "pepr";
import { sdk } from "pepr/sdk";
// V1Container is the container type for the helper
import { V1Container } from "@kubernetes/client-node";

export const Admission = new Capability({
  name: "Admission",
  description: "Global admission controller.",
  namespaces: [],
});

const { containers } = sdk;
const { When } = Admission;

// existing
When(a.Pod)
  .IsCreatedOrUpdated()
  .Validate(po => {
    // ... the Activity 2 validation ...
  });

// new
const containerHelper = (container: V1Container) => {
  // ... apply the runAsUser rules to a single container ...
};

When(a.Pod)
  .IsCreatedOrUpdated()
  .Mutate(po => {
    // ... default or override runAsUser on the pod and each container ...
  });
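Stripped of Pepr's Mutate API, the override rules reduce to a small pure function that the containerHelper could delegate to. The constants come straight from the activity; the function name is illustrative, and the ignore-me label exemption is assumed to be checked before this logic runs:

```typescript
// runAsUser policy from the activity: no value -> assign the default;
// value < 10 -> override to 1000; anything else is left untouched.
const DEFAULT_USER = 655532;
const OVERRIDE_USER = 1000;
const LOW_USER_THRESHOLD = 10;

function resolveRunAsUser(current: number | undefined): number {
  if (current === undefined) {
    return DEFAULT_USER; // no runAsUser set: assign the default
  }
  if (current < LOW_USER_THRESHOLD) {
    return OVERRIDE_USER; // dangerously low user: override
  }
  return current; // acceptable value: leave as-is
}
```

This reproduces the expected outputs of the correctness checks: 5 becomes 1000, 5555 stays 5555, and an unset value becomes 655532.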
Check Correctness
To check that your module correctly mutates the runAsUser securityContexts:
- Format the module using
npx pepr format
- Run the module using
npx pepr dev --confirm
- Apply test resources using:
kubectl apply -f -<<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: phase-3
spec: {}
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: po
    ignore-me: sure
  name: ignore-me
  namespace: phase-3
spec:
  securityContext:
    runAsUser: 5
  containers:
  - image: ubuntu
    command: ["sh", "-c", "sleep 3600"]
    name: po
    securityContext:
      runAsUser: 5
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mutate-pod
  name: mutate-pod-leave-container
  namespace: phase-3
spec:
  securityContext:
    runAsUser: 5
  containers:
  - image: ubuntu
    command: ["sh", "-c", "sleep 3600"]
    name: mutate-pod
    securityContext:
      runAsUser: 5555
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mutate-pod
  name: mutate-pod-mutate-container
  namespace: phase-3
spec:
  securityContext:
    runAsUser: 5
  containers:
  - image: ubuntu
    name: mutate-pod
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      runAsUser: 5
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mutate-to-default
  name: mutate-to-default
  namespace: phase-3
spec:
  containers:
  - image: ubuntu
    name: mutate-to-default
    command: ["sh", "-c", "sleep 3600"]
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
- Check the correctness of the mutated pods:
- The ignore-me pod should not have been mutated because it has the ignore-me label. Both runAsUser securityContexts should be 5.
kubectl get po ignore-me -o custom-columns='PodSecurityContext:.spec.securityContext.runAsUser,ContainerSecurityContext:.spec.containers[*].securityContext.runAsUser' -n phase-3
expected output
PodSecurityContext ContainerSecurityContext
5 5
- The mutate-pod-leave-container pod should have mutated only the pod and not the container
kubectl get po mutate-pod-leave-container -o custom-columns='PodSecurityContext:.spec.securityContext.runAsUser,ContainerSecurityContext:.spec.containers[*].securityContext.runAsUser' -n phase-3
expected output
PodSecurityContext ContainerSecurityContext
1000 5555
- The mutate-pod-mutate-container pod should have mutated both the pod and the container. The securityContexts should have been mutated to 1000 since they were set at 5.
kubectl get po mutate-pod-mutate-container -o custom-columns='PodSecurityContext:.spec.securityContext.runAsUser,ContainerSecurityContext:.spec.containers[*].securityContext.runAsUser' -n phase-3
expected output
PodSecurityContext ContainerSecurityContext
1000 1000
- The mutate-to-default pod should have both the pod and the container securityContexts mutated to the default 655532 since none were set.
kubectl get po mutate-to-default -o custom-columns='PodSecurityContext:.spec.securityContext.runAsUser,ContainerSecurityContext:.spec.containers[*].securityContext.runAsUser' -n phase-3
expected output
PodSecurityContext ContainerSecurityContext
655532 655532
If you got the expected output, you win! Go on to Phase 4. DO NOT DELETE YOUR CLUSTER
Pepr features a full-featured Store and Schedule. Back in Phase 3 we applied default runAsUser securityContexts but ignored pods with the label ignore-me. Big Enterprise Co wants to run a job that reports, every 10 seconds, the last app that used the ignore label. You were thinking about using a Kubernetes-native CronJob, but you realize that with Pepr's Store and Schedule you can do it all in one place.
This activity focuses on learning to use Pepr's Store and Schedule.
We need to update capabilities/admission.ts to store the last pod that has the label ignore-me, and then create a schedule that runs every 10 seconds and checks the store for that pod.
Hints:
- Update the Mutate action that looks for created-or-updated pods: if the pod has the label "ignore-me", record it in the store with Store.setItem("last-ignore-me", po.Raw.metadata.name);
- Create an OnSchedule that gets the "last-ignore-me" item from the store, does a Log.info("Last ignored pod was xxx"), and then sets another store item, "pass", to "{name-of-pod}".
- You may find it helpful to review the OnSchedule Docs
- You may find it helpful to review the Store Docs
- Make sure you run Pepr with PEPR_WATCH_MODE=true in order to use the schedule
PEPR_WATCH_MODE="true" npx pepr dev --confirm
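The interplay between the Mutate action and the scheduled job boils down to two store operations. A toy sketch, with a plain Map standing in for Pepr's Store (the Map substitute and function names are illustrative, not Pepr's API):

```typescript
// A plain in-memory Map standing in for Pepr's Store (illustrative only).
const store = new Map<string, string>();

// What the Mutate action records when it sees a pod labeled ignore-me.
function recordIgnored(podName: string): void {
  store.set("last-ignore-me", podName);
}

// What the scheduled job does every 10 seconds: report the last ignored pod
// and write its name back under the "pass" key.
function reportLastIgnored(): string | undefined {
  const name = store.get("last-ignore-me");
  if (name !== undefined) {
    console.log(`Last ignored pod was ${name}`);
    store.set("pass", name);
  }
  return store.get("pass");
}
```

In the real module, the "pass" key is what the correctness check reads out of the peprstore as Admission-pass.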
You can compare your updated capabilities/admission.ts to our solution:
const { When, Store, OnSchedule } = Admission;

OnSchedule({
  name: "...",
  every: 10,
  unit: "seconds",
  run: () => {
    // ... read "last-ignore-me" from the Store, log it, set "pass" ...
  },
});

When(a.Pod)
  .IsCreatedOrUpdated()
  .Mutate(po => {
    if (po.HasLabel("ignore-me")) {
      // ... record the pod name in the Store ...
    }
  });
Check Correctness
kubectl apply -f -<<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: phase-4
spec: {}
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: legacy-app
    alert: critical
    ignore-me: sure
  name: legacy-app
  namespace: phase-4
spec:
  containers:
  - image: nginx
    name: legacy-app
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
Wait around 20 seconds and check the store:
sleep 20
kubectl get peprstore -n pepr-system -oyaml | grep pass
expected output
Admission-pass: legacy-app
Congrats! So far, 199 of 200 apps are onboarded. Unfortunately, the last app team has no Kubernetes experience, and its members are having a difficult time creating repeatable deployments; "heroics" are involved every time they need to do a release. After seeing the team pull all-nighters, you decide to build an operator to help deploy the app. Pepr natively speaks to the Kubernetes Watch API, and its Reconcile callback processes events in a queue, guaranteeing ordered, synchronous handling. An operator built on it therefore enables consistent, repeatable deployments even when the system is under heavy load.
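The ordering guarantee that makes Reconcile attractive can be pictured as a FIFO queue drained one event at a time. A toy sketch (this is an illustration of the idea, not Pepr's actual implementation):

```typescript
// Toy FIFO illustrating Reconcile's guarantee: events are handled strictly in
// arrival order, one at a time. Not Pepr's real implementation.
class EventQueue<T> {
  private items: T[] = [];
  readonly handled: T[] = [];

  push(event: T): void {
    this.items.push(event);
  }

  // Drain the queue, preserving arrival order.
  drain(handler: (e: T) => void): void {
    while (this.items.length > 0) {
      const next = this.items.shift()!;
      handler(next); // each event is fully processed before the next begins
      this.handled.push(next);
    }
  }
}
```

Because create/update/delete events for a WebApp instance are applied in the order they arrived, the cluster never ends up reflecting an older event on top of a newer one.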
The app that you'll be creating a controller for is a webapp with three major configuration options:
- Language - English, Spanish
- Theme - Dark, Light
- Replicas - 1-7
Your job is to consolidate this down to one resource so that the team can focus more on building the app and less on the deployment specifics.
Create an Operator in Pepr that reconciles on a WebApp resource.
When the Operator is deployed:
- The WebApp CRD should be created
- If the WebApp CRD is deleted, it should be automatically re-created
- If a resource owned by the WebApp (Deployment, Service, ConfigMap) is deleted, it should be automatically re-created
- If the WebApp instance is deleted, then the owned resources should also be deleted
NOTE The WebApp CRD is supplied for you.
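The cascading deletion in the last requirement is typically achieved with Kubernetes ownerReferences. A sketch of the reference each owned resource (Deployment, Service, ConfigMap) would carry — the field names follow the Kubernetes API, while the helper and types themselves are illustrative:

```typescript
// Subset of the Kubernetes ownerReference fields the operator would stamp
// onto each owned resource so that deleting the WebApp instance cascades.
interface OwnerReference {
  apiVersion: string;
  kind: string;
  name: string;
  uid: string;
}

// Minimal shape of a WebApp custom resource instance (illustrative).
interface WebAppInstance {
  apiVersion: string;
  kind: string;
  metadata: { name: string; uid: string };
}

// Build the metadata.ownerReferences entry from a WebApp instance.
function ownerRefFor(instance: WebAppInstance): OwnerReference {
  return {
    apiVersion: instance.apiVersion,
    kind: instance.kind,
    name: instance.metadata.name,
    uid: instance.metadata.uid,
  };
}
```

With this reference in place, the Kubernetes garbage collector deletes the owned Deployment, Service, and ConfigMap when the WebApp instance is deleted — no extra operator code needed for that path.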
Hints:
- You may find it helpful to review the Operator Tutorial
- You may find it helpful to review the Excellent Example Operator
- Use Reconcile to process the Events from the Kube APIServer in the order in which they come in. Reconcile Docs
- Run your pepr module with
npx pepr dev --confirm
in one terminal.
Check Correctness
- Make sure the WebApp CRD was deployed by the Controller
kubectl get crd webapps.pepr.io --no-headers
expected output
webapps.pepr.io 2024-04-08T16:29:29Z
- Deploy a webapp instance and check that a ConfigMap, a Service, and a Deployment are created:
kubectl create ns webapps;
kubectl apply -f -<<EOF
kind: WebApp
apiVersion: pepr.io/v1alpha1
metadata:
  name: webapp-light-en
  namespace: webapps
spec:
  theme: light
  language: en
  replicas: 1
EOF
The webapp instance should have a ConfigMap, Service, and Deployment called webapp-light-en:
kubectl get svc,deploy,cm -n webapps --no-headers
expected output
service/webapp-light-en ClusterIP 10.43.173.219 <none> 80/TCP 16s
deployment.apps/webapp-light-en 1/1 1 1 16s
configmap/kube-root-ca.crt 1 17s
configmap/web-content-webapp-light-en 1 16s
- Expect all replicas to be available:
kubectl get deploy -n webapps webapp-light-en --template="{{.status.availableReplicas}}"
expected output
1
- If you delete a WebApp-owned resource, it is automatically re-created:
kubectl delete cm --all -n webapps
kubectl get cm -n webapps
expected output
NAME DATA AGE
kube-root-ca.crt 1 0s
web-content-webapp-light-en 1 0s
- Expect that if you delete the WebApp Resource, the owned resources will have a cascading deletion:
kubectl delete webapps -n webapps --all
# wait several seconds
sleep 10
kubectl get cm,deploy,svc -n webapps
expected output
NAME DATA AGE
configmap/kube-root-ca.crt 1 105s
Feel free to ask questions if you missed anything.
Big Enterprise Co has a GitOps workflow, so you need to generate the Kubernetes manifests for your project. To build your code, you can run npx pepr build.
In our case, we need to extend the webhook timeout because the operator's instance data must be removed from the Store BEFORE the resource is deleted from the Kubernetes cluster. It can take several seconds for an item to be removed from the Store, so it is safer to extend the timeout.
To build your code and extend the timeout:
npx pepr build --timeout=25
To make sure that the Operator is deleted from the Store before it is deleted from the Kubernetes cluster, you can update your code to include:
When(WebApp)
  .IsDeleted()
  .Mutate(async instance => {
    await Store.removeItemAndWait(instance.Raw.metadata.name);
  });

When(a.ConfigMap)
  .IsDeleted()
  .WithLabel("pepr.dev/operator")
  .Watch(async cm => {
    const instance = JSON.parse(
      Store.getItem(cm.metadata!.labels["pepr.dev/operator"]),
    ) as a.GenericKind;
    await Deploy(instance);
  });
Alternatively, we can deploy our controller with rbac-mode scoped down to least privilege:
npx pepr build --rbac-mode=scoped
Remember, Mutate and Validate require zero extra permissions, but Watch and Reconcile do require RBAC. Running the build with rbac-mode=scoped will generate enough RBAC for you to watch the resources, but if you are CREATING, READING, UPDATING, or DELETING resources yourself (that is, making your own calls to the Kube-APIServer), you will need to add those permissions to the ClusterRole. By default, the service account has cluster admin and can make ANY API CALL! For prod, it is recommended to scope it down to the least privilege possible.
Recreate the cluster and deploy with RBAC mode scoped and the webhook timeout set to 25 seconds.
Hints:
- You may find it helpful to review the RBAC Docs
- You may find it helpful to review the Build Docs
You can deploy with RBAC mode scoped and the webhook timeout set to 25 seconds using:
npx pepr build --rbac-mode=scoped --timeout=25
Check Correctness
First, get your module up and running:
kubectl apply -f dist/pepr-module*.yaml
kubectl wait --for=condition=ready pod -l app -n pepr-system
Expected outputs:
Admission Controller Pods Ready
kubectl get deploy -n pepr-system -l pepr.dev/controller=admission
NAME READY UP-TO-DATE AVAILABLE AGE
pepr-c8219d66-6901-5ef5-bcd5-6bb66f6afbb7 2/2 2 2 3m58s
Watch Controller Pods Ready
kubectl get deploy -n pepr-system -l pepr.dev/controller=watcher
NAME READY UP-TO-DATE AVAILABLE AGE
pepr-c8219d66-6901-5ef5-bcd5-6bb66f6afbb7-watcher 1/1 1 1 3m58s
Think of a security posture that you want to enforce at your company and create it by adding more mutations and validations to your webhooks.