@skonto
Last active February 8, 2024 12:19
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.13.1/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.13.1/serving-core.yaml
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.13.1/kourier.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.13.1/serving-hpa.yaml
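The Knative install docs also have you point Serving at Kourier as the networking layer; if routes never come up, this patch (from the official Knative/Kourier install steps) sets it:

```
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```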
values1.yaml:

kube-state-metrics:
  metricLabelsAllowlist:
    - pods=[*]
    - deployments=[app.kubernetes.io/name,app.kubernetes.io/component,app.kubernetes.io/instance]
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelectorNilUsesHelmValues: false
grafana:
  sidecar:
    dashboards:
      enabled: true
      searchNamespace: ALL
helm install prometheus prometheus-community/kube-prometheus-stack -n default -f values1.yaml
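The chart's own install notes suggest verifying the stack is up before moving on, along the lines of:

```
kubectl --namespace default get pods -l "release=prometheus"
```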
values2.yaml:

prometheus:
  url: http://prometheus-kube-prometheus-prometheus.default.svc
helm install my-release-ad prometheus-community/prometheus-adapter -f values2.yaml
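Nothing above shows how `http_requests` becomes a custom metric: prometheus-adapter's default rule set already turns counters ending in `_total` into rate-based metrics named without the suffix. An explicit rule with the same effect would look roughly like this (a sketch of the adapter's rule format, not config taken from this gist):

```yaml
rules:
  custom:
    - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
      resources:
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        matches: "^(.*)_total$"
        as: "${1}"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```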
Create the custom metrics APIService so the aggregation layer routes custom.metrics.k8s.io requests to the adapter:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta2.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: my-release-ad-prometheus-adapter
    namespace: default
  version: v1beta2
  versionPriority: 100
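Once the APIService is in place, the aggregation layer should answer custom metrics queries via the adapter; a quick sanity check (jq optional):

```
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2" | jq .
```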
For a plain Kubernetes Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - image: luxas/autoscale-demo:v0.1.2
          name: metrics-provider
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sample-app
  name: sample-app
spec:
  ports:
    - name: http
      port: 8090
      protocol: TCP
      targetPort: 8080
  selector:
    app: sample-app
  type: LoadBalancer
---
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  selector:
    matchLabels:
      app: sample-app
  endpoints:
    - port: http
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    # point the HPA at the sample application created above
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  # autoscale between 1 and 10 replicas
  minReplicas: 1
  maxReplicas: 10
  metrics:
    # use a "Pods" metric, which takes the average of the
    # given metric across all pods controlled by the autoscaling target
    - type: Pods
      pods:
        # use the metric exposed above: pods/http_requests
        metric:
          name: http_requests
        # target 500 milli-requests per second,
        # i.e. 1 request every two seconds
        target:
          # Pods metrics only support the AverageValue target type
          type: AverageValue
          averageValue: 500m
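Before trusting the HPA, it's worth confirming the adapter actually serves the per-pod metric; once Prometheus has scraped the app, a query like this should return a MetricValueList:

```
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/test/pods/*/http_requests" | jq .
```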
Applying this on minikube in the test namespace:

$ minikube service list
|-----------------|----------------------------------------------------|--------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------------|----------------------------------------------------|--------------|-----------------------------|
| default | alertmanager-operated | No node port | |
| default | helloworld-go | No node port | |
| default | helloworld-go-00001 | No node port | |
| default | helloworld-go-00001-private | No node port | |
| default | kubernetes | No node port | |
| default | my-release-ad-prometheus-adapter | No node port | |
| default | prometheus-grafana | No node port | |
| default | prometheus-kube-prometheus-alertmanager | No node port | |
| default | prometheus-kube-prometheus-operator | No node port | |
| default | prometheus-kube-prometheus-prometheus | No node port | |
| default | prometheus-kube-state-metrics | No node port | |
| default | prometheus-operated | No node port | |
| default | prometheus-prometheus-node-exporter | No node port | |
| knative-serving | activator-service | No node port | |
| knative-serving | autoscaler | No node port | |
| knative-serving | autoscaler-bucket-00-of-01 | No node port | |
| knative-serving | autoscaler-hpa | No node port | |
| knative-serving | controller | No node port | |
| knative-serving | net-kourier-controller | No node port | |
| knative-serving | webhook | No node port | |
| kourier-system | kourier | http2/80 | http://192.168.39.169:31534 |
| | | https/443 | http://192.168.39.169:32580 |
| kourier-system | kourier-internal | No node port | |
| kube-system | kube-dns | No node port | |
| kube-system | prometheus-kube-prometheus-coredns | No node port | |
| kube-system | prometheus-kube-prometheus-kube-controller-manager | No node port | |
| kube-system | prometheus-kube-prometheus-kube-etcd | No node port | |
| kube-system | prometheus-kube-prometheus-kube-proxy | No node port | |
| kube-system | prometheus-kube-prometheus-kube-scheduler | No node port | |
| kube-system | prometheus-kube-prometheus-kubelet | No node port | |
| serving-test | metrics-test | No node port | |
| serving-test | metrics-test-00001 | No node port | |
| serving-test | metrics-test-00001-private | No node port | |
| serving-test | metrics-test-sm | No node port | |
| test | sample-app | http/8090 | http://192.168.39.169:30550 |
|-----------------|----------------------------------------------------|--------------|-----------------------------|
curl http://192.168.39.169:30550/
Hello! My name is sample-app-5499d69c59-vqst4. I have served 1267 requests so far.
$ oc get hpa -n test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
sample-app Deployment/sample-app 37m/500m 1 10 1 126m
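TARGETS reads as current/target average value per pod, so 37m (about 0.037 requests/s) is well under the 500m target and the HPA stays at one replica. The controller's rule is desired = ceil(currentReplicas * currentValue / targetValue); a quick sanity check with the numbers above (plain shell arithmetic, nothing cluster-specific):

```shell
# HPA rule: desired = ceil(currentReplicas * currentValue / targetValue)
current_replicas=1
current_value=37     # milli-requests/s per pod, from the TARGETS column
target_value=500     # milli-requests/s, from the HPA spec
# integer ceiling division
desired=$(( (current_replicas * current_value + target_value - 1) / target_value ))
echo "$desired"
```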
To do the same with a Knative ksvc, first apply the following in the serving-test namespace:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: metrics-test
spec:
  template:
    metadata:
      labels:
        app: metrics-test
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "10"
        autoscaling.knative.dev/target: "10"
        autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
        autoscaling.knative.dev/metric: "http_requests"
    spec:
      containers:
        - image: luxas/autoscale-demo:v0.1.2
          imagePullPolicy: Always
          ports:
            - name: http1
              containerPort: 8080
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metrics-test-sm
  labels:
    name: metrics-test-sm
spec:
  endpoints:
    - port: metrics
      scheme: http
  namespaceSelector: {}
  selector:
    matchLabels:
      name: metrics-test-sm
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: metrics-test-sm
  name: metrics-test-sm
spec:
  ports:
    - name: metrics
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    serving.knative.dev/service: metrics-test
  type: ClusterIP
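The extra plain Service is needed because a ServiceMonitor selects Services, not pods: Knative labels each revision's pods with serving.knative.dev/service, so the selector above routes the scrape to the ksvc's pods on the metrics port. Once Prometheus has scraped it, the metric should also surface through the adapter for this namespace:

```
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/serving-test/pods/*/http_requests" | jq .
```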
$ curl -H "Host: metrics-test.serving-test.example.com" http://192.168.39.169:31534/metrics
# HELP http_requests_total The amount of requests served by the server in total
# TYPE http_requests_total counter
http_requests_total 4157
$ oc get hpa -n serving-test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
metrics-test-00001 Deployment/metrics-test-00001-deployment 33m/10 1 10 1 52m
Create some load:

$ for i in {1..1000}; do curl -H "Host: metrics-test.serving-test.example.com" http://192.168.39.169:31534/metrics; done
$ oc get hpa -n serving-test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
metrics-test-00001 Deployment/metrics-test-00001-deployment 11465m/10 1 10 3 58m
$ oc get po -n serving-test --watch
NAME READY STATUS RESTARTS AGE
metrics-test-00001-deployment-59579b768d-665wx 2/2 Running 0 88m
metrics-test-00001-deployment-59579b768d-d7kqj 2/2 Running 0 47s
metrics-test-00001-deployment-59579b768d-r8dfm 0/2 Pending 0 0s
metrics-test-00001-deployment-59579b768d-r8dfm 0/2 Pending 0 0s
metrics-test-00001-deployment-59579b768d-r8dfm 0/2 ContainerCreating 0 0s
metrics-test-00001-deployment-59579b768d-r8dfm 1/2 Running 0 3s
metrics-test-00001-deployment-59579b768d-r8dfm 2/2 Running 0 4s