@aojea
Last active April 22, 2024 05:03
kube-proxy nftables and iptables vs a Service with 100k endpoints

Background

Iptables performance is limited mainly for two reasons:

  • Control plane: every update requires rewriting and reloading the whole ruleset (iptables-restore), so update cost grows with the total number of rules.
  • Data plane: rules are evaluated sequentially, so per-packet processing cost also grows linearly with the number of rules.

The kernel community moved to nftables as a replacement for iptables, with the goal of removing the existing performance bottlenecks. Kubernetes decided to implement a new nftables proxy because of this and other reasons explained in more detail in the corresponding KEP and during the Kubernetes Contributor Summit in Chicago 2023, in the session "Iptables, end of an era".

Watch the video

Methodology

To understand the improvements of nftables over iptables, we can run a scale-model test to evaluate the difference, consisting of:

  1. Create a Service with 100k endpoints (no need to create pods) and measure the time to program the dataplane (see the sketch right after this list)
  2. Create a second Service backed by a real HTTP server, ensure its rules are evaluated after the first Service's, and measure the impact by sending requests from a client to the port exposed by the Service
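The "time to program the dataplane" in step 1 can be read in two ways, both used below; a minimal sketch, assuming kubectl already points at the kind cluster:

# 1) kube-proxy log traces for the sync operations (these match the log lines shown later)
kubectl -n kube-system logs ds/kube-proxy | grep -E 'iptables restore|SyncProxyRules complete'
# 2) the kube-proxy Prometheus histograms scraped by the Prometheus install below:
#    kubeproxy_sync_proxy_rules_duration_seconds and kubeproxy_network_programming_duration_seconds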

Scale model testing

Environment

  • GCE VM (n2d-standard-48) with 48 vCPUs and 192 GB RAM
  • Kind version v0.22.0
  • Kubernetes version v1.29.2

Common steps

  1. Create the KIND cluster (use the corresponding configuration files in this gist)
kind create cluster --config kind-iptables.yaml

OR

kind create cluster --config kind-nftables.yaml
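Before changing anything, it is worth double-checking that the cluster is up and which proxy mode kube-proxy got; a quick check (the exact mode string depends on the config file used above):

kubectl get nodes -o wide
# expect "iptables" or "nftables" depending on the cluster that was created
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'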
  2. Modify kube-proxy to enable the metrics server on all addresses
# Get the current config
original_kube_proxy=$(kubectl get -oyaml -n=kube-system configmap/kube-proxy)
echo "Original kube proxy config:"
echo "${original_kube_proxy}"
# Patch it
fixed_kube_proxy=$(
    printf '%s' "${original_kube_proxy}" | sed \
        's/\(.*metricsBindAddress:\)\( .*\)/\1 "0.0.0.0:10249"/' \
    )
echo "Patched kube-proxy config:"
echo "${fixed_kube_proxy}"
printf '%s' "${fixed_kube_proxy}" | kubectl apply -f -
# restart kube-proxy
kubectl -n kube-system rollout restart ds kube-proxy
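To confirm the metrics endpoint is now reachable, one option (assuming a Linux host where the kind node IPs are directly routable and a node named kind-worker):

NODE_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-worker)
# the sync histograms should show up once kube-proxy restarts with the patched config
curl -s "http://${NODE_IP}:10249/metrics" | grep '^kubeproxy_sync_proxy_rules_duration_seconds' | head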
  3. Install prometheus (it will expose the prometheus endpoint with a NodePort Service in the monitoring namespace)
kubectl apply -f monitoring.yaml
kubectl get service -n monitoring
NAME                 TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
prometheus-service   NodePort   10.96.22.134   <none>        8080:30846/TCP   4m26s
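With Prometheus scraping kube-proxy, the rule-sync latency percentiles can also be pulled through that NodePort; a hypothetical example (substitute your node IP and the NodePort assigned in your run, 30846 above):

NODE_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane)
# p50 and p95 of kube-proxy's rule sync duration over the last 5 minutes
curl -s "http://${NODE_IP}:30846/api/v1/query" \
  --data-urlencode 'query=histogram_quantile(0.50, sum(rate(kubeproxy_sync_proxy_rules_duration_seconds_bucket[5m])) by (instance, le))'
curl -s "http://${NODE_IP}:30846/api/v1/query" \
  --data-urlencode 'query=histogram_quantile(0.95, sum(rate(kubeproxy_sync_proxy_rules_duration_seconds_bucket[5m])) by (instance, le))'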
  4. Create a Service with 100k endpoints
go run bigservice.go --endpoints 100000

Created slice svc-test-wxyz0 with 1000 endpoints
Created slice svc-test-wxyz1 with 1000 endpoints
Created slice svc-test-wxyz2 with 1000 endpoints
[... identical lines for slices 3 through 98 ...]
Created slice svc-test-wxyz99 with 1000 endpoints
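A quick way to confirm the synthetic backends were accepted by the apiserver (kubernetes.io/service-name is the standard EndpointSlice label that bigservice.go sets):

kubectl get service svc-test
# should report 100 slices, 1000 endpoints each
kubectl get endpointslices -l kubernetes.io/service-name=svc-test --no-headers | wc -l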

Iptables

Using the kind cluster we created with kube-proxy in iptables mode

  1. If we get the logs of one kube-proxy Pod, we can see it is blocked on iptables-restore
I0413 17:45:41.121569       1 trace.go:236] Trace[89257398]: "iptables restore" (13-Apr-2024 17:45:36.642) (total time: 4478ms):
Trace[89257398]: [4.478738268s] [4.478738268s] END
I0413 17:47:09.820138       1 trace.go:236] Trace[1299421304]: "iptables restore" (13-Apr-2024 17:45:41.269) (total time: 88550ms):
Trace[1299421304]: [1m28.550942623s] [1m28.550942623s] END

kube-proxy also consumes a full CPU while doing this

[image: kube-proxy CPU usage in iptables mode]
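The size of the ruleset kube-proxy has to rewrite on every sync can also be checked directly on a node; a rough check (the node name is an assumption, and each endpoint expands to a few rules, so expect several hundred thousand lines):

# count the iptables rules programmed for the Service with 100k endpoints
docker exec kind-worker sh -c 'iptables-save | wc -l'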

  2. If we check the Prometheus metrics, the p50 is very high (it sits at the maximum bucket of the histogram), and there are also gaps in the graph, probably because kube-proxy is stuck on the iptables operations

[image: kube-proxy sync latency, iptables mode]

  3. If we install an additional Service
$ kubectl apply -f svc-webapp.yaml
deployment.apps/server-deployment created
service/test-service created
$ kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP   13m
test-service       ClusterIP   10.96.254.227   <none>        80/TCP    4s
svc-test           ClusterIP   10.96.253.88    <none>        80/TCP    8m51s

and try to query it using ab

$ kubectl run -it test --image httpd:2 bash
$ kubectl exec  test --  ab -n 1000 -c 100 http://10.96.254.227/
This is ApacheBench, Version 2.3 <$Revision: 1913912 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.96.254.227 (be patient)
apr_pollset_poll: The timeout specified has expired (70007)

It times out :(
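ab gives up after its default 30-second socket timeout, so one way to distinguish "very slow" from "black-holed" is to raise that timeout and shrink the run; a variant of the same test:

# -s raises ab's per-socket timeout from the default 30 seconds
kubectl exec test -- ab -s 120 -n 100 -c 10 http://10.96.254.227/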

Nftables

Using the kind cluster we created with kube-proxy in nftables mode

  1. If kube-proxy runs with verbosity 2, we can find the network programming latency in the logs; the maximum time is about 11s. kube-proxy currently logs both the IPv4 and IPv6 proxiers, which will be fixed with kubernetes/kubernetes#122979. I still have to get more details on the CPU consumption, but there is no additional CPU load observed when using top
I0413 17:28:50.729678       1 proxier.go:950] "Syncing nftables rules"
I0413 17:28:52.853880       1 proxier.go:1551] "Reloading service nftables data" numServices=7 numEndpoints=100010
I0413 17:29:01.912845       1 proxier.go:944] "SyncProxyRules complete" elapsed="11.465378712s"
I0413 17:29:01.912886       1 proxier.go:950] "Syncing nftables rules"
I0413 17:29:02.953876       1 proxier.go:1551] "Reloading service nftables data" numServices=0 numEndpoints=0
I0413 17:29:03.323651       1 proxier.go:944] "SyncProxyRules complete" elapsed="1.410751875s"
I0413 17:29:20.352321       1 proxier.go:950] "Syncing nftables rules"
I0413 17:29:20.352402       1 proxier.go:950] "Syncing nftables rules"
I0413 17:29:21.421475       1 proxier.go:1551] "Reloading service nftables data" numServices=0 numEndpoints=0
I0413 17:29:21.856391       1 proxier.go:944] "SyncProxyRules complete" elapsed="1.50406568s"
I0413 17:29:21.856439       1 bounded_frequency_runner.go:296] sync-runner: ran, next possible in 1s, periodic in 30s
I0413 17:29:22.868422       1 proxier.go:1551] "Reloading service nftables data" numServices=7 numEndpoints=100010
I0413 17:29:31.322133       1 proxier.go:944] "SyncProxyRules complete" elapsed="10.96973879s"
I0413 17:29:31.322179       1 bounded_frequency_runner.go:296] sync-runner: ran, next possible in 1s, periodic in 30s
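To see what the nftables proxier actually programmed, the ruleset can be dumped from inside the kube-proxy container (the nft binary has to be present there for this mode to work at all; ip kube-proxy is, to the best of my knowledge, the table the nftables proxier creates):

# first lines of the kube-proxy table created by the nftables proxier
kubectl -n kube-system exec ds/kube-proxy -- nft list table ip kube-proxy | head -40
# total size of the ruleset on that node
kubectl -n kube-system exec ds/kube-proxy -- nft list ruleset | wc -l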
  2. Connecting to the port exposed by Prometheus to get the kube-proxy metrics, we can observe the p50 and p95 values

[image: kube-proxy sync latency p50, nftables mode]

[image: kube-proxy sync latency p95, nftables mode]

  3. Install a second service with a web application
kubectl apply -f svc-webapp.yaml

Get the Service ClusterIP and run several requests against it to measure the latency

kubectl exec  test --  ab -n 1000 -c 100 http://10.96.246.227/
This is ApacheBench, Version 2.3 <$Revision: 1913912 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.96.246.227 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:
Server Hostname:        10.96.246.227
Server Port:            80

Document Path:          /
Document Length:        60 bytes

Concurrency Level:      100
Time taken for tests:   0.158 seconds
Complete requests:      1000
Failed requests:        29
   (Connect: 0, Receive: 0, Length: 29, Exceptions: 0)
Total transferred:      176965 bytes
HTML transferred:       59965 bytes
Requests per second:    6333.92 [#/sec] (mean)
Time per request:       15.788 [ms] (mean)
Time per request:       0.158 [ms] (mean, across all concurrent requests)
Transfer rate:          1094.61 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    6   2.2      6      10
Processing:     0    9   6.9      7      37
Waiting:        0    7   6.9      4      37
Total:          0   15   6.6     14      41

Percentage of the requests served within a certain time (ms)
  50%     14
  66%     14
  75%     15
  80%     15
  90%     25
  95%     35
  98%     36
  99%     37
 100%     41 (longest request)

The latencies do not seem to be affected by the large Service

Conclusion

Not much to say: kube-proxy nftables seems to solve the iptables scalability and performance problems. Kudos to the netfilter people and to @danwinship for their great work.

Since kube-proxy nftables is still in alpha, most of the performance and scale work will come during beta, so the current state will most likely improve

The files used above are included below.

bigservice.go

package main

import (
    "context"
    "flag"
    "fmt"
    "net"
    "path/filepath"

    v1 "k8s.io/api/core/v1"
    discovery "k8s.io/api/discovery/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
    netutils "k8s.io/utils/net"
    "k8s.io/utils/ptr"
)

var (
    kubeconfig *string
    name       *string
    namespace  *string
    endpoints  *int
)

func main() {
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    name = flag.String("name", "svc-test", "name of the service to be created")
    namespace = flag.String("namespace", "default", "namespace for the service to be created")
    endpoints = flag.Int("endpoints", 1000, "number of endpoints for service")
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        panic(err.Error())
    }
    // create the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }
    // delete existing endpoints and services with the same name
    clientset.CoreV1().Services(*namespace).Delete(context.TODO(), *name, metav1.DeleteOptions{})
    lSelector := discovery.LabelServiceName + "=" + *name
    esList, err := clientset.DiscoveryV1().EndpointSlices(*namespace).List(context.Background(), metav1.ListOptions{LabelSelector: lSelector})
    if err != nil {
        panic(err.Error())
    }
    for _, slice := range esList.Items {
        clientset.DiscoveryV1().EndpointSlices(*namespace).Delete(context.TODO(), slice.Name, metav1.DeleteOptions{})
    }
    // Create a service with the number of endpoint addresses specified
    // generate a base service object
    svc := &v1.Service{
        ObjectMeta: metav1.ObjectMeta{
            Name:      *name,
            Namespace: *namespace,
        },
        Spec: v1.ServiceSpec{
            Type: v1.ServiceTypeClusterIP,
            Ports: []v1.ServicePort{{
                Name:       "8080",
                Port:       80,
                TargetPort: intstr.FromInt(8080),
            }},
        },
    }
    _, err = clientset.CoreV1().Services(*namespace).Create(context.TODO(), svc, metav1.CreateOptions{})
    if err != nil {
        panic(err.Error())
    }
    // generate a base endpoint slice object
    // synthetic endpoint addresses start at 172.16.0.1 and increase by one per endpoint
    baseEp := netutils.BigForIP(net.ParseIP("172.16.0.1"))
    epBase := &discovery.EndpointSlice{
        ObjectMeta: metav1.ObjectMeta{
            Name:      *name + "-wxyz",
            Namespace: *namespace,
            Labels: map[string]string{
                discovery.LabelServiceName: *name,
            },
        },
        AddressType: discovery.AddressTypeIPv4,
        Endpoints:   []discovery.Endpoint{},
        Ports: []discovery.EndpointPort{{
            Name:     ptr.To("8080"),
            Port:     ptr.To(int32(8080)),
            Protocol: ptr.To(v1.ProtocolTCP),
        }},
    }
    // create EndpointSlices in chunks of 1000 endpoints until the requested total is reached
    chunkSize := 1000
    for i := 0; i < *endpoints; {
        eps := epBase.DeepCopy()
        eps.Name = epBase.Name + fmt.Sprintf("%d", i/chunkSize)
        n := min(chunkSize, *endpoints-i)
        for j := 0; j < n; j++ {
            ipEp := netutils.AddIPOffset(baseEp, i+j)
            eps.Endpoints = append(eps.Endpoints, discovery.Endpoint{
                Addresses: []string{ipEp.String()},
                Hostname:  ptr.To(fmt.Sprintf("pod%d", i+j)),
            })
        }
        i = i + n
        _, err = clientset.DiscoveryV1().EndpointSlices(*namespace).Create(context.TODO(), eps, metav1.CreateOptions{})
        if err != nil {
            panic(err.Error())
        }
        fmt.Printf("Created slice %s with %d endpoints\n", eps.Name, n)
    }
}
#!/bin/sh
# Full credit BenTheElder
# https://github.com/kubernetes-sigs/kind/blob/b6bc112522651d98c81823df56b7afa511459a3b/hack/ci/e2e-k8s.sh#L190-L205
# Get the current config
original_kube_proxy=$(kubectl get -oyaml -n=kube-system configmap/kube-proxy)
echo "Original kube-proxy config:"
echo "${original_kube_proxy}"
# Patch it
fixed_kube_proxy=$(
    printf '%s' "${original_kube_proxy}" | sed \
        's/\(.*metricsBindAddress:\)\( .*\)/\1 "0.0.0.0:10249"/' \
    )
echo "Patched kube-proxy config:"
echo "${fixed_kube_proxy}"
printf '%s' "${fixed_kube_proxy}" | kubectl apply -f -
# restart kube-proxy
kubectl -n kube-system rollout restart ds kube-proxy

kind-iptables.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

kind-nftables.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv4
  dnsSearch: []
nodes:
- role: control-plane
- role: worker
- role: worker
featureGates: {"NFTablesProxyMode": true}
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "v": "4"
  controllerManager:
    extraArgs:
      "v": "4"
  scheduler:
    extraArgs:
      "v": "4"
  ---
  kind: InitConfiguration
  nodeRegistration:
    kubeletExtraArgs:
      "v": "4"
  ---
  kind: KubeProxyConfiguration
  mode: "nftables"
  nftables:
    minSyncPeriod: 1s
    syncPeriod: 30s

monitoring.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9090
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https
      - job_name: 'kubernetes-controller-manager'
        honor_labels: true
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        static_configs:
        - targets:
          - 127.0.0.1:10257
      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: localhost:6443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics
      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: localhost:6443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      - job_name: kube-proxy
        honor_labels: true
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - action: keep
          source_labels:
          - __meta_kubernetes_namespace
          - __meta_kubernetes_pod_name
          separator: '/'
          regex: 'kube-system/kube-proxy.+'
        - source_labels:
          - __address__
          action: replace
          target_label: __address__
          regex: (.+?)(\\:\\d+)?
          replacement: $1:10249
---
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  containers:
  - name: prometheus
    image: prom/prometheus:v2.26.0
    args:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.path=/prometheus/"
      - "--web.enable-admin-api"
    ports:
    - containerPort: 9090
    volumeMounts:
    - name: prometheus-config-volume
      mountPath: /etc/prometheus/
    - name: prometheus-storage-volume
      mountPath: /prometheus/
  volumes:
  - name: prometheus-config-volume
    configMap:
      defaultMode: 420
      name: prometheus-server-conf
  - name: prometheus-storage-volume
    emptyDir: {}

svc-webapp.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: MyApp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args:
          - netexec
          - --http-port=80
          - --udp-port=80
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: ClusterIP
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80