Looking into kube-dns.
Some other references I found along the way are linked below.
The in-cluster DNS runs as a Service in the kube-system namespace.
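As a concrete illustration of what that cluster DNS resolves: each Service gets a name of the form `<service>.<namespace>.svc.<cluster-domain>`. A minimal sketch of that naming scheme (the `service_fqdn` helper is my own, not a Kubernetes API; the default domain `cluster.local` matches the `--domain=cluster.local.` flag seen later in the pod spec):

```python
# Sketch: how an in-cluster Service FQDN is composed.
def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    return f"{service}.{namespace}.svc.{cluster_domain}"

# The kube-dns Service itself resolves as:
print(service_fqdn("kube-dns", "kube-system"))  # kube-dns.kube-system.svc.cluster.local
# A Service "web" in namespace "default":
print(service_fqdn("web", "default"))           # web.default.svc.cluster.local
```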
As of Kubernetes v1.12, CoreDNS is the recommended DNS Server
Customizing DNS Service | Kubernetes
Around v1.10–1.12, CoreDNS apparently became the recommended DNS server (and the default DNS service for kubeadm).
However, for backward compatibility (name resolution against the existing service name), the docs say the Service is still defined under the name kube-dns, with CoreDNS actually running behind it.
In kubernetes version 1.2, the DNS service is provided by skydns, which consists of four containers: kube2sky, skydns, etcd and healthz.
Attachment 011. Kubernetes DNS and construction
Going further back, in the 1.2 era the DNS service was apparently built with skydns.
Pod information when looking at a GKE cluster:
kubernetes-perfect-guide/samples/chapter05 on 2nd-edition on ☁️ xxx@gmail.com(asia-northeast1)
❯ kubectl get pods -n kube-system | grep dns
kube-dns-697dc8fc8b-lqzn4 4/4 Running 0 36m
kube-dns-697dc8fc8b-zqjw8 4/4 Running 0 37m
kube-dns-autoscaler-844c9d9448-6czgj 1/1 Running 0 41m
Judging from the GitHub repository layout, kube-dns itself is made up of roughly four components:
images/
  kube-dns
  sidecar
  dnsmasq
  node-cache
kube-dns
// Package DNS provides a backend for the skydns DNS server started by the
// kubedns cluster addon. It exposes the 2 interface method: Records and
// ReverseRecord, which skydns invokes according to the DNS queries it
// receives. It serves these records by consulting an in memory tree
// populated with Kubernetes Services and Endpoints received from the Kubernetes
// API server.
dns/doc.go at master · kubernetes/dns
The skyDNS server: it receives DNS queries and returns records.
Looking at the implementation, it talks to the kube-apiserver itself to fetch Service records.
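A rough mental model of that in-memory backend (a hypothetical Python sketch, not the actual Go implementation): the server keeps maps populated from Service/Endpoints objects and answers the two lookups the doc.go comment mentions, Records and ReverseRecord, from them.

```python
# Hypothetical sketch of kube-dns's in-memory record store.
# The real kube-dns builds an in-memory tree in Go and watches the API
# server; this just mimics the forward (Records) and reverse
# (ReverseRecord) lookups against data added by hand.
class InMemoryRecords:
    def __init__(self):
        self._by_name = {}  # fqdn -> cluster IP
        self._by_ip = {}    # cluster IP -> fqdn (for PTR-style lookups)

    def add_service(self, name, namespace, cluster_ip, domain="cluster.local"):
        fqdn = f"{name}.{namespace}.svc.{domain}"
        self._by_name[fqdn] = cluster_ip
        self._by_ip[cluster_ip] = fqdn

    def records(self, fqdn):
        ip = self._by_name.get(fqdn)
        return [ip] if ip else []

    def reverse_record(self, ip):
        return self._by_ip.get(ip)

store = InMemoryRecords()
store.add_service("kube-dns", "kube-system", "10.64.0.10")
print(store.records("kube-dns.kube-system.svc.cluster.local"))  # ['10.64.0.10']
print(store.reverse_record("10.64.0.10"))
```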
sidecar
sidecar is a daemon that exports metrics and performs healthcheck on DNS systems.
dns/README.md at master · kubernetes/dns
Apparently a sidecar for health checks and metrics collection.
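The probe loop can be imagined like this (a hypothetical sketch: the real sidecar issues SRV queries according to its `--probe` flags and exports Prometheus metrics; here the resolver is injected so the sketch stays self-contained):

```python
import time

# Hypothetical sketch of the sidecar's probe loop: resolve a known name
# against a DNS backend, record the latency on success, count errors on
# failure.
def probe(resolve, name, latencies, errors):
    start = time.monotonic()
    try:
        resolve(name)
        latencies.append((time.monotonic() - start) * 1000)  # latency in ms
    except Exception:
        errors.append(name)

def healthy_backend(name):
    return "10.64.0.10"

def broken_backend(name):
    raise TimeoutError("probe timed out")

latencies, errors = [], []
probe(healthy_backend, "kubernetes.default.svc.cluster.local", latencies, errors)
probe(broken_backend, "kubernetes.default.svc.cluster.local", latencies, errors)
print(len(latencies), len(errors))  # 1 1
```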
node-cache
Looks like a cache running on each node as a DaemonSet.
Using NodeLocal DNSCache in Kubernetes clusters | Kubernetes
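Conceptually it's a per-node TTL cache sitting in front of the cluster DNS; a toy sketch of that idea (my own simplification, not the actual node-cache implementation, which runs a real DNS server):

```python
import time

# Toy sketch of a node-local DNS cache: answer from the local cache while
# the TTL holds, otherwise fall through to the upstream cluster DNS.
class NodeLocalCache:
    def __init__(self, upstream, ttl=30.0):
        self.upstream = upstream
        self.ttl = ttl
        self._cache = {}  # name -> (ip, expiry)

    def resolve(self, name, now=None):
        now = time.monotonic() if now is None else now
        hit = self._cache.get(name)
        if hit and hit[1] > now:
            return hit[0]                 # cache hit: no upstream round trip
        ip = self.upstream(name)          # cache miss: ask the cluster DNS
        self._cache[name] = (ip, now + self.ttl)
        return ip

calls = []
def upstream(name):
    calls.append(name)
    return "10.64.0.10"

cache = NodeLocalCache(upstream, ttl=30.0)
cache.resolve("kube-dns.kube-system.svc.cluster.local", now=0.0)
cache.resolve("kube-dns.kube-system.svc.cluster.local", now=10.0)
print(len(calls))  # 1: the second lookup was served from the cache
```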
❯ kubectl describe service kube-dns -n kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.64.0.10
IPs: 10.64.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.60.0.9:53,10.60.1.4:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.60.0.9:53,10.60.1.4:53
Session Affinity: None
Events: <none>
❯ kubectl describe po kube-dns-697dc8fc8b-8fs7n -n kube-system
Name: kube-dns-697dc8fc8b-8fs7n
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: gke-k8s-benkyo-gke-k8s-benkyo-gke-nod-9c8265c2-tznl/10.10.0.5
Start Time: Sat, 05 Feb 2022 20:21:40 +0900
Labels: k8s-app=kube-dns
pod-template-hash=697dc8fc8b
Annotations: cni.projectcalico.org/containerID: 414fc9ded1418ecbb0cce132a9cabd06f83c108f8c2d2406ad8d31b3c9a39fb1
cni.projectcalico.org/podIP: 10.60.1.4/32
cni.projectcalico.org/podIPs: 10.60.1.4/32
components.gke.io/component-name: kubedns
prometheus.io/port: 10054
prometheus.io/scrape: true
scheduler.alpha.kubernetes.io/critical-pod:
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Running
IP: 10.60.1.4
IPs:
IP: 10.60.1.4
Controlled By: ReplicaSet/kube-dns-697dc8fc8b
Containers:
kubedns:
Container ID: containerd://07dc2543ffc30e9ce39b64f36d7de42289c03f1942cce10f673953b8516626fb
Image: gke.gcr.io/k8s-dns-kube-dns:1.21.0-gke.0
Image ID: gke.gcr.io/k8s-dns-kube-dns@sha256:b5dd662f1a366bbc034954dcc66beb2a5009a78982479f2b7ab7d431b12efb3f
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-dir=/kube-dns-config
--v=2
State: Running
Started: Sat, 05 Feb 2022 20:21:46 +0900
Ready: True
Restart Count: 0
Limits:
memory: 210Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Environment:
PROMETHEUS_PORT: 10055
Mounts:
/kube-dns-config from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8v6z (ro)
dnsmasq:
Container ID: containerd://fcbab93edf6e930007a45c0304ef977041791246a702c5ae15d7d4ac0b8eb247
Image: gke.gcr.io/k8s-dns-dnsmasq-nanny:1.21.0-gke.0
Image ID: gke.gcr.io/k8s-dns-dnsmasq-nanny@sha256:64b131898a7aead50510baa425a0525aa71b2b2733ea0352e50ccdebad682720
Ports: 53/UDP, 53/TCP
Host Ports: 0/UDP, 0/TCP
Args:
-v=2
-logtostderr
-configDir=/etc/k8s/dns/dnsmasq-nanny
-restartDnsmasq=true
--
-k
--cache-size=1000
--no-negcache
--dns-forward-max=1500
--log-facility=-
--server=/cluster.local/127.0.0.1#10053
--server=/in-addr.arpa/127.0.0.1#10053
--server=/ip6.arpa/127.0.0.1#10053
State: Running
Started: Sat, 05 Feb 2022 20:21:49 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 150m
memory: 20Mi
Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8v6z (ro)
sidecar:
Container ID: containerd://0d403010d0af2ded0c18d784e6600e911c27ab642215e33d758ca065d4f47b36
Image: gke.gcr.io/k8s-dns-sidecar:1.21.0-gke.0
Image ID: gke.gcr.io/k8s-dns-sidecar@sha256:6a175b4ddbff9d87551437c481581f7c26444ff678ddf98d16bb458df75e0eb8
Port: 10054/TCP
Host Port: 0/TCP
Args:
--v=2
--logtostderr
--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
State: Running
Started: Sat, 05 Feb 2022 20:21:52 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 10m
memory: 20Mi
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8v6z (ro)
prometheus-to-sd:
Container ID: containerd://2ff968f2f6071e3cf61907da2282e817421c1769840db8d983a3b8222fd6008a
Image: gke.gcr.io/prometheus-to-sd:v0.4.2
Image ID: gke.gcr.io/prometheus-to-sd@sha256:aca8ef83a7fae83f1f8583e978dd4d1ff655b9f2ca0a76bda5edce6d8965bdf2
Port: <none>
Host Port: <none>
Command:
/monitor
--source=kubedns:http://localhost:10054?whitelisted=probe_kubedns_latency_ms,probe_kubedns_errors,dnsmasq_misses,dnsmasq_hits
--stackdriver-prefix=container.googleapis.com/internal/addons
--api-override=https://monitoring.googleapis.com/
--pod-id=$(POD_NAME)
--namespace-id=$(POD_NAMESPACE)
--v=2
State: Running
Started: Sat, 05 Feb 2022 20:21:54 +0900
Ready: True
Restart Count: 0
Environment:
POD_NAME: kube-dns-697dc8fc8b-8fs7n (v1:metadata.name)
POD_NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8v6z (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-dns-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-dns
Optional: true
kube-api-access-g8v6z:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
components.gke.io/gke-managed-components op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m12s default-scheduler Successfully assigned kube-system/kube-dns-697dc8fc8b-8fs7n to gke-k8s-benkyo-gke-k8s-benkyo-gke-nod-9c8265c2-tznl
Normal Pulling 7m11s kubelet Pulling image "gke.gcr.io/k8s-dns-kube-dns:1.21.0-gke.0"
Normal Pulled 7m7s kubelet Successfully pulled image "gke.gcr.io/k8s-dns-kube-dns:1.21.0-gke.0" in 4.544893878s
Normal Created 7m6s kubelet Created container kubedns
Normal Started 7m6s kubelet Started container kubedns
Normal Pulling 7m6s kubelet Pulling image "gke.gcr.io/k8s-dns-dnsmasq-nanny:1.21.0-gke.0"
Normal Pulled 7m4s kubelet Successfully pulled image "gke.gcr.io/k8s-dns-dnsmasq-nanny:1.21.0-gke.0" in 2.102300153s
Normal Pulling 7m3s kubelet Pulling image "gke.gcr.io/k8s-dns-sidecar:1.21.0-gke.0"
Normal Created 7m3s kubelet Created container dnsmasq
Normal Started 7m3s kubelet Started container dnsmasq
Normal Pulled 7m1s kubelet Successfully pulled image "gke.gcr.io/k8s-dns-sidecar:1.21.0-gke.0" in 2.319810819s
Normal Created 7m kubelet Created container sidecar
Normal Started 7m kubelet Started container sidecar
Normal Pulling 7m kubelet Pulling image "gke.gcr.io/prometheus-to-sd:v0.4.2"
Normal Pulled 6m58s kubelet Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.4.2" in 1.889991719s
Normal Created 6m58s kubelet Created container prometheus-to-sd
Normal Started 6m58s kubelet Started container prometheus-to-sd
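The dnsmasq `--server` flags above mean: queries under cluster.local, in-addr.arpa, and ip6.arpa are forwarded to the kubedns container on 127.0.0.1:10053, and everything else goes to the default upstream resolver. A sketch of that suffix-based routing (hypothetical, not dnsmasq's code; the default-upstream address is a placeholder):

```python
# Hypothetical sketch of dnsmasq's suffix routing as configured above:
# --server=/cluster.local/127.0.0.1#10053 (and the two reverse zones)
# send matching suffixes to kubedns; anything else falls through to the
# node's default resolver.
ROUTES = {
    "cluster.local": ("127.0.0.1", 10053),
    "in-addr.arpa": ("127.0.0.1", 10053),
    "ip6.arpa": ("127.0.0.1", 10053),
}
DEFAULT_UPSTREAM = ("169.254.169.254", 53)  # placeholder default resolver

def pick_upstream(qname):
    for suffix, upstream in ROUTES.items():
        if qname == suffix or qname.endswith("." + suffix):
            return upstream
    return DEFAULT_UPSTREAM

print(pick_upstream("kubernetes.default.svc.cluster.local"))  # ('127.0.0.1', 10053)
print(pick_upstream("10.in-addr.arpa"))                       # ('127.0.0.1', 10053)
print(pick_upstream("example.com"))                           # ('169.254.169.254', 53)
```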
SkyDNS's etcd storage was removed, and DNS records are now kept directly in memory, which improves query performance.
Attachment 011. Kubernetes DNS and construction
I haven't fully read the kube-dns code, but as that resource also notes, it still contains a fair amount of SkyDNS-related processing while the backend itself appears to be in-memory.
Investigating this properly would require reading the code and looking into SkyDNS.
Stopping here for now.
(For context: this investigation started because kube-dns appeared to be used by default for in-cluster DNS when using GKE on GCP.)