Let me try to imagine what is inside GKE Image Streaming, using the following sources as reference:
- The announcement article "Introducing GKE image streaming for fast application startup and autoscaling"
- Tokunaga-san's slides "Stargz Snapshotter: skipping image pulls and starting containers faster with containerd"
- utam0k-san's CNDF 2023 slides on Lazy Pulling
Enable Image Streaming at the cluster level and have Node Auto Provisioning spin up a node.
Image Streaming is enabled:
❯ kubectl get nodes gke-ngsw-development-nap-e2-standard--d57c41d3-gsqr -oyaml | grep cloud.google.com/gke-image-streaming
cloud.google.com/gke-image-streaming: "true"
Attach a Pod to the node and start it with a Debug Container.
❯ kubectl debug -it node/gke-ngsw-development-nap-e2-standard--d57c41d3-gsqr --image=cgr.dev/chainguard/wolfi-base:latest -- ash
Look inside the containerd config file.
/ # cat /host/etc/containerd/config.toml
version = 2
required_plugins = ["io.containerd.grpc.v1.cri"]
# Kubernetes doesn't use containerd restart manager.
disabled_plugins = ["io.containerd.internal.v1.restart"]
oom_score = -999
[debug]
level = "info"
[grpc]
gid = 412
[plugins."io.containerd.grpc.v1.cri"]
stream_server_address = "127.0.0.1"
max_container_log_line_size = 262144
sandbox_image = "gke.gcr.io/pause:3.8@sha256:880e63f94b145e46f1b1082bb71b85e21f16b99b180b9996407d61240ceb9830"
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/home/kubernetes/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://mirror.gcr.io","https://registry-1.docker.io"]
[metrics]
address = "127.0.0.1:1338"
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
snapshotter = "gcfs"
disable_snapshot_annotations = false
discard_unpacked_layers = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[proxy_plugins]
[proxy_plugins.gcfs]
type = "snapshot"
address = "/run/containerd-gcfs-grpc/containerd-gcfs-grpc.sock"
GKE's Image Streaming uses containerd-gcfs-grpc as the snapshotter plugin.
Two processes called gcfsd are running on the GKE node.
/ # ps aux | grep gcfsd
1714 root 0:39 /home/kubernetes/bin/gcfsd --mount_point=/run/gcfsd/mnt --max_content_cache_size_mb=36 --max_large_files_cache_size_mb=36 --layer_cache_dir=/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers --images_in_use_db_path=/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/gcfsd/images_in_use.db --enable_pull_secret_keychain
1830 root 1:14 /home/kubernetes/bin/gcfsd --downloader --downloader_uds_path=/run/gcfsd/downloader.sock
- One looks like the FUSE server, which returns cached container image layers to the FUSE driver
- The other looks like the one that downloads, unpacks, and caches container image layers
gcfsd's help message:
/ # /host/home/kubernetes/bin/gcfsd --help
Usage of /host/home/kubernetes/bin/gcfsd:
-add_pull_secret="": Pull secret in one of accepted formats: <image>,[file:]oauth2accesstoken:*, <image>,[file:]_json_key:*, <image>,[file:]_json_key_base64:*
-addr="": Container File System gRPC endpoint address (in the form of 'dns://*.googleapis.com:443', and if empty, a regional prod endpoint address will be constructed)
-async_read=true: FUSE async-read mount option
-auth_refresh_period=30m0s: Period to refresh image/layer authentication
-authimage_max_attempts=3: Maximum number of AuthImage attempts on retryable failures
-authimage_retry_interval=1s: Interval between AuthImage retries
-client_name="": Free form client name used for monitoring
-client_uid="": Unique client identifier, default is the GCE VM ID
-client_version="0.0": Free form client version used for monitoring
-downloader=false: If true, starts the async layer downloader only, not the FUSE server
-downloader_uds_path="/run/gcfsd/downloader.sock": (Downloader only) Unix domain socket path for the downloader server/client communication
-enable_prefetching=true: Switch for the image content prefetching feature
-enable_pull_secret_keychain=false: Enable pull secret keychain, needed by GKE image pull secret support
-event_reporter_type="k8s": Event reporter type: none|k8s
-fail_on_max_read_blocks=false: Switch to fail if a read request is for more blocks that specified with max_read_blocks
-flatten_images=false: Switch to control whether to create/add a flattened dir for every image
-fuse_debug=false: Enable verbose FUSE debugging
-gc_period=3m0s: Garbage collection period for deleted layers/images (snapshot cleanup, 0 means disabled)
-ignore_fuse_intr_on_rpc=true: Switch to ignore FUSE Interrupts on RPCs
-images_in_use_db_path="": Path of DB to persist names of images in use
-layer_cache_dir="": Directory path where downloaded/untarred layers are stored
-log_formatter="text": logrus log formatter (text, json)
-log_level="info": logrus logging level (debug, info, warn, error, fatal)
-max_content_cache_size_mb=1024: Maximally allowed size (in MiB) of the in-memory content cache
-max_large_files_cache_size_mb=0: Deprecated: Maximally allowed size (in MiB) of in-memory large files cache. This was previously for a large file specific cache. You should use max_content_cache_size_mb
-max_layer_downloads=3: Maximum number of concurrent downloads for layer caching
-max_read_ahead=1048576: FUSE max read-ahead size mount option (effective only for a positive flag value)
-max_read_blocks=2: Maximum number of backend blocks to read with a file content read request
-metrics_flavor="gke": Flavor of the metrics
-mount_point="/run/gcfsd/mnt": Mount point of the Container File System FUSE filesystem
-port=11253: The port that gcfsd will listen to
-test_pull_secret="": `{username}:{password}` image pull secret for manual testing purpose only
-try_direct_path=true: Switch to decide whether to try DirectPath or not
- The FUSE server seems to record which images are in use in `images_in_use.db` and delete layers of container images that are no longer needed(?)
- It also appears to support prefetching of container image content
- The Kubernetes Event objects (`ImageStreaming`) also seem to be created by the FUSE server(?)
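If that guess is right, the garbage collection that shows up later in the logs (`Found layersToRemove=map[], imagesToRemove=map[]`) would amount to a mark-and-sweep over layers reachable from in-use images. A toy model of that idea — my assumption, not the actual gcfsd implementation, and all names below are made up:

```python
# Toy mark-and-sweep (not real gcfsd code): any cached layer still
# reachable from an in-use image survives; the rest become candidates.
def gc_candidates(images_in_use, image_layers, cached_layers):
    reachable = set()
    for image in images_in_use:
        reachable |= image_layers.get(image, set())
    return cached_layers - reachable

# Made-up image names and layer digests for illustration.
image_layers = {
    "img/cilium": {"sha256:aaa", "sha256:bbb"},
    "img/addon-resizer": {"sha256:ccc"},
}
cached = {"sha256:aaa", "sha256:bbb", "sha256:ccc", "sha256:ddd"}

# Only cilium is in use, so addon-resizer's layer and the orphan layer
# are removal candidates.
print(sorted(gc_candidates({"img/cilium"}, image_layers, cached)))
# ['sha256:ccc', 'sha256:ddd']
```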
Looking at the mount point of the Container File System FUSE filesystem, there are images and layers directories. Is this where container rootfs are kept?
/ # df -h /host/run/gcfsd/mnt
Filesystem Size Used Available Use% Mounted on
gcfsd 0 0 0 0% /host/run/gcfsd/mnt
/ # ls /host/run/gcfsd/mnt/
images layers
Unpacked image layers are stored under layers, and the entries under images are symbolic links to layers or to the FUSE server's cached layers.
/ # ls /host/run/gcfsd/mnt/images/
gke.gcr.io#addon-resizer@sha256=73f83a267713c9ec9bdb5564be404567b8d446813d39c74a5eff2fdbcc91ebf2
gke.gcr.io#cilium#cilium@sha256=d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70
gke.gcr.io#cluster-proportional-autoscaler@sha256=0f232ba18b63363e33f205d0242ef98324fb388434f8598c2fc8e967dca146bc
gke.gcr.io#cluster-proportional-autoscaler@sha256=2c6c6093f7d5ecaf30531c6aad67a64b616f61c5b5a7d2cc3e5387e3b3047fa8
...
/ # ls -la /host/run/gcfsd/mnt/images/gke.gcr.io#addon-resizer@sha256=73f83a267713c9ec9bdb5564be404567b8d446813d39c74a5eff2fdbcc91ebf2
total 0
lrwxr-xr-x 1 root root 0 Aug 3 02:11 0 -> ../../layers/sha256=16639cf327e01f4c30abe6a6bba8153931bdec356c30ba961b75f1aee3fca992
lrwxr-xr-x 1 root root 0 Aug 3 02:11 1 -> ../../layers/sha256=cd356c75e3d95e0e7493301a9c0381e49fd9f72c5ba7e257ab153809e2d1ef4e
lrwxr-xr-x 1 root root 0 Aug 3 02:11 10 -> /var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=ff5700ec54186528cbae40f54c24b1a34fb7c01527beaa1232868c16e2353f52
lrwxr-xr-x 1 root root 0 Aug 3 02:11 11 -> ../../layers/sha256=399826b51fcf6c959b7a7e86b89ac1ee6685d64da54e5223e1d182c491a1bbd6
lrwxr-xr-x 1 root root 0 Aug 3 02:11 12 -> /var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=6fbdf253bbc2490dcfede5bdb58ca0db63ee8aff565f6ea9f918f3bce9e2d5aa
lrwxr-xr-x 1 root root 0 Aug 3 02:11 13 -> ../../layers/sha256=d0157aa0c95a4cae128dab97d699b2f303c8bea46914dc4a40722411f50bb40e
lrwxr-xr-x 1 root root 0 Aug 3 02:11 2 -> ../../layers/sha256=0503f40217046bec7f90c787ea68f0603bb121be9106249c91852f59db11adb2
lrwxr-xr-x 1 root root 0 Aug 3 02:11 3 -> ../../layers/sha256=28e0c3068bea194b1853a599abcafee53c850cace355c5fd67645176d7d4d431
lrwxr-xr-x 1 root root 0 Aug 3 02:11 4 -> ../../layers/sha256=102cf81c83c720520b4539719118954cfd40b2279708e92b9f5c9892014b5545
lrwxr-xr-x 1 root root 0 Aug 3 02:11 5 -> /var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=4cb10dd2545bd173858450b80853b850e49608260f1a0789e0d0b39edf12f500
lrwxr-xr-x 1 root root 0 Aug 3 02:11 6 -> /var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=d2d7ec0f6756eb51cf1602c6f8ac4dd811d3d052661142e0110357bf0b581457
lrwxr-xr-x 1 root root 0 Aug 3 02:11 7 -> /var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=1a73b54f556b477f0a8b939d13c504a3b4f4db71f7a09c63afbc10acb3de5849
lrwxr-xr-x 1 root root 0 Aug 3 02:11 8 -> /var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=e624a5370eca2b8266e74d179326e2a8767d361db14d13edd9fb57e408731784
lrwxr-xr-x 1 root root 0 Aug 3 02:11 9 -> /var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=d52f02c6501c9c4410568f0bf6ff30d30d8290f57794c308fe36ea78393afac2
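The directory names under images/ look like plain image references with `/` swapped for `#` and `:` for `=`, presumably so a full reference fits in a single path component. This is only inferred from the listing above, not taken from gcfsd's source:

```python
# Guessed encoding, inferred from the directory names above: make an
# image reference filesystem-safe by replacing '/' and ':'.
def encode_image_dir(ref: str) -> str:
    return ref.replace("/", "#").replace(":", "=")

def decode_image_dir(name: str) -> str:
    # Ambiguous if a reference ever contains '#' or '=' itself,
    # but fine for typical references.
    return name.replace("#", "/").replace("=", ":")

ref = "gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70"
print(encode_image_dir(ref))
# gke.gcr.io#cilium#cilium@sha256=d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70
```

The output matches the `gke.gcr.io#cilium#cilium@sha256=...` entry seen in the images/ listing.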
Following the gcfsd logs:
umount: /run/gcfsd/mnt: no mount point specified.
time="2023-08-03T02:03:39Z" level=info msg="Starting Prometheus metrics HTTP handler"
time="2023-08-03T02:03:39Z" level=info msg="Detected 0 existing layers in cacheDir" module=layercache
time="2023-08-03T02:03:40Z" level=info msg="Trying DirectPath for containerfilesystem.googleapis.com API in region=\"asia-northeast1\""
time="2023-08-03T02:03:40Z" level=info msg="Tried DirectPath for region=\"asia-northeast1\", but failed (err=failed to get IPv6 addr from instance metadata: metadata: GCE metadata \"instance/network-interfaces/0/ipv6s\" not defined), so falling back to non-DirectPath address=\"dns:///asia-northeast1-containerfilesystem.googleapis.com:443\""
time="2023-08-03T02:03:40.500590963Z" level=info msg="GCFSD (version=\"v0.162.0\", build number=527865957): creating Container File System client with addr=\"dns:///asia-northeast1-containerfilesystem.googleapis.com:443\", layer_cache_dir=\"/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers\""
time="2023-08-03T02:03:40.503198523Z" level=info msg="Using \"****-test\" as quota project." module=creds
time="2023-08-03T02:03:40.552219981Z" level=info msg="Creating credentialsBundleWrapper with quota project \"****-test\"." module=creds
time="2023-08-03T02:03:40.552350694Z" level=info msg="Creating credentialsBundleWrapper with quota project \"****-test\"." module=creds
time="2023-08-03T02:03:40.557013361Z" level=info msg="Creating perRPCCredentialsWrapper with quota project \"****-test\"." module=creds
time="2023-08-03T02:03:40.810016721Z" level=info msg="'X-Goog-User-Project' header is supported." error="<nil>" module=gcfs_backend
time="2023-08-03T02:03:40.811037565Z" level=info msg="Starting keychain server"
time="2023-08-03T02:03:40.81116475Z" level=info msg="GCFSD: mounting with mount_point=\"/run/gcfsd/mnt\", maxReadAhead=1048576 (<=0 for default), asyncRead=true"
time="2023-08-03T02:03:40.811072642Z" level=info msg="Starting keychain server on /run/gcfsd/keychain.sock"
time="2023-08-03T02:03:40.819982851Z" level=info msg="GCFSD: serving"
time="2023-08-03T02:03:40.820067775Z" level=info msg="Initializing GC with period=3m0s"
F0803 02:04:10.148215 1740 main.go:71] cannot create certificate signing request: Post "https://172.16.0.2/apis/certificates.k8s.io/v1/certificatesigningrequests?timeout=5m0s": dial tcp 172.16.0.2:443: i/o timeout
time="2023-08-03T02:04:10.15034849Z" level=error msg="K8s initialization error: Get \"https://172.16.0.2/api/v1/nodes/gke-ngsw-development-app-default-pool-b3e308aa-trvx\": getting credentials: exec: executable /home/kubernetes/bin/gke-exec-auth-plugin failed with exit code 1"
time="2023-08-03T02:06:40.821012489Z" level=info msg="fsRoot.GC() finished" duration="121.069µs" module=filesystem_lib timestamp="2023-08-03 02:06:40.82100888 +0000 UTC m=+181.779817825"
time="2023-08-03T02:06:40.821000838Z" level=info msg="Found layersToRemove=map[], imagesToRemove=map[]" module=filesystem_lib
time="2023-08-03T02:06:40.820925861Z" level=info msg="fsRoot.GC() started" module=filesystem_lib timestamp="2023-08-03 02:06:40.820887501 +0000 UTC m=+181.779696756"
time="2023-08-03T02:06:55.577895526Z" level=info msg="Received UpdateCreds request for image gke.gcr.io/gke-metrics-agent@sha256:8a2d20b7e403be765981ca6347d192f26be57dcf1e2e26554b800435dd984310"
time="2023-08-03T02:06:56.114244627Z" level=info msg="Received UpdateCreds request for image gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70"
time="2023-08-03T02:06:56.329064686Z" level=info msg="Received UpdateCreds request for image gke.gcr.io/csi-node-driver-registrar@sha256:058c8a34b91fdc3425dfb263f75d73fd9ae6a532bb688f95fea972687fb1cf44"
time="2023-08-03T02:06:56.530581426Z" level=info msg="Client ID: \"4083088346143175729\", version: \"1.27.3-gke.1700\""
ERROR: logging before google.Init: E0803 02:06:57.319401 693 riptideclient.go:309] Got 0 transit metadatas, want exactly one
time="2023-08-03T02:06:57.731616793Z" level=info msg="Async layer download started" image="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" layer="sha256:2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749" module=filesystem_lib
time="2023-08-03T02:06:57.731964232Z" level=info msg="Async layer download started" image="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" layer="sha256:9fa93e2a84564f124cbf07f5c0e29047cf383eb3a16a6a58bc0c3356788dfb6c" module=filesystem_lib
time="2023-08-03T02:06:57.732159601Z" level=info msg="Async layer download started" image="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" layer="sha256:bf8f83b889a3aa348193a702ebf06f0daf4f17f07cd90acd7cc659165c2944a7" module=filesystem_lib
...
time="2023-08-03T02:06:57.735921966Z" level=info msg="Starting layer download/unpack" cached_layer_dir="/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=b4c2c422d8ed936f3f3427ec05035c317f4e2c897c50502e97dc377c693406bc" image_name="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" layer_id="sha256:b4c2c422d8ed936f3f3427ec05035c317f4e2c897c50502e97dc377c693406bc" module=layercache
time="2023-08-03T02:06:57.735605851Z" level=info msg="Starting layer download/unpack" cached_layer_dir="/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=ab71c6c20b21bba9cc9d76471122678ae0a14d7fbc280bd39581667467155232" image_name="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" layer_id="sha256:ab71c6c20b21bba9cc9d76471122678ae0a14d7fbc280bd39581667467155232" module=layercache
time="2023-08-03T02:06:57.741769799Z" level=info msg="Starting layer download/unpack" cached_layer_dir="/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749" image_name="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" layer_id="sha256:2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749" module=layercache
time="2023-08-03T02:06:57.769777791Z" level=error msg="Prefetching for image failed" error="getting prefetch image report for imageName=\"gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70\" failed: rpc error: code = FailedPrecondition desc = Precondition check failed." imageName="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" module=filesystem_lib
time="2023-08-03T02:06:58.484742596Z" level=info msg="Async layer download started" image="gke.gcr.io/gke-metrics-agent@sha256:8a2d20b7e403be765981ca6347d192f26be57dcf1e2e26554b800435dd984310" layer="sha256:0663a67a54cce6ca6d81cb21dce0ea0e0b37177cffc0065c4e2afd2e46a03ea9" module=filesystem_lib
time="2023-08-03T02:06:58.485529814Z" level=info msg="Async layer download started" image="gke.gcr.io/gke-metrics-agent@sha256:8a2d20b7e403be765981ca6347d192f26be57dcf1e2e26554b800435dd984310" layer="sha256:3c9afd9c5c5baec90bcf165d9cab671b0cc488eec84968abca62bbe6f695e412" module=filesystem_lib
...
time="2023-08-03T02:06:58.807716182Z" level=error msg="downloader stderr=\"time=\\\"2023-08-03T02:06:58Z\\\" level=info msg=\\\"Layer untar succeeded\\\" module=download size=0 untar_dir=\\\"/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/untar_sha256:2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749\\\"\"" module=download
time="2023-08-03T02:06:58.808214297Z" level=info msg="downloader responded OK (succeeded=true, hasMismatchedHash=false)" download_req="image_name:\"gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70\" layer_id:\"sha256:2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749\" layer_dir:\"/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749\" untar_dir:\"/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/untar_sha256:2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749\" pull_secret:\"(redacted)\"" download_resp="succeeded:true"
time="2023-08-03T02:06:58.808481358Z" level=info msg="Layer download/untar done (succeeded=true)" cached_layer_dir="/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749" image_name="gke.gcr.io/cilium/cilium@sha256:d9dc4aeed17ea929c45c784581d62d0ec5167e9191ed9d15f4cd31dc23006b70" layer_id="sha256:2c96d9d88989f39ea119ce9277181711445443d4f0c511a67951e2d6f1236749" module=layercache
...
- `containerfilesystem.googleapis.com` appears to be the API for pulling images from the Artifact Registry cache that comes up in "Introducing GKE image streaming for fast application startup and autoscaling"
- Since the log says `creating Container File System client with addr="dns:///asia-northeast1-containerfilesystem.googleapis.com:443"`, it really does seem to download from the cache replicated to the Artifact Registry region
- Layers are downloaded asynchronously, in bulk, into the cache area `/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/sha256=...`
- Untarred layers appear to be placed temporarily in `/var/lib/containerd/io.containerd.snapshotter.v1.gcfs/snapshotter/layers/untar_sha256:...`
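Putting the pieces together, the flow suggested by the logs is: the FUSE server answers reads immediately (fetching from the remote API on a cache miss) while the `--downloader` process fills the local layer cache in the background. A toy simulation of that split — this is my reading of the logs, not the actual protocol:

```python
# Toy simulation of the lazy-pull split suggested by the logs: the FUSE
# server serves reads right away (remote fetch on cache miss) while a
# separate downloader populates the local layer cache asynchronously.
class LayerCache:
    def __init__(self):
        self.local = {}  # layer_id -> bytes, filled by the downloader

    def download(self, layer_id, remote):
        # the --downloader process: fetch + untar the whole layer
        self.local[layer_id] = remote[layer_id]

class FuseServer:
    def __init__(self, cache, remote):
        self.cache, self.remote = cache, remote
        self.remote_reads = 0

    def read(self, layer_id):
        if layer_id in self.cache.local:  # cache hit: serve locally
            return self.cache.local[layer_id]
        self.remote_reads += 1            # miss: fetch over the network
        return self.remote[layer_id]

remote = {"sha256:aaa": b"layer-a"}
cache = LayerCache()
fuse = FuseServer(cache, remote)

assert fuse.read("sha256:aaa") == b"layer-a"  # served remotely at first
cache.download("sha256:aaa", remote)          # async download completes
assert fuse.read("sha256:aaa") == b"layer-a"  # now served from local cache
assert fuse.remote_reads == 1
```

The point of the split is that the container can start (and read files) before any layer has finished downloading, which is exactly what makes image streaming fast.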
Disable Image Streaming at the cluster level and have Node Auto Provisioning spin up a node.
Image Streaming is not enabled (the grep below returns nothing):
❯ kubectl get nodes -oyaml | grep image-stream
Attach a Pod to the node and start it with a Debug Container.
❯ kubectl debug -it node/gke-ngsw-development-nap-e2-medium-10-059467de-68b4 --image=cgr.dev/chainguard/wolfi-base:latest -- ash
Look inside the containerd config file.
/ # cat /host/etc/containerd/config.toml
version = 2
required_plugins = ["io.containerd.grpc.v1.cri"]
# Kubernetes doesn't use containerd restart manager.
disabled_plugins = ["io.containerd.internal.v1.restart"]
oom_score = -999
[debug]
level = "info"
[grpc]
gid = 412
[plugins."io.containerd.grpc.v1.cri"]
stream_server_address = "127.0.0.1"
max_container_log_line_size = 262144
sandbox_image = "gke.gcr.io/pause:3.8@sha256:880e63f94b145e46f1b1082bb71b85e21f16b99b180b9996407d61240ceb9830"
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/home/kubernetes/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://mirror.gcr.io","https://registry-1.docker.io"]
[metrics]
address = "127.0.0.1:1338"
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
discard_unpacked_layers = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
When Image Streaming is disabled, no custom snapshotter is configured in containerd.
Incidentally, with stargz-snapshotter the configuration would look like this:
# Plug stargz snapshotter into containerd
# Containerd recognizes stargz snapshotter through specified socket address.
# The specified address below is the default which stargz snapshotter listen to.
[proxy_plugins]
[proxy_plugins.stargz]
type = "snapshot"
address = "/run/containerd-stargz-grpc/containerd-stargz-grpc.sock"
# Use stargz snapshotter through CRI
[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "stargz"
disable_snapshot_annotations = false
As described in Trying pre-converted images, Stargz requires the container image to be converted.