
Minimal Istio bug repro for "protocol selection snafu"

See Issue #22367 on the Istio issue tracker: https://github.com/istio/istio/issues/22367

NOTE: All snippets are "copy-paste optimized". If you set a few environment variables (their values will change if this gets run again), you can copy-paste most commands directly. Additionally, commands are not prefixed with a PS1 prompt like $ ; instead, STDOUT / STDERR lines are prefixed with a "comment" # .
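
For reference, these are the environment variable values used throughout this run (all taken from the output below; they will differ on a re-run):

OS=osx
NAMESPACE=dhermes-repro
POD_NAME=sleep-54c989bc97-lks72
TCP_HOSTNAME=0.tcp.ngrok.io
TCP_PORT=13602
LISTENER_INDEX=14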

Download 1.5.0 Release

OS=osx
curl \
  --location "https://github.com/istio/istio/releases/download/1.5.0/istio-1.5.0-${OS}.tar.gz" \
  --output "istio-1.5.0-${OS}.tar.gz"
tar xzvf "istio-1.5.0-${OS}.tar.gz"
rm -f "istio-1.5.0-${OS}.tar.gz"
./istio-1.5.0/bin/istioctl --help

Create A Test Namespace

NAMESPACE=dhermes-repro
kubectl create namespace "${NAMESPACE}"
kubectl label namespace "${NAMESPACE}" istio-injection=enabled
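
To double-check that the injection label took effect (a quick sanity check; output columns assumed):

kubectl get namespace "${NAMESPACE}" --show-labels
# NAME            STATUS   AGE   LABELS
# dhermes-repro   Active   5s    istio-injection=enabled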

Create a Service in the Mesh

Deploy the sleep service from the docs:

./istio-1.5.0/bin/istioctl kube-inject \
  --filename ./istio-1.5.0/samples/sleep/sleep.yaml \
  > sleep-injected.yml

kubectl apply --namespace "${NAMESPACE}" --filename ./sleep-injected.yml

Take note of the pod that was created:

kubectl get pods --namespace "${NAMESPACE}"
# NAME                     READY   STATUS    RESTARTS   AGE
# sleep-54c989bc97-lks72   2/2     Running   0          44s

POD_NAME=sleep-54c989bc97-lks72
kubectl exec --namespace "${NAMESPACE}" "${POD_NAME}" --container sleep -- hostname
# sleep-54c989bc97-lks72
kubectl exec --namespace "${NAMESPACE}" "${POD_NAME}" --container sleep -- hostname -i
# 10.101.151.188

Create a Sidecar to Limit the Discovered Services

Before applying it, note the number of Envoy listeners generated for the services discovered in the cluster

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --output json \
  | jq '. | length'
# 468

We can limit egress with sidecar.yml

---
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: limited-egress
spec:
  egress:
  - hosts:
    - istio-system/*
    - ./*

After applying, we can see the number of listeners drop (istio-system/* keeps the control-plane services visible, while ./* keeps the services in the Sidecar's own namespace):

kubectl apply --namespace "${NAMESPACE}" --filename ./sidecar.yml

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --output json \
  > listeners-000.json

/bin/cat listeners-000.json | jq '. | length'
# 15

/bin/cat listeners-000.json | jq '.[].name' -r
# 10.101.151.188_80     # Pod IP     for sleep.${NAMESPACE} Service
# 10.101.151.188_15020  # Pod IP     for sleep.${NAMESPACE} Service
# 10.101.22.111_15011   # Cluster IP for istio-pilot.istio-system Service
# 10.101.22.111_15012   # Cluster IP for istio-pilot.istio-system Service
# 10.101.28.8_15012     # Cluster IP for istiod.istio-system Service
# 10.101.28.8_443       # Cluster IP for istiod.istio-system Service
# 10.101.22.111_443     # Cluster IP for istio-pilot.istio-system Service
# 0.0.0.0_15014         #
# 0.0.0.0_8080          #
# 0.0.0.0_15010         #
# 0.0.0.0_20001         #
# 0.0.0.0_80            #
# virtualOutbound
# virtualInbound
# null
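
As an aside, we can count how many of these listeners are bound to the 0.0.0.0 wildcard address (a hedged check; the count is taken from the listing above):

/bin/cat listeners-000.json | jq '.[].name' -r | grep -c '^0\.0\.0\.0_'
# 5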

Spin Up Raw TCP Server

I wanted to expose a raw TCP server to the outside internet to showcase how an Istio bug causes a BlackHoleCluster Envoy listener to swallow non-HTTP traffic for a given port.

I chose to use ngrok to expose a port served by nc -l 12892 to the outside internet. (There are plenty of other ways to expose a raw TCP server; this is just an example.)

./ngrok authtoken ...  # "TCP tunnels are only available after you sign up"
./ngrok tcp 12892
# ngrok by @inconshreveable                                   (Ctrl+C to quit)
#
# Session Status                online
# Account                        (Plan: Business)
# Version                       2.3.35
# Region                        United States (us)
# Web Interface                 http://127.0.0.1:4040
# Forwarding                    tcp://0.tcp.ngrok.io:13602 -> localhost:12892
#
# Connections                   ttl     opn     rt1     rt5     p50     p90
#                               0       0       0.00    0.00    0.00    0.00

To confirm it is running

TCP_HOSTNAME=0.tcp.ngrok.io
TCP_PORT=13602
echo "did you make it?" | nc ${TCP_HOSTNAME} ${TCP_PORT}  # Client

nc -l 12892  # Server
# did you make it?

and we also want to confirm that the sleep pod in the Istio service mesh can reach the TCP server

kubectl exec  `# Client` \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --container sleep \
  -- /bin/sh -c 'echo "$(hostname):$(hostname -i)" | nc '"${TCP_HOSTNAME} ${TCP_PORT}"

nc -l 12892  # Server
# sleep-54c989bc97-lks72:10.101.151.188

and with an HTTP client

kubectl exec  `# Client` \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --container sleep \
  -- /bin/sh -c \
    'curl --silent --verbose --max-time 2 --header "pod: $(hostname)" http://'"${TCP_HOSTNAME}:${TCP_PORT}"
# * Expire in 0 ms for 6 (transfer 0x5638a5d08680)
# * Expire in 2000 ms for 8 (transfer 0x5638a5d08680)
# ...
# *   Trying 3.17.202.129...
# * TCP_NODELAY set
# * Expire in 200 ms for 4 (transfer 0x5638a5d08680)
# * Connected to 0.tcp.ngrok.io (3.17.202.129) port 13602 (#0)
# > GET / HTTP/1.1
# > Host: 0.tcp.ngrok.io:13602
# > User-Agent: curl/7.64.0
# > Accept: */*
# > pod: sleep-54c989bc97-lks72
# >
# * Operation timed out after 2001 milliseconds with 0 bytes received
# * Closing connection 0
# command terminated with exit code 28

nc -l 12892  # Server
# GET / HTTP/1.1
# Host: 0.tcp.ngrok.io:13602
# User-Agent: curl/7.64.0
# Accept: */*
# pod: sleep-54c989bc97-lks72
#
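
Note that the request arrives byte-for-byte as curl sent it, with no extra headers; compare this with the Envoy-modified request captured below once the 0.0.0.0_${TCP_PORT} listener is in place.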

Create a Kubernetes Service That "Breaks" ${TCP_PORT}

We can create a service via break-template.yml

---
apiVersion: v1
kind: Service
metadata:
  name: break
spec:
  ports:
  - name: http-break
    port: ${TCP_PORT}
    protocol: TCP
  type: ClusterIP

which will cause a new Envoy listener to be created:

sed s/'${TCP_PORT}'/${TCP_PORT}/g break-template.yml > break.yml
kubectl apply --namespace "${NAMESPACE}" --filename ./break.yml

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --output json \
  > listeners-001.json

/bin/cat listeners-001.json | jq '. | length'
# 16

/bin/cat listeners-001.json | jq '.[].name' -r
# 10.101.151.188_80  # Pod IP     for sleep.${NAMESPACE} Service
# ...
# virtualInbound
# 0.0.0.0_13602        # Newly added listener
# null

LISTENER_INDEX=14  # Next to last
/bin/cat listeners-001.json | jq ".[${LISTENER_INDEX}]" > 0.0.0.0_${TCP_PORT}_http.json

This listener has a BlackHoleCluster filter chain with an envoy.tcp_proxy filter (of type tcp_proxy.v2.TcpProxy):

/bin/cat listeners-001.json | jq ".[${LISTENER_INDEX}].name" -r
# 0.0.0.0_13602

/bin/cat listeners-001.json | jq ".[${LISTENER_INDEX}].filterChains[0].filters[1]"
# {
#   "name": "envoy.tcp_proxy",
#   "typedConfig": {
#     "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
#     "statPrefix": "BlackHoleCluster",
#     "cluster": "BlackHoleCluster"
#   }
# }
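
For context, the first filter chain in this listener carries a filterChainMatch on the pod's own IP (reconstructed from the full listener dump saved later as 0.0.0.0_${TCP_PORT}_http.json):

/bin/cat listeners-001.json | jq ".[${LISTENER_INDEX}].filterChains[0].filterChainMatch"
# {
#   "prefixRanges": [
#     {
#       "addressPrefix": "10.101.151.188",
#       "prefixLen": 32
#     }
#   ]
# }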

Verify TCP Traffic Gets Blackholed

After the 0.0.0.0_${TCP_PORT} Envoy listener gets created, any outgoing requests to ${TCP_PORT} will match that listener. Making the same request in the pod

kubectl exec \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --container sleep \
  -- /bin/sh -c 'echo "$(hostname):$(hostname -i)" | nc '"${TCP_HOSTNAME} ${TCP_PORT}"

the client receives nothing. If instead we make an HTTP request, it goes through just fine (even if the nc -l server can't provide an HTTP response):

kubectl exec  `# Client` \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --container sleep \
  -- /bin/sh -c \
    'curl --silent --verbose --max-time 2 --header "pod: $(hostname)" http://'"${TCP_HOSTNAME}:${TCP_PORT}"
# * Expire in 0 ms for 6 (transfer 0x5591e81bd680)
# * Expire in 2000 ms for 8 (transfer 0x5591e81bd680)
# ...
# *   Trying 3.134.196.116...
# * TCP_NODELAY set
# * Expire in 200 ms for 4 (transfer 0x5591e81bd680)
# * Connected to 0.tcp.ngrok.io (3.134.196.116) port 13602 (#0)
# > GET / HTTP/1.1
# > Host: 0.tcp.ngrok.io:13602
# > User-Agent: curl/7.64.0
# > Accept: */*
# > pod: sleep-54c989bc97-lks72
# >
# * Operation timed out after 2001 milliseconds with 0 bytes received
# * Closing connection 0
# command terminated with exit code 28

nc -l 12892  # Server
# GET / HTTP/1.1
# host: 0.tcp.ngrok.io:13602
# user-agent: curl/7.64.0
# accept: */*
# pod: sleep-54c989bc97-lks72
# x-forwarded-for: 10.101.151.188
# x-forwarded-proto: http
# x-envoy-internal: true
# x-request-id: d9be301d-d90d-9f3b-afeb-aa9f7011fe78
# x-envoy-peer-metadata: CiAK...
# x-envoy-peer-metadata-id: sidecar~10.101.151.188~sleep-54c989bc97-lks72.dhermes-repro~dhermes-repro.svc.cluster.local
# x-b3-traceid: 875fdf0be92d99a619b41aa11660abcf
# x-b3-spanid: 19b41aa11660abcf
# x-b3-sampled: 1
# content-length: 0
#
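
Since the blackholed nc request produces no output at all, another hedged way to confirm the traffic is hitting BlackHoleCluster is to scrape the sidecar's Envoy admin stats on port 15000 (containers in a pod share a network namespace, so this works from the sleep container; the exact stat name below is assumed):

kubectl exec \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --container sleep \
  -- /bin/sh -c 'curl --silent localhost:15000/stats | grep BlackHoleCluster'
# cluster.BlackHoleCluster.upstream_cx_total: 1  # sample output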

Using a Non-HTTP Protocol

The core of the issue is the http- prefix in the port name (http-break). By using it, we have opted into "manual protocol selection", and this is where the bug resides. If any of the HTTP-like prefixes are used (http-, http2-, grpc-, grpc-web-), a BlackHoleCluster Envoy listener will get created on the wildcard "any IPv4 address" IP 0.0.0.0.

If instead we apply a patch to rename the port, the ${TCP_PORT} Envoy listener will change

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"tcp-break\",\"port\":${TCP_PORT}}]}}"
echo "${JSON_PATCH}" | jq
# {
#   "spec": {
#     "ports": [
#       {
#         "name": "tcp-break",
#         "port": 13602
#       }
#     ]
#   }
# }

kubectl patch service \
  --namespace "${NAMESPACE}" \
  break \
  --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --output json \
  > listeners-002.json

/bin/cat listeners-002.json | jq '. | length'
# 16

/bin/cat listeners-002.json | jq '.[].name' -r
# 10.101.151.188_80
# ...
# virtualInbound
# 10.101.20.10_13602  # Cluster IP for break.${NAMESPACE} Service
# null

kubectl get service --namespace "${NAMESPACE}" break
# NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
# break   ClusterIP   10.101.20.10   <none>        13602/TCP   34m

the outgoing traffic to that port will now go through

kubectl exec  `# Client` \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --container sleep \
  -- /bin/sh -c 'echo "$(hostname):$(hostname -i)" | nc '"${TCP_HOSTNAME} ${TCP_PORT}"

nc -l 12892  # Server
# sleep-54c989bc97-lks72:10.101.151.188

and the outgoing HTTP request no longer carries any of the Envoy modifications

kubectl exec  `# Client` \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --container sleep \
  -- /bin/sh -c \
    'curl --silent --verbose --max-time 2 --header "pod: $(hostname)" http://'"${TCP_HOSTNAME}:${TCP_PORT}"
# * Expire in 0 ms for 6 (transfer 0x5570c6476680)
# * Expire in 2000 ms for 8 (transfer 0x5570c6476680)
# ...
# *   Trying 3.13.191.225...
# * TCP_NODELAY set
# * Expire in 200 ms for 4 (transfer 0x5570c6476680)
# * Connected to 0.tcp.ngrok.io (3.13.191.225) port 13602 (#0)
# > GET / HTTP/1.1
# > Host: 0.tcp.ngrok.io:13602
# > User-Agent: curl/7.64.0
# > Accept: */*
# > pod: sleep-54c989bc97-lks72
# >
# * Operation timed out after 2000 milliseconds with 0 bytes received
# * Closing connection 0
# command terminated with exit code 28

nc -l 12892  # Server
# GET / HTTP/1.1
# Host: 0.tcp.ngrok.io:13602
# User-Agent: curl/7.64.0
# Accept: */*
# pod: sleep-54c989bc97-lks72
#

Removing the Listener

By deleting the break service, the Envoy listener on ${TCP_PORT} is removed as well

kubectl delete --namespace "${NAMESPACE}" --filename ./break.yml

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --output json \
  > listeners-003.json

/bin/cat listeners-003.json | jq '. | length'
# 15  # i.e. the original amount
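
As an extra check (a sketch; output assumed), filtering by port confirms nothing is listening there anymore:

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '. | length'
# 0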

Clean Up

kubectl delete --namespace "${NAMESPACE}" --filename ./sidecar.yml
kubectl delete --namespace "${NAMESPACE}" --filename ./sleep-injected.yml
kubectl delete namespace "${NAMESPACE}"

rm -f \
  sleep-injected.yml \
  sidecar.yml \
  listeners-000.json \
  break-template.yml \
  break.yml \
  listeners-001.json \
  0.0.0.0_${TCP_PORT}_http.json \
  listeners-002.json \
  listeners-003.json

rm -fr istio-1.5.0/ ngrok  # If you must

Epilogue: Other Sources of 0.0.0.0_${PORT} Listeners

NOTE: This epilogue assumes the reader has also read README.md (the main repro above) and will refer directly to it throughout.

Service with ClusterIP=None

We can create a service via missing-template.yml

---
apiVersion: v1
kind: Service
metadata:
  name: missing
spec:
  clusterIP: None
  ports:
  - name: unknown-prefix
    port: ${TCP_PORT}
    protocol: TCP
  type: ClusterIP

which has no Cluster IP. Even though this doesn't use a "bad" prefix like http- for the port name, it still results in a BlackHoleCluster listener on 0.0.0.0_${TCP_PORT}

sed s/'${TCP_PORT}'/${TCP_PORT}/g missing-template.yml > missing.yml
kubectl apply --namespace "${NAMESPACE}" --filename ./missing.yml
kubectl get service --namespace "${NAMESPACE}" missing
# NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
# missing   ClusterIP   None         <none>        13602/TCP   8s

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --output json \
  > listeners-004.json

/bin/cat listeners-004.json | jq '. | length'
# 16

/bin/cat listeners-004.json | jq '.[].name' -r
# 10.101.151.188_80  # Pod IP     for sleep.${NAMESPACE} Service
# ...
# virtualInbound
# 0.0.0.0_13602        # Newly added listener
# null

/bin/cat listeners-004.json | jq ".[${LISTENER_INDEX}]" > 0.0.0.0_${TCP_PORT}_missing.json

By explicitly providing a non-HTTP prefix for the port, the listener disappears (i.e. there is no listener at all on the port):

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"tcp\",\"port\":${TCP_PORT}}]}}"
echo "${JSON_PATCH}" | jq
# {
#   "spec": {
#     "ports": [
#       {
#         "name": "tcp",
#         "port": 13602
#       }
#     ]
#   }
# }

kubectl patch service \
  --namespace "${NAMESPACE}" \
  missing \
  --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" \
  "${POD_NAME}" \
  --output json \
  > listeners-005.json

/bin/cat listeners-005.json | jq '. | length'
# 15
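
Consistent with the claim that the listener disappears entirely, filtering by port now returns nothing (a sketch; output assumed):

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '. | length'
# 0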

Using Every Prefix

We'll go through every single prefix from "manual protocol selection" and compare the listeners that get created. To start out, we'll make sure the break service is deployed and then apply a series of patches:

kubectl get service --namespace "${NAMESPACE}" break
# NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
# break   ClusterIP   10.101.20.10   <none>        13602/TCP   34m
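
Each prefix below follows the same patch-and-dump pattern, so as a hedged convenience it could be wrapped in a small shell function (a hypothetical helper, not part of the original run; the individual commands are still shown in full below):

dump_listener() {
  # $1: port name to patch in; $2: file to save the resulting listener to
  JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"$1\",\"port\":${TCP_PORT}}]}}"
  kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"
  ./istio-1.5.0/bin/istioctl proxy-config listeners \
    --namespace "${NAMESPACE}" "${POD_NAME}" \
    --output json --port "${TCP_PORT}" \
    | jq '.[0]' > "$2"
}
# e.g. dump_listener http-break "0.0.0.0_${TCP_PORT}_http.json"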

http

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"http-break\",\"port\":${TCP_PORT}}]}}"
echo "${JSON_PATCH}" | jq
# {
#   "spec": {
#     "ports": [
#       {
#         "name": "http-break",
#         "port": 13602
#       }
#     ]
#   }
# }

kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > 0.0.0.0_${TCP_PORT}_http.json

/bin/cat 0.0.0.0_${TCP_PORT}_http.json | jq '.name' -r
# 0.0.0.0_13602

The file 0.0.0.0_${TCP_PORT}_http.json turns out to be identical across all of the HTTP-like prefixes

{
  "name": "0.0.0.0_13602",
  "address": {
    "socketAddress": {
      "address": "0.0.0.0",
      "portValue": 13602
    }
  },
  "filterChains": [
    {
      "filterChainMatch": {
        "prefixRanges": [
          {
            "addressPrefix": "10.101.151.188",
            "prefixLen": 32
          }
        ]
      },
      "filters": [
        {
          "name": "envoy.filters.network.wasm",
          "typedConfig": {
            "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
            "typeUrl": "type.googleapis.com/envoy.config.filter.network.wasm.v2.Wasm",
            "value": {
              "config": {
                "configuration": "{\n  \"debug\": \"false\",\n  \"stat_prefix\": \"istio\",\n}\n",
                "root_id": "stats_outbound",
                "vm_config": {
                  "code": {
                    "local": {
                      "inline_string": "envoy.wasm.stats"
                    }
                  },
                  "runtime": "envoy.wasm.runtime.null",
                  "vm_id": "stats_outbound"
                }
              }
            }
          }
        },
        {
          "name": "envoy.tcp_proxy",
          "typedConfig": {
            "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
            "statPrefix": "BlackHoleCluster",
            "cluster": "BlackHoleCluster"
          }
        }
      ]
    },
    {
      "filters": [
        {
          "name": "envoy.http_connection_manager",
          "typedConfig": {
            "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
            "statPrefix": "outbound_0.0.0.0_13602",
            "rds": {
              "configSource": {
                "ads": {}
              },
              "routeConfigName": "13602"
            },
            "httpFilters": [
              {
                "name": "envoy.filters.http.wasm",
                "typedConfig": {
                  "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
                  "typeUrl": "type.googleapis.com/envoy.config.filter.http.wasm.v2.Wasm",
                  "value": {
                    "config": {
                      "configuration": "envoy.wasm.metadata_exchange",
                      "vm_config": {
                        "code": {
                          "local": {
                            "inline_string": "envoy.wasm.metadata_exchange"
                          }
                        },
                        "runtime": "envoy.wasm.runtime.null"
                      }
                    }
                  }
                }
              },
              {
                "name": "istio.alpn",
                "typedConfig": {
                  "@type": "type.googleapis.com/istio.envoy.config.filter.http.alpn.v2alpha1.FilterConfig",
                  "alpnOverride": [
                    {
                      "alpnOverride": [
                        "istio-http/1.0",
                        "istio"
                      ]
                    },
                    {
                      "upstreamProtocol": "HTTP11",
                      "alpnOverride": [
                        "istio-http/1.1",
                        "istio"
                      ]
                    },
                    {
                      "upstreamProtocol": "HTTP2",
                      "alpnOverride": [
                        "istio-h2",
                        "istio"
                      ]
                    }
                  ]
                }
              },
              {
                "name": "envoy.cors"
              },
              {
                "name": "envoy.fault"
              },
              {
                "name": "envoy.filters.http.wasm",
                "typedConfig": {
                  "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
                  "typeUrl": "type.googleapis.com/envoy.config.filter.http.wasm.v2.Wasm",
                  "value": {
                    "config": {
                      "configuration": "{\n  \"debug\": \"false\",\n  \"stat_prefix\": \"istio\",\n}\n",
                      "root_id": "stats_outbound",
                      "vm_config": {
                        "code": {
                          "local": {
                            "inline_string": "envoy.wasm.stats"
                          }
                        },
                        "runtime": "envoy.wasm.runtime.null",
                        "vm_id": "stats_outbound"
                      }
                    }
                  }
                }
              },
              {
                "name": "envoy.router"
              }
            ],
            "tracing": {
              "clientSampling": {
                "value": 100
              },
              "randomSampling": {
                "value": 1
              },
              "overallSampling": {
                "value": 100
              }
            },
            "streamIdleTimeout": "0s",
            "useRemoteAddress": true,
            "generateRequestId": true,
            "upgradeConfigs": [
              {
                "upgradeType": "websocket"
              }
            ],
            "normalizePath": true
          }
        }
      ]
    }
  ],
  "deprecatedV1": {
    "bindToPort": false
  },
  "trafficDirection": "OUTBOUND"
}
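
Note the two filter chains: a tcp_proxy chain pointing at BlackHoleCluster (matched via the pod-IP prefixRanges) and a catch-all envoy.http_connection_manager chain that handles the HTTP traffic.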

grpc

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"grpc-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > 0.0.0.0_${TCP_PORT}_grpc.json

/bin/cat 0.0.0.0_${TCP_PORT}_grpc.json | jq '.name' -r
# 0.0.0.0_13602

We see this is identical to the listener created for http:

diff --report-identical-files 0.0.0.0_${TCP_PORT}_http.json 0.0.0.0_${TCP_PORT}_grpc.json
# Files 0.0.0.0_13602_http.json and 0.0.0.0_13602_grpc.json are identical

grpc-web

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"grpc-web-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > 0.0.0.0_${TCP_PORT}_grpc-web.json

/bin/cat 0.0.0.0_${TCP_PORT}_grpc-web.json | jq '.name' -r
# 0.0.0.0_13602

We see this is identical to the listener created for http:

diff --report-identical-files 0.0.0.0_${TCP_PORT}_http.json 0.0.0.0_${TCP_PORT}_grpc-web.json
# Files 0.0.0.0_13602_http.json and 0.0.0.0_13602_grpc-web.json are identical

http2

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"http2-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > 0.0.0.0_${TCP_PORT}_http2.json

/bin/cat 0.0.0.0_${TCP_PORT}_http2.json | jq '.name' -r
# 0.0.0.0_13602

We see this is identical to the listener created for http:

diff --report-identical-files 0.0.0.0_${TCP_PORT}_http.json 0.0.0.0_${TCP_PORT}_http2.json
# Files 0.0.0.0_13602_http.json and 0.0.0.0_13602_http2.json are identical

Missing Cluster IP

The last 0.0.0.0 listener is the one created for a service that has ClusterIP=None, which we stored in 0.0.0.0_${TCP_PORT}_missing.json. Comparing this to the base http listener:

diff 0.0.0.0_${TCP_PORT}_http.json 0.0.0.0_${TCP_PORT}_missing.json

we see that it adds a filter chain that points to the missing.${NAMESPACE}.svc.cluster.local cluster, but this comes after a nearly identical filter chain that points to BlackHoleCluster. Additionally, it adds the envoy.listener.tls_inspector and envoy.listener.http_inspector listener filters, which makes it more closely resemble the "unknown prefix" case.

--- 0.0.0.0_13602_http.json     2020-03-20 21:05:57.000000000 -0700
+++ 0.0.0.0_13602_missing.json  2020-03-20 20:34:58.000000000 -0700
@@ -52,6 +52,47 @@
     {
       "filters": [
         {
+          "name": "envoy.filters.network.wasm",
+          "typedConfig": {
+            "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
+            "typeUrl": "type.googleapis.com/envoy.config.filter.network.wasm.v2.Wasm",
+            "value": {
+              "config": {
+                "configuration": "{\n  \"debug\": \"false\",\n  \"stat_prefix\": \"istio\",\n}\n",
+                "root_id": "stats_outbound",
+                "vm_config": {
+                  "code": {
+                    "local": {
+                      "inline_string": "envoy.wasm.stats"
+                    }
+                  },
+                  "runtime": "envoy.wasm.runtime.null",
+                  "vm_id": "stats_outbound"
+                }
+              }
+            }
+          }
+        },
+        {
+          "name": "envoy.tcp_proxy",
+          "typedConfig": {
+            "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
+            "statPrefix": "outbound|13602||missing.${NAMESPACE}.svc.cluster.local",
+            "cluster": "outbound|13602||missing.${NAMESPACE}.svc.cluster.local"
+          }
+        }
+      ]
+    },
+    {
+      "filterChainMatch": {
+        "applicationProtocols": [
+          "http/1.0",
+          "http/1.1",
+          "h2c"
+        ]
+      },
+      "filters": [
+        {
           "name": "envoy.http_connection_manager",
           "typedConfig": {
             "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
@@ -171,5 +212,15 @@
   "deprecatedV1": {
     "bindToPort": false
   },
+  "listenerFilters": [
+    {
+      "name": "envoy.listener.tls_inspector"
+    },
+    {
+      "name": "envoy.listener.http_inspector"
+    }
+  ],
+  "listenerFiltersTimeout": "0.100s",
+  "continueOnListenerFiltersTimeout": true,
   "trafficDirection": "OUTBOUND"
 }

tcp

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"tcp-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > cluster_ip_${TCP_PORT}_tcp.json

/bin/cat cluster_ip_${TCP_PORT}_tcp.json | jq '.name' -r
# 10.101.20.10_13602

The file cluster_ip_${TCP_PORT}_tcp.json is identical or very similar to the remaining non-HTTP-like listeners

{
  "name": "10.101.20.10_13602",
  "address": {
    "socketAddress": {
      "address": "10.101.20.10",
      "portValue": 13602
    }
  },
  "filterChains": [
    {
      "filters": [
        {
          "name": "envoy.filters.network.wasm",
          "typedConfig": {
            "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
            "typeUrl": "type.googleapis.com/envoy.config.filter.network.wasm.v2.Wasm",
            "value": {
              "config": {
                "configuration": "{\n  \"debug\": \"false\",\n  \"stat_prefix\": \"istio\",\n}\n",
                "root_id": "stats_outbound",
                "vm_config": {
                  "code": {
                    "local": {
                      "inline_string": "envoy.wasm.stats"
                    }
                  },
                  "runtime": "envoy.wasm.runtime.null",
                  "vm_id": "stats_outbound"
                }
              }
            }
          }
        },
        {
          "name": "envoy.tcp_proxy",
          "typedConfig": {
            "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
            "statPrefix": "outbound|13602||break.${NAMESPACE}.svc.cluster.local",
            "cluster": "outbound|13602||break.${NAMESPACE}.svc.cluster.local"
          }
        }
      ]
    }
  ],
  "deprecatedV1": {
    "bindToPort": false
  },
  "trafficDirection": "OUTBOUND"
}

https

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"https-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > cluster_ip_${TCP_PORT}_https.json

/bin/cat cluster_ip_${TCP_PORT}_https.json | jq '.name' -r
# 10.101.20.10_13602

We see this is identical to the listener created for tcp:

diff --report-identical-files cluster_ip_${TCP_PORT}_tcp.json cluster_ip_${TCP_PORT}_https.json
# Files cluster_ip_13602_tcp.json and cluster_ip_13602_https.json are identical

tls

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"tls-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > cluster_ip_${TCP_PORT}_tls.json

/bin/cat cluster_ip_${TCP_PORT}_tls.json | jq '.name' -r
# 10.101.20.10_13602

We see this is identical to the listener created for tcp:

diff --report-identical-files cluster_ip_${TCP_PORT}_tcp.json cluster_ip_${TCP_PORT}_tls.json
# Files cluster_ip_13602_tcp.json and cluster_ip_13602_tls.json are identical

mongo

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"mongo-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > cluster_ip_${TCP_PORT}_mongo.json

/bin/cat cluster_ip_${TCP_PORT}_mongo.json | jq '.name' -r
# 10.101.20.10_13602

diff cluster_ip_${TCP_PORT}_tcp.json cluster_ip_${TCP_PORT}_mongo.json

This adds an envoy.mongo_proxy filter at the beginning of the tcp listener's filter chain; see the diff

--- cluster_ip_13602_tcp.json   2020-03-20 21:07:51.000000000 -0700
+++ cluster_ip_13602_mongo.json 2020-03-20 21:09:20.000000000 -0700
@@ -10,6 +10,13 @@
     {
       "filters": [
         {
+          "name": "envoy.mongo_proxy",
+          "typedConfig": {
+            "@type": "type.googleapis.com/envoy.config.filter.network.mongo_proxy.v2.MongoProxy",
+            "statPrefix": "outbound|13602||break.${NAMESPACE}.svc.cluster.local"
+          }
+        },
+        {
           "name": "envoy.filters.network.wasm",
           "typedConfig": {
             "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",

mysql

NOTE: Needs PILOT_ENABLE_MYSQL_FILTER set in components.pilot.k8s.env
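
For reference, a sketch of setting that flag via the IstioOperator API (assuming an istioctl / operator based install; the field path matches the note above):

---
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        - name: PILOT_ENABLE_MYSQL_FILTER
          value: "true"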

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"mysql-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > cluster_ip_${TCP_PORT}_mysql.json

/bin/cat cluster_ip_${TCP_PORT}_mysql.json | jq '.name' -r
# 10.101.20.10_13602

diff cluster_ip_${TCP_PORT}_tcp.json cluster_ip_${TCP_PORT}_mysql.json

This adds an envoy.filters.network.mysql_proxy filter at the beginning of the tcp listener's filter chain; see the diff

--- cluster_ip_13602_tcp.json   2020-03-20 21:07:51.000000000 -0700
+++ cluster_ip_13602_mysql.json 2020-03-20 21:26:36.000000000 -0700
@@ -10,6 +10,13 @@
     {
       "filters": [
         {
+          "name": "envoy.filters.network.mysql_proxy",
+          "typedConfig": {
+            "@type": "type.googleapis.com/envoy.config.filter.network.mysql_proxy.v1alpha1.MySQLProxy",
+            "statPrefix": "outbound|13602||break.${NAMESPACE}.svc.cluster.local"
+          }
+        },
+        {
           "name": "envoy.filters.network.wasm",
           "typedConfig": {
             "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",

redis

NOTE: Needs PILOT_ENABLE_REDIS_FILTER set in components.pilot.k8s.env (same pattern as the PILOT_ENABLE_MYSQL_FILTER sketch above)

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"redis-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > cluster_ip_${TCP_PORT}_redis.json

/bin/cat cluster_ip_${TCP_PORT}_redis.json | jq '.name' -r
# 10.101.20.10_13602

diff cluster_ip_${TCP_PORT}_tcp.json cluster_ip_${TCP_PORT}_redis.json

This adds an envoy.redis_proxy filter at the beginning of the tcp listener's filter chain and removes the envoy.filters.network.wasm and envoy.tcp_proxy filters; see the diff

--- cluster_ip_13602_tcp.json   2020-03-20 21:07:51.000000000 -0700
+++ cluster_ip_13602_redis.json 2020-03-20 21:24:55.000000000 -0700
@@ -10,34 +10,20 @@
     {
       "filters": [
         {
-          "name": "envoy.filters.network.wasm",
+          "name": "envoy.redis_proxy",
           "typedConfig": {
-            "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
-            "typeUrl": "type.googleapis.com/envoy.config.filter.network.wasm.v2.Wasm",
-            "value": {
-              "config": {
-                "configuration": "{\n  \"debug\": \"false\",\n  \"stat_prefix\": \"istio\",\n}\n",
-                "root_id": "stats_outbound",
-                "vm_config": {
-                  "code": {
-                    "local": {
-                      "inline_string": "envoy.wasm.stats"
-                    }
-                  },
-                  "runtime": "envoy.wasm.runtime.null",
-                  "vm_id": "stats_outbound"
-                }
+            "@type": "type.googleapis.com/envoy.config.filter.network.redis_proxy.v2.RedisProxy",
+            "statPrefix": "outbound|13602||break.${NAMESPACE}.svc.cluster.local",
+            "settings": {
+              "opTimeout": "5s"
+            },
+            "latencyInMicros": true,
+            "prefixRoutes": {
+              "catchAllRoute": {
+                "cluster": "outbound|13602||break.${NAMESPACE}.svc.cluster.local"
               }
             }
           }
-        },
-        {
-          "name": "envoy.tcp_proxy",
-          "typedConfig": {
-            "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
-            "statPrefix": "outbound|13602||break.${NAMESPACE}.svc.cluster.local",
-            "cluster": "outbound|13602||break.${NAMESPACE}.svc.cluster.local"
-          }
         }
       ]
     }

udp

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"udp-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '. | length'
# 0

We see that no listener is created for UDP traffic.
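
To spell that out (a sketch; count assumed), the total listener count stays at the baseline even though the service still exists:

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json | jq '. | length'
# 15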

No Prefix

JSON_PATCH="{\"spec\":{\"ports\":[{\"name\":\"no-prefix-break\",\"port\":${TCP_PORT}}]}}"
kubectl patch service --namespace "${NAMESPACE}" break --patch "${JSON_PATCH}"

./istio-1.5.0/bin/istioctl proxy-config listeners \
  --namespace "${NAMESPACE}" "${POD_NAME}" \
  --output json --port "${TCP_PORT}" \
  | jq '.[0]' > cluster_ip_${TCP_PORT}_no_prefix.json

/bin/cat cluster_ip_${TCP_PORT}_no_prefix.json | jq '.name' -r
# 10.101.20.10_13602

diff --report-identical-files 0.0.0.0_${TCP_PORT}_http.json cluster_ip_${TCP_PORT}_no_prefix.json

Most listeners are identical or very similar to the "base" listeners 0.0.0.0_${TCP_PORT}_http.json and cluster_ip_${TCP_PORT}_tcp.json. However, this listener has all the good parts of the HTTP listener without the bad ones (i.e. it isn't bound to 0.0.0.0 and doesn't have a filter sending traffic to BlackHoleCluster):

--- 0.0.0.0_13602_http.json	        2020-03-20 21:05:57.000000000 -0700
+++ cluster_ip_13602_no_prefix.json	2020-03-20 21:51:20.000000000 -0700
@@ -1,21 +1,13 @@
 {
-  "name": "0.0.0.0_13602",
+  "name": "10.101.20.10_13602",
   "address": {
     "socketAddress": {
-      "address": "0.0.0.0",
+      "address": "10.101.20.10",
       "portValue": 13602
     }
   },
   "filterChains": [
     {
-      "filterChainMatch": {
-        "prefixRanges": [
-          {
-            "addressPrefix": "10.101.151.188",
-            "prefixLen": 32
-          }
-        ]
-      },
       "filters": [
         {
           "name": "envoy.filters.network.wasm",
@@ -43,24 +35,31 @@
           "name": "envoy.tcp_proxy",
           "typedConfig": {
             "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
-            "statPrefix": "BlackHoleCluster",
-            "cluster": "BlackHoleCluster"
+            "statPrefix": "outbound|13602||break.${NAMESPACE}.svc.cluster.local",
+            "cluster": "outbound|13602||break.${NAMESPACE}.svc.cluster.local"
           }
         }
       ]
     },
     {
+      "filterChainMatch": {
+        "applicationProtocols": [
+          "http/1.0",
+          "http/1.1",
+          "h2c"
+        ]
+      },
       "filters": [
         {
           "name": "envoy.http_connection_manager",
           "typedConfig": {
             "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
-            "statPrefix": "outbound_0.0.0.0_13602",
+            "statPrefix": "outbound_10.101.20.10_13602",
             "rds": {
               "configSource": {
                 "ads": {}
               },
-              "routeConfigName": "13602"
+              "routeConfigName": "break.${NAMESPACE}.svc.cluster.local:13602"
             },
             "httpFilters": [
               {
@@ -171,5 +170,15 @@
   "deprecatedV1": {
     "bindToPort": false
   },
+  "listenerFilters": [
+    {
+      "name": "envoy.listener.tls_inspector"
+    },
+    {
+      "name": "envoy.listener.http_inspector"
+    }
+  ],
+  "listenerFiltersTimeout": "0.100s",
+  "continueOnListenerFiltersTimeout": true,
   "trafficDirection": "OUTBOUND"
 }

Using a ServiceEntry

Placeholder: TODO

Cleanup

kubectl delete --namespace "${NAMESPACE}" --filename ./missing.yml
kubectl delete --namespace "${NAMESPACE}" --filename ./break.yml

rm -f \
  missing-template.yml \
  missing.yml \
  listeners-004.json \
  0.0.0.0_${TCP_PORT}_missing.json \
  listeners-005.json \
  0.0.0.0_${TCP_PORT}_grpc.json \
  0.0.0.0_${TCP_PORT}_grpc-web.json \
  0.0.0.0_${TCP_PORT}_http.json \
  0.0.0.0_${TCP_PORT}_http2.json \
  cluster_ip_${TCP_PORT}_https.json \
  cluster_ip_${TCP_PORT}_mongo.json \
  cluster_ip_${TCP_PORT}_mysql.json \
  cluster_ip_${TCP_PORT}_redis.json \
  cluster_ip_${TCP_PORT}_tcp.json \
  cluster_ip_${TCP_PORT}_tls.json \
  cluster_ip_${TCP_PORT}_no_prefix.json