@kunickiaj
Created April 5, 2019 22:55
0.0 TEL | Telepresence 0.98 launched at Fri Apr 5 15:52:47 2019
0.0 TEL | /usr/local/bin/telepresence --verbose --run-shell
0.0 TEL | Platform: darwin
0.0 TEL | Python 3.6.6 (default, Oct 4 2018, 20:50:27)
0.0 TEL | [GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.2)]
0.0 TEL | [1] Running: uname -a
0.0 1 | Darwin streamsets-adam-1726337.local 18.5.0 Darwin Kernel Version 18.5.0: Mon Mar 11 20:40:32 PDT 2019; root:xnu-4903.251.3~3/RELEASE_X86_64 x86_64
0.0 TEL | [1] ran in 0.00 secs.
0.0 TEL | BEGIN SPAN main.py:40(main)
0.0 TEL | BEGIN SPAN startup.py:74(__init__)
0.0 TEL | Found kubectl -> /usr/local/bin/kubectl
0.0 TEL | Found oc -> /usr/local/bin/oc
0.0 TEL | [2] Capturing: kubectl version --short
0.7 2 | Client Version: v1.14.0
0.7 2 | Server Version: v1.12.6-gke.10
0.7 TEL | [2] captured in 0.65 secs.
0.7 TEL | [3] Capturing: kubectl config current-context
1.1 3 | sx4-dev
1.1 TEL | [3] captured in 0.44 secs.
1.1 TEL | [4] Capturing: kubectl config view -o json
1.5 4 | {
1.5 4 | "kind": "Config",
1.5 4 | "apiVersion": "v1",
1.5 4 | "preferences": {},
1.5 4 | "clusters": [
1.5 4 | {
1.5 4 | "name": "docker-for-desktop-cluster",
1.5 4 | "cluster": {
1.5 4 | "server": "https://localhost:6443",
1.5 4 | "insecure-skip-tls-verify": true
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_streamsets-engineering_us-central1-a_adam",
1.5 4 | "cluster": {
1.5 4 | "server": "https://35.239.24.173",
1.5 4 | "certificate-authority-data": "DATA+OMITTED"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_streamsets-engineering_us-central1-a_keith-istio-sch",
1.5 4 | "cluster": {
1.5 4 | "server": "https://35.225.41.146",
1.5 4 | "certificate-authority-data": "DATA+OMITTED"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_streamsets-engineering_us-central1-c_adam",
1.5 4 | "cluster": {
1.5 4 | "server": "https://35.225.93.8",
1.5 4 | "certificate-authority-data": "DATA+OMITTED"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_streamsets-engineering_us-central1-c_adam-vpc-native",
1.5 4 | "cluster": {
1.5 4 | "server": "https://35.226.182.79",
1.5 4 | "certificate-authority-data": "DATA+OMITTED"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_sx4-dev_us-central1-c_dev-1-clstr",
1.5 4 | "cluster": {
1.5 4 | "server": "https://10.19.19.18",
1.5 4 | "certificate-authority-data": "DATA+OMITTED"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_sx4-dev_us-central1-c_dev-clstr-2",
1.5 4 | "cluster": {
1.5 4 | "server": "https://35.224.250.81",
1.5 4 | "certificate-authority-data": "DATA+OMITTED"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "minikube",
1.5 4 | "cluster": {
1.5 4 | "server": "https://192.168.37.163:8443",
1.5 4 | "certificate-authority": "/Users/adam/.minikube/ca.crt"
1.5 4 | }
1.5 4 | }
1.5 4 | ],
1.5 4 | "users": [
1.5 4 | {
1.5 4 | "name": "docker-for-desktop",
1.5 4 | "user": {
1.5 4 | "client-certificate-data": "REDACTED",
1.5 4 | "client-key-data": "REDACTED"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_streamsets-engineering_us-central1-a_keith-istio-sch",
1.5 4 | "user": {
1.5 4 | "auth-provider": {
1.5 4 | "name": "gcp",
1.5 4 | "config": {
1.5 4 | "access-token": "Masked-by-Telepresence",
1.5 4 | "cmd-args": "config config-helper --format=json",
1.5 4 | "cmd-path": "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud",
1.5 4 | "expiry": "2019-04-05T22:36:14Z",
1.5 4 | "expiry-key": "{.credential.token_expiry}",
1.5 4 | "token-key": "{.credential.access_token}"
1.5 4 | }
1.5 4 | }
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke_sx4-dev_us-central1-c_dev-clstr-2",
1.5 4 | "user": {
1.5 4 | "auth-provider": {
1.5 4 | "name": "gcp",
1.5 4 | "config": {
1.5 4 | "access-token": "Masked-by-Telepresence",
1.5 4 | "cmd-args": "config config-helper --format=json",
1.5 4 | "cmd-path": "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud",
1.5 4 | "expiry": "2019-04-05T23:41:04Z",
1.5 4 | "expiry-key": "{.credential.token_expiry}",
1.5 4 | "token-key": "{.credential.access_token}"
1.5 4 | }
1.5 4 | }
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "minikube",
1.5 4 | "user": {
1.5 4 | "client-certificate": "/Users/adam/.minikube/client.crt",
1.5 4 | "client-key": "/Users/adam/.minikube/client.key"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "sx4-dev-token-user",
1.5 4 | "user": {
1.5 4 | "token": "Masked-by-Telepresence"
1.5 4 | }
1.5 4 | }
1.5 4 | ],
1.5 4 | "contexts": [
1.5 4 | {
1.5 4 | "name": "docker-for-desktop",
1.5 4 | "context": {
1.5 4 | "cluster": "docker-for-desktop-cluster",
1.5 4 | "user": "docker-for-desktop"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "gke-keith",
1.5 4 | "context": {
1.5 4 | "cluster": "gke_streamsets-engineering_us-central1-a_keith-istio-sch",
1.5 4 | "user": "gke_streamsets-engineering_us-central1-a_keith-istio-sch",
1.5 4 | "namespace": "default"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "minikube",
1.5 4 | "context": {
1.5 4 | "cluster": "minikube",
1.5 4 | "user": "minikube"
1.5 4 | }
1.5 4 | },
1.5 4 | {
1.5 4 | "name": "sx4-dev",
1.5 4 | "context": {
1.5 4 | "cluster": "gke_sx4-dev_us-central1-c_dev-clstr-2",
1.5 4 | "user": "gke_sx4-dev_us-central1-c_dev-clstr-2",
1.5 4 | "namespace": "staging"
1.5 4 | }
1.5 4 | }
1.5 4 | ],
1.5 4 | "current-context": "sx4-dev"
1.5 4 | }
1.5 TEL | [4] captured in 0.42 secs.
1.5 TEL | [5] Capturing: kubectl --context sx4-dev get ns staging
2.2 5 | NAME STATUS AGE
2.2 5 | staging Active 6d23h
2.2 TEL | [5] captured in 0.63 secs.
2.4 TEL | Command: kubectl 1.14.0
2.4 TEL | Context: sx4-dev, namespace: staging, version: 1.12.6-gke.10
2.4 TEL | Warning: kubectl 1.14.0 may not work correctly with cluster version 1.12.6-gke.10 due to the version discrepancy. See https://kubernetes.io/docs/setup/version-skew-policy/ for more information.
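
The skew here is two minor versions (client v1.14.0 against server v1.12.6-gke.10), outside the one-minor window the linked policy allows for kubectl. One way to clear the warning is to pin a client matching the server's minor version (a sketch, assuming the historical kubernetes-release download bucket for darwin/amd64):

    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.6/bin/darwin/amd64/kubectl
    chmod +x kubectl
    ./kubectl version --short
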
2.4 TEL | END SPAN startup.py:74(__init__) 2.4s
2.4 TEL | Found ssh -> /usr/bin/ssh
2.4 TEL | [6] Capturing: ssh -V
2.4 6 | OpenSSH_7.9p1, LibreSSL 2.7.3
2.4 TEL | [6] captured in 0.04 secs.
2.4 TEL | Found bash -> /usr/local/bin/bash
2.4 TEL | Found sshuttle-telepresence -> /usr/local/Cellar/telepresence/0.98/libexec/sshuttle-telepresence
2.4 TEL | Found pfctl -> /sbin/pfctl
2.4 TEL | Found sudo -> /usr/bin/sudo
2.4 TEL | [7] Running: sudo -n echo -n
2.5 TEL | [7] ran in 0.04 secs.
2.5 >>> | Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html
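
With vpn-tcp, traffic to cluster IPs is routed through the proxy pod, but hosts outside the cluster network, such as the GKE master endpoints in the kubeconfig above, are not picked up automatically. They can be added per the message's --also-proxy hint (a sketch using this config's dev-clstr-2 endpoint):

    telepresence --method vpn-tcp --also-proxy 35.224.250.81/32 --run-shell
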
2.5 TEL | Found sshfs -> /usr/local/bin/sshfs
2.5 TEL | Found umount -> /sbin/umount
2.5 >>> | Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
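
Once the session is up, remote volumes are mounted locally over sshfs and exposed under $TELEPRESENCE_ROOT inside the launched shell. For example, the service-account secret mounted in the proxy pod later in this log would be readable as:

    ls "$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io/serviceaccount"
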
2.5 TEL | [8] Running: kubectl --context sx4-dev --namespace staging get pods telepresence-connectivity-check --ignore-not-found
3.2 TEL | [8] ran in 0.71 secs.
3.5 TEL | Scout info: {'latest_version': '0.98', 'application': 'telepresence', 'notices': []}
3.5 TEL | BEGIN SPAN deployment.py:57(create_new_deployment)
3.5 >>> | Starting network proxy to cluster using new Deployment telepresence-1554504767-653051-38514
3.5 TEL | [9] Running: kubectl --context sx4-dev --namespace staging delete --ignore-not-found svc,deploy --selector=telepresence=4ab1a87ed83e47d8a335a1984acb5957
4.2 9 | No resources found
4.2 TEL | [9] ran in 0.71 secs.
4.2 TEL | [10] Running: kubectl --context sx4-dev --namespace staging run --restart=Always --limits=cpu=100m,memory=256Mi --requests=cpu=25m,memory=64Mi telepresence-1554504767-653051-38514 --image=datawire/telepresence-k8s:0.98 --labels=telepresence=4ab1a87ed83e47d8a335a1984acb5957
4.9 10 | kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
5.0 10 | deployment.apps/telepresence-1554504767-653051-38514 created
5.0 TEL | [10] ran in 0.75 secs.
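
The deprecation notice above is kubectl 1.14 flagging the old run generators. A rough non-deprecated equivalent of step [10] (a sketch only; kubectl create deployment takes no label or resource flags, so those are bolted on afterwards, and the label lands on the Deployment object rather than the pod template):

    kubectl --context sx4-dev --namespace staging create deployment \
      telepresence-1554504767-653051-38514 --image=datawire/telepresence-k8s:0.98
    kubectl --context sx4-dev --namespace staging label deployment \
      telepresence-1554504767-653051-38514 telepresence=4ab1a87ed83e47d8a335a1984acb5957
    kubectl --context sx4-dev --namespace staging set resources deployment \
      telepresence-1554504767-653051-38514 \
      --limits=cpu=100m,memory=256Mi --requests=cpu=25m,memory=64Mi
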
5.0 TEL | END SPAN deployment.py:57(create_new_deployment) 1.5s
5.0 TEL | BEGIN SPAN remote.py:151(get_remote_info)
5.0 TEL | BEGIN SPAN remote.py:78(get_deployment_json)
5.0 TEL | [11] Capturing: kubectl --context sx4-dev --namespace staging get deployment -o json --selector=telepresence=4ab1a87ed83e47d8a335a1984acb5957
5.7 11 | {
5.7 11 | "apiVersion": "v1",
5.7 11 | "items": [
5.7 11 | {
5.7 11 | "apiVersion": "extensions/v1beta1",
5.7 11 | "kind": "Deployment",
5.7 11 | "metadata": {
5.7 11 | "annotations": {
5.7 11 | "deployment.kubernetes.io/revision": "1"
5.7 11 | },
5.7 11 | "creationTimestamp": "2019-04-05T22:52:52Z",
5.7 11 | "generation": 1,
5.7 11 | "labels": {
5.7 11 | "telepresence": "4ab1a87ed83e47d8a335a1984acb5957"
5.7 11 | },
5.7 11 | "name": "telepresence-1554504767-653051-38514",
5.7 11 | "namespace": "staging",
5.7 11 | "resourceVersion": "6542434",
5.7 11 | "selfLink": "/apis/extensions/v1beta1/namespaces/staging/deployments/telepresence-1554504767-653051-38514",
5.7 11 | "uid": "8b3962d7-57f5-11e9-9434-42010a80004c"
5.7 11 | },
5.7 11 | "spec": {
5.7 11 | "progressDeadlineSeconds": 600,
5.7 11 | "replicas": 1,
5.7 11 | "revisionHistoryLimit": 10,
5.7 11 | "selector": {
5.7 11 | "matchLabels": {
5.7 11 | "telepresence": "4ab1a87ed83e47d8a335a1984acb5957"
5.7 11 | }
5.7 11 | },
5.7 11 | "strategy": {
5.7 11 | "rollingUpdate": {
5.7 11 | "maxSurge": "25%",
5.7 11 | "maxUnavailable": "25%"
5.7 11 | },
5.7 11 | "type": "RollingUpdate"
5.7 11 | },
5.7 11 | "template": {
5.7 11 | "metadata": {
5.7 11 | "creationTimestamp": null,
5.7 11 | "labels": {
5.7 11 | "telepresence": "4ab1a87ed83e47d8a335a1984acb5957"
5.7 11 | }
5.7 11 | },
5.7 11 | "spec": {
5.7 11 | "containers": [
5.7 11 | {
5.7 11 | "image": "datawire/telepresence-k8s:0.98",
5.7 11 | "imagePullPolicy": "IfNotPresent",
5.7 11 | "name": "telepresence-1554504767-653051-38514",
5.7 11 | "resources": {
5.7 11 | "limits": {
5.7 11 | "cpu": "100m",
5.7 11 | "memory": "256Mi"
5.7 11 | },
5.7 11 | "requests": {
5.7 11 | "cpu": "25m",
5.7 11 | "memory": "64Mi"
5.7 11 | }
5.7 11 | },
5.7 11 | "terminationMessagePath": "/dev/termination-log",
5.7 11 | "terminationMessagePolicy": "File"
5.7 11 | }
5.7 11 | ],
5.7 11 | "dnsPolicy": "ClusterFirst",
5.7 11 | "restartPolicy": "Always",
5.7 11 | "schedulerName": "default-scheduler",
5.7 11 | "securityContext": {},
5.7 11 | "terminationGracePeriodSeconds": 30
5.7 11 | }
5.7 11 | }
5.7 11 | },
5.7 11 | "status": {
5.7 11 | "conditions": [
5.7 11 | {
5.7 11 | "lastTransitionTime": "2019-04-05T22:52:52Z",
5.7 11 | "lastUpdateTime": "2019-04-05T22:52:52Z",
5.7 11 | "message": "Deployment does not have minimum availability.",
5.7 11 | "reason": "MinimumReplicasUnavailable",
5.7 11 | "status": "False",
5.7 11 | "type": "Available"
5.7 11 | },
5.7 11 | {
5.7 11 | "lastTransitionTime": "2019-04-05T22:52:52Z",
5.7 11 | "lastUpdateTime": "2019-04-05T22:52:52Z",
5.7 11 | "message": "ReplicaSet \"telepresence-1554504767-653051-38514-8665fb57fb\" is progressing.",
5.7 11 | "reason": "ReplicaSetUpdated",
5.7 11 | "status": "True",
5.7 11 | "type": "Progressing"
5.7 11 | }
5.7 11 | ],
5.7 11 | "observedGeneration": 1,
5.7 11 | "replicas": 1,
5.7 11 | "unavailableReplicas": 1,
5.7 11 | "updatedReplicas": 1
5.7 11 | }
5.7 11 | }
5.7 11 | ],
5.7 11 | "kind": "List",
5.7 11 | "metadata": {
5.7 11 | "resourceVersion": "",
5.7 11 | "selfLink": ""
5.7 11 | }
5.7 11 | }
5.7 TEL | [11] captured in 0.68 secs.
5.7 TEL | END SPAN remote.py:78(get_deployment_json) 0.7s
5.7 TEL | Searching for Telepresence pod:
5.7 TEL | with name telepresence-1554504767-653051-38514-*
5.7 TEL | with labels {'telepresence': '4ab1a87ed83e47d8a335a1984acb5957'}
5.7 TEL | [12] Capturing: kubectl --context sx4-dev --namespace staging get pod -o json --selector=telepresence=4ab1a87ed83e47d8a335a1984acb5957
6.4 12 | {
6.4 12 | "apiVersion": "v1",
6.4 12 | "items": [
6.4 12 | {
6.4 12 | "apiVersion": "v1",
6.4 12 | "kind": "Pod",
6.4 12 | "metadata": {
6.4 12 | "annotations": {
6.4 12 | "sidecar.istio.io/status": "{\"version\":\"299b0fe3441985f893a8b9fcca72a53989f22ef3d7b53148499683a078e54915\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}"
6.4 12 | },
6.4 12 | "creationTimestamp": "2019-04-05T22:52:52Z",
6.4 12 | "generateName": "telepresence-1554504767-653051-38514-8665fb57fb-",
6.4 12 | "labels": {
6.4 12 | "pod-template-hash": "8665fb57fb",
6.4 12 | "telepresence": "4ab1a87ed83e47d8a335a1984acb5957"
6.4 12 | },
6.4 12 | "name": "telepresence-1554504767-653051-38514-8665fb57fb-xwhmv",
6.4 12 | "namespace": "staging",
6.4 12 | "ownerReferences": [
6.4 12 | {
6.4 12 | "apiVersion": "apps/v1",
6.4 12 | "blockOwnerDeletion": true,
6.4 12 | "controller": true,
6.4 12 | "kind": "ReplicaSet",
6.4 12 | "name": "telepresence-1554504767-653051-38514-8665fb57fb",
6.4 12 | "uid": "8b3b3a8a-57f5-11e9-9434-42010a80004c"
6.4 12 | }
6.4 12 | ],
6.4 12 | "resourceVersion": "6542435",
6.4 12 | "selfLink": "/api/v1/namespaces/staging/pods/telepresence-1554504767-653051-38514-8665fb57fb-xwhmv",
6.4 12 | "uid": "8b3fc640-57f5-11e9-9434-42010a80004c"
6.4 12 | },
6.4 12 | "spec": {
6.4 12 | "containers": [
6.4 12 | {
6.4 12 | "image": "datawire/telepresence-k8s:0.98",
6.4 12 | "imagePullPolicy": "IfNotPresent",
6.4 12 | "name": "telepresence-1554504767-653051-38514",
6.4 12 | "resources": {
6.4 12 | "limits": {
6.4 12 | "cpu": "100m",
6.4 12 | "memory": "256Mi"
6.4 12 | },
6.4 12 | "requests": {
6.4 12 | "cpu": "25m",
6.4 12 | "memory": "64Mi"
6.4 12 | }
6.4 12 | },
6.4 12 | "terminationMessagePath": "/dev/termination-log",
6.4 12 | "terminationMessagePolicy": "File",
6.4 12 | "volumeMounts": [
6.4 12 | {
6.4 12 | "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
6.4 12 | "name": "default-token-zzt6d",
6.4 12 | "readOnly": true
6.4 12 | }
6.4 12 | ]
6.4 12 | },
6.4 12 | {
6.4 12 | "args": [
6.4 12 | "proxy",
6.4 12 | "sidecar",
6.4 12 | "--configPath",
6.4 12 | "/etc/istio/proxy",
6.4 12 | "--binaryPath",
6.4 12 | "/usr/local/bin/envoy",
6.4 12 | "--serviceCluster",
6.4 12 | "istio-proxy",
6.4 12 | "--drainDuration",
6.4 12 | "45s",
6.4 12 | "--parentShutdownDuration",
6.4 12 | "1m0s",
6.4 12 | "--discoveryAddress",
6.4 12 | "istio-pilot.istio-system:15005",
6.4 12 | "--discoveryRefreshDelay",
6.4 12 | "1s",
6.4 12 | "--zipkinAddress",
6.4 12 | "zipkin.istio-system:9411",
6.4 12 | "--connectTimeout",
6.4 12 | "10s",
6.4 12 | "--proxyAdminPort",
6.4 12 | "15000",
6.4 12 | "--controlPlaneAuthPolicy",
6.4 12 | "MUTUAL_TLS"
6.4 12 | ],
6.4 12 | "env": [
6.4 12 | {
6.4 12 | "name": "POD_NAME",
6.4 12 | "valueFrom": {
6.4 12 | "fieldRef": {
6.4 12 | "apiVersion": "v1",
6.4 12 | "fieldPath": "metadata.name"
6.4 12 | }
6.4 12 | }
6.4 12 | },
6.4 12 | {
6.4 12 | "name": "POD_NAMESPACE",
6.4 12 | "valueFrom": {
6.4 12 | "fieldRef": {
6.4 12 | "apiVersion": "v1",
6.4 12 | "fieldPath": "metadata.namespace"
6.4 12 | }
6.4 12 | }
6.4 12 | },
6.4 12 | {
6.4 12 | "name": "INSTANCE_IP",
6.4 12 | "valueFrom": {
6.4 12 | "fieldRef": {
6.4 12 | "apiVersion": "v1",
6.4 12 | "fieldPath": "status.podIP"
6.4 12 | }
6.4 12 | }
6.4 12 | },
6.4 12 | {
6.4 12 | "name": "ISTIO_META_POD_NAME",
6.4 12 | "valueFrom": {
6.4 12 | "fieldRef": {
6.4 12 | "apiVersion": "v1",
6.4 12 | "fieldPath": "metadata.name"
6.4 12 | }
6.4 12 | }
6.4 12 | },
6.4 12 | {
6.4 12 | "name": "ISTIO_META_INTERCEPTION_MODE",
6.4 12 | "value": "REDIRECT"
6.4 12 | },
6.4 12 | {
6.4 12 | "name": "ISTIO_METAJSON_LABELS",
6.4 12 | "value": "{\"pod-template-hash\":\"8665fb57fb\",\"telepresence\":\"4ab1a87ed83e47d8a335a1984acb5957\"}\n"
6.4 12 | }
6.4 12 | ],
6.4 12 | "image": "gcr.io/gke-release/istio/proxyv2:1.0.6-gke.3",
6.4 12 | "imagePullPolicy": "IfNotPresent",
6.4 12 | "name": "istio-proxy",
6.4 12 | "ports": [
6.4 12 | {
6.4 12 | "containerPort": 15090,
6.4 12 | "name": "http-envoy-prom",
6.4 12 | "protocol": "TCP"
6.4 12 | }
6.4 12 | ],
6.4 12 | "resources": {
6.4 12 | "requests": {
6.4 12 | "cpu": "10m"
6.4 12 | }
6.4 12 | },
6.4 12 | "securityContext": {
6.4 12 | "procMount": "Default",
6.4 12 | "readOnlyRootFilesystem": true,
6.4 12 | "runAsUser": 1337
6.4 12 | },
6.4 12 | "terminationMessagePath": "/dev/termination-log",
6.4 12 | "terminationMessagePolicy": "File",
6.4 12 | "volumeMounts": [
6.4 12 | {
6.4 12 | "mountPath": "/etc/istio/proxy",
6.4 12 | "name": "istio-envoy"
6.4 12 | },
6.4 12 | {
6.4 12 | "mountPath": "/etc/certs/",
6.4 12 | "name": "istio-certs",
6.4 12 | "readOnly": true
6.4 12 | }
6.4 12 | ]
6.4 12 | }
6.4 12 | ],
6.4 12 | "dnsPolicy": "ClusterFirst",
6.4 12 | "initContainers": [
6.4 12 | {
6.4 12 | "args": [
6.4 12 | "-p",
6.4 12 | "15001",
6.4 12 | "-u",
6.4 12 | "1337",
6.4 12 | "-m",
6.4 12 | "REDIRECT",
6.4 12 | "-i",
6.4 12 | "*",
6.4 12 | "-x",
6.4 12 | "",
6.4 12 | "-b",
6.4 12 | "",
6.4 12 | "-d",
6.4 12 | ""
6.4 12 | ],
6.4 12 | "image": "gcr.io/gke-release/istio/proxy_init:1.0.6-gke.3",
6.4 12 | "imagePullPolicy": "IfNotPresent",
6.4 12 | "name": "istio-init",
6.4 12 | "resources": {},
6.4 12 | "securityContext": {
6.4 12 | "capabilities": {
6.4 12 | "add": [
6.4 12 | "NET_ADMIN"
6.4 12 | ]
6.4 12 | },
6.4 12 | "privileged": true,
6.4 12 | "procMount": "Default"
6.4 12 | },
6.4 12 | "terminationMessagePath": "/dev/termination-log",
6.4 12 | "terminationMessagePolicy": "File"
6.4 12 | }
6.4 12 | ],
6.4 12 | "nodeName": "gke-dev-clstr-2-pool-3-4d7fa6d1-43kj",
6.4 12 | "priority": 0,
6.4 12 | "restartPolicy": "Always",
6.4 12 | "schedulerName": "default-scheduler",
6.4 12 | "securityContext": {},
6.4 12 | "serviceAccount": "default",
6.4 12 | "serviceAccountName": "default",
6.4 12 | "terminationGracePeriodSeconds": 30,
6.4 12 | "tolerations": [
6.4 12 | {
6.4 12 | "effect": "NoExecute",
6.4 12 | "key": "node.kubernetes.io/not-ready",
6.4 12 | "operator": "Exists",
6.4 12 | "tolerationSeconds": 300
6.4 12 | },
6.4 12 | {
6.4 12 | "effect": "NoExecute",
6.4 12 | "key": "node.kubernetes.io/unreachable",
6.4 12 | "operator": "Exists",
6.4 12 | "tolerationSeconds": 300
6.4 12 | }
6.4 12 | ],
6.4 12 | "volumes": [
6.4 12 | {
6.4 12 | "name": "default-token-zzt6d",
6.4 12 | "secret": {
6.4 12 | "defaultMode": 420,
6.4 12 | "secretName": "default-token-zzt6d"
6.4 12 | }
6.4 12 | },
6.4 12 | {
6.4 12 | "emptyDir": {
6.4 12 | "medium": "Memory"
6.4 12 | },
6.4 12 | "name": "istio-envoy"
6.4 12 | },
6.4 12 | {
6.4 12 | "name": "istio-certs",
6.4 12 | "secret": {
6.4 12 | "defaultMode": 420,
6.4 12 | "optional": true,
6.4 12 | "secretName": "istio.default"
6.4 12 | }
6.4 12 | }
6.4 12 | ]
6.4 12 | },
6.4 12 | "status": {
6.4 12 | "conditions": [
6.4 12 | {
6.4 12 | "lastProbeTime": null,
6.4 12 | "lastTransitionTime": "2019-04-05T22:52:52Z",
6.4 12 | "message": "containers with incomplete status: [istio-init]",
6.4 12 | "reason": "ContainersNotInitialized",
6.4 12 | "status": "False",
6.4 12 | "type": "Initialized"
6.4 12 | },
6.4 12 | {
6.4 12 | "lastProbeTime": null,
6.4 12 | "lastTransitionTime": "2019-04-05T22:52:52Z",
6.4 12 | "message": "containers with unready status: [telepresence-1554504767-653051-38514 istio-proxy]",
6.4 12 | "reason": "ContainersNotReady",
6.4 12 | "status": "False",
6.4 12 | "type": "Ready"
6.4 12 | },
6.4 12 | {
6.4 12 | "lastProbeTime": null,
6.4 12 | "lastTransitionTime": "2019-04-05T22:52:52Z",
6.4 12 | "message": "containers with unready status: [telepresence-1554504767-653051-38514 istio-proxy]",
6.4 12 | "reason": "ContainersNotReady",
6.4 12 | "status": "False",
6.4 12 | "type": "ContainersReady"
6.4 12 | },
6.4 12 | {
6.4 12 | "lastProbeTime": null,
6.4 12 | "lastTransitionTime": "2019-04-05T22:52:52Z",
6.4 12 | "status": "True",
6.4 12 | "type": "PodScheduled"
6.4 12 | }
6.4 12 | ],
6.4 12 | "containerStatuses": [
6.4 12 | {
6.4 12 | "image": "gcr.io/gke-release/istio/proxyv2:1.0.6-gke.3",
6.4 12 | "imageID": "",
6.4 12 | "lastState": {},
6.4 12 | "name": "istio-proxy",
6.4 12 | "ready": false,
6.4 12 | "restartCount": 0,
6.4 12 | "state": {
6.4 12 | "waiting": {
6.4 12 | "reason": "PodInitializing"
6.4 12 | }
6.4 12 | }
6.4 12 | },
6.4 12 | {
6.4 12 | "image": "datawire/telepresence-k8s:0.98",
6.4 12 | "imageID": "",
6.4 12 | "lastState": {},
6.4 12 | "name": "telepresence-1554504767-653051-38514",
6.4 12 | "ready": false,
6.4 12 | "restartCount": 0,
6.4 12 | "state": {
6.4 12 | "waiting": {
6.4 12 | "reason": "PodInitializing"
6.4 12 | }
6.4 12 | }
6.4 12 | }
6.4 12 | ],
6.4 12 | "hostIP": "172.16.0.18",
6.4 12 | "initContainerStatuses": [
6.4 12 | {
6.4 12 | "image": "gcr.io/gke-release/istio/proxy_init:1.0.6-gke.3",
6.4 12 | "imageID": "",
6.4 12 | "lastState": {},
6.4 12 | "name": "istio-init",
6.4 12 | "ready": false,
6.4 12 | "restartCount": 0,
6.4 12 | "state": {
6.4 12 | "waiting": {
6.4 12 | "reason": "PodInitializing"
6.4 12 | }
6.4 12 | }
6.4 12 | }
6.4 12 | ],
6.4 12 | "phase": "Pending",
6.4 12 | "qosClass": "Burstable",
6.4 12 | "startTime": "2019-04-05T22:52:52Z"
6.4 12 | }
6.4 12 | }
6.4 12 | ],
6.4 12 | "kind": "List",
6.4 12 | "metadata": {
6.4 12 | "resourceVersion": "",
6.4 12 | "selfLink": ""
6.4 12 | }
6.4 12 | }
6.4 TEL | [12] captured in 0.69 secs.
6.4 TEL | Checking telepresence-1554504767-653051-38514-8665fb57fb-xwhmv
6.4 TEL | Looks like we've found our pod!
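
Steps [11] and [12] resolve the Deployment to its pod purely by the telepresence label. The same lookup can be reproduced with a jsonpath query (a sketch using the label value from this run):

    kubectl --context sx4-dev --namespace staging get pod \
      --selector=telepresence=4ab1a87ed83e47d8a335a1984acb5957 \
      -o jsonpath='{.items[0].metadata.name}'
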
6.4 TEL | BEGIN SPAN remote.py:113(wait_for_pod)
6.4 TEL | [13] Capturing: kubectl --context sx4-dev --namespace staging get pod telepresence-1554504767-653051-38514-8665fb57fb-xwhmv -o json
7.0 13 | {
7.0 13 | "apiVersion": "v1",
7.0 13 | "kind": "Pod",
7.0 13 | "metadata": {
7.0 13 | "annotations": {
7.0 13 | "sidecar.istio.io/status": "{\"version\":\"299b0fe3441985f893a8b9fcca72a53989f22ef3d7b53148499683a078e54915\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}"
7.0 13 | },
7.0 13 | "creationTimestamp": "2019-04-05T22:52:52Z",
7.0 13 | "generateName": "telepresence-1554504767-653051-38514-8665fb57fb-",
7.0 13 | "labels": {
7.0 13 | "pod-template-hash": "8665fb57fb",
7.0 13 | "telepresence": "4ab1a87ed83e47d8a335a1984acb5957"
7.0 13 | },
7.0 13 | "name": "telepresence-1554504767-653051-38514-8665fb57fb-xwhmv",
7.0 13 | "namespace": "staging",
7.0 13 | "ownerReferences": [
7.0 13 | {
7.0 13 | "apiVersion": "apps/v1",
7.0 13 | "blockOwnerDeletion": true,
7.0 13 | "controller": true,
7.0 13 | "kind": "ReplicaSet",
7.0 13 | "name": "telepresence-1554504767-653051-38514-8665fb57fb",
7.0 13 | "uid": "8b3b3a8a-57f5-11e9-9434-42010a80004c"
7.0 13 | }
7.0 13 | ],
7.0 13 | "resourceVersion": "6542447",
7.0 13 | "selfLink": "/api/v1/namespaces/staging/pods/telepresence-1554504767-653051-38514-8665fb57fb-xwhmv",
7.0 13 | "uid": "8b3fc640-57f5-11e9-9434-42010a80004c"
7.0 13 | },
7.0 13 | "spec": {
7.0 13 | "containers": [
7.0 13 | {
7.0 13 | "image": "datawire/telepresence-k8s:0.98",
7.0 13 | "imagePullPolicy": "IfNotPresent",
7.0 13 | "name": "telepresence-1554504767-653051-38514",
7.0 13 | "resources": {
7.0 13 | "limits": {
7.0 13 | "cpu": "100m",
7.0 13 | "memory": "256Mi"
7.0 13 | },
7.0 13 | "requests": {
7.0 13 | "cpu": "25m",
7.0 13 | "memory": "64Mi"
7.0 13 | }
7.0 13 | },
7.0 13 | "terminationMessagePath": "/dev/termination-log",
7.0 13 | "terminationMessagePolicy": "File",
7.0 13 | "volumeMounts": [
7.0 13 | {
7.0 13 | "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
7.0 13 | "name": "default-token-zzt6d",
7.0 13 | "readOnly": true
7.0 13 | }
7.0 13 | ]
7.0 13 | },
7.0 13 | {
7.0 13 | "args": [
7.0 13 | "proxy",
7.0 13 | "sidecar",
7.0 13 | "--configPath",
7.0 13 | "/etc/istio/proxy",
7.0 13 | "--binaryPath",
7.0 13 | "/usr/local/bin/envoy",
7.0 13 | "--serviceCluster",
7.0 13 | "istio-proxy",
7.0 13 | "--drainDuration",
7.0 13 | "45s",
7.0 13 | "--parentShutdownDuration",
7.0 13 | "1m0s",
7.0 13 | "--discoveryAddress",
7.0 13 | "istio-pilot.istio-system:15005",
7.0 13 | "--discoveryRefreshDelay",
7.0 13 | "1s",
7.0 13 | "--zipkinAddress",
7.0 13 | "zipkin.istio-system:9411",
7.0 13 | "--connectTimeout",
7.0 13 | "10s",
7.0 13 | "--proxyAdminPort",
7.0 13 | "15000",
7.0 13 | "--controlPlaneAuthPolicy",
7.0 13 | "MUTUAL_TLS"
7.0 13 | ],
7.0 13 | "env": [
7.0 13 | {
7.0 13 | "name": "POD_NAME",
7.0 13 | "valueFrom": {
7.0 13 | "fieldRef": {
7.0 13 | "apiVersion": "v1",
7.0 13 | "fieldPath": "metadata.name"
7.0 13 | }
7.0 13 | }
7.0 13 | },
7.0 13 | {
7.0 13 | "name": "POD_NAMESPACE",
7.0 13 | "valueFrom": {
7.0 13 | "fieldRef": {
7.0 13 | "apiVersion": "v1",
7.0 13 | "fieldPath": "metadata.namespace"
7.0 13 | }
7.0 13 | }
7.0 13 | },
7.0 13 | {
7.0 13 | "name": "INSTANCE_IP",
7.0 13 | "valueFrom": {
7.0 13 | "fieldRef": {
7.0 13 | "apiVersion": "v1",
7.0 13 | "fieldPath": "status.podIP"
7.0 13 | }
7.0 13 | }
7.0 13 | },
7.0 13 | {
7.0 13 | "name": "ISTIO_META_POD_NAME",
7.0 13 | "valueFrom": {
7.0 13 | "fieldRef": {
7.0 13 | "apiVersion": "v1",
7.0 13 | "fieldPath": "metadata.name"
7.0 13 | }
7.0 13 | }
7.0 13 | },
7.0 13 | {
7.0 13 | "name": "ISTIO_META_INTERCEPTION_MODE",
7.0 13 | "value": "REDIRECT"
7.0 13 | },
7.0 13 | {
7.0 13 | "name": "ISTIO_METAJSON_LABELS",
7.0 13 | "value": "{\"pod-template-hash\":\"8665fb57fb\",\"telepresence\":\"4ab1a87ed83e47d8a335a1984acb5957\"}\n"
7.0 13 | }
7.0 13 | ],
7.0 13 | "image": "gcr.io/gke-release/istio/proxyv2:1.0.6-gke.3",
7.0 13 | "imagePullPolicy": "IfNotPresent",
7.0 13 | "name": "istio-proxy",
7.0 13 | "ports": [
7.0 13 | {
7.0 13 | "containerPort": 15090,
7.0 13 | "name": "http-envoy-prom",
7.0 13 | "protocol": "TCP"
7.0 13 | }
7.0 13 | ],
7.0 13 | "resources": {
7.0 13 | "requests": {
7.0 13 | "cpu": "10m"
7.0 13 | }
7.0 13 | },
7.0 13 | "securityContext": {
7.0 13 | "procMount": "Default",
7.0 13 | "readOnlyRootFilesystem": true,
7.0 13 | "runAsUser": 1337
7.0 13 | },
7.0 13 | "terminationMessagePath": "/dev/termination-log",
7.0 13 | "terminationMessagePolicy": "File",
7.0 13 | "volumeMounts": [
7.0 13 | {
7.0 13 | "mountPath": "/etc/istio/proxy",
7.0 13 | "name": "istio-envoy"
7.0 13 | },
7.0 13 | {
7.0 13 | "mountPath": "/etc/certs/",
7.0 13 | "name": "istio-certs",
7.0 13 | "readOnly": true
7.0 13 | }
7.0 13 | ]
7.0 13 | }
7.0 13 | ],
7.0 13 | "dnsPolicy": "ClusterFirst",
7.0 13 | "initContainers": [
7.0 13 | {
7.0 13 | "args": [
7.0 13 | "-p",
7.0 13 | "15001",
7.0 13 | "-u",
7.0 13 | "1337",
7.0 13 | "-m",
7.0 13 | "REDIRECT",
7.0 13 | "-i",
7.0 13 | "*",
7.0 13 | "-x",
7.0 13 | "",
7.0 13 | "-b",
7.0 13 | "",
7.0 13 | "-d",
7.0 13 | ""
7.0 13 | ],
7.0 13 | "image": "gcr.io/gke-release/istio/proxy_init:1.0.6-gke.3",
7.0 13 | "imagePullPolicy": "IfNotPresent",
7.0 13 | "name": "istio-init",
7.0 13 | "resources": {},
7.0 13 | "securityContext": {
7.0 13 | "capabilities": {
7.0 13 | "add": [
7.0 13 | "NET_ADMIN"
7.0 13 | ]
7.0 13 | },
7.0 13 | "privileged": true,
7.0 13 | "procMount": "Default"
7.0 13 | },
7.0 13 | "terminationMessagePath": "/dev/termination-log",
7.0 13 | "terminationMessagePolicy": "File"
7.0 13 | }
7.0 13 | ],
7.0 13 | "nodeName": "gke-dev-clstr-2-pool-3-4d7fa6d1-43kj",
7.0 13 | "priority": 0,
7.0 13 | "restartPolicy": "Always",
7.0 13 | "schedulerName": "default-scheduler",
7.0 13 | "securityContext": {},
7.0 13 | "serviceAccount": "default",
7.0 13 | "serviceAccountName": "default",
7.0 13 | "terminationGracePeriodSeconds": 30,
7.0 13 | "tolerations": [
7.0 13 | {
7.0 13 | "effect": "NoExecute",
7.0 13 | "key": "node.kubernetes.io/not-ready",
7.0 13 | "operator": "Exists",
7.0 13 | "tolerationSeconds": 300
7.0 13 | },
7.0 13 | {
7.0 13 | "effect": "NoExecute",
7.0 13 | "key": "node.kubernetes.io/unreachable",
7.0 13 | "operator": "Exists",
7.0 13 | "tolerationSeconds": 300
7.0 13 | }
7.0 13 | ],
7.0 13 | "volumes": [
7.0 13 | {
7.0 13 | "name": "default-token-zzt6d",
7.0 13 | "secret": {
7.0 13 | "defaultMode": 420,
7.0 13 | "secretName": "default-token-zzt6d"
7.0 13 | }
7.0 13 | },
7.0 13 | {
7.0 13 | "emptyDir": {
7.0 13 | "medium": "Memory"
7.0 13 | },
7.0 13 | "name": "istio-envoy"
7.0 13 | },
7.0 13 | {
7.0 13 | "name": "istio-certs",
7.0 13 | "secret": {
7.0 13 | "defaultMode": 420,
7.0 13 | "optional": true,
7.0 13 | "secretName": "istio.default"
7.0 13 | }
7.0 13 | }
7.0 13 | ]
7.0 13 | },
7.0 13 | "status": {
7.0 13 | "conditions": [
7.0 13 | {
7.0 13 | "lastProbeTime": null,
7.0 13 | "lastTransitionTime": "2019-04-05T22:52:54Z",
7.0 13 | "status": "True",
7.0 13 | "type": "Initialized"
7.0 13 | },
7.0 13 | {
7.0 13 | "lastProbeTime": null,
7.0 13 | "lastTransitionTime": "2019-04-05T22:52:52Z",
7.0 13 | "message": "containers with unready status: [telepresence-1554504767-653051-38514 istio-proxy]",
7.0 13 | "reason": "ContainersNotReady",
7.0 13 | "status": "False",
7.0 13 | "type": "Ready"
7.0 13 | },
7.0 13 | {
7.0 13 | "lastProbeTime": null,
7.0 13 | "lastTransitionTime": "2019-04-05T22:52:52Z",
7.0 13 | "message": "containers with unready status: [telepresence-1554504767-653051-38514 istio-proxy]",
7.0 13 | "reason": "ContainersNotReady",
7.0 13 | "status": "False",
7.0 13 | "type": "ContainersReady"
7.0 13 | },
7.0 13 | {
7.0 13 | "lastProbeTime": null,
7.0 13 | "lastTransitionTime": "2019-04-05T22:52:52Z",
7.0 13 | "status": "True",
7.0 13 | "type": "PodScheduled"
7.0 13 | }
7.0 13 | ],
7.0 13 | "containerStatuses": [
7.0 13 | {
7.0 13 | "image": "gcr.io/gke-release/istio/proxyv2:1.0.6-gke.3",
7.0 13 | "imageID": "",
7.0 13 | "lastState": {},
7.0 13 | "name": "istio-proxy",
7.0 13 | "ready": false,
7.0 13 | "restartCount": 0,
7.0 13 | "state": {
7.0 13 | "waiting": {
7.0 13 | "reason": "PodInitializing"
7.0 13 | }
7.0 13 | }
7.0 13 | },
7.0 13 | {
7.0 13 | "image": "datawire/telepresence-k8s:0.98",
7.0 13 | "imageID": "",
7.0 13 | "lastState": {},
7.0 13 | "name": "telepresence-1554504767-653051-38514",
7.0 13 | "ready": false,
7.0 13 | "restartCount": 0,
7.0 13 | "state": {
7.0 13 | "waiting": {
7.0 13 | "reason": "PodInitializing"
7.0 13 | }
7.0 13 | }
7.0 13 | }
7.0 13 | ],
7.0 13 | "hostIP": "172.16.0.18",
7.0 13 | "initContainerStatuses": [
7.0 13 | {
7.0 13 | "containerID": "docker://e6e6b676c795482feaa7b0d1a0e2a550d17daa696498ed0e6672a38a7e14487b",
7.0 13 | "image": "gcr.io/gke-release/istio/proxy_init:1.0.6-gke.3",
7.0 13 | "imageID": "docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:4b2b224bf10f3623b0233a20bc2c1ba051b9cdd7f292f055281e64aecb542f51",
7.0 13 | "lastState": {},
7.0 13 | "name": "istio-init",
7.0 13 | "ready": true,
7.0 13 | "restartCount": 0,
7.0 13 | "state": {
7.0 13 | "terminated": {
7.0 13 | "containerID": "docker://e6e6b676c795482feaa7b0d1a0e2a550d17daa696498ed0e6672a38a7e14487b",
7.0 13 | "exitCode": 0,
7.0 13 | "finishedAt": "2019-04-05T22:52:54Z",
7.0 13 | "reason": "Completed",
7.0 13 | "startedAt": "2019-04-05T22:52:53Z"
7.0 13 | }
7.0 13 | }
7.0 13 | }
7.0 13 | ],
7.0 13 | "phase": "Pending",
7.0 13 | "podIP": "172.19.6.33",
7.0 13 | "qosClass": "Burstable",
7.0 13 | "startTime": "2019-04-05T22:52:52Z"
7.0 13 | }
7.0 13 | }
7.0 TEL | [13] captured in 0.66 secs.
7.3 TEL | [14] Capturing: kubectl --context sx4-dev --namespace staging get pod telepresence-1554504767-653051-38514-8665fb57fb-xwhmv -o json
8.0 14 | {
8.0 14 | "apiVersion": "v1",
8.0 14 | "kind": "Pod",
8.0 14 | "metadata": {
8.0 14 | "annotations": {
8.0 14 | "sidecar.istio.io/status": "{\"version\":\"299b0fe3441985f893a8b9fcca72a53989f22ef3d7b53148499683a078e54915\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}"
8.0 14 | },
8.0 14 | "creationTimestamp": "2019-04-05T22:52:52Z",
8.0 14 | "generateName": "telepresence-1554504767-653051-38514-8665fb57fb-",
8.0 14 | "labels": {
8.0 14 | "pod-template-hash": "8665fb57fb",
8.0 14 | "telepresence": "4ab1a87ed83e47d8a335a1984acb5957"
8.0 14 | },
8.0 14 | "name": "telepresence-1554504767-653051-38514-8665fb57fb-xwhmv",
8.0 14 | "namespace": "staging",
8.0 14 | "ownerReferences": [
8.0 14 | {
8.0 14 | "apiVersion": "apps/v1",
8.0 14 | "blockOwnerDeletion": true,
8.0 14 | "controller": true,
8.0 14 | "kind": "ReplicaSet",
8.0 14 | "name": "telepresence-1554504767-653051-38514-8665fb57fb",
8.0 14 | "uid": "8b3b3a8a-57f5-11e9-9434-42010a80004c"
8.0 14 | }
8.0 14 | ],
8.0 14 | "resourceVersion": "6542452",
8.0 14 | "selfLink": "/api/v1/namespaces/staging/pods/telepresence-1554504767-653051-38514-8665fb57fb-xwhmv",
8.0 14 | "uid": "8b3fc640-57f5-11e9-9434-42010a80004c"
8.0 14 | },
8.0 14 | "spec": {
8.0 14 | "containers": [
8.0 14 | {
8.0 14 | "image": "datawire/telepresence-k8s:0.98",
8.0 14 | "imagePullPolicy": "IfNotPresent",
8.0 14 | "name": "telepresence-1554504767-653051-38514",
8.0 14 | "resources": {
8.0 14 | "limits": {
8.0 14 | "cpu": "100m",
8.0 14 | "memory": "256Mi"
8.0 14 | },
8.0 14 | "requests": {
8.0 14 | "cpu": "25m",
8.0 14 | "memory": "64Mi"
8.0 14 | }
8.0 14 | },
8.0 14 | "terminationMessagePath": "/dev/termination-log",
8.0 14 | "terminationMessagePolicy": "File",
8.0 14 | "volumeMounts": [
8.0 14 | {
8.0 14 | "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
8.0 14 | "name": "default-token-zzt6d",
8.0 14 | "readOnly": true
8.0 14 | }
8.0 14 | ]
8.0 14 | },
8.0 14 | {
8.0 14 | "args": [
8.0 14 | "proxy",
8.0 14 | "sidecar",
8.0 14 | "--configPath",
8.0 14 | "/etc/istio/proxy",
8.0 14 | "--binaryPath",
8.0 14 | "/usr/local/bin/envoy",
8.0 14 | "--serviceCluster",
8.0 14 | "istio-proxy",
8.0 14 | "--drainDuration",
8.0 14 | "45s",
8.0 14 | "--parentShutdownDuration",
8.0 14 | "1m0s",
8.0 14 | "--discoveryAddress",
8.0 14 | "istio-pilot.istio-system:15005",
8.0 14 | "--discoveryRefreshDelay",
8.0 14 | "1s",
8.0 14 | "--zipkinAddress",
8.0 14 | "zipkin.istio-system:9411",
8.0 14 | "--connectTimeout",
8.0 14 | "10s",
8.0 14 | "--proxyAdminPort",
8.0 14 | "15000",
8.0 14 | "--controlPlaneAuthPolicy",
8.0 14 | "MUTUAL_TLS"
8.0 14 | ],
8.0 14 | "env": [
8.0 14 | {
8.0 14 | "name": "POD_NAME",
8.0 14 | "valueFrom": {
8.0 14 | "fieldRef": {
8.0 14 | "apiVersion": "v1",
8.0 14 | "fieldPath": "metadata.name"
8.0 14 | }
8.0 14 | }
8.0 14 | },
8.0 14 | {
8.0 14 | "name": "POD_NAMESPACE",
8.0 14 | "valueFrom": {
8.0 14 | "fieldRef": {
8.0 14 | "apiVersion": "v1",
8.0 14 | "fieldPath": "metadata.namespace"
8.0 14 | }
8.0 14 | }
8.0 14 | },
8.0 14 | {
8.0 14 | "name": "INSTANCE_IP",
8.0 14 | "valueFrom": {
8.0 14 | "fieldRef": {
8.0 14 | "apiVersion": "v1",
8.0 14 | "fieldPath": "status.podIP"
8.0 14 | }
8.0 14 | }
8.0 14 | },
8.0 14 | {
8.0 14 | "name": "ISTIO_META_POD_NAME",
8.0 14 | "valueFrom": {
8.0 14 | "fieldRef": {
8.0 14 | "apiVersion": "v1",
8.0 14 | "fieldPath": "metadata.name"
8.0 14 | }
8.0 14 | }
8.0 14 | },
8.0 14 | {
8.0 14 | "name": "ISTIO_META_INTERCEPTION_MODE",
8.0 14 | "value": "REDIRECT"
8.0 14 | },
8.0 14 | {
8.0 14 | "name": "ISTIO_METAJSON_LABELS",
8.0 14 | "value": "{\"pod-template-hash\":\"8665fb57fb\",\"telepresence\":\"4ab1a87ed83e47d8a335a1984acb5957\"}\n"
8.0 14 | }
8.0 14 | ],
8.0 14 | "image": "gcr.io/gke-release/istio/proxyv2:1.0.6-gke.3",
8.0 14 | "imagePullPolicy": "IfNotPresent",
8.0 14 | "name": "istio-proxy",
8.0 14 | "ports": [
8.0 14 | {
8.0 14 | "containerPort": 15090,
8.0 14 | "name": "http-envoy-prom",
8.0 14 | "protocol": "TCP"
8.0 14 | }
8.0 14 | ],
8.0 14 | "resources": {
8.0 14 | "requests": {
8.0 14 | "cpu": "10m"
8.0 14 | }
8.0 14 | },
8.0 14 | "securityContext": {
8.0 14 | "procMount": "Default",
8.0 14 | "readOnlyRootFilesystem": true,
8.0 14 | "runAsUser": 1337
8.0 14 | },
8.0 14 | "terminationMessagePath": "/dev/termination-log",
8.0 14 | "terminationMessagePolicy": "File",
8.0 14 | "volumeMounts": [
8.0 14 | {
8.0 14 | "mountPath": "/etc/istio/proxy",
8.0 14 | "name": "istio-envoy"
8.0 14 | },
8.0 14 | {
8.0 14 | "mountPath": "/etc/certs/",
8.0 14 | "name": "istio-certs",
8.0 14 | "readOnly": true
8.0 14 | }
8.0 14 | ]
8.0 14 | }
8.0 14 | ],
8.0 14 | "dnsPolicy": "ClusterFirst",
8.0 14 | "initContainers": [
8.0 14 | {
8.0 14 | "args": [
8.0 14 | "-p",
8.0 14 | "15001",
8.0 14 | "-u",
8.0 14 | "1337",
8.0 14 | "-m",
8.0 14 | "REDIRECT",
8.0 14 | "-i",
8.0 14 | "*",
8.0 14 | "-x",
8.0 14 | "",
8.0 14 | "-b",
8.0 14 | "",
8.0 14 | "-d",
8.0 14 | ""
8.0 14 | ],
8.0 14 | "image": "gcr.io/gke-release/istio/proxy_init:1.0.6-gke.3",
8.0 14 | "imagePullPolicy": "IfNotPresent",
8.0 14 | "name": "istio-init",
8.0 14 | "resources": {},
8.0 14 | "securityContext": {
8.0 14 | "capabilities": {
8.0 14 | "add": [
8.0 14 | "NET_ADMIN"
8.0 14 | ]
8.0 14 | },
8.0 14 | "privileged": true,
8.0 14 | "procMount": "Default"
8.0 14 | },
8.0 14 | "terminationMessagePath": "/dev/termination-log",
8.0 14 | "terminationMessagePolicy": "File"
8.0 14 | }
8.0 14 | ],
8.0 14 | "nodeName": "gke-dev-clstr-2-pool-3-4d7fa6d1-43kj",
8.0 14 | "priority": 0,
8.0 14 | "restartPolicy": "Always",
8.0 14 | "schedulerName": "default-scheduler",
8.0 14 | "securityContext": {},
8.0 14 | "serviceAccount": "default",
8.0 14 | "serviceAccountName": "default",
8.0 14 | "terminationGracePeriodSeconds": 30,
8.0 14 | "tolerations": [
8.0 14 | {
8.0 14 | "effect": "NoExecute",
8.0 14 | "key": "node.kubernetes.io/not-ready",
8.0 14 | "operator": "Exists",
8.0 14 | "tolerationSeconds": 300
8.0 14 | },
8.0 14 | {
8.0 14 | "effect": "NoExecute",
8.0 14 | "key": "node.kubernetes.io/unreachable",
8.0 14 | "operator": "Exists",
8.0 14 | "tolerationSeconds": 300
8.0 14 | }
8.0 14 | ],
8.0 14 | "volumes": [
8.0 14 | {
8.0 14 | "name": "default-token-zzt6d",
8.0 14 | "secret": {
8.0 14 | "defaultMode": 420,
8.0 14 | "secretName": "default-token-zzt6d"
8.0 14 | }
8.0 14 | },
8.0 14 | {
8.0 14 | "emptyDir": {
8.0 14 | "medium": "Memory"
8.0 14 | },
8.0 14 | "name": "istio-envoy"
8.0 14 | },
8.0 14 | {
8.0 14 | "name": "istio-certs",
8.0 14 | "secret": {
8.0 14 | "defaultMode": 420,
8.0 14 | "optional": true,
8.0 14 | "secretName": "istio.default"
8.0 14 | }
8.0 14 | }
8.0 14 | ]
8.0 14 | },
8.0 14 | "status": {
8.0 14 | "conditions": [
8.0 14 | {
8.0 14 | "lastProbeTime": null,
8.0 14 | "lastTransitionTime": "2019-04-05T22:52:54Z",
8.0 14 | "status": "True",
8.0 14 | "type": "Initialized"
8.0 14 | },
8.0 14 | {
8.0 14 | "lastProbeTime": null,
8.0 14 | "lastTransitionTime": "2019-04-05T22:52:55Z",
8.0 14 | "status": "True",
8.0 14 | "type": "Ready"
8.0 14 | },
8.0 14 | {
8.0 14 | "lastProbeTime": null,
8.0 14 | "lastTransitionTime": "2019-04-05T22:52:55Z",
8.0 14 | "status": "True",
8.0 14 | "type": "ContainersReady"
8.0 14 | },
8.0 14 | {
8.0 14 | "lastProbeTime": null,
8.0 14 | "lastTransitionTime": "2019-04-05T22:52:52Z",
8.0 14 | "status": "True",
8.0 14 | "type": "PodScheduled"
8.0 14 | }
8.0 14 | ],
8.0 14 | "containerStatuses": [
8.0 14 | {
8.0 14 | "containerID": "docker://a84e49fb161c1457e7127520610b9a6bf29fc7fa743ef1222723a7ffaa0cf563",
8.0 14 | "image": "gcr.io/gke-release/istio/proxyv2:1.0.6-gke.3",
8.0 14 | "imageID": "docker-pullable://gcr.io/gke-release/istio/proxyv2@sha256:1f6dac3dbf69c0b0d519398003b6572e0218f695f1be295a2ffff95df2395ed0",
8.0 14 | "lastState": {},
8.0 14 | "name": "istio-proxy",
8.0 14 | "ready": true,
8.0 14 | "restartCount": 0,
8.0 14 | "state": {
8.0 14 | "running": {
8.0 14 | "startedAt": "2019-04-05T22:52:54Z"
8.0 14 | }
8.0 14 | }
8.0 14 | },
8.0 14 | {
8.0 14 | "containerID": "docker://39bc917d761c904c854361571b79f10dcd381a618ea1b36240f89e1099ac4a74",
8.0 14 | "image": "datawire/telepresence-k8s:0.98",
8.0 14 | "imageID": "docker-pullable://datawire/telepresence-k8s@sha256:9d9315b577a1bc1ae75c48902ea19e3a9ecf8d6b04ba606de92148785a8aeb4a",
8.0 14 | "lastState": {},
8.0 14 | "name": "telepresence-1554504767-653051-38514",
8.0 14 | "ready": true,
8.0 14 | "restartCount": 0,
8.0 14 | "state": {
8.0 14 | "running": {
8.0 14 | "startedAt": "2019-04-05T22:52:54Z"
8.0 14 | }
8.0 14 | }
8.0 14 | }
8.0 14 | ],
8.0 14 | "hostIP": "172.16.0.18",
8.0 14 | "initContainerStatuses": [
8.0 14 | {
8.0 14 | "containerID": "docker://e6e6b676c795482feaa7b0d1a0e2a550d17daa696498ed0e6672a38a7e14487b",
8.0 14 | "image": "gcr.io/gke-release/istio/proxy_init:1.0.6-gke.3",
8.0 14 | "imageID": "docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:4b2b224bf10f3623b0233a20bc2c1ba051b9cdd7f292f055281e64aecb542f51",
8.0 14 | "lastState": {},
8.0 14 | "name": "istio-init",
8.0 14 | "ready": true,
8.0 14 | "restartCount": 0,
8.0 14 | "state": {
8.0 14 | "terminated": {
8.0 14 | "containerID": "docker://e6e6b676c795482feaa7b0d1a0e2a550d17daa696498ed0e6672a38a7e14487b",
8.0 14 | "exitCode": 0,
8.0 14 | "finishedAt": "2019-04-05T22:52:54Z",
8.0 14 | "reason": "Completed",
8.0 14 | "startedAt": "2019-04-05T22:52:53Z"
8.0 14 | }
8.0 14 | }
8.0 14 | }
8.0 14 | ],
8.0 14 | "phase": "Running",
8.0 14 | "podIP": "172.19.6.33",
8.0 14 | "qosClass": "Burstable",
8.0 14 | "startTime": "2019-04-05T22:52:52Z"
8.0 14 | }
8.0 14 | }
8.0 TEL | [14] captured in 0.69 secs.
8.0 TEL | END SPAN remote.py:113(wait_for_pod) 1.6s
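
Steps [13] and [14] are the wait_for_pod loop: telepresence re-fetches the pod JSON until the phase reaches Running and every container reports ready. The 1.14 client used here could express the same wait declaratively (a sketch):

    kubectl --context sx4-dev --namespace staging wait \
      pod/telepresence-1554504767-653051-38514-8665fb57fb-xwhmv \
      --for=condition=Ready --timeout=120s
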
8.0 TEL | END SPAN remote.py:151(get_remote_info) 3.0s
8.0 TEL | BEGIN SPAN connect.py:36(connect)
8.0 TEL | [15] Launching kubectl logs: kubectl --context sx4-dev --namespace staging logs -f telepresence-1554504767-653051-38514-8665fb57fb-xwhmv --container telepresence-1554504767-653051-38514 --tail=10
8.4 TEL | [16] Launching kubectl port-forward: kubectl --context sx4-dev --namespace staging port-forward telepresence-1554504767-653051-38514-8665fb57fb-xwhmv 61945:8022
8.8 TEL | [17] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -vv -p 61945 telepresence@127.0.0.1 /bin/true
8.8 17 | OpenSSH_7.9p1, LibreSSL 2.7.3
8.8 17 | debug1: Reading configuration data /dev/null
8.8 17 | debug2: resolve_canonicalize: hostname 127.0.0.1 is address
8.8 17 | debug2: ssh_connect_direct
8.8 17 | debug1: Connecting to 127.0.0.1 [127.0.0.1] port 61945.
8.8 17 | debug1: connect to address 127.0.0.1 port 61945: Connection refused
8.8 17 | ssh: connect to host 127.0.0.1 port 61945: Connection refused
8.8 TEL | [17] exit 255 in 0.03 secs.
9.0 TEL | [18] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -vv -p 61945 telepresence@127.0.0.1 /bin/true
9.1 18 | OpenSSH_7.9p1, LibreSSL 2.7.3
9.1 18 | debug1: Reading configuration data /dev/null
9.1 18 | debug2: resolve_canonicalize: hostname 127.0.0.1 is address
9.1 18 | debug2: ssh_connect_direct
9.1 18 | debug1: Connecting to 127.0.0.1 [127.0.0.1] port 61945.
9.1 18 | debug1: connect to address 127.0.0.1 port 61945: Connection refused
9.1 18 | ssh: connect to host 127.0.0.1 port 61945: Connection refused
9.1 TEL | [18] exit 255 in 0.04 secs.
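
Attempts [17] and [18] fail with 'Connection refused' only because the kubectl port-forward launched at [16] has not bound local port 61945 yet; telepresence simply retries until the 'Forwarding from 127.0.0.1:61945' lines below confirm the tunnel. The same readiness probe can be scripted by hand (a sketch using macOS's bundled netcat):

    until nc -z 127.0.0.1 61945; do sleep 0.2; done
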
9.3 TEL | [19] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -vv -p 61945 telepresence@127.0.0.1 /bin/true
9.3 16 | Forwarding from 127.0.0.1:61945 -> 8022
9.3 16 | Forwarding from [::1]:61945 -> 8022
9.4 19 | OpenSSH_7.9p1, LibreSSL 2.7.3
9.4 19 | debug1: Reading configuration data /dev/null
9.4 19 | debug2: resolve_canonicalize: hostname 127.0.0.1 is address
9.4 19 | debug2: ssh_connect_direct
9.4 19 | debug1: Connecting to 127.0.0.1 [127.0.0.1] port 61945.
9.4 19 | debug1: Connection established.
9.4 16 | Handling connection for 61945
9.4 19 | debug1: identity file /Users/adam/.ssh/id_rsa type 0
9.4 19 | debug1: identity file /Users/adam/.ssh/id_rsa-cert type -1
9.4 19 | debug1: identity file /Users/adam/.ssh/id_dsa type -1
9.4 19 | debug1: identity file /Users/adam/.ssh/id_dsa-cert type -1
9.4 19 | debug1: identity file /Users/adam/.ssh/id_ecdsa type -1
9.4 19 | debug1: identity file /Users/adam/.ssh/id_ecdsa-cert type -1
9.4 19 | debug1: identity file /Users/adam/.ssh/id_ed25519 type 3
9.4 19 | debug1: identity file /Users/adam/.ssh/id_ed25519-cert type -1
9.4 19 | debug1: identity file /Users/adam/.ssh/id_xmss type -1
9.4 19 | debug1: identity file /Users/adam/.ssh/id_xmss-cert type -1
9.4 19 | debug1: Local version string SSH-2.0-OpenSSH_7.9
9.7 19 | debug1: Remote protocol version 2.0, remote software version OpenSSH_7.5
9.7 19 | debug1: match: OpenSSH_7.5 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
9.7 19 | debug2: fd 3 setting O_NONBLOCK
9.7 19 | debug1: Authenticating to 127.0.0.1:61945 as 'telepresence'
9.7 19 | debug1: SSH2_MSG_KEXINIT sent
9.7 19 | debug1: SSH2_MSG_KEXINIT received
9.7 19 | debug2: local client KEXINIT proposal
9.7 19 | debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
9.7 19 | debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
9.7 19 | debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
9.7 19 | debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
9.7 19 | debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
9.7 19 | debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
9.7 19 | debug2: compression ctos: none,zlib@openssh.com,zlib
9.7 19 | debug2: compression stoc: none,zlib@openssh.com,zlib
9.7 19 | debug2: languages ctos:
9.7 19 | debug2: languages stoc:
9.7 19 | debug2: first_kex_follows 0
9.7 19 | debug2: reserved 0
9.7 19 | debug2: peer server KEXINIT proposal
9.7 19 | debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1
9.7 19 | debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
9.7 19 | debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
9.7 19 | debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
9.7 19 | debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
9.7 19 | debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
9.7 19 | debug2: compression ctos: none,zlib@openssh.com
9.7 19 | debug2: compression stoc: none,zlib@openssh.com
9.7 19 | debug2: languages ctos:
9.7 19 | debug2: languages stoc:
9.7 19 | debug2: first_kex_follows 0
9.7 19 | debug2: reserved 0
9.7 19 | debug1: kex: algorithm: curve25519-sha256
9.7 19 | debug1: kex: host key algorithm: ecdsa-sha2-nistp256
9.7 19 | debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
9.7 19 | debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
9.7 19 | debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
9.8 19 | debug1: Server host key: ecdsa-sha2-nistp256 SHA256:ibuuR6HIRqQcY62IbZ71qViR4fyaPnSX3BFlJsRne8U
9.8 19 | debug1: checking without port identifier
9.8 19 | Warning: Permanently added '[127.0.0.1]:61945' (ECDSA) to the list of known hosts.
9.8 19 | debug2: set_newkeys: mode 1
9.8 19 | debug1: rekey after 134217728 blocks
9.8 19 | debug1: SSH2_MSG_NEWKEYS sent
9.8 19 | debug1: expecting SSH2_MSG_NEWKEYS
9.8 19 | debug1: SSH2_MSG_NEWKEYS received
9.8 19 | debug2: set_newkeys: mode 0
9.8 19 | debug1: rekey after 134217728 blocks
9.8 19 | debug1: Will attempt key: /Users/adam/.ssh/id_ed25519 ED25519 SHA256:Sh2SXOUqCBDE7LCdK52m7RjoyHyh846nZhwgaVTv8N8 agent
9.8 19 | debug1: Will attempt key: adam@streamsets-adam.nerdworld.xyz ED25519 SHA256:O/1ciFkD22ip3leLvZXvhDRCMw4jJdv8V1C+rhud60Y agent
9.8 19 | debug1: Will attempt key: /Users/adam/.ssh/google_compute_engine RSA SHA256:K3JxMOcMlwbgDYjpqoskza+VkamKAkhx9FPOsGTkHdw agent
9.8 19 | debug1: Will attempt key: /Users/adam/.ssh/adam-ss.pem RSA SHA256:bcF3nJvs3A7w/9Q6TMHGS22bA7VImTdrryXS7Ei7ifQ agent
9.8 19 | debug1: Will attempt key: /Users/adam/.ssh/id_rsa RSA SHA256:CvbwMEiejKgwuprW3YfmROPeiSW1yu0bXl+b3vPjvJo
9.8 19 | debug1: Will attempt key: /Users/adam/.ssh/id_dsa
9.8 19 | debug1: Will attempt key: /Users/adam/.ssh/id_ecdsa
9.8 19 | debug1: Will attempt key: /Users/adam/.ssh/id_xmss
9.8 19 | debug2: pubkey_prepare: done
9.8 19 | debug1: SSH2_MSG_EXT_INFO received
9.8 19 | debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
9.8 19 | debug2: service_accept: ssh-userauth
9.8 19 | debug1: SSH2_MSG_SERVICE_ACCEPT received
10.0 19 | debug1: Authentication succeeded (none).
10.0 19 | Authenticated to 127.0.0.1 ([127.0.0.1]:61945).
10.0 19 | debug2: fd 5 setting O_NONBLOCK
10.0 19 | debug2: fd 6 setting O_NONBLOCK
10.0 19 | debug2: fd 7 setting O_NONBLOCK
10.0 19 | debug1: channel 0: new [client-session]
10.0 19 | debug2: channel 0: send open
10.0 19 | debug1: Requesting no-more-sessions@openssh.com
10.0 19 | debug1: Entering interactive session.
10.0 19 | debug1: pledge: network
10.0 19 | debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
10.0 19 | debug2: channel_input_open_confirmation: channel 0: callback start
10.0 19 | debug2: fd 3 setting TCP_NODELAY
10.0 19 | debug2: client_session2_setup: id 0
10.0 19 | debug1: Sending command: /bin/true
10.0 19 | debug2: channel 0: request exec confirm 1
10.0 19 | debug2: channel_input_open_confirmation: channel 0: callback done
10.0 19 | debug2: channel 0: open confirm rwindow 0 rmax 32768
10.1 19 | debug2: channel 0: rcvd adjust 87380
10.1 19 | debug2: channel_input_status_confirm: type 99 id 0
10.1 19 | debug2: exec request accepted on channel 0
10.1 19 | debug2: channel 0: read<=0 rfd 5 len 0
10.1 19 | debug2: channel 0: read failed
10.1 19 | debug2: channel 0: chan_shutdown_read (i0 o0 sock -1 wfd 5 efd 7 [write])
10.1 19 | debug2: channel 0: input open -> drain
10.1 19 | debug2: channel 0: ibuf empty
10.1 19 | debug2: channel 0: send eof
10.1 19 | debug2: channel 0: input drain -> closed
10.2 19 | debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
10.2 19 | debug2: channel 0: rcvd eof
10.2 19 | debug2: channel 0: output open -> drain
10.2 19 | debug2: channel 0: obuf empty
10.2 19 | debug2: channel 0: chan_shutdown_write (i3 o1 sock -1 wfd 6 efd 7 [write])
10.2 19 | debug2: channel 0: output drain -> closed
10.2 19 | debug2: channel 0: rcvd close
10.2 19 | debug2: channel 0: almost dead
10.2 19 | debug2: channel 0: gc: notify user
10.2 19 | debug2: channel 0: gc: user detached
10.2 19 | debug2: channel 0: send close
10.2 19 | debug2: channel 0: is dead
10.2 19 | debug2: channel 0: garbage collecting
10.2 19 | debug1: channel 0: free: client-session, nchannels 1
10.2 19 | debug1: fd 0 clearing O_NONBLOCK
10.2 19 | debug1: fd 2 clearing O_NONBLOCK
10.2 19 | Transferred: sent 1736, received 2624 bytes, in 0.2 seconds
10.2 19 | Bytes per second: sent 8619.0, received 13027.8
10.2 19 | debug1: Exit status 0
10.2 TEL | [19] ran in 0.82 secs.
10.2 >>> |
10.2 >>> | No traffic is being forwarded from the remote Deployment to your local machine. You can use the --expose option to specify which ports you want to forward.
10.2 >>> |
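
As the message says, this session proxies cluster traffic but exposes nothing back; rerunning with --expose forwards a local port to the remote Deployment (a sketch, assuming a service listening locally on 8080):

    telepresence --run-shell --expose 8080
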
10.2 TEL | Launching Web server for proxy poll
10.2 TEL | [20] Launching SSH port forward (socks and proxy poll): ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -vv -p 61945 telepresence@127.0.0.1 -L127.0.0.1:61957:127.0.0.1:9050 -R9055:127.0.0.1:61958
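
Step [20] carries the vpn-tcp data path: -L forwards local 127.0.0.1:61957 to port 9050 in the pod (the SOCKS endpoint that sshuttle-telepresence rides), while -R reverse-forwards pod port 9055 to the local web server started for the proxy poll. The SOCKS leg can be spot-checked by hand (a sketch; the socks5h:// scheme makes curl resolve hostnames through the proxy, so cluster DNS names should work):

    curl -x socks5h://127.0.0.1:61957 -sk https://kubernetes.default.svc.cluster.local/
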
10.2 TEL | END SPAN connect.py:36(connect) 2.2s
10.2 TEL | BEGIN SPAN remote_env.py:28(get_remote_env)
10.2 TEL | [21] Capturing: kubectl --context sx4-dev --namespace staging exec telepresence-1554504767-653051-38514-8665fb57fb-xwhmv --container telepresence-1554504767-653051-38514 -- python3 podinfo.py
10.2 20 | OpenSSH_7.9p1, LibreSSL 2.7.3
10.2 16 | Handling connection for 61945
10.2 20 | debug1: Reading configuration data /dev/null
10.2 20 | debug2: resolve_canonicalize: hostname 127.0.0.1 is address
10.2 20 | debug2: ssh_connect_direct
10.2 20 | debug1: Connecting to 127.0.0.1 [127.0.0.1] port 61945.
10.2 20 | debug1: Connection established.
10.2 20 | debug1: identity file /Users/adam/.ssh/id_rsa type 0
10.2 20 | debug1: identity file /Users/adam/.ssh/id_rsa-cert type -1
10.2 20 | debug1: identity file /Users/adam/.ssh/id_dsa type -1
10.2 20 | debug1: identity file /Users/adam/.ssh/id_dsa-cert type -1
10.2 20 | debug1: identity file /Users/adam/.ssh/id_ecdsa type -1
10.2 20 | debug1: identity file /Users/adam/.ssh/id_ecdsa-cert type -1
10.2 20 | debug1: identity file /Users/adam/.ssh/id_ed25519 type 3
10.2 20 | debug1: identity file /Users/adam/.ssh/id_ed25519-cert type -1
10.2 20 | debug1: identity file /Users/adam/.ssh/id_xmss type -1
10.2 20 | debug1: identity file /Users/adam/.ssh/id_xmss-cert type -1
10.2 20 | debug1: Local version string SSH-2.0-OpenSSH_7.9
10.4 20 | debug1: Remote protocol version 2.0, remote software version OpenSSH_7.5
10.4 20 | debug1: match: OpenSSH_7.5 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
10.4 20 | debug2: fd 3 setting O_NONBLOCK
10.4 20 | debug1: Authenticating to 127.0.0.1:61945 as 'telepresence'
10.4 20 | debug1: SSH2_MSG_KEXINIT sent
10.5 20 | debug1: SSH2_MSG_KEXINIT received
10.5 20 | debug2: local client KEXINIT proposal
10.5 20 | debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
10.5 20 | debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
10.5 20 | debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
10.5 20 | debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
10.5 20 | debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
10.5 20 | debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
10.5 20 | debug2: compression ctos: none,zlib@openssh.com,zlib
10.5 20 | debug2: compression stoc: none,zlib@openssh.com,zlib
10.5 20 | debug2: languages ctos:
10.5 20 | debug2: languages stoc:
10.5 20 | debug2: first_kex_follows 0
10.5 20 | debug2: reserved 0
10.5 20 | debug2: peer server KEXINIT proposal
10.5 20 | debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1
10.5 20 | debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
10.5 20 | debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
10.5 20 | debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
10.5 20 | debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
10.5 20 | debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
10.5 20 | debug2: compression ctos: none,zlib@openssh.com
10.5 20 | debug2: compression stoc: none,zlib@openssh.com
10.5 20 | debug2: languages ctos:
10.5 20 | debug2: languages stoc:
10.5 20 | debug2: first_kex_follows 0
10.5 20 | debug2: reserved 0
10.5 20 | debug1: kex: algorithm: curve25519-sha256
10.5 20 | debug1: kex: host key algorithm: ecdsa-sha2-nistp256
10.5 20 | debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
10.5 20 | debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
10.5 20 | debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
10.6 20 | debug1: Server host key: ecdsa-sha2-nistp256 SHA256:ibuuR6HIRqQcY62IbZ71qViR4fyaPnSX3BFlJsRne8U
10.6 20 | debug1: checking without port identifier
10.6 20 | Warning: Permanently added '[127.0.0.1]:61945' (ECDSA) to the list of known hosts.
10.6 20 | debug2: set_newkeys: mode 1
10.6 20 | debug1: rekey after 134217728 blocks
10.6 20 | debug1: SSH2_MSG_NEWKEYS sent
10.6 20 | debug1: expecting SSH2_MSG_NEWKEYS
10.6 20 | debug1: SSH2_MSG_NEWKEYS received
10.6 20 | debug2: set_newkeys: mode 0
10.6 20 | debug1: rekey after 134217728 blocks
10.6 20 | debug1: Will attempt key: /Users/adam/.ssh/id_ed25519 ED25519 SHA256:Sh2SXOUqCBDE7LCdK52m7RjoyHyh846nZhwgaVTv8N8 agent
10.6 20 | debug1: Will attempt key: adam@streamsets-adam.nerdworld.xyz ED25519 SHA256:O/1ciFkD22ip3leLvZXvhDRCMw4jJdv8V1C+rhud60Y agent
10.6 20 | debug1: Will attempt key: /Users/adam/.ssh/google_compute_engine RSA SHA256:K3JxMOcMlwbgDYjpqoskza+VkamKAkhx9FPOsGTkHdw agent
10.6 20 | debug1: Will attempt key: /Users/adam/.ssh/adam-ss.pem RSA SHA256:bcF3nJvs3A7w/9Q6TMHGS22bA7VImTdrryXS7Ei7ifQ agent
10.6 20 | debug1: Will attempt key: /Users/adam/.ssh/id_rsa RSA SHA256:CvbwMEiejKgwuprW3YfmROPeiSW1yu0bXl+b3vPjvJo
10.6 20 | debug1: Will attempt key: /Users/adam/.ssh/id_dsa
10.6 20 | debug1: Will attempt key: /Users/adam/.ssh/id_ecdsa
10.6 20 | debug1: Will attempt key: /Users/adam/.ssh/id_xmss
10.6 20 | debug2: pubkey_prepare: done
10.6 20 | debug1: SSH2_MSG_EXT_INFO received
10.6 20 | debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
10.7 20 | debug2: service_accept: ssh-userauth
10.7 20 | debug1: SSH2_MSG_SERVICE_ACCEPT received
10.8 20 | debug1: Authentication succeeded (none).
10.8 20 | Authenticated to 127.0.0.1 ([127.0.0.1]:61945).
10.8 20 | debug1: Local connections to 127.0.0.1:61957 forwarded to remote address 127.0.0.1:9050
10.8 20 | debug1: Local forwarding listening on 127.0.0.1 port 61957.
10.8 20 | debug2: fd 5 setting O_NONBLOCK
10.8 20 | debug1: channel 0: new [port listener]
10.8 20 | debug1: Remote connections from LOCALHOST:9055 forwarded to local address 127.0.0.1:61958
10.8 20 | debug2: fd 3 setting TCP_NODELAY
10.8 20 | debug1: Requesting no-more-sessions@openssh.com
10.8 20 | debug1: Entering interactive session.
10.8 20 | debug1: pledge: network
10.8 20 | debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
10.8 20 | debug1: Remote: Forwarding listen address "localhost" overridden by server GatewayPorts
10.8 20 | debug1: remote forward success for: listen 9055, connect 127.0.0.1:61958
10.8 20 | debug1: All remote forwarding requests processed
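[annotation] The two forwards set up above carry everything that follows: local port 61957 feeds the pod's SOCKSv5 proxy on 9050 (used by sshuttle below), while the reverse forward from the pod's port 9055 back to local 61958 lets the pod poll the client for liveness. A rough sketch of an equivalent standalone ssh invocation, using this run's ephemeral ports (they change every session, and the exact flags Telepresence passes may differ):

    import subprocess

    # Recreate this session's port-forward topology; 61945 is the
    # kubectl-forwarded SSH port, the others are ephemeral picks.
    subprocess.run([
        "ssh", "-N", "-p", "61945",
        "-F", "/dev/null",
        "-oStrictHostKeyChecking=no",
        "-oUserKnownHostsFile=/dev/null",
        "-L", "127.0.0.1:61957:127.0.0.1:9050",  # -> pod's SOCKSv5 proxy
        "-R", "9055:127.0.0.1:61958",            # pod dials back here to poll liveness
        "telepresence@127.0.0.1",
    ])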
11.9 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
12.4 21 | {"env": {"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME": "telepresence-1554504767-653051-38514-8665fb57fb-xwhmv", "S4_PIPELINE_PORT_9000_TCP_ADDR": "172.19.248.42", "S4_SECURITY_PORT_9000_TCP_PROTO": "tcp", "S4_ENVIRONMENT_PORT_10000_TCP_PORT": "10000", "S4_PIPELINE_PORT_9000_TCP": "tcp://172.19.248.42:9000", "S4_PIPELINE_SERVICE_PORT": "9000", "KUBERNETES_SERVICE_PORT": "443", "KUBERNETES_PORT_443_TCP_ADDR": "172.19.240.1", "S4_MESSAGING_SERVICE_HOST": "172.19.244.121", "S4_MESSAGING_SERVICE_PORT_HTTP": "9000", "KUBERNETES_SERVICE_PORT_HTTPS": "443", "S4_PLATFORM_UI_PORT_80_TCP_PORT": "80", "S4_SECURITY_PORT_10000_TCP_PORT": "10000", "S4_PLATFORM_UI_SERVICE_PORT_HTTP": "80", "S4_SECURITY_SERVICE_HOST": "172.19.254.211", "S4_SECURITY_SERVICE_PORT": "9000", "S4_MESSAGING_PORT": "tcp://172.19.244.121:9000", "S4_PIPELINE_PORT_9000_TCP_PORT": "9000", "S4_MESSAGING_PORT_9000_TCP": "tcp://172.19.244.121:9000", "S4_ENVIRONMENT_PORT_9000_TCP": "tcp://172.19.240.48:9000", "S4_ENVIRONMENT_PORT_10000_TCP_PROTO": "tcp", "S4_PLATFORM_UI_SERVICE_HOST": "172.19.252.80", "S4_PLATFORM_UI_PORT_80_TCP_PROTO": "tcp", "S4_MESSAGING_PORT_9000_TCP_PROTO": "tcp", "S4_MESSAGING_PORT_9000_TCP_ADDR": "172.19.244.121", "S4_ENVIRONMENT_SERVICE_HOST": "172.19.240.48", "S4_ENVIRONMENT_PORT_10000_TCP_ADDR": "172.19.240.48", "S4_PIPELINE_SERVICE_HOST": "172.19.248.42", "S4_SECURITY_PORT_9000_TCP_PORT": "9000", "S4_DB_MYSQL_SERVICE_HOST": "172.19.241.44", "S4_DB_MYSQL_SERVICE_PORT_MYSQL_TCP": "3306", "S4_PLATFORM_UI_PORT_80_TCP": "tcp://172.19.252.80:80", "S4_PIPELINE_SERVICE_PORT_HTTP": "9000", "S4_DB_MYSQL_PORT_3306_TCP": "tcp://172.19.241.44:3306", "S4_DB_MYSQL_PORT_3306_TCP_ADDR": "172.19.241.44", "S4_ENVIRONMENT_PORT_9000_TCP_PORT": "9000", "S4_SECURITY_PORT_10000_TCP": "tcp://172.19.254.211:10000", "KUBERNETES_SERVICE_HOST": "172.19.240.1", "S4_PLATFORM_UI_PORT_80_TCP_ADDR": "172.19.252.80", "S4_ENVIRONMENT_PORT": "tcp://172.19.240.48:9000", "S4_ENVIRONMENT_PORT_10000_TCP": "tcp://172.19.240.48:10000", "S4_PIPELINE_PORT_9000_TCP_PROTO": "tcp", "S4_PLATFORM_UI_PORT": "tcp://172.19.252.80:80", "S4_ENVIRONMENT_SERVICE_PORT_ADMIN_HTTP": "10000", "S4_ENVIRONMENT_SERVICE_PORT": "9000", "S4_SECURITY_SERVICE_PORT_HTTP": "9000", "S4_SECURITY_PORT_9000_TCP_ADDR": "172.19.254.211", "KUBERNETES_PORT_443_TCP": "tcp://172.19.240.1:443", "S4_PLATFORM_UI_SERVICE_PORT": "80", "S4_DB_MYSQL_PORT_3306_TCP_PORT": "3306", "S4_ENVIRONMENT_SERVICE_PORT_HTTP": "9000", "S4_ENVIRONMENT_PORT_9000_TCP_PROTO": "tcp", "S4_ENVIRONMENT_PORT_9000_TCP_ADDR": "172.19.240.48", "S4_SECURITY_PORT": "tcp://172.19.254.211:9000", "S4_SECURITY_PORT_10000_TCP_ADDR": "172.19.254.211", "KUBERNETES_PORT_443_TCP_PROTO": "tcp", "S4_DB_MYSQL_PORT": "tcp://172.19.241.44:3306", "S4_MESSAGING_PORT_9000_TCP_PORT": "9000", "S4_SECURITY_PORT_9000_TCP": "tcp://172.19.254.211:9000", "S4_SECURITY_SERVICE_PORT_ADMIN_HTTP": "10000", "S4_SECURITY_PORT_10000_TCP_PROTO": "tcp", "KUBERNETES_PORT_443_TCP_PORT": "443", "S4_PIPELINE_PORT": "tcp://172.19.248.42:9000", "S4_DB_MYSQL_PORT_3306_TCP_PROTO": "tcp", "S4_MESSAGING_SERVICE_PORT": "9000", "KUBERNETES_PORT": "tcp://172.19.240.1:443", "S4_DB_MYSQL_SERVICE_PORT": "3306", "HOME": "/usr/src/app"}, "hostname": "telepresence-1554504767-653051-38514-8665fb57fb-xwhmv\n", "resolv": "nameserver 172.19.240.10\nsearch staging.svc.cluster.local svc.cluster.local cluster.local c.sx4-dev.internal google.internal\noptions ndots:5\n", "mountpoints": ["/var/run/secrets/kubernetes.io/serviceaccount"]}
12.6 TEL | [21] captured in 2.39 secs.
12.6 TEL | END SPAN remote_env.py:28(get_remote_env) 2.4s
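[annotation] Capture [21] above is a single JSON record holding the pod's environment variables, hostname, resolv.conf, and mountpoints. Telepresence overlays the env block onto the local process so service-discovery variables such as S4_PIPELINE_SERVICE_HOST resolve as they would in-cluster. A minimal replay sketch, assuming the record has been saved to a file (env.json is a hypothetical name, not something Telepresence writes):

    import json
    import os
    import subprocess

    with open("env.json") as f:      # hypothetical dump of capture [21]
        record = json.load(f)

    env = dict(os.environ)
    env.update(record["env"])        # overlay the pod's variables

    # Spawn a shell that sees the cluster's service-discovery variables.
    subprocess.run(["/bin/sh"], env=env)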
12.6 TEL | BEGIN SPAN mount.py:32(mount_remote_volumes)
12.6 TEL | [22] Running: sshfs -p 61945 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null telepresence@127.0.0.1:/ /tmp/tel-jyof7pnv/fs
12.6 16 | Handling connection for 61945
13.0 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
13.5 TEL | [22] ran in 0.95 secs.
13.5 TEL | END SPAN mount.py:32(mount_remote_volumes) 1.0s
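[annotation] Command [22] exposes the pod's root filesystem locally over the same forwarded SSH port, so files such as /var/run/secrets/kubernetes.io/serviceaccount become readable under the temporary mount point. A sketch of the same mount, assuming sshfs is installed and the forward on 61945 is live:

    import subprocess
    import tempfile

    mount_dir = tempfile.mkdtemp(prefix="tel-fs-")
    subprocess.run([
        "sshfs", "-p", "61945",               # kubectl-forwarded SSH port
        "-F", "/dev/null",                    # ignore the user's ssh config
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "telepresence@127.0.0.1:/",           # pod root filesystem
        mount_dir,
    ], check=True)
    print("pod filesystem mounted at", mount_dir)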
13.5 TEL | BEGIN SPAN vpn.py:245(connect_sshuttle)
13.5 TEL | BEGIN SPAN vpn.py:75(get_proxy_cidrs)
13.5 TEL | END SPAN vpn.py:75(get_proxy_cidrs) 0.0s
13.5 TEL | [23] Launching sshuttle: sshuttle-telepresence -v --dns --method auto -e 'ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null' -r telepresence@127.0.0.1:61945 --to-ns 127.0.0.1:9053 172.19.8.0/24 172.19.7.0/24 172.19.240.0/20 172.19.6.0/24
13.5 TEL | BEGIN SPAN vpn.py:268(connect_sshuttle,sshuttle-wait)
13.5 TEL | Wait for vpn-tcp connection: hellotelepresence-0
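[annotation] The wait loop announced here verifies the tunnel end to end: as the hellotelepresence-0 line suggests, Telepresence resolves throwaway hostnames of the form hellotelepresence-N, which can only succeed once sshuttle is routing DNS through the cluster. If no lookup succeeds before the deadline, it raises the "vpn-tcp tunnel did not connect" error seen later in this log. A minimal sketch of such a probe (the timeout and retry interval are illustrative, not taken from the Telepresence source):

    import socket
    import time

    def wait_for_tunnel(timeout=30.0):
        deadline = time.time() + timeout
        i = 0
        while time.time() < deadline:
            try:
                # Resolvable only once DNS is flowing through the tunnel.
                socket.gethostbyname("hellotelepresence-{}".format(i))
                return True
            except socket.error:
                i += 1
                time.sleep(0.1)
        return False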
14.0 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
14.1 23 | Starting sshuttle proxy.
14.7 23 | firewall manager: Starting firewall with Python version 3.6.6
14.7 23 | firewall manager: ready method name pf.
14.7 23 | IPv6 enabled: True
14.7 23 | UDP enabled: False
14.7 23 | DNS enabled: True
14.7 23 | TCP redirector listening on ('::1', 12300, 0, 0).
14.7 23 | TCP redirector listening on ('127.0.0.1', 12300).
14.7 23 | DNS listening on ('::1', 12300, 0, 0).
14.7 23 | DNS listening on ('127.0.0.1', 12300).
14.7 23 | Starting client with Python version 3.6.6
14.7 23 | c : connecting to server...
14.8 15 | Listening...
14.8 15 | 2019-04-05T22:53:00+0000 [-] Loading ./forwarder.py...
14.8 15 | 2019-04-05T22:53:02+0000 [-] /etc/resolv.conf changed, reparsing
14.8 15 | 2019-04-05T22:53:02+0000 [-] Resolver added ('172.19.240.10', 53) to server list
14.8 15 | 2019-04-05T22:53:02+0000 [-] SOCKSv5Factory starting on 9050
14.8 15 | 2019-04-05T22:53:02+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7fd02bd073c8>
14.8 15 | 2019-04-05T22:53:02+0000 [-] DNSDatagramProtocol starting on 9053
14.8 15 | 2019-04-05T22:53:02+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7fd0288bee48>
14.8 15 | 2019-04-05T22:53:02+0000 [-] Loaded.
14.8 15 | 2019-04-05T22:53:02+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 18.9.0 (/usr/bin/python3.6 3.6.5) starting up.
14.8 15 | 2019-04-05T22:53:02+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
14.8 16 | Handling connection for 61945
15.0 23 | Warning: Permanently added '[127.0.0.1]:61945' (ECDSA) to the list of known hosts.
16.1 23 | Starting server with Python version 3.6.5
16.1 23 | s: latency control setting = True
16.1 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
16.2 23 | s: available routes:
16.2 23 | s: 2/172.19.6.0/24
16.2 23 | c : Connected.
16.2 23 | firewall manager: setting up.
16.2 23 | >> pfctl -s Interfaces -i lo -v
16.2 23 | >> pfctl -s all
16.2 23 | >> pfctl -a sshuttle6-12300 -f /dev/stdin
16.2 23 | >> pfctl -E
16.2 23 | >> pfctl -s Interfaces -i lo -v
16.2 23 | >> pfctl -s all
16.2 23 | >> pfctl -a sshuttle-12300 -f /dev/stdin
16.2 23 | >> pfctl -E
18.1 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
19.2 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
20.2 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
21.3 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
22.4 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
23.4 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
24.5 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
25.5 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
26.6 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
27.6 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
28.7 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
29.8 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
30.8 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
31.9 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
32.5 TEL | [24] Running: sudo -n echo -n
32.6 TEL | [24] ran in 0.05 secs.
32.9 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
34.0 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
35.0 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
36.1 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
37.2 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
38.2 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
39.3 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
39.6 23 | c : DNS request from ('192.168.128.41', 60709) to None: 28 bytes
39.7 15 | 2019-04-05T22:53:27+0000 [stdout#info] A query: b'github.com'
39.8 15 | 2019-04-05T22:53:27+0000 [stdout#info] Result for b'github.com' is ['192.30.253.112', '192.30.253.113']
40.3 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
41.4 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
41.4 23 | c : DNS request from ('192.168.128.41', 49247) to None: 39 bytes
41.5 15 | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'github.map.fastly.net'
41.5 15 | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'github.map.fastly.net' is ['151.101.0.133', '151.101.64.133', '151.101.128.133', '151.101.192.133']
41.5 23 | c : DNS request from ('192.168.128.41', 62062) to None: 38 bytes
41.6 15 | 2019-04-05T22:53:29+0000 [stdout#info] A query: b's3-1-w.amazonaws.com'
41.6 15 | 2019-04-05T22:53:29+0000 [stdout#info] Result for b's3-1-w.amazonaws.com' is ['52.216.105.187']
42.0 23 | c : DNS request from ('192.168.128.41', 64589) to None: 37 bytes
42.0 23 | c : DNS request from ('192.168.128.41', 59308) to None: 32 bytes
42.1 15 | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'clients1.google.com'
42.2 15 | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'api.github.com'
42.2 15 | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'clients1.google.com' is ['108.177.111.100', '108.177.111.139', '108.177.111.138', '108.177.111.102', '108.177.111.113', '108.177.111.101']
42.2 15 | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'api.github.com' is ['192.30.253.117', '192.30.253.116']
42.2 23 | c : DNS request from ('192.168.128.41', 56863) to None: 33 bytes
42.3 15 | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'live.github.com'
42.3 23 | c : DNS request from ('192.168.128.41', 55951) to None: 33 bytes
42.3 15 | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'live.github.com' is ['192.30.253.125', '192.30.253.124']
42.3 15 | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'sourcegraph.com'
42.4 15 | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'sourcegraph.com' is ['104.25.19.30', '104.25.18.30']
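[annotation] The interleaved [23]/[15] lines above show the DNS path end to end: sshuttle on the laptop intercepts each query ("c : DNS request ..."), relays it to the in-cluster forwarder listening on 9053, and the forwarder answers via the cluster resolver (172.19.240.10). While the tunnel is up, any ordinary lookup exercises the same path:

    import socket

    # Routed through sshuttle -> forwarder -> cluster resolver while the
    # tunnel is up; compare with the github.com answers logged above.
    print(socket.gethostbyname_ex("github.com"))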
42.4 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
43.5 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
44.6 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
45.2 20 | debug1: client_input_channel_open: ctype forwarded-tcpip rchan 2 win 87380 max 32768
45.2 20 | debug1: client_request_forwarded_tcpip: listen localhost port 9055, originator 127.0.0.1 port 34348
45.2 20 | debug2: fd 6 setting O_NONBLOCK
45.2 20 | debug2: fd 6 setting TCP_NODELAY
45.2 20 | debug1: connect_next: host 127.0.0.1 ([127.0.0.1]:61958) in progress, fd=6
45.2 20 | debug1: channel 1: new [127.0.0.1]
45.2 20 | debug1: confirm forwarded-tcpip
45.2 20 | debug1: channel 1: connected to 127.0.0.1 port 61958
45.2 TEL | (proxy checking local liveness)
45.2 20 | debug2: channel 1: read<=0 rfd 6 len 0
45.2 20 | debug2: channel 1: read failed
45.2 20 | debug2: channel 1: chan_shutdown_read (i0 o0 sock 6 wfd 6 efd -1 [closed])
45.2 20 | debug2: channel 1: input open -> drain
45.2 20 | debug2: channel 1: ibuf empty
45.2 20 | debug2: channel 1: send eof
45.2 20 | debug2: channel 1: input drain -> closed
45.3 15 | 2019-04-05T22:53:32+0000 [Poll#info] Checkpoint
45.3 20 | debug2: channel 1: rcvd eof
45.3 20 | debug2: channel 1: output open -> drain
45.3 20 | debug2: channel 1: obuf empty
45.3 20 | debug2: channel 1: chan_shutdown_write (i3 o1 sock 6 wfd 6 efd -1 [closed])
45.3 20 | debug2: channel 1: output drain -> closed
45.3 20 | debug2: channel 1: rcvd close
45.3 20 | debug2: channel 1: send close
45.3 20 | debug2: channel 1: is dead
45.3 20 | debug2: channel 1: garbage collecting
45.3 20 | debug1: channel 1: free: 127.0.0.1, nchannels 2
45.5 TEL | CRASH: vpn-tcp tunnel did not connect
45.5 TEL | Traceback (most recent call last):
45.5 TEL | File "/usr/local/bin/telepresence/telepresence/cli.py", line 130, in crash_reporting
45.5 TEL | yield
45.5 TEL | File "/usr/local/bin/telepresence/telepresence/main.py", line 77, in main
45.5 TEL | runner, remote_info, env, socks_port, ssh, mount_dir, pod_info
45.5 TEL | File "/usr/local/bin/telepresence/telepresence/outbound/setup.py", line 73, in launch
45.5 TEL | runner_, remote_info, command, args.also_proxy, env, ssh
45.5 TEL | File "/usr/local/bin/telepresence/telepresence/outbound/local.py", line 121, in launch_vpn
45.5 TEL | connect_sshuttle(runner, remote_info, also_proxy, ssh)
45.5 TEL | File "/usr/local/bin/telepresence/telepresence/outbound/vpn.py", line 295, in connect_sshuttle
45.5 TEL | raise RuntimeError("vpn-tcp tunnel did not connect")
45.5 TEL | RuntimeError: vpn-tcp tunnel did not connect
45.5 TEL | (calling crash reporter...)
46.3 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
47.4 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
48.4 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
49.5 20 | debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
50.2 >>> | Exit cleanup in progress
50.2 TEL | (Cleanup) Kill BG process [23] sshuttle
50.2 23 | >> pfctl -a sshuttle6-12300 -F all
50.2 TEL | (Cleanup) Unmount remote filesystem
50.2 TEL | [25] Running: umount -f /tmp/tel-jyof7pnv/fs
50.2 23 | >> pfctl -X 10176216948847074923
50.3 23 | >> pfctl -a sshuttle-12300 -F all
50.3 23 | >> pfctl -X 10176216948847075499
50.4 TEL | [23] sshuttle: exit -15
50.4 TEL | [25] ran in 0.18 secs.
50.4 TEL | (Cleanup) Kill BG process [20] SSH port forward (socks and proxy poll)
50.4 20 | debug1: channel 0: free: port listener, nchannels 1
50.4 20 | Transferred: sent 3456, received 4380 bytes, in 39.7 seconds
50.4 20 | Bytes per second: sent 87.1, received 110.4
50.4 20 | debug1: Exit status 0
50.4 TEL | [20] SSH port forward (socks and proxy poll): exit 0
50.4 TEL | (Cleanup) Kill Web server for proxy poll
50.7 TEL | (Cleanup) Kill BG process [16] kubectl port-forward
50.7 TEL | [16] kubectl port-forward: exit -15
50.7 TEL | (Cleanup) Kill BG process [15] kubectl logs
50.7 TEL | [15] kubectl logs: exit -15
50.7 TEL | (Cleanup) Delete new deployment
50.7 >>> | Cleaning up Deployment telepresence-1554504767-653051-38514
50.7 TEL | Background process (kubectl logs) exited with return code -15. Command was:
50.8 TEL | kubectl --context sx4-dev --namespace staging logs -f telepresence-1554504767-653051-38514-8665fb57fb-xwhmv --container telepresence-1554504767-653051-38514 --tail=10
50.8 TEL | [26] Running: kubectl --context sx4-dev --namespace staging delete --ignore-not-found svc,deploy --selector=telepresence=4ab1a87ed83e47d8a335a1984acb5957
50.8 TEL |
50.8 TEL | Recent output was:
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] Result for b's3-1-w.amazonaws.com' is ['52.216.105.187']
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'clients1.google.com'
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'api.github.com'
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'clients1.google.com' is ['108.177.111.100', '108.177.111.139', '108.177.111.138', '108.177.111.102', '108.177.111.113', '108.177.111.101']
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'api.github.com' is ['192.30.253.117', '192.30.253.116']
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'live.github.com'
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'live.github.com' is ['192.30.253.125', '192.30.253.124']
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] A query: b'sourcegraph.com'
50.8 TEL | 2019-04-05T22:53:29+0000 [stdout#info] Result for b'sourcegraph.com' is ['104.25.19.30', '104.25.18.30']
50.8 TEL | 2019-04-05T22:53:32+0000 [Poll#info] Checkpoint
52.6 26 | deployment.extensions "telepresence-1554504767-653051-38514" deleted
52.6 TEL | [26] ran in 1.89 secs.
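[annotation] Everything Telepresence creates is tagged with a per-session label, so teardown reduces to the one selector-based delete shown in [26]. If a session dies before cleanup runs, the same delete can be issued by hand; the label value below is this session's UUID and differs every run:

    import subprocess

    subprocess.run([
        "kubectl", "--context", "sx4-dev", "--namespace", "staging",
        "delete", "--ignore-not-found", "svc,deploy",
        "--selector=telepresence=4ab1a87ed83e47d8a335a1984acb5957",
    ], check=True)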
52.6 TEL | (Cleanup) Kill sudo privileges holder
52.6 TEL | (Cleanup) Stop time tracking
52.6 TEL | END SPAN main.py:40(main) 52.6s
52.6 TEL | (Cleanup) Remove temporary directory
52.6 TEL | (Cleanup) Save caches
53.6 TEL | (sudo privileges holder thread exiting)