@ElliotJH
Created July 31, 2019 23:28
Telepresence 0.101 debug log: telepresence --run curl http://mazes-mazes:8080/ on macOS, against kubectl context "test", fails with RuntimeError: vpn-tcp tunnel did not connect.
0.0 TEL | Telepresence 0.101 launched at Wed Jul 31 20:22:41 2019
0.0 TEL | /usr/local/bin/telepresence --run curl http://mazes-mazes:8080/
0.0 TEL | Platform: darwin
0.0 TEL | Python 3.7.4 (default, Jul 9 2019, 18:13:23)
0.0 TEL | [Clang 10.0.1 (clang-1001.0.46.4)]
0.0 TEL | [1] Running: uname -a
0.0 1 | Darwin elliots-mbp.lan 18.5.0 Darwin Kernel Version 18.5.0: Mon Mar 11 20:40:32 PDT 2019; root:xnu-4903.251.3~3/RELEASE_X86_64 x86_64
0.0 TEL | [1] ran in 0.01 secs.
0.0 TEL | BEGIN SPAN main.py:40(main)
0.0 TEL | BEGIN SPAN startup.py:74(__init__)
0.1 TEL | Found kubectl -> /usr/local/bin/kubectl
0.1 TEL | [2] Capturing: kubectl version --short
1.1 TEL | [2] captured in 1.02 secs.
1.1 TEL | [3] Capturing: kubectl config current-context
1.1 TEL | [3] captured in 0.07 secs.
1.1 TEL | [4] Capturing: kubectl config view -o json
1.2 TEL | [4] captured in 0.06 secs.
1.2 TEL | [5] Capturing: kubectl --context test get ns default
2.4 TEL | [5] captured in 1.24 secs.
2.4 TEL | Command: kubectl 1.15.1
2.4 TEL | Context: test, namespace: default, version: 1.12.8
2.4 >>> | Warning: kubectl 1.15.1 may not work correctly with cluster version 1.12.8 due to the version discrepancy. See https://kubernetes.io/docs/setup/version-skew-policy/ for more information.
2.4 >>> |
2.4 TEL | END SPAN startup.py:74(__init__) 2.4s
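
Note: the warning above comes from comparing the kubectl client minor version (1.15) against the cluster's (1.12); the skew policy supports at most one minor version of difference. A minimal sketch of the same check, assuming kubectl is on PATH and supports `kubectl version -o json`:

    import json
    import subprocess

    raw = subprocess.check_output(["kubectl", "version", "-o", "json"], text=True)
    versions = json.loads(raw)

    def minor(git_version):
        # "v1.15.1" -> 15
        return int(git_version.lstrip("v").split(".")[1])

    skew = abs(minor(versions["clientVersion"]["gitVersion"])
               - minor(versions["serverVersion"]["gitVersion"]))
    if skew > 1:
        print("Warning: kubectl may not work correctly with this cluster"
              " (version skew exceeds 1 minor version)")
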
2.4 TEL | Found ssh -> /usr/bin/ssh
2.4 TEL | [6] Capturing: ssh -V
2.5 TEL | [6] captured in 0.05 secs.
2.5 TEL | Found curl -> /usr/bin/curl
2.5 TEL | Found sshuttle-telepresence -> /usr/local/Cellar/telepresence/0.101/libexec/sshuttle-telepresence
2.5 TEL | Found pfctl -> /sbin/pfctl
2.5 TEL | Found sudo -> /usr/bin/sudo
2.5 TEL | [7] Running: sudo -n echo -n
2.7 7 | sudo: a password is required
2.7 TEL | [7] exit 1 in 0.24 secs.
2.7 >>> | How Telepresence uses sudo: https://www.telepresence.io/reference/install#dependencies
2.7 >>> | Invoking sudo. Please enter your sudo password.
2.7 TEL | [8] Running: sudo echo -n
10.2 TEL | [8] ran in 7.40 secs.
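
Note: steps [7] and [8] are the sudo handshake: a non-interactive probe first (sudo -n), then an interactive prompt only because no credentials were cached; the later sudo -n runs ([53], [61]) keep those credentials alive. Roughly:

    import subprocess

    # Probe for cached sudo credentials without prompting (`sudo -n`);
    # fall back to an interactive prompt only if the probe fails.
    if subprocess.run(["sudo", "-n", "echo", "-n"]).returncode != 0:
        subprocess.run(["sudo", "echo", "-n"], check=True)  # prompts for a password
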
10.2 >>> | Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html
10.2 TEL | Found sshfs -> /usr/local/bin/sshfs
10.2 TEL | Found umount -> /sbin/umount
10.2 >>> | Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
10.2 TEL | [9] Running: kubectl --context test --namespace default get pods telepresence-connectivity-check --ignore-not-found
10.6 TEL | [9] ran in 0.46 secs.
11.2 TEL | Scout info: {'latest_version': '0.101', 'application': 'telepresence', 'notices': []}
11.2 TEL | BEGIN SPAN deployment.py:97(create_new_deployment)
11.2 >>> | Starting network proxy to cluster using new Deployment telepresence-1564615361-0072582-23381
11.2 TEL | [10] Running: kubectl --context test --namespace default delete --ignore-not-found svc,deploy --selector=telepresence=c49ec35beaa24138ab9d66da1cb7a4e5
11.7 10 | No resources found
11.7 TEL | [10] ran in 0.56 secs.
11.7 TEL | [11] Running: kubectl --context test --namespace default run --restart=Always --limits=cpu=100m,memory=256Mi --requests=cpu=25m,memory=64Mi telepresence-1564615361-0072582-23381 --image=datawire/telepresence-k8s:0.101 --labels=telepresence=c49ec35beaa24138ab9d66da1cb7a4e5
12.2 11 | kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
12.3 11 | deployment.apps/telepresence-1564615361-0072582-23381 created
12.4 TEL | [11] ran in 0.61 secs.
12.4 TEL | END SPAN deployment.py:97(create_new_deployment) 1.2s
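
Note: the deprecation warning at [11] is from kubectl 1.15, where `kubectl run` with the deployment generator was on its way out. A hedged sketch of an equivalent without the deprecated generator; `kubectl create deployment` takes no --labels or resource flags, so the selector label becomes a follow-up command (the cpu/memory limits from [11] are omitted here and would need `kubectl set resources`):

    import subprocess

    CTX = ["kubectl", "--context", "test", "--namespace", "default"]
    NAME = "telepresence-1564615361-0072582-23381"

    # Create the proxy Deployment without the deprecated generator...
    subprocess.run(CTX + ["create", "deployment", NAME,
                          "--image=datawire/telepresence-k8s:0.101"], check=True)
    # ...then apply the label the log later uses as a selector.
    subprocess.run(CTX + ["label", "deployment", NAME,
                          "telepresence=c49ec35beaa24138ab9d66da1cb7a4e5"], check=True)
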
12.4 TEL | BEGIN SPAN remote.py:151(get_remote_info)
12.4 TEL | BEGIN SPAN remote.py:78(get_deployment_json)
12.4 TEL | [12] Capturing: kubectl --context test --namespace default get deployment -o json --selector=telepresence=c49ec35beaa24138ab9d66da1cb7a4e5
12.8 TEL | [12] captured in 0.46 secs.
12.8 TEL | END SPAN remote.py:78(get_deployment_json) 0.5s
12.8 TEL | Searching for Telepresence pod:
12.8 TEL | with name telepresence-1564615361-0072582-23381-*
12.8 TEL | with labels {'telepresence': 'c49ec35beaa24138ab9d66da1cb7a4e5'}
12.8 TEL | [13] Capturing: kubectl --context test --namespace default get pod -o json --selector=telepresence=c49ec35beaa24138ab9d66da1cb7a4e5
13.3 TEL | [13] captured in 0.47 secs.
13.3 TEL | Checking telepresence-1564615361-0072582-23381-747bc469f4-md64n
13.3 TEL | Looks like we've found our pod!
13.3 TEL | BEGIN SPAN remote.py:113(wait_for_pod)
13.3 TEL | [14] Capturing: kubectl --context test --namespace default get pod telepresence-1564615361-0072582-23381-747bc469f4-md64n -o json
13.7 TEL | [14] captured in 0.42 secs.
14.0 TEL | [15] Capturing: kubectl --context test --namespace default get pod telepresence-1564615361-0072582-23381-747bc469f4-md64n -o json
14.4 TEL | [15] captured in 0.44 secs.
14.7 TEL | [16] Capturing: kubectl --context test --namespace default get pod telepresence-1564615361-0072582-23381-747bc469f4-md64n -o json
15.1 TEL | [16] captured in 0.44 secs.
15.1 TEL | END SPAN remote.py:113(wait_for_pod) 1.8s
15.1 TEL | END SPAN remote.py:151(get_remote_info) 2.8s
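
Note: the pod search at [13] lists pods by the telepresence label and matches the Deployment's name prefix. A sketch of the same lookup, assuming the context and namespace from the log:

    import json
    import subprocess

    SELECTOR = "telepresence=c49ec35beaa24138ab9d66da1cb7a4e5"
    PREFIX = "telepresence-1564615361-0072582-23381-"

    raw = subprocess.check_output(
        ["kubectl", "--context", "test", "--namespace", "default",
         "get", "pod", "-o", "json", "--selector", SELECTOR], text=True)
    pods = json.loads(raw)["items"]
    # Match the pod spawned by our Deployment via its name prefix.
    name = next(p["metadata"]["name"] for p in pods
                if p["metadata"]["name"].startswith(PREFIX))
    print("Found pod:", name)
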
15.1 TEL | BEGIN SPAN connect.py:36(connect)
15.1 TEL | [17] Launching kubectl logs: kubectl --context test --namespace default logs -f telepresence-1564615361-0072582-23381-747bc469f4-md64n --container telepresence-1564615361-0072582-23381 --tail=10
15.1 TEL | [18] Launching kubectl port-forward: kubectl --context test --namespace default port-forward telepresence-1564615361-0072582-23381-747bc469f4-md64n 53364:8022
15.1 TEL | [19] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 53364 telepresence@127.0.0.1 /bin/true
15.2 TEL | [19] exit 255 in 0.08 secs.
15.5 TEL | [20] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 53364 telepresence@127.0.0.1 /bin/true
15.5 TEL | [20] exit 255 in 0.04 secs.
15.8 TEL | [21] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 53364 telepresence@127.0.0.1 /bin/true
15.8 TEL | [21] exit 255 in 0.03 secs.
16.0 TEL | [22] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 53364 telepresence@127.0.0.1 /bin/true
16.1 TEL | [22] exit 255 in 0.04 secs.
16.2 18 | Forwarding from 127.0.0.1:53364 -> 8022
16.2 18 | Forwarding from [::1]:53364 -> 8022
16.3 TEL | [23] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 53364 telepresence@127.0.0.1 /bin/true
16.4 18 | Handling connection for 53364
17.5 TEL | [23] ran in 1.14 secs.
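
Note: the exit-255 attempts at [19]-[22] are expected; a no-op ssh command is polled through the port-forward until the tunnel answers, which happens at [23] once "Forwarding from 127.0.0.1:53364" appears. A sketch of that readiness loop (the 30s timeout is an assumption, not taken from the log):

    import subprocess
    import time

    # No-op ssh through the local port-forward; succeeds once the
    # forward at 127.0.0.1:53364 is actually serving.
    PROBE = ["ssh", "-F", "/dev/null", "-oStrictHostKeyChecking=no",
             "-oUserKnownHostsFile=/dev/null", "-q", "-p", "53364",
             "telepresence@127.0.0.1", "/bin/true"]

    deadline = time.time() + 30  # assumed timeout
    while subprocess.run(PROBE).returncode != 0:
        if time.time() > deadline:
            raise RuntimeError("ssh tunnel never became ready")
        time.sleep(0.25)
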
17.5 >>> |
17.5 >>> | No traffic is being forwarded from the remote Deployment to your local machine. You can use the --expose option to specify which ports you want to forward.
17.5 >>> |
17.5 TEL | Launching Web server for proxy poll
17.5 TEL | [24] Launching SSH port forward (socks and proxy poll): ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 53364 telepresence@127.0.0.1 -L127.0.0.1:53377:127.0.0.1:9050 -R9055:127.0.0.1:53378
17.5 TEL | END SPAN connect.py:36(connect) 2.4s
17.5 TEL | BEGIN SPAN remote_env.py:28(get_remote_env)
17.5 TEL | [25] Capturing: kubectl --context test --namespace default exec telepresence-1564615361-0072582-23381-747bc469f4-md64n --container telepresence-1564615361-0072582-23381 -- python3 podinfo.py
17.5 18 | Handling connection for 53364
19.9 TEL | [25] captured in 2.44 secs.
19.9 TEL | END SPAN remote_env.py:28(get_remote_env) 2.4s
19.9 TEL | BEGIN SPAN mount.py:32(mount_remote_volumes)
19.9 TEL | [26] Running: sshfs -p 53364 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null telepresence@127.0.0.1:/ /tmp/tel-z1nwmjpt/fs
20.1 18 | Handling connection for 53364
20.4 17 | Listening...
20.4 17 | 2019-07-31T23:22:59+0000 [-] Loading ./forwarder.py...
20.4 17 | 2019-07-31T23:23:01+0000 [-] /etc/resolv.conf changed, reparsing
20.4 17 | 2019-07-31T23:23:01+0000 [-] Resolver added ('10.0.0.10', 53) to server list
20.4 17 | 2019-07-31T23:23:01+0000 [-] SOCKSv5Factory starting on 9050
20.4 17 | 2019-07-31T23:23:01+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7feed39a2240>
20.4 17 | 2019-07-31T23:23:01+0000 [-] DNSDatagramProtocol starting on 9053
20.4 17 | 2019-07-31T23:23:01+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7feed39a25f8>
20.4 17 | 2019-07-31T23:23:01+0000 [-] Loaded.
20.4 17 | 2019-07-31T23:23:01+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 19.2.1 (/usr/bin/python3.6 3.6.8) starting up.
20.5 17 | 2019-07-31T23:23:01+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
21.1 TEL | [26] ran in 1.18 secs.
21.1 TEL | END SPAN mount.py:32(mount_remote_volumes) 1.2s
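
Note: [26] mounts the pod's root filesystem over the same ssh port-forward; the resulting mount point (/tmp/tel-z1nwmjpt/fs here) is what $TELEPRESENCE_ROOT refers to. A sketch under the same assumptions:

    import os
    import subprocess
    import tempfile

    root = tempfile.mkdtemp(prefix="tel-")
    mount = os.path.join(root, "fs")
    os.makedirs(mount)
    # Mount the remote pod's / over sshfs via the local port-forward.
    subprocess.run(
        ["sshfs", "-p", "53364", "-F", "/dev/null",
         "-oStrictHostKeyChecking=no", "-oUserKnownHostsFile=/dev/null",
         "telepresence@127.0.0.1:/", mount], check=True)
    print("TELEPRESENCE_ROOT =", mount)  # mirrors the volumes howto linked above
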
21.1 TEL | BEGIN SPAN vpn.py:264(connect_sshuttle)
21.1 TEL | BEGIN SPAN vpn.py:74(get_proxy_cidrs)
21.1 TEL | END SPAN vpn.py:74(get_proxy_cidrs) 0.0s
21.1 TEL | [27] Launching sshuttle: sshuttle-telepresence -v --dns --method auto -e 'ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null' -r telepresence@127.0.0.1:53364 --to-ns 127.0.0.1:9053 10.0.0.0/16 10.244.0.0/24
21.1 TEL | BEGIN SPAN vpn.py:287(connect_sshuttle,sshuttle-wait)
21.1 TEL | Wait for vpn-tcp connection: hellotelepresence-0
21.1 TEL | [28] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-0")'
21.3 TEL | [28] exit 1 in 0.18 secs.
21.3 TEL | [29] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-0.a.sanity.check.telepresence.io")'
21.4 TEL | [29] exit 1 in 0.13 secs.
21.5 TEL | Wait for vpn-tcp connection: hellotelepresence-1
21.5 TEL | [30] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-1")'
21.7 TEL | [30] exit 1 in 0.12 secs.
21.7 TEL | [31] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-1.a.sanity.check.telepresence.io")'
21.8 TEL | [31] exit 1 in 0.13 secs.
21.9 TEL | Wait for vpn-tcp connection: hellotelepresence-2
21.9 TEL | [32] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-2")'
22.0 TEL | [32] exit 1 in 0.13 secs.
22.0 TEL | [33] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-2.a.sanity.check.telepresence.io")'
22.2 27 | Starting sshuttle proxy.
22.2 TEL | [33] exit 1 in 0.17 secs.
22.3 TEL | Wait for vpn-tcp connection: hellotelepresence-3
22.3 TEL | [34] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-3")'
22.4 TEL | [34] exit 1 in 0.08 secs.
22.4 TEL | [35] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-3.a.sanity.check.telepresence.io")'
22.5 TEL | [35] exit 1 in 0.18 secs.
22.7 TEL | Wait for vpn-tcp connection: hellotelepresence-4
22.7 TEL | [36] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-4")'
22.7 TEL | [36] exit 1 in 0.07 secs.
22.7 TEL | [37] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-4.a.sanity.check.telepresence.io")'
22.9 TEL | [37] exit 1 in 0.18 secs.
22.9 27 | firewall manager: Starting firewall with Python version 3.7.4
22.9 27 | firewall manager: ready method name pf.
22.9 27 | IPv6 enabled: True
22.9 27 | UDP enabled: False
22.9 27 | DNS enabled: True
22.9 27 | TCP redirector listening on ('::1', 12300, 0, 0).
22.9 27 | TCP redirector listening on ('127.0.0.1', 12300).
22.9 27 | DNS listening on ('::1', 12300, 0, 0).
22.9 27 | DNS listening on ('127.0.0.1', 12300).
22.9 27 | Starting client with Python version 3.7.4
22.9 27 | c : connecting to server...
23.0 18 | Handling connection for 53364
23.0 TEL | Wait for vpn-tcp connection: hellotelepresence-5
23.0 TEL | [38] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-5")'
23.1 TEL | [38] exit 1 in 0.07 secs.
23.1 TEL | [39] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-5.a.sanity.check.telepresence.io")'
23.2 TEL | [39] exit 1 in 0.12 secs.
23.3 TEL | Wait for vpn-tcp connection: hellotelepresence-6
23.3 TEL | [40] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-6")'
23.3 27 | Warning: Permanently added '[127.0.0.1]:53364' (ECDSA) to the list of known hosts.
23.4 TEL | [40] exit 1 in 0.07 secs.
23.4 TEL | [41] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-6.a.sanity.check.telepresence.io")'
23.6 TEL | [41] exit 1 in 0.18 secs.
23.7 TEL | Wait for vpn-tcp connection: hellotelepresence-7
23.7 TEL | [42] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-7")'
23.7 TEL | [42] exit 1 in 0.07 secs.
23.7 TEL | [43] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-7.a.sanity.check.telepresence.io")'
23.9 TEL | [43] exit 1 in 0.18 secs.
24.0 TEL | Wait for vpn-tcp connection: hellotelepresence-8
24.0 TEL | [44] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-8")'
24.1 TEL | [44] exit 1 in 0.07 secs.
24.1 TEL | [45] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-8.a.sanity.check.telepresence.io")'
24.2 TEL | [45] exit 1 in 0.12 secs.
24.3 TEL | Wait for vpn-tcp connection: hellotelepresence-9
24.3 TEL | [46] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-9")'
24.4 TEL | [46] exit 1 in 0.08 secs.
24.4 TEL | [47] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-9.a.sanity.check.telepresence.io")'
24.4 27 | Starting server with Python version 3.6.8
24.4 27 | s: latency control setting = True
24.5 27 | s: available routes:
24.5 27 | s: 2/10.244.0.0/24
24.5 27 | c : Connected.
24.5 27 | firewall manager: setting up.
24.5 27 | >> pfctl -s Interfaces -i lo -v
24.5 27 | >> pfctl -s all
24.5 27 | >> pfctl -a sshuttle6-12300 -f /dev/stdin
24.5 27 | >> pfctl -E
24.5 TEL | [47] exit 1 in 0.13 secs.
24.5 27 | >> pfctl -s Interfaces -i lo -v
24.5 27 | >> pfctl -s all
24.6 27 | >> pfctl -a sshuttle-12300 -f /dev/stdin
24.6 27 | >> pfctl -E
24.6 TEL | Wait for vpn-tcp connection: hellotelepresence-10
24.6 TEL | [48] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-10")'
29.6 TEL | [48] timed out after 5.01 secs.
29.6 TEL | [49] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-10.a.sanity.check.telepresence.io")'
30.7 TEL | [49] timed out after 1.01 secs.
30.8 TEL | Wait for vpn-tcp connection: hellotelepresence-11
30.8 TEL | [50] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-11")'
35.8 TEL | [50] timed out after 5.01 secs.
35.8 TEL | [51] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-11.a.sanity.check.telepresence.io")'
36.8 TEL | [51] timed out after 1.01 secs.
36.9 TEL | Wait for vpn-tcp connection: hellotelepresence-12
36.9 TEL | [52] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-12")'
40.2 TEL | [53] Running: sudo -n echo -n
40.3 TEL | [53] ran in 0.11 secs.
41.9 TEL | [52] timed out after 5.02 secs.
41.9 TEL | [54] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-12.a.sanity.check.telepresence.io")'
42.9 TEL | [54] timed out after 1.02 secs.
43.0 TEL | Wait for vpn-tcp connection: hellotelepresence-13
43.0 TEL | [55] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-13")'
48.0 TEL | [55] timed out after 5.02 secs.
48.0 TEL | [56] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-13.a.sanity.check.telepresence.io")'
49.1 TEL | [56] timed out after 1.01 secs.
49.2 TEL | Wait for vpn-tcp connection: hellotelepresence-14
49.2 TEL | [57] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-14")'
50.5 TEL | (proxy checking local liveness)
50.6 17 | 2019-07-31T23:23:31+0000 [Poll#info] Checkpoint
54.2 TEL | [57] timed out after 5.02 secs.
54.2 TEL | [58] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-14.a.sanity.check.telepresence.io")'
55.2 TEL | [58] timed out after 1.02 secs.
55.3 TEL | Wait for vpn-tcp connection: hellotelepresence-15
55.3 TEL | [59] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-15")'
60.3 TEL | [59] timed out after 5.01 secs.
60.3 TEL | [60] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-15.a.sanity.check.telepresence.io")'
61.3 TEL | [60] timed out after 1.01 secs.
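
Note: every "Wait for vpn-tcp connection" step above is the probe shown at [28] onward: resolving a synthetic hellotelepresence-N hostname that only the in-cluster DNS forwarder will answer. exit 1 means the tunnel is not up yet; the 5-second timeouts from [48] onward mean queries stopped being answered at all once sshuttle took over resolution, which is what leads to the crash below. The probe as a function (in the log, the timeout is enforced externally by killing the python3 -c subprocess):

    import socket

    def tunnel_ready(i: int) -> bool:
        for name in (f"hellotelepresence-{i}",
                     f"hellotelepresence-{i}.a.sanity.check.telepresence.io"):
            try:
                socket.gethostbyname(name)
                return True   # the forwarder answered: tunnel is up
            except (socket.gaierror, OSError):
                continue      # NXDOMAIN or failure: not connected yet
        return False
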
63.3 TEL | CRASH: vpn-tcp tunnel did not connect
63.3 TEL | Traceback (most recent call last):
63.3 TEL | File "/usr/local/bin/telepresence/telepresence/cli.py", line 136, in crash_reporting
63.3 TEL | yield
63.3 TEL | File "/usr/local/bin/telepresence/telepresence/main.py", line 77, in main
63.3 TEL | runner, remote_info, env, socks_port, ssh, mount_dir, pod_info
63.3 TEL | File "/usr/local/bin/telepresence/telepresence/outbound/setup.py", line 98, in launch
63.3 TEL | runner_, remote_info, command, args.also_proxy, env, ssh
63.3 TEL | File "/usr/local/bin/telepresence/telepresence/outbound/local.py", line 121, in launch_vpn
63.3 TEL | connect_sshuttle(runner, remote_info, also_proxy, ssh)
63.3 TEL | File "/usr/local/bin/telepresence/telepresence/outbound/vpn.py", line 312, in connect_sshuttle
63.3 TEL | raise RuntimeError("vpn-tcp tunnel did not connect")
63.3 TEL | RuntimeError: vpn-tcp tunnel did not connect
63.3 TEL | (calling crash reporter...)
70.4 TEL | [61] Running: sudo -n echo -n
70.5 TEL | [61] ran in 0.07 secs.
80.6 TEL | (proxy checking local liveness)
80.7 17 | 2019-07-31T23:24:01+0000 [Poll#info] Checkpoint
86.8 >>> | Exit cleanup in progress
86.8 TEL | (Cleanup) Diagnose vpn-tcp
86.8 TEL | [62] Running: ls -l /etc/resolv.conf
86.8 62 | lrwxr-xr-x 1 root wheel 22 Oct 4 2018 /etc/resolv.conf -> ../var/run/resolv.conf
86.8 TEL | [62] ran in 0.03 secs.
86.8 TEL | [63] Running: grep -v '^#' /etc/resolv.conf
86.8 63 | domain lan
86.8 63 | nameserver 192.168.86.1
86.8 TEL | [63] ran in 0.01 secs.
86.8 TEL | [64] Running: ls -l /etc/resolvconf
86.8 64 | ls: /etc/resolvconf: No such file or directory
86.8 TEL | [64] exit 1 in 0.01 secs.
86.8 TEL | [65] Running: cat /etc/nsswitch.conf
86.8 65 | cat: /etc/nsswitch.conf: No such file or directory
86.8 TEL | [65] exit 1 in 0.01 secs.
86.8 TEL | [66] Running: ls -l /etc/resolver
86.9 66 | ls: /etc/resolver: No such file or directory
86.9 TEL | [66] exit 1 in 0.01 secs.
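
Note: the diagnosis at [62]-[66] inspects the files that control macOS name resolution; here everything except /etc/resolv.conf is missing, so no local resolver override explains the DNS failure. The same checks in a few lines:

    import os

    # Same files the vpn-tcp diagnosis inspects on macOS.
    for path in ("/etc/resolv.conf", "/etc/resolvconf",
                 "/etc/nsswitch.conf", "/etc/resolver"):
        print(path, "->", "present" if os.path.exists(path) else "missing")
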
86.9 TEL | (Cleanup) Kill BG process [27] sshuttle
86.9 27 | >> pfctl -a sshuttle6-12300 -F all
86.9 TEL | (Cleanup) Unmount remote filesystem
86.9 TEL | [67] Running: umount -f /tmp/tel-z1nwmjpt/fs
86.9 27 | >> pfctl -X 13194900602965905683
86.9 27 | >> pfctl -a sshuttle-12300 -F all
86.9 27 | >> pfctl -X 13194900602769430611
87.0 TEL | [67] ran in 0.10 secs.
87.0 TEL | (Cleanup) Kill BG process [24] SSH port forward (socks and proxy poll)
87.0 TEL | (Cleanup) Kill Web server for proxy poll
87.0 TEL | [27] sshuttle: exit -15
87.0 TEL | [24] SSH port forward (socks and proxy poll): exit 0
87.1 TEL | (Cleanup) Kill BG process [18] kubectl port-forward
87.1 TEL | [18] kubectl port-forward: exit -15
87.1 TEL | (Cleanup) Kill BG process [17] kubectl logs
87.1 TEL | [17] kubectl logs: exit -15
87.1 TEL | Background process (kubectl logs) exited with return code -15. Command was:
87.1 TEL | kubectl --context test --namespace default logs -f telepresence-1564615361-0072582-23381-747bc469f4-md64n --container telepresence-1564615361-0072582-23381 --tail=10
87.1 TEL |
87.1 TEL | Recent output was:
87.1 TEL | 2019-07-31T23:23:01+0000 [-] Resolver added ('10.0.0.10', 53) to server list
87.1 TEL | 2019-07-31T23:23:01+0000 [-] SOCKSv5Factory starting on 9050
87.1 TEL | 2019-07-31T23:23:01+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7feed39a2240>
87.1 TEL | 2019-07-31T23:23:01+0000 [-] DNSDatagramProtocol starting on 9053
87.1 TEL | 2019-07-31T23:23:01+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7feed39a25f8>
87.1 TEL | 2019-07-31T23:23:01+0000 [-] Loaded.
87.1 TEL | 2019-07-31T23:23:01+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 19.2.1 (/usr/bin/python3.6 3.6.8) starting up.
87.1 TEL | 2019-07-31T23:23:01+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
87.1 TEL | 2019-07-31T23:23:31+0000 [Poll#info] Checkpoint
87.1 TEL | 2019-07-31T23:24:01+0000 [Poll#info] Checkpoint
87.1 TEL | (Cleanup) Delete new deployment
87.1 >>> | Cleaning up Deployment telepresence-1564615361-0072582-23381
87.1 TEL | [68] Running: kubectl --context test --namespace default delete --ignore-not-found svc,deploy --selector=telepresence=c49ec35beaa24138ab9d66da1cb7a4e5
87.9 68 | deployment.extensions "telepresence-1564615361-0072582-23381" deleted
88.0 TEL | [68] ran in 0.89 secs.
88.0 TEL | (Cleanup) Kill sudo privileges holder
88.0 TEL | (Cleanup) Stop time tracking
88.0 TEL | END SPAN main.py:40(main) 88.0s
88.0 TEL | (Cleanup) Remove temporary directory
88.0 TEL | (Cleanup) Save caches
88.5 TEL | (sudo privileges holder thread exiting)