@PaulisMatrix
Created January 10, 2023 14:01
telepresence log [10-1-2023]
0.0 TEL | Telepresence 0.73-1694-g146d6ec launched at Tue Jan 10 19:15:06 2023
0.0 TEL | ./usr/local/bin/telepresence --mount=false --also-proxy=10.0.0.0/8 --run zsh
0.0 TEL | Using images version 0.73 (dev)
0.0 TEL | uname: uname_result(system='Darwin', node='192.168.1.2', release='21.5.0', version='Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:29 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T8101', machine='x86_64')
0.0 TEL | Platform: darwin
0.0 TEL | WSL: False
0.0 TEL | Python 3.9.6 (v3.9.6:db3ff76da1, Jun 28 2021, 11:49:53)
0.0 TEL | [Clang 6.0 (clang-600.0.57)]
0.0 TEL | BEGIN SPAN main.py:40(main)
0.0 TEL | BEGIN SPAN startup.py:83(set_kube_command)
0.0 TEL | Found kubectl -> /usr/local/bin/kubectl
0.0 TEL | [1] Capturing: kubectl config current-context
0.1 TEL | [1] captured in 0.12 secs.
0.1 TEL | [2] Capturing: kubectl --context dev version --short
0.2 2 | Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
0.4 TEL | [2] captured in 0.28 secs.
0.4 TEL | [3] Capturing: kubectl --context dev config view -o json
0.5 TEL | [3] captured in 0.05 secs.
0.5 TEL | [4] Capturing: kubectl --context dev api-versions
0.7 TEL | [4] captured in 0.25 secs.
0.7 TEL | Command: kubectl 1.24.0
0.7 TEL | Context: dev, namespace: default, version: 4.5.4
0.7 >>> | Warning: Telepresence has only been tested on version 1.* clusters
0.7 >>> | Warning: kubectl 1.24.0 may not work correctly with cluster version 4.5.4 due to the version discrepancy. See https://kubernetes.io/docs/setup/version-skew-policy/ for more information.
0.7 >>> |
0.7 TEL | END SPAN startup.py:83(set_kube_command) 0.7s
0.7 >>> | Using a Pod instead of a Deployment for the Telepresence proxy. If you experience problems, please file an issue!
0.7 >>> | Set the environment variable TELEPRESENCE_USE_DEPLOYMENT to any non-empty value to force the old behavior, e.g.,
0.7 >>> | env TELEPRESENCE_USE_DEPLOYMENT=1 telepresence --run curl hello
0.7 >>> |
0.7 TEL | Found ssh -> /usr/bin/ssh
0.7 TEL | [5] Capturing: ssh -V
0.8 TEL | [5] captured in 0.06 secs.
0.8 TEL | Found zsh -> /bin/zsh
0.8 TEL | Found sshuttle-telepresence -> /Users/rushiyadwade/tele/telepresence/usr/local/libexec/sshuttle-telepresence
0.8 TEL | Found pfctl -> /sbin/pfctl
0.8 TEL | Found sudo -> /usr/bin/sudo
0.8 TEL | [6] Running: sudo -n echo -n
0.9 6 | sudo: a password is required
0.9 TEL | [6] exit 1 in 0.16 secs.
1.0 >>> | How Telepresence uses sudo: https://www.telepresence.io/reference/install#dependencies
1.0 >>> | Invoking sudo. Please enter your sudo password.
1.0 TEL | [7] Running: sudo echo -n
5.0 TEL | [7] ran in 4.08 secs.
5.0 >>> | Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html
5.0 TEL | [8] Running: kubectl --context dev --namespace default get pods telepresence-connectivity-check --ignore-not-found
7.7 TEL | [8] ran in 2.69 secs.
8.5 TEL | Scout info: {'latest_version': '0.73-1694-g146d6ec', 'FAILED': '<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)>'}
8.5 >>> | Starting network proxy to cluster using new Pod telepresence-1673358306-759237-26882
8.5 TEL | [9] Running: kubectl --context dev --namespace default create -f -
8.9 9 | pod/telepresence-1673358306-759237-26882 created
8.9 TEL | [9] ran in 0.38 secs.
8.9 TEL | BEGIN SPAN remote.py:109(wait_for_pod)
8.9 TEL | [10] Running: kubectl --context dev --namespace default wait --for=condition=ready --timeout=60s pod/telepresence-1673358306-759237-26882
23.3 10 | pod/telepresence-1673358306-759237-26882 condition met
23.3 TEL | [10] ran in 14.47 secs.
23.3 TEL | [11] Capturing: kubectl --context dev --namespace default get pod telepresence-1673358306-759237-26882 -o json
23.5 TEL | [11] captured in 0.16 secs.
23.5 TEL | END SPAN remote.py:109(wait_for_pod) 14.6s
23.5 TEL | BEGIN SPAN connect.py:37(connect)
23.5 TEL | [12] Launching kubectl logs: kubectl --context dev --namespace default logs -f telepresence-1673358306-759237-26882 --container telepresence --tail=10
23.5 TEL | [13] Launching kubectl port-forward: kubectl --context dev --namespace default port-forward telepresence-1673358306-759237-26882 55110:8022
23.5 TEL | [14] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectTimeout=5 -q -p 55110 telepresence@127.0.0.1 /bin/true
23.6 TEL | [14] exit 255 in 0.05 secs.
23.7 12 | 2023-01-10T13:45:29+0000 [-] Loading ./forwarder.py...
23.7 12 | 2023-01-10T13:45:29+0000 [-] SOCKSv5Factory starting on 9050
23.7 12 | 2023-01-10T13:45:29+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7f441b3ebcc0>
23.7 12 | 2023-01-10T13:45:29+0000 [-] /etc/resolv.conf changed, reparsing
23.7 12 | 2023-01-10T13:45:29+0000 [-] Resolver added ('10.16.0.10', 53) to server list
23.7 12 | 2023-01-10T13:45:29+0000 [-] DNSDatagramProtocol starting on 9053
23.7 12 | 2023-01-10T13:45:29+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7f441b385fd0>
23.7 12 | 2023-01-10T13:45:29+0000 [-] Loaded.
23.7 12 | 2023-01-10T13:45:29+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 17.9.0 (/usr/bin/python3.6 3.6.1) starting up.
23.7 12 | 2023-01-10T13:45:29+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
23.8 13 | Forwarding from 127.0.0.1:55110 -> 8022
23.8 13 | Forwarding from [::1]:55110 -> 8022
23.8 TEL | [15] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectTimeout=5 -q -p 55110 telepresence@127.0.0.1 /bin/true
23.9 13 | Handling connection for 55110
24.1 TEL | [15] ran in 0.31 secs.
24.1 >>> |
24.1 >>> | No traffic is being forwarded from the remote Deployment to your local machine. You can use the --expose option to specify which ports you want to forward.
24.1 >>> |
24.1 TEL | Launching Web server for proxy poll
24.1 TEL | [16] Launching SSH port forward (socks and proxy poll): ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectTimeout=5 -q -p 55110 telepresence@127.0.0.1 -L127.0.0.1:55120:127.0.0.1:9050 -R9055:127.0.0.1:55121
24.1 TEL | END SPAN connect.py:37(connect) 0.7s
24.2 TEL | BEGIN SPAN remote_env.py:29(get_remote_env)
24.2 TEL | [17] Capturing: kubectl --context dev --namespace default exec telepresence-1673358306-759237-26882 --container telepresence -- python3 podinfo.py
24.2 13 | Handling connection for 55110
24.6 17 | python3: can't open file 'podinfo.py': [Errno 2] No such file or directory
24.6 17 | command terminated with exit code 2
24.6 TEL | [17] exit 2 in 0.42 secs.
24.6 TEL | END SPAN remote_env.py:29(get_remote_env) 0.4s
26.6 TEL | CRASH: Command '['kubectl', '--context', 'dev', '--namespace', 'default', 'exec', 'telepresence-1673358306-759237-26882', '--container', 'telepresence', '--', 'python3', 'podinfo.py']' returned non-zero exit status 2.
26.6 TEL | Traceback (most recent call last):
26.6 TEL | File "/Users/rushiyadwade/tele/telepresence/./usr/local/bin/telepresence/telepresence/cli.py", line 135, in crash_reporting
26.6 TEL | yield
26.6 TEL | File "/Users/rushiyadwade/tele/telepresence/./usr/local/bin/telepresence/telepresence/main.py", line 71, in main
26.6 TEL | env, pod_info = get_remote_env(runner, ssh, remote_info)
26.6 TEL | File "/Users/rushiyadwade/tele/telepresence/./usr/local/bin/telepresence/telepresence/remote_env.py", line 32, in get_remote_env
26.6 TEL | json_data = runner.get_output(
26.6 TEL | File "/Users/rushiyadwade/tele/telepresence/./usr/local/bin/telepresence/telepresence/runner/runner.py", line 479, in get_output
26.6 TEL | output = self._run_command_sync(
26.6 TEL | File "/Users/rushiyadwade/tele/telepresence/./usr/local/bin/telepresence/telepresence/runner/runner.py", line 437, in _run_command_sync
26.6 TEL | raise CalledProcessError(
26.6 TEL | subprocess.CalledProcessError: Command '['kubectl', '--context', 'dev', '--namespace', 'default', 'exec', 'telepresence-1673358306-759237-26882', '--container', 'telepresence', '--', 'python3', 'podinfo.py']' returned non-zero exit status 2.
26.6 TEL | (calling crash reporter...)
35.1 TEL | [18] Running: sudo -n echo -n
35.2 TEL | [18] ran in 0.06 secs.
54.1 >>> | Exit cleanup in progress
54.1 TEL | (Cleanup) Kill BG process [16] SSH port forward (socks and proxy poll)
54.1 TEL | [16] SSH port forward (socks and proxy poll): exit 0
54.1 TEL | (Cleanup) Kill Web server for proxy poll
54.2 TEL | (Cleanup) Kill BG process [13] kubectl port-forward
54.2 TEL | [13] kubectl port-forward: exit -15
54.2 TEL | (Cleanup) Kill BG process [12] kubectl logs
54.2 TEL | [12] kubectl logs: exit -15
54.2 TEL | Background process (kubectl logs) exited with return code -15. Command was:
54.2 TEL | kubectl --context dev --namespace default logs -f telepresence-1673358306-759237-26882 --container telepresence --tail=10
54.2 TEL |
54.2 TEL | Recent output was:
54.2 TEL | 2023-01-10T13:45:29+0000 [-] Loading ./forwarder.py...
54.2 TEL | 2023-01-10T13:45:29+0000 [-] SOCKSv5Factory starting on 9050
54.2 TEL | 2023-01-10T13:45:29+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7f441b3ebcc0>
54.2 TEL | 2023-01-10T13:45:29+0000 [-] /etc/resolv.conf changed, reparsing
54.2 TEL | 2023-01-10T13:45:29+0000 [-] Resolver added ('10.16.0.10', 53) to server list
54.2 TEL | 2023-01-10T13:45:29+0000 [-] DNSDatagramProtocol starting on 9053
54.2 TEL | 2023-01-10T13:45:29+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7f441b385fd0>
54.2 TEL | 2023-01-10T13:45:29+0000 [-] Loaded.
54.2 TEL | 2023-01-10T13:45:29+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 17.9.0 (/usr/bin/python3.6 3.6.1) starting up.
54.2 TEL | 2023-01-10T13:45:29+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
54.2 TEL | (Cleanup) Delete proxy Pod
54.2 >>> | Cleaning up Pod
54.2 TEL | [19] Running: kubectl --context dev --namespace default delete --ignore-not-found --wait=false --selector=telepresence=b9e17dfcc4494a56afb1c23ea2394962 Pod
54.5 19 | pod "telepresence-1673358306-759237-26882" deleted
54.5 TEL | [19] ran in 0.24 secs.
54.5 TEL | (Cleanup) Kill sudo privileges holder
54.5 TEL | (Cleanup) Stop time tracking
54.5 TEL | END SPAN main.py:40(main) 54.4s
54.5 TEL | (Cleanup) Remove temporary directory
54.5 TEL | (Cleanup) Save caches
55.3 TEL | (sudo privileges holder thread exiting)
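
Note: the crash above comes from step [17], where "python3 podinfo.py" exits with code 2 inside the proxy container because the file does not exist there. Since this is a dev client build (0.73-1694-g146d6ec) running against the released 0.73 proxy image, the most likely cause is a client/image mismatch: the client expects a podinfo.py helper that the older image does not ship. This is only an inference from the log. As a rough check (assuming a proxy pod from a later run is still alive; the pod name below is a placeholder), listing the container's working directory should show forwarder.py but no podinfo.py:

kubectl --context dev --namespace default exec <proxy-pod> --container telepresence -- ls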