@gngdb
Last active Sep 9, 2019


So, you want to be able to work from anywhere. You want to be on a mountain somewhere, two bars of 3G signal, and you forward that to your laptop with a WiFi hotspot. Open your laptop and your shell on remote is already open and as responsive as possible. Work/life balance? With power like this, who cares?

Problem Scenario

Often, in academic institutions at least, you have the following situation:

                                  public IP                               internal
[ local (laptop) ] <--- ssh ---> [remote (gateway server)] <--- ssh ---> [remote (compute server)]

To connect, you ssh into the gateway server from your laptop, then ssh to the compute server from there. You can run jobs and do work there. Maybe you run tmux so you can disconnect and reconnect later, picking up where you left off.

Then, you want to view a graph or something, so you start a Jupyter server, or use two hop X server forwarding. In the first case, you need to tunnel a tcp port all the way back to your local laptop, which is annoying.
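Done manually, that first-case tunnel looks something like the sketch below; the hostnames are placeholders, and with OpenSSH 7.3+ the two hops can be collapsed into one command with ProxyJump:

```shell
# Forward compute's port 8888 (e.g. a Jupyter server) back to the laptop,
# hopping through the gateway. Hostnames are placeholders; building the
# command in a function lets us print it and inspect it before running it.
build_tunnel_cmd() {
  local gateway=$1 compute=$2 port=$3
  echo "ssh -J ${gateway} -L ${port}:localhost:${port} ${compute}"
}

build_tunnel_cmd me@gateway.example.edu me@compute 8888
# → ssh -J me@gateway.example.edu -L 8888:localhost:8888 me@compute
```

Typing this (and keeping it alive) every time you reconnect is the annoyance the rest of this note tries to remove.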

Then, your WiFi connection drops for a second and the ssh connection freezes. You have to reconnect, which is annoying, and the session can be slow or unresponsive.

Let's make a list of what we want to get around this problem:

  • A responsive, roaming shell on the compute server.
    • roaming: reconnects when the laptop regains an internet connection.
  • SSH public key authentication everywhere, using only the key on the local machine.
  • Assume we can only connect to the gateway by ssh, and we have sudo only on the local machine.
  • Simple tcp port tunneling, preferably without typing anything.


For the responsive, roaming shell we'll be using mosh, but we can't see the UDP ports on any of the remotes, and even if we could, we can't install a mosh server on any of them. So we're going to need another host. Since we have that server anyway, we'll also use it for persistent ssh tunnels, connecting to those tunnels whenever we need to over wireguard. Since we're connecting from this intermediate host to the gateway, we need to forward our ssh agent; but we're using mosh, so we'll have to use guardian-agent. In summary:

  • Use a public-facing host with a reliable internet connection (I'll be using a VM on GCE).
  • Run a wireguard VPN server on that host.
    • All TCP ports tunneled to this host are now visible on local.
  • mosh to that host, and then use ssh to connect to gateway, and then compute.
  • Finally, supply config and utilities so that we can open ssh tunnels to chosen tcp ports on compute that will stay active, and easy reconnect to a tmux session on compute:
    • Proxy connection to compute, set up in .ssh/config, using ssh agent forwarding.
    • Config file specifying ports to tunnel.
    • alias to ssh command that tunnels all ports specified, and runs tmux on connection, reconnecting to the same tmux session, or creating it if it doesn't exist.
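A minimal sketch of those last three pieces; the hostnames (gateway.example.edu, compute.internal), the ports file location (~/.tunnel-ports) and the tmux session name (main) are all assumptions, not fixed by anything above:

```shell
#!/usr/bin/env bash
# Sketch of the config and utilities described above.

# 1. Proxy connection to compute, via the gateway, with ssh agent
#    forwarding. Appends to the given file (default ~/.ssh/config).
install_ssh_config() {
  cat >> "${1:-$HOME/.ssh/config}" <<'EOF'
Host gateway
    HostName gateway.example.edu
    ForwardAgent yes
Host compute
    HostName compute.internal
    ProxyJump gateway
    ForwardAgent yes
EOF
}

# 2. Config file of ports to tunnel, one per line (e.g. 8888 for Jupyter),
#    turned into a string of -L forwarding flags.
tunnel_flags() {
  local flags="" port
  while read -r port; do
    [ -n "$port" ] && flags="${flags}${flags:+ }-L ${port}:localhost:${port}"
  done < "${1:-$HOME/.tunnel-ports}"
  echo "$flags"
}

# 3. The alias: connect with every listed port forwarded and attach to a
#    tmux session called "main"; `tmux new -A` creates it if it's missing.
# shellcheck disable=SC2046
alias work='ssh -t $(tunnel_flags) compute "tmux new -A -s main"'
```

With this in place, `work` from the laptop lands you back in the same tmux session with all your tunnels open.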

Now our connection looks like this:

           |--- wireguard --|
[ laptop ] | <--- mosh ---> | [ intermediate ] <--- ssh ---> [ gateway ] <--- ssh ---> [ compute ]


First, we need to set up a server with a wireguard VPN and shell access. Luckily, this can be achieved easily using algo. I'm going to be doing this on GCE because I still have most of my $300 of introductory credits.

algo on GCE

The documentation for algo isn't a perfect quickstart for GCE. They supply Google Cloud Platform setup instructions, but don't link to them from the README instructions. The easiest way to run this setup is using the cloud shell. It already has gcloud set up and configured.

Alternatively, set up gcloud on your local machine first.

If you are using google cloud shell, it is useful to enable paste with ctrl-shift-v in the cloud shell settings (the spanner icon) because the following involves copying and pasting many commands.

First, clone the algo repository and cd into it:

git clone https://github.com/trailofbits/algo.git
cd algo

Either follow the steps supplied by algo (but don't run the final line to deploy a server yet) or just do the following:

cat docs/ | sed -n '/```bash/,/```/p' | sed '1d;$d' | sed '$d' | sed '$d' | bash

Now you should have:

  • Access to a configured gcloud shell where you can configure GCE.
  • A dedicated Project for algo-vpn servers, with the appropriate settings.

deploying algo from cloud shell

The cloud shell is an ephemeral environment where we can install the dependencies necessary to deploy an algo server. This means that it will forget all the installed dependencies after each session, requiring reinstallation before you can deploy another algo server. To make this less annoying, use this:

cat | sed -n '/```bash/,/```/p' | sed "s/^[ \t]*//" | sed '/```/d' | sed '1,2d' | sed 's/\$ //g' >

Configure the users you require now by editing config.cfg and specifying the usernames you want to use.

To begin deployment, run:

./algo -e "provider=gce" -e "gce_credentials_file=$(pwd)/configs/gce.json"

Choose the config options depending on your requirements, but do not enable DNS adblock. I immediately ran into issues with DNS when it was enabled with this recipe.

Now we have:

  • A deployed algo server with ssh access
  • A username (either root or ubuntu) and an IP address from the success message; make a note of these
  • A set of client configs in configs/
  • An ssh key for administration: configs/algo.pem (gives root access on a public facing IP, so keep this safe)

Wireguard client configuration

I'm on a chromebook, so I use the android client. See the algo docs to find your appropriate client. Configuration is just importing the configs/<name>.conf file for the device you're on.

Downloading them from cloud shell is a pain, because you have to type in absolute paths. An easier way is to just cat <name>.conf, select the output, copy it, and paste it into a file on your local machine.

mosh configuration

We want to connect to the algo node using mosh because it works better on slow networks and supports roaming by default. However, for security the algo node we have doesn't have the necessary ports open for us to connect using mosh on the public interface. We could just open them, but that's opening more ports on an external interface and my vague understanding of network security says this is bad. When the wireguard VPN is active, we can access the VPN server on this private network instead. Paste this block to find the right IP (thanks stackoverflow):

ALGO_IP=`ip addr | awk '
/^[0-9]+:/ {
  sub(/:/,"",$2); iface=$2 }
/^[[:space:]]*inet / {
  split($2, a, "/")
  print iface" : "a[1]
}' | grep tun | cut -b 8- | cut -d "." -f 1-3 | sed 's/$/.1/'`
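For reference, the last two pipeline stages just replace the client address's final octet with 1, since algo places the server at .1 on the VPN subnet; as a standalone helper (the function name is mine):

```shell
# Given the client's VPN address, derive the server's: keep the first three
# octets and append .1 (equivalent to `cut -d "." -f 1-3 | sed 's/$/.1/'`).
server_ip() { echo "${1%.*}.1"; }

server_ip 10.19.49.2   # → 10.19.49.1
```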

To connect with mosh, UDP ports 60000-61000 need to be open. Download algo.pem from above and run the following to open them on the wireguard interface wg0:

ssh -i algo.pem ubuntu@$ALGO_IP 'sudo iptables -I INPUT 1 -p udp --dport 60000:61000 -j ACCEPT -i wg0'

Now, when connected with wireguard we can access the algo server with:

mosh --ssh="ssh -i algo.pem" ubuntu@$ALGO_IP

To avoid pasting in that line above every time, use this script that sticks those two commands together.
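If that script isn't to hand, a sketch that sticks the two commands together (the function names are mine, and it assumes algo.pem is in the current directory and the tunnel interface has "tun" in its name):

```shell
#!/usr/bin/env bash
# get_algo_ip parses `ip addr` output (on stdin) for the first IPv4 address
# on a "tun" interface, then swaps its last octet for 1, where algo puts
# the server on the VPN subnet.
get_algo_ip() {
  awk '
    /^[0-9]+:/ { sub(/:/, "", $2); iface = $2 }
    /inet / && iface ~ /tun/ { split($2, a, "/"); print a[1]; exit }
  ' | sed 's/\.[0-9]*$/.1/'
}

# Find the server's VPN address and mosh to it.
connect_algo() {
  local ip
  ip=$(ip addr | get_algo_ip)
  mosh --ssh="ssh -i algo.pem" "ubuntu@$ip"
}
```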
