Opening and closing an SSH tunnel in a shell script the smart way

I recently had the following problem:

  • From an unattended shell script (called by Jenkins), run a command-line tool that accesses the MySQL database on another host.
  • That tool doesn't know that the database is on another host, plus the MySQL port on that host is firewalled and not accessible from other machines.

We didn't want to open the MySQL port to the network, but it's possible to SSH from the Jenkins machine to the MySQL machine. So, basically you would do something like

ssh -L 3306:localhost:3306 remotehost

… well, and then what? Now you have a shell on the remote machine open and your script execution stops until that connection is terminated again.

Putting SSH in the background

If you want your local script to continue to run, you'd probably send that SSH process to the background, either with ssh -L 3306:localhost:3306 remotehost & (note the ampersand) or with ssh -fN -L 3306:localhost:3306 remotehost (-f for "fork into background" and -N for "run no command"). But then you have to close that SSH connection again when you're done, even if your script crashes or is killed.

Also, closing the connection isn't that easy. If you background it with &, you can kill $! or kill %1, but not with -f. And you want -f because it has a really cool feature: It waits until the connection and (if combined with -o ExitOnForwardFailure=yes) the port forwardings have been set up successfully before going into the background. Without that, all following commands risk trying to connect to a port that has not been opened yet.
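The &-plus-kill variant can be sketched like this (using sleep as a local stand-in for the ssh command, so the sketch actually runs anywhere; the trap is my addition to handle the crash/kill case):

```shell
#!/bin/sh
# Stand-in for: ssh -N -L 3306:localhost:3306 remotehost &
sleep 60 &
tunnel_pid=$!   # PID of the backgrounded process

# Close the tunnel even if the script exits early or is interrupted
trap 'kill "$tunnel_pid" 2>/dev/null' EXIT INT TERM

# ... commands that use the forwarded port would go here ...
kill -0 "$tunnel_pid" && echo "tunnel process is still running"
```

Note that this still has the race the article describes: nothing guarantees the forwarded port is open before the following commands run.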

(Yes, you could sleep 3 or something after backgrounding SSH with & or even do sophisticated checks, but that's really ugly and there's a far better way of doing it.)

Not putting it in the background at all

So I thought about how to

  • start SSH when a certain command starts running and
  • make SSH terminate after that command terminates and
  • not do nasty things with sleep or whatever

And then I came up with this:

mysql -e 'SHOW DATABASES;' -h 127.0.0.1 | ssh -L 3306:localhost:3306 remotehost cat

(Note that I'm using 127.0.0.1 on purpose here: when using localhost, I found that MySQL often uses the Unix socket instead of TCP.)

You might want to think about that for a second. What is happening here?

This will open an SSH connection to remotehost, set up port forwarding and call cat on the remote host. It also instructs the local MySQL client to connect to what it assumes to be the local host and run a command there. The cat is sitting on the remote host (awww!) and will send everything it receives on stdin (which is the stdin of ssh, which is the stdout of mysql) to stdout (which is your terminal). mysql will send the command to the database and display the results on stdout, i.e. ssh, i.e. cat, i.e. your terminal.

And now comes the fun part: After that output, MySQL will close its output stream. SSH notices that, closes stdin for cat and cat terminates.
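The EOF propagation can be seen with purely local commands: when the left side of a pipe exits, the right side reads end-of-file on stdin and terminates, which is exactly what makes the remote cat (and with it the ssh connection) go away:

```shell
# printf exits after writing; cat sees EOF on stdin and terminates too
printf 'SHOW DATABASES;\n' | cat
```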

Not losing things along the way

This actually worked pretty well, until I noticed that if the MySQL command fails for some reason, the return code of the pipe is still 0, because ssh started and terminated successfully. Since the script I'm talking about is a Jenkins build and test script, that's a really bad thing: I need that return code to find out that something went wrong, so that the rest of the script does not continue. (In fact, that script has a shebang line of #!/bin/sh -e to make it terminate on every error.)

So, how do you get the return code of any other command in a pipe except for the last? Or at least make the complete pipe fail then?

Well, there's set -o pipefail, but that's only in Bash, and only in version 3 and above. Also, Bash has the $PIPESTATUS array to get the return code of any command in a pipe, but again, that's only in Bash.
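For reference, here is what both Bash mechanisms look like in action:

```shell
#!/bin/bash
# pipefail: the pipe's exit code becomes the rightmost non-zero code in it
set -o pipefail
false | cat
echo "with pipefail: $?"      # 1 - the failure of 'false' is propagated

set +o pipefail
false | cat
echo "without pipefail: $?"   # 0 - only the last command ('cat') counts

# PIPESTATUS holds the exit code of every command in the last pipeline
false | true
echo "PIPESTATUS: ${PIPESTATUS[0]} ${PIPESTATUS[1]}"   # 1 0
```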

I was really disappointed that it seemed that I had to change my shebang to use bash instead. And then I stumbled across an article called Auto-closing SSH tunnels. And what it suggested was a really nice idea:

ssh -f -o ExitOnForwardFailure=yes -L 3306:localhost:3306 remotehost sleep 10
mysql -e 'SHOW DATABASES;' -h 127.0.0.1

That way, the mysql call is completely free from any pipes whatsoever and you can get its return code without any problems, even in a POSIX-compatible way.
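The difference is easy to demonstrate in plain POSIX sh, with false standing in for a failing mysql call:

```shell
#!/bin/sh
# With no pipe involved, $? is the command's own exit code; under a
# '#!/bin/sh -e' shebang the failing line would abort the script, which
# is exactly what a CI job needs.
false || echo "command failed with code $?"
```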

The magic here is -f combined with sleep 10, which basically says "wait until the connection is there and the ports are open before you go into the background, but close yourself after 10 seconds". And here comes the fun part: SSH won't terminate as long as forwarded ports are still in use. So what it really means is that subsequent scripts have 10 seconds to open the port and then can keep it open as long as they want to.

This is, however, the weakness of that approach: if the command that will use the port doesn't open it in time, or if it closes it and tries to open it again at a later time, this approach will not work for you.


sean9999 commented Jan 5, 2018

ingenious! thank you


Hubro commented Apr 14, 2018

Very informative. I needed this to connect to my production server's Docker daemon:

# Echoes an available, randomly selected local port
random_local_port() {
    python -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()'
}

# Convenience function for running a docker command on the production server's
# docker daemon. Usage examples:
#     docker_prod docker ps
#     docker_prod docker-compose up
docker_prod() {
    PORT="$(random_local_port)"

    # Assumes the remote daemon listens on the default Unix socket;
    # OpenSSH 6.7+ can forward a local TCP port to a remote Unix socket
    ssh -f -o ExitOnForwardFailure=yes \
        -L "$PORT:/var/run/docker.sock" "$PRODSERVER" \
        sleep 5

    DOCKER_HOST="tcp://localhost:$PORT" "$@"
}

Luckily Docker appears to use persistent HTTP connections, so the tunnel doesn't close between API calls during long-running commands like docker-compose up -d --build.


ilijaz commented Aug 2, 2018

Excelent one line backup script. Thanks a lot!


simonwiles commented Oct 6, 2018

Anyone know a way to do this (or something very similar) with “dynamic” application-level port forwarding (i.e. ssh -D)?


I found this to be a much nicer approach:


# ip=""

socket=$(mktemp -t deploy-ssh-socket)
rm ${socket} # delete socket file so the path can be used by ssh

cleanup () {
    exit_code=$?
    # Stop the SSH port forwarding process; this function may be
    # called twice, so only terminate port forwarding if the
    # socket still exists
    if [ -S ${socket} ]; then
        echo "Sending exit signal to SSH process"
        ssh -S ${socket} -O exit root@${ip}
    fi
    exit $exit_code
}

trap cleanup EXIT ERR INT TERM

# Start SSH port forwarding process for consul (8500) and nomad (4646)
ssh -M -S ${socket} -fNT -L 8500:localhost:8500 -L 4646:localhost:4646 root@${ip}

ssh -S ${socket} -O check root@${ip}

# launching a shell here causes the script to not exit and allows you
# to keep the forwarding running for as long as you want.
# I also like to customise the prompt to indicate that this isn't a normal shell.

bash --rcfile <(echo 'PS1="\nwith-ports> "')


Great tutorial.


Thanks for the idea! However, I'd like to ask if someone knows what to do with Windows command line and ANSI codes, because, when I try to launch the command (docker-compose) this way, ANSI codes in output are turned into garbage. Is there any way to pass them through and view the stdout of the first command as-is?


gantaa commented Feb 11, 2021

splendid! This all works great for me in TeamCity CI for running my DB migrations in my private subnets through a bastion jump server in my public subnets.


sadams commented Jul 14, 2021

Really useful and well explained. ❤️ that sleep trick! 👍


evokateur commented Jul 17, 2021

This inspired me to do something lazy!

I keep a file around that looks like this:


me=`basename "$0"`

if [ -z "$1" ]; then
    echo "Hello, footpad!"
    if [ -S ~/${me}.socket ]; then
        echo "The ~/${me}.socket is already open"
    else
        echo "Opening ~/${me}.socket"
        ssh -M -S ~/${me}.socket -fnNT ${me}
    fi
elif [ "$1" == "exit" ]; then
    echo "Exiting ~/${me}.socket"
    ssh -S ~/${me}.socket -O exit ${me}
elif [ "$1" == "check" ]; then
    echo "Checking ~/${me}.socket"
    ssh -S ~/${me}.socket -O check ${me}
else
    echo "I don't know how to ${1} a ~/${me}.socket"
fi

I symlink to it in my path with an arbitrary name, like frobozz

The name of the symlink becomes the name of the socket (~/frobozz.socket) and is also the name of the host in ~/.ssh/config where the HostName, User, and LocalForward(s) are configured:

Host frobozz
    HostName server
    User user
    LocalForward 33184
    LocalForward 33121
    LocalForward 33067

In everyday life it looks like:

$ frobozz
Hello, footpad!
Opening ~/frobozz.socket

$ frobozz exit
Exiting ~/frobozz.socket
Exit request sent.

If I need to port forward using a different server, I make a new symlink and a corresponding ~/.ssh/config entry. Lazy.
