Opening and closing an SSH tunnel in a shell script the smart way

I recently had the following problem:

  • From an unattended shell script (called by Jenkins), run a command-line tool that accesses the MySQL database on another host.
  • That tool doesn't know that the database is on another host, plus the MySQL port on that host is firewalled and not accessible from other machines.

We didn't want to open the MySQL port to the network, but it's possible to SSH from the Jenkins machine to the MySQL machine. So, basically you would do something like

ssh -L 3306:localhost:3306 remotehost

… well, and then what? Now you have a shell on the remote machine open and your script execution stops until that connection is terminated again.

Putting SSH in the background

If you want your local script to continue to run, you'd possibly send that SSH process to the background using something like ssh -L 3306:localhost:3306 remotehost & (note the ampersand) or ssh -fN -L 3306:localhost:3306 remotehost (with -f for "fork into background" and -N for "run no command"). But then you have to close that SSH connection again when you're done, and that has to happen even if your script crashes or is killed.

Also, closing the connection isn't that easy. If you background it with &, you can kill $! or kill %1, but not with -f. And you want -f because it has a really cool feature: It waits until the connection and (if combined with -o ExitOnForwardFailure=yes) the port forwardings have been set up successfully before going into the background. Without that, all following commands risk trying to connect to a port that has not been opened yet.

(Yes, you could sleep 3 or something after backgrounding SSH with & or even do sophisticated checks, but that's really ugly and there's a far better way of doing it.)
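For completeness, the &-plus-kill approach usually ends up looking like the sketch below. A stand-in sleep plays the role of the ssh tunnel here so the cleanup pattern is visible on its own; in a real script the first line would be the actual tunnel command.

```shell
# Stand-in for: ssh -N -L 3306:localhost:3306 remotehost &
sleep 60 &
tunnel_pid=$!

# Kill the background process when the script exits,
# whether it finishes normally, fails, or is interrupted.
trap 'kill "$tunnel_pid" 2>/dev/null' EXIT INT TERM

# ... commands that would use the tunnel would go here ...
```

This guarantees cleanup, but it still doesn't tell you when the forwarded port is actually ready, which is the problem discussed next.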

Not putting it in the background at all

So I thought about how to

  • start SSH when a certain command starts running and
  • make SSH terminate after that command terminates and
  • not do nasty things with sleep or whatever

And then I came up with this:

mysql -e 'SHOW DATABASES;' -h 127.0.0.1 | ssh -L 3306:localhost:3306 remotehost cat

(Note that I'm using 127.0.0.1 on purpose here: when using localhost, I found that MySQL often uses the Unix socket instead of TCP.)

You might want to think about that for a second. What is happening here?

This will open an SSH connection to remotehost, set up port forwarding and call cat on the remote host. It also instructs the local MySQL to connect to what it assumes to be the local host and run a command there. The cat is sitting on the remote host (awww!) and will copy everything it receives on stdin (which is the stdin of ssh, which is the stdout of mysql) to stdout (which is your terminal). mysql will send the command to the database and display the results to stdout, i.e. ssh, i.e. cat, i.e. your terminal.

And now comes the fun part: After that output, MySQL will close its output stream. SSH notices that, closes stdin for cat and cat terminates.
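You can watch the same mechanism locally, without any SSH involved: as soon as the left-hand side of a pipe exits and closes its output, cat sees end-of-file and terminates on its own.

```shell
# printf writes one line and exits, closing its end of the pipe;
# cat copies the line to stdout, hits EOF on stdin, and terminates.
out=$(printf 'SHOW DATABASES;\n' | cat)
echo "$out"
```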

Not losing things along the way

This actually works pretty well, until I noticed that if the MySQL command fails for some reason, the return code of that pipe will still be 0, because ssh started and terminated successfully. Since the script that I'm talking about is a Jenkins build and test script, that's a really bad thing: I need to get that return code in order to find out that something went wrong and the rest of the script should not continue. (In fact, that script has a shebang line of #!/bin/sh -e to make it terminate on every error.)

So, how do you get the return code of any other command in a pipe except for the last? Or at least make the complete pipe fail then?

Well, there's set -o pipefail, but that's only in Bash, and only in version 3 and above. Also, Bash has the $PIPESTATUS array to get the return code of any command in a pipe, but again, that's only in Bash.
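For reference, the two Bash mechanisms look like this (plain Bash, nothing SSH-specific):

```shell
#!/bin/bash

# By default, a pipe's exit status is that of its last command,
# so the failure of `false` is invisible here:
false | cat
echo "plain pipe: $?"                    # prints 0

# $PIPESTATUS keeps the status of every command in the last pipe:
false | cat
echo "first command: ${PIPESTATUS[0]}"   # prints 1

# With pipefail, the pipe fails if any command in it fails:
set -o pipefail
false | cat
echo "with pipefail: $?"                 # prints 1
```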

I was really disappointed that it seemed that I had to change my shebang to use bash instead. And then I stumbled across an article called Auto-closing SSH tunnels. And what it suggested was a really nice idea:

ssh -f -o ExitOnForwardFailure=yes -L 3306:localhost:3306 remotehost sleep 10
mysql -e 'SHOW DATABASES;' -h 127.0.0.1

That way, the mysql call is completely free from any pipes whatsoever and you can get its return code without any problems, even in a POSIX-compatible way.

The magic here is -f combined with sleep 10, which basically says "wait until the connection is there and the ports are open before you go into the background, but close yourself after 10 seconds". And here comes the fun part: SSH won't terminate as long as forwarded ports are still in use. So what it really means is that subsequent scripts have 10 seconds to open the port and then can keep it open as long as they want to.

This is, however, the weakness of that approach: if the command that will use the port doesn't open it in time, or if it closes it and tries to open it again at a later time, this approach will not work for you.


joe99de commented Dec 16, 2014

Thank you for your tutorial.




gavinsun2008 commented Apr 17, 2015

Grateful! I use this approach to transfer files through a gateway machine.



yonatanp commented Aug 1, 2015

Very elegant and useful, thank you.



perennialmind commented Aug 25, 2015

OpenSSH has had a clean solution for this since they introduced multiplexing support in version 3.9. You can now start an ssh session in the background and send it a stop command in much the same vein as a system service. You just need to tell ssh to use a control socket. Bonus points for checking for an existing persistent session with "-O check" and setting a time limit with ControlPersist.

In ~/.ssh/config

Host remotehost-proxy
    HostName remotehost
    ControlPath ~/.ssh/remotehost-proxy.ctl

In your script:

ssh -f -N -T -M -L 3306:localhost:3306 remotehost-proxy
mysql -e 'SHOW DATABASES;' -h 127.0.0.1
ssh -T -O "exit" remotehost-proxy
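The bonus points mentioned above, spelled out: ControlPersist puts a time limit on how long the master connection lingers after the last session closes. A hypothetical ~/.ssh/config entry (the 10m value is just an example):

```
Host remotehost-proxy
    HostName remotehost
    ControlPath ~/.ssh/remotehost-proxy.ctl
    ControlPersist 10m
```

With that in place, ssh -O check remotehost-proxy reports whether a master connection is already running, so a script can skip the setup step if one is.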


wayjake commented Feb 1, 2016

I have come back to this time and time again. Excellent explanation. Thanks!



Sam2045 commented Jun 15, 2016

Oh I'm glad I found this post. I'll be back for more later. Thanks~




spusnei-pbsc commented Aug 18, 2016

To avoid problems with an already open port, you can simply generate a random one and point mysql at that port:


mysqlrun() {
    PORT=$(shuf -i 10000-65000 -n 1)
    ssh -f -o ExitOnForwardFailure=yes -L $PORT:$db_host:$db_port $db_server sleep 10
    mysql -P$PORT -h 127.0.0.1 -u $db_user $db_name "$@"
}

and the usage would be

echo "SHOW DATABASES" | mysqlrun
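One caveat: shuf comes from GNU coreutils and isn't available everywhere. Where it's missing, awk can pick a random port in the same range; a small portable sketch:

```shell
# Pick a pseudo-random port between 10000 and 65000,
# equivalent to: shuf -i 10000-65000 -n 1
PORT=$(awk 'BEGIN { srand(); print 10000 + int(rand() * 55001) }')
echo "$PORT"
```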



merlinblack commented Sep 9, 2016

This technique works very well for gitosis / gitolite servers that can be accessed via a gateway machine. I've written a 3 line bash script to tunnel, run git, and shut down the tunnel. I'm happily cloning, pushing and pulling from all over the place to my company's git repositories. :-)

#! /bin/bash

# Forward a local port to the git server's SSH port, matching the
# gitatwork entry below (localhost:3333).
ssh -f -N -T -M -L 3333:localhost:22 $HOST
git "$@"
ssh -T -O "exit" $HOST

Note I have control files set up in .ssh/config for all hosts, and an entry mapping 'gitatwork' to git@localhost:3333 for use when I clone.

host *
controlmaster auto
controlpath /tmp/%r@%h:%p

host gitatwork
hostname localhost
port 3333
user git
identityfile ~/.ssh/git_rsa

And used like:
tunnelgit clone gitatwork:malvadoplandenúmerocinco.git



remindxiao commented Apr 25, 2017

Thanks for sharing! Helps a lot.



chrisjpalmer commented Jun 30, 2017

Hey. You're a legend. This is exactly what I needed for auto-updating my database. THANK YOU!



akang1 commented Oct 5, 2017

This is awesome, like chrisjpalmer, I needed this for auto updating my database that is behind a firewall.

If you are doing this in php, you may run into a couple of issues like I did:

  1. The mysql connection would occur before the ssh connection had completed, and fail.

    • I had to put a sleep (1 second) between the ssh command and the mysql connection commands in order for this to work.
  2. I had to run the ssh command in the background even with the -f flag.

    • shell_exec("ssh -fo ExitOnForwardFailure=yes -L 3307:{ip}:3306 user@{tunnel IP} sleep 10 > {name it w/e you want}.txt &");
    • Without doing this, php would wait until the ssh command had completed before moving on to the mysql connection command (which would then fail, since there is no longer an ssh connection).


ghost commented Nov 18, 2017

This is a great tutorial and there are many excellent suggestions. Excluding specific considerations, such as the inability to run certain programs, I have found that autossh is the best solution to easily and reliably establish and sustain ssh tunnels. As many of you have already described, autossh is essentially a wrapper for the ssh command with a tad bit more functionality such as monitoring of the tunnel through a different port, logging, and my favorite -- gatetime.

AUTOSSH_GATETIME    - how long must an ssh session be established
                      before we decide it really was established
                      (in seconds). Default is 30 seconds; use of -f
                      flag sets this to 0.

Here is how you run it:
autossh -M 0 -f -N -L 10080:remote:10080 root@remote



sean9999 commented Jan 5, 2018

ingenious! thank you



Hubro commented Apr 14, 2018

Very informative. I needed this to connect to my production server's Docker daemon:

# Echoes an available, randomly selected local port
random_local_port() {
    python -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()'
}

# Convenience function for running a docker command on the production server's
# docker daemon. Usage examples:
#     docker_prod docker ps
#     docker_prod docker-compose up
docker_prod() {
    PORT=$(random_local_port)

    # 2375 is Docker's conventional unencrypted TCP port; adjust the
    # forwarding target to match your daemon's configuration.
    ssh -f -o ExitOnForwardFailure=yes \
        -L "$PORT:localhost:2375" "$PRODSERVER" \
        sleep 5

    DOCKER_HOST="tcp://localhost:$PORT" "$@"
}

Luckily Docker appears to use persistent HTTP connections, so the tunnel doesn't close between API calls during long-running commands like docker-compose up -d --build.



ilijaz commented Aug 2, 2018

Excelent one line backup script. Thanks a lot!



simonwiles commented Oct 6, 2018

Anyone know a way to do this (or something very similar) with “dynamic” application-level port forwarding (i.e. ssh -D)?



martinklepsch commented Dec 5, 2018

I found this to be a much nicer approach:


# ip=""

socket=$(mktemp -t deploy-ssh-socket)
rm ${socket} # delete socket file so path can be used by ssh


cleanup () {
    exit_code=$?
    # Stop SSH port forwarding process; this function may be
    # called twice, so only terminate port forwarding if the
    # socket still exists
    if [ -S ${socket} ]; then
        echo "Sending exit signal to SSH process"
        ssh -S ${socket} -O exit root@${ip}
    fi
    exit $exit_code
}

trap cleanup EXIT ERR INT TERM

# Start SSH port forwarding process for consul (8500) and nomad (4646)
ssh -M -S ${socket} -fNT -L 8500:localhost:8500 -L 4646:localhost:4646 root@${ip}

ssh -S ${socket} -O check root@${ip}

# launching a shell here causes the script to not exit and allows you
# to keep the forwarding running for as long as you want.
# I also like to customise the prompt to indicate that this isn't a normal shell.

bash --rcfile <(echo 'PS1="\nwith-ports> "')