Opening and closing an SSH tunnel in a shell script the smart way
I recently had the following problem:
- From an unattended shell script (called by Jenkins), run a command-line tool that accesses the MySQL database on another host.
- That tool doesn't know that the database is on another host, plus the MySQL port on that host is firewalled and not accessible from other machines.
We didn't want to open the MySQL port to the network, but it's possible to SSH from the Jenkins machine to the MySQL machine. So, basically you would do something like
ssh -L 3306:localhost:3306 remotehost
… well, and then what? Now you have a shell on the remote machine open and your script execution stops until that connection is terminated again.
Putting SSH in the background
If you want your local script to continue to run, you'd possibly send that SSH process to the background, using something like ssh -L 3306:localhost:3306 remotehost & (note the ampersand) or ssh -fN -L 3306:localhost:3306 remotehost (with -f for "fork into background" and -N for "run no command"). But then you have to close that SSH connection again when you're done, and that has to happen even if your script crashes or is killed.
Also, closing the connection isn't that easy. If you background it with &, you can kill $! or kill %1, but that doesn't work with -f, because ssh forks itself and $! never gets set. And you do want -f, because it has a really useful feature: it waits until the connection and (if combined with -o ExitOnForwardFailure=yes) the port forwardings have been set up successfully before going into the background. Without that, all following commands risk trying to connect to a port that hasn't been opened yet.
(Yes, you could sleep 3 or something after backgrounding SSH with &, or even do sophisticated checks, but that's really ugly, and there's a far better way of doing it.)
Not putting it in the background at all
So I thought about how to
- start SSH when a certain command starts running,
- make SSH terminate after that command terminates, and
- not do nasty things with backgrounding, sleep, or kill.
And then I came up with this:
mysql -e 'SHOW DATABASES;' -h 127.0.0.1 | ssh -L 3306:localhost:3306 remotehost cat
(Note that I'm using 127.0.0.1 on purpose here: when using localhost, I found that MySQL often uses the Unix socket instead of TCP.)
You might want to think about that for a second. What is happening here?
This will open an SSH connection to remotehost, set up the port forwarding and call cat on the remote host. It also instructs the local MySQL client to connect to what it assumes to be the local host and run a command there. The cat is sitting on the remote host (awww!) and will copy everything it receives on stdin (which is the stdin of ssh, i.e. the stdout of mysql) to stdout (which is your terminal). mysql will send the command to the database and write the results to stdout, i.e. to cat, i.e. to your terminal.
And now comes the fun part: after that output, MySQL will close its output stream. SSH notices that, closes stdin of the remote cat, which makes cat exit; with its remote command done, SSH closes the connection, including the port forwarding, and terminates as well.
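The lifetime coupling relies on nothing SSH-specific, just on pipe semantics: a reader like cat exits on its own as soon as the writer closes its end. A tiny local illustration:

```shell
# cat terminates by itself once the writer closes the pipe; no kill needed.
printf 'one\ntwo\n' | cat
echo "cat exited with status $?"
```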
Not losing things along the way
This actually works pretty well, until I noticed that if the MySQL command fails for some reason, the return code of that pipe will still be 0, because ssh started and terminated successfully. Since the script that I'm talking about is a Jenkins build and test script, that's a really bad thing: I need that return code in order to find out that something went wrong, so that the rest of the script doesn't continue. (In fact, that script has a shebang line of #!/bin/sh -e to make it terminate on every error.)
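To illustrate what -e does, here is a self-contained one-liner: the inner shell aborts at the first failing command and never reaches the echo:

```shell
# With -e, the shell stops at the first command that exits non-zero.
sh -e -c 'false; echo "never reached"' || echo "aborted with code $?"
# prints: aborted with code 1
```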
So, how do you get the return code of any command in a pipe other than the last one, or at least make the complete pipe fail in that case? There's set -o pipefail, but that's only in Bash, and only in version 3 and above. Bash also has the $PIPESTATUS array to get the return code of any command in a pipe, but again, that's Bash only.
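For comparison, this is what those Bash-only features would give you (invoked via bash -c here so it can even be demonstrated from a plain sh script):

```shell
# ${PIPESTATUS[0]} holds the exit code of the first pipe member (here: false).
bash -c 'false | cat; echo "${PIPESTATUS[0]}"'       # prints 1

# With pipefail, the pipe's own exit code becomes non-zero.
bash -c 'set -o pipefail; false | cat; echo "$?"'    # prints 1
```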
I was really disappointed that it seemed that I had to change my shebang to use
bash instead. And then I stumbled across an article called Auto-closing SSH tunnels. And what it suggested was a really nice idea:
ssh -f -o ExitOnForwardFailure=yes -L 3306:localhost:3306 remotehost sleep 10
mysql -e 'SHOW DATABASES;' -h 127.0.0.1
That way, the mysql call is completely free from any pipes whatsoever, and you can get its return code without any problems, even in a POSIX-compatible way. The magic here is -f combined with sleep 10, which basically says: wait until the connection is there and the ports are open before going into the background, and close yourself after 10 seconds. And here comes the fun part: SSH won't terminate as long as forwarded ports are still in use. So what it really means is that subsequent commands have 10 seconds to open the port, and can then keep it open as long as they want to.
This is, however, the weakness of that approach: if the command that is supposed to use the port doesn't open it in time, or closes it and tries to open it again later, this approach will not work for you.
OpenSSH has had a clean solution for this since multiplexing support was introduced in version 3.9: you can start an SSH session in the background and later send it a stop command, in much the same vein as a system service. You just need to tell SSH to use a control socket. Bonus points for checking for an existing persistent session with "-O check" and setting a time limit with ControlPersist.
In your script:
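A sketch of that pattern, wrapped in helper functions; the socket path and the host name remotehost are placeholder assumptions, not prescribed by OpenSSH:

```shell
#!/bin/sh
# Multiplexing sketch (OpenSSH >= 3.9). The socket path and "remotehost" are
# placeholders. Only function definitions here; nothing connects until called.
SOCKET="$HOME/.ssh/tunnel-ctrl.sock"

# Start a backgrounded master connection with the forwarding (-M = master).
tunnel_start() {
    ssh -M -S "$SOCKET" -fN -o ExitOnForwardFailure=yes \
        -L 3306:localhost:3306 remotehost
}

# Ask the running master whether it is still alive.
tunnel_check() { ssh -S "$SOCKET" -O check remotehost; }

# Tell the master to tear down the forwarding and exit.
tunnel_stop()  { ssh -S "$SOCKET" -O exit remotehost; }

# Typical use:
#   tunnel_start
#   mysql -e 'SHOW DATABASES;' -h 127.0.0.1
#   tunnel_stop
```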