Opening and closing an SSH tunnel in a shell script the smart way

I recently had the following problem:

  • From an unattended shell script (called by Jenkins), run a command-line tool that accesses the MySQL database on another host.
  • That tool doesn't know that the database is on another host, plus the MySQL port on that host is firewalled and not accessible from other machines.

We didn't want to open the MySQL port to the network, but it's possible to SSH from the Jenkins machine to the MySQL machine. So, basically you would do something like

ssh -L 3306:localhost:3306 remotehost

… well, and then what? Now you have a shell on the remote machine open and your script execution stops until that connection is terminated again.

Putting SSH in the background

If you want your local script to continue to run, you'd probably send that SSH process to the background, either with ssh -L 3306:localhost:3306 remotehost & (note the ampersand) or with ssh -fN -L 3306:localhost:3306 remotehost (-f for "fork into background", -N for "run no command"). But then you have to close that SSH connection again when you're done, even if your script crashes or gets killed.

Also, closing the connection isn't that easy. If you background SSH with &, you can kill $! or kill %1, but with -f there is no job or $! to kill. And yet you want -f, because it has a really cool feature: it waits until the connection and (if combined with -o ExitOnForwardFailure=yes) the port forwardings have been set up successfully before going into the background. Without that, all subsequent commands risk trying to connect to a port that has not been opened yet.

(Yes, you could sleep 3 or something after backgrounding SSH with & or even do sophisticated checks, but that's really ugly and there's a far better way of doing it.)
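For reference, the &-variant with manual cleanup looks roughly like this. This is only a sketch: a plain sleep 1000 stands in for the ssh command, so no real tunnel is involved.

```shell
#!/bin/sh
# Stand-in for `ssh -L 3306:localhost:3306 remotehost`:
# a long-running background process whose PID we remember.
sleep 1000 &
tunnel_pid=$!

# Kill the background process when the script exits for any reason,
# including crashes and interrupts.
trap 'kill "$tunnel_pid" 2>/dev/null' EXIT INT TERM

# ... commands that would use the tunnel go here ...
echo "working while the tunnel is up"
```

Note that this still has the race described above: nothing guarantees the forwarding is ready when the next command runs.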

Not putting it in the background at all

So I thought about how to

  • start SSH when a certain command starts running and
  • make SSH terminate after that command terminates and
  • not do nasty things with sleep or whatever

And then I came up with this:

mysql -e 'SHOW DATABASES;' -h 127.0.0.1 | ssh -L 3306:localhost:3306 remotehost cat

(Note that I'm using 127.0.0.1 on purpose here: When using localhost, I found that MySQL often uses the Unix socket instead of TCP.)

You might want to think about that for a second. What is happening here?

This will open an SSH connection to remotehost, set up the port forwarding and call cat on the remote host. It also instructs the local mysql client to connect to what it assumes to be the local host and run a command there. The cat is sitting on the remote host (awww!) and copies everything it receives on stdin (which is the stdin of ssh, i.e. the stdout of mysql) to stdout (which is your terminal). mysql sends the command to the database and prints the results to stdout, i.e. to ssh, i.e. to cat, i.e. to your terminal.

And now comes the fun part: After that output, MySQL will close its output stream. SSH notices that, closes stdin for cat and cat terminates.
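You can watch the same EOF mechanism locally, without any SSH involved. In this toy demonstration, printf plays the role of mysql (the writer) and cat plays the role of the remote cat (the reader):

```shell
# When printf exits, the pipe delivers end-of-file to cat,
# which then terminates on its own: nobody has to kill it.
printf 'SHOW DATABASES;\n' | cat
echo "pipe finished, both sides exited"
```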

Not losing things along the way

This actually worked pretty well, until I noticed that if the MySQL command fails for some reason, the return code of that pipe is still 0, because ssh started and terminated successfully. Since the script I'm talking about is a Jenkins build and test script, that's a really bad thing: I need that return code in order to find out that something went wrong, so that the rest of the script doesn't continue. (In fact, that script has a shebang line of #!/bin/sh -e to make it terminate on every error.)

So, how do you get the return code of any other command in a pipe except for the last? Or at least make the complete pipe fail then?

Well, there's set -o pipefail, but that's only in Bash, and only in version 3 and above. Also, Bash has the $PIPESTATUS array to get the return code of any command in a pipe, but again, that's only in Bash.
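For illustration, here is what those two Bash mechanisms look like, with false standing in for a failing mysql:

```shell
# PIPESTATUS (Bash only): capture it immediately, with the very next
# command, because every subsequent command overwrites it.
bash -c 'false | cat
status=("${PIPESTATUS[@]}")
echo "first: ${status[0]}, last: ${status[1]}"'   # prints "first: 1, last: 0"

# pipefail (Bash 3+): the pipe as a whole fails if any member fails.
bash -c 'set -o pipefail
false | cat
echo "exit status: $?"'                           # prints "exit status: 1"
```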

I was really disappointed that it seemed that I had to change my shebang to use bash instead. And then I stumbled across an article called Auto-closing SSH tunnels. And what it suggested was a really nice idea:

ssh -f -o ExitOnForwardFailure=yes -L 3306:localhost:3306 remotehost sleep 10
mysql -e 'SHOW DATABASES;' -h 127.0.0.1

That way, the mysql call is completely free from any pipes whatsoever and you can get its return code without any problems, even in a POSIX-compatible way.

The magic here is -f combined with sleep 10, which basically says "wait until the connection is there and the ports are open before you go into the background, but close yourself after 10 seconds". And here comes the fun part: SSH won't terminate as long as forwarded ports are still in use. So what it really means is that subsequent scripts have 10 seconds to open the port and then can keep it open as long as they want to.

This is, however, the weakness of this approach: if the command that will use the port doesn't open it in time, or if it closes it and tries to open it again later, this approach will not work for you.


@joe99de commented Dec 16, 2014

Hi,
thank you for your tutorial.

Thanks
joe

@gavinsun2008

Grateful! I use this approach to transfer files across a gateway machine.

@yonatanp commented Aug 1, 2015

Very elegant and useful, thank you.

@perennialmind

OpenSSH has had a clean solution for this since they introduced multiplexing support in version 3.9. You can now start an ssh session in the background and send it a stop command in much the same vein as a system service. You just need to tell ssh to use a control socket. Bonus points for checking for an existing persistent session with "-O check" and setting a time limit with ControlPersist.

In ~/.ssh/config

Host remotehost-proxy
    HostName remotehost
    ControlPath ~/.ssh/remotehost-proxy.ctl

In your script:

ssh -f -N -T -M -L 3306:localhost:3306 remotehost-proxy
mysql -e 'SHOW DATABASES;' -h 127.0.0.1
ssh -T -O "exit" remotehost-proxy

@wayjake commented Feb 1, 2016

I have come back to this time and time again. Excellent explanation. Thanks!

@Sam2045 commented Jun 15, 2016

Oh I'm glad I found this post. I'll be back for more later. Thanks~

@sergey-p3 commented Aug 18, 2016

To avoid problems with an already-open port, you can simply generate a random one and point mysql at that port:

db_server=REMOTE_SERVER
db_host=localhost
db_user=USER
db_name=DATABASE
db_port=3306

mysqlrun() {
    PORT=$(shuf -i 10000-65000 -n 1)
    ssh -f -o ExitOnForwardFailure=yes -L "$PORT:$db_host:$db_port" "$db_server" sleep 10
    mysql -P "$PORT" -h 127.0.0.1 -u "$db_user" "$db_name" "$@"
}

and the usage would be

echo "SHOW DATABASES" | mysqlrun

@merlinblack

This technique works very well for gitosis / gitolite servers that can be accessed via a gateway machine. I've written a 3 line bash script to tunnel, run git, and shut down the tunnel. I'm happily cloning, pushing and pulling from all over the place to my company's git repositories. :-)


#! /bin/bash
HOST=merlinblack@remote-access.com.au

ssh -f -N -T -M -L 3333:git.server.com.au:22 $HOST
git "$@"
ssh -T -O "exit" $HOST

Note I have control files set up in .ssh/config for all hosts, and an entry mapping 'gitatwork' to git@localhost:3333 for use when I clone.

host *
controlmaster auto
controlpath /tmp/%r@%h:%p

host gitatwork
hostname localhost
port 3333
user git
identityfile ~/.ssh/git_rsa

And used like:
tunnelgit clone gitatwork:malvadoplandenúmerocinco.git

@remindxiao

Thanks for sharing! Helps a lot.

@chrisjpalmer

Hey, you're a legend. This is exactly what I needed for auto-updating my database. THANK YOU!

@akang1 commented Oct 5, 2017

This is awesome, like chrisjpalmer, I needed this for auto updating my database that is behind a firewall.

If you are doing this in PHP, you may run into a couple of issues like I did:

  1. The mysql connection would occur before the SSH connection had been completed, and fail.

    • I had to put a sleep (1 second) command between the ssh command and the mysql connection command for this to work.

  2. I had to run the ssh command in the background, even with the -f flag.

    • shell_exec("ssh -fo ExitOnForwardFailure=yes -L 3307:{ip}:3306 user@{tunnel IP} sleep 10 > {name it w/e you want}.txt &");
    • Without this, PHP would wait until the ssh command had completed before moving on to the mysql connection command (which would then fail, since the SSH connection was no longer there).

@veritris

This is a great tutorial and there are many excellent suggestions. Excluding specific considerations, such as the inability to run certain programs, I have found that autossh is the best solution to easily and reliably establish and sustain ssh tunnels. As many of you have already described, autossh is essentially a wrapper for the ssh command with a tad bit more functionality such as monitoring of the tunnel through a different port, logging, and my favorite -- gatetime.

AUTOSSH_GATETIME    - how long must an ssh session be established
                      before we decide it really was established
                      (in seconds). Default is 30 seconds; use of -f
                      flag sets this to 0.

Here is how you run it:
autossh -M 0 -f -N -L 10080:remote:10080 root@remote

@sean9999 commented Jan 5, 2018

Ingenious! Thank you.

@Hubro commented Apr 14, 2018

Very informative. I needed this to connect to my production server's Docker daemon:

# Echoes an available, randomly selected local port
random_local_port() {
    python -c 'import socket; s = socket.socket(); s.bind(("127.0.0.1", 0)); print(s.getsockname()[1]); s.close()'
}

# Convenience function for running a docker command on the production server's
# docker daemon. Usage examples:
#
#     docker_prod docker ps
#     docker_prod docker-compose up
#
docker_prod() {
    PORT="$(random_local_port)"

    ssh -f -o ExitOnForwardFailure=yes \
        -L "127.0.0.1:$PORT:127.0.0.1:2375" "$PRODSERVER" \
        sleep 5

    DOCKER_HOST="127.0.0.1:$PORT" "$@"
}

Luckily Docker appears to use persistent HTTP connections, so the tunnel doesn't close between API calls during long-running commands like docker-compose up -d --build.

@ilijaz commented Aug 2, 2018

Excellent one-line backup script. Thanks a lot!

@simonwiles commented Oct 6, 2018

Anyone know a way to do this (or something very similar) with “dynamic” application-level port forwarding (i.e. ssh -D)?

@martinklepsch

I found this to be a much nicer approach: http://mpharrigan.com/2016/05/17/background-ssh.html

#!/bin/bash

# ip="1.1.1.1"

socket=$(mktemp -t deploy-ssh-socket)
rm ${socket} # delete socket file so path can be used by ssh

exit_code=0

cleanup () {
    # Stop SSH port forwarding process, this function may be
    # called twice, so only terminate port forwarding if the
    # socket still exists
    if [ -S ${socket} ]; then
        echo
        echo "Sending exit signal to SSH process"
        ssh -S ${socket} -O exit root@${ip}
    fi
    exit $exit_code
}

trap cleanup EXIT ERR INT TERM

# Start SSH port forwarding process for consul (8500) and nomad (4646)
ssh -M -S ${socket} -fNT -L 8500:localhost:8500 -L 4646:localhost:4646 root@${ip}

ssh -S ${socket} -O check root@${ip}

# launching a shell here causes the script to not exit and allows you
# to keep the forwarding running for as long as you want.
# I also like to customise the prompt to indicate that this isn't a normal shell.

bash --rcfile <(echo 'PS1="\nwith-ports> "')

@renatovieiradesouza

Great tutorial.

@Cerber-Ursi

Thanks for the idea! However, I'd like to ask if someone knows what to do with Windows command line and ANSI codes, because, when I try to launch the command (docker-compose) this way, ANSI codes in output are turned into garbage. Is there any way to pass them through and view the stdout of the first command as-is?

@gantaa commented Feb 11, 2021

splendid! This all works great for me in TeamCity CI for running my DB migrations in my private subnets through a bastion jump server in my public subnets.

@sadams commented Jul 14, 2021

Really useful and well explained. ❤️ that sleep trick! 👍

@evokateur commented Jul 17, 2021

This inspired me to do something lazy!

I keep a file around called ssh-fp.sh

#!/bin/bash

me=`basename "$0"`

if [ -z "$1" ]
then
    echo "Hello, footpad!"
    file=~/"${me}".socket
    if [ -S "${file}" ]
    then
        echo "The ~/${me}.socket is already open"
    else
        echo "Opening ~/${me}.socket"
        ssh -M -S ~/${me}.socket -fnNT ${me}
    fi
else
    if [ "$1" == "exit" ]
    then
        echo "Exiting ~/${me}.socket"
        ssh -S ~/${me}.socket -O exit ${me}
    elif [ "$1" == "check" ]
    then
        echo "Checking ~/${me}.socket"
        ssh -S ~/${me}.socket -O check ${me}
    else
        echo "I don't know how to ${1} a ~/${me}.socket"
    fi
fi

I symlink to it in my path with an arbitrary name, like frobozz

The name of the symlink becomes the name of the socket (~/frobozz.socket) and is also the name of the host in ~/.ssh/config where the HostName, User, and LocalForward(s) are configured:

Host frobozz
    HostName server
    User user
    LocalForward 33184 10.1.10.184:3389
    LocalForward 33121 10.1.10.121:3389
    LocalForward 33067 10.1.10.67:3389

In everyday life it looks like:

$ frobozz
Hello, footpad!
Opening ~/frobozz.socket

$ frobozz exit
Exiting ~/frobozz.socket
Exit request sent.

If I need to port forward using a different server I make a new symlink and corresponding ~/.ssh.config entry. Lazy.

@stokito commented Jul 8, 2023

You may find useful my systemd service https://github.com/yurt-page/sshtunnel

@Mon-ius commented Jul 28, 2023

Adding -o ExitOnForwardFailure=yes -o ServerAliveInterval=10 -o ServerAliveCountMax=3 could help.
