Sometimes it feels odd to type passwords for sudo authentication on remote hosts. It would be much more comfortable to just use a hardware key like a Nitrokey Start or Nitrokey Pro. The following setup has been tested with a Nitrokey Pro 2 and a Nitrokey Start.
The trick is to forward the gpg-agent from your local machine, where you plug in your hardware key, to your remote host via ssh socket forwarding. Then we can use the key on our Nitrokey to decrypt and authenticate on the remote host.
If you use an ssh-agent and this setup to log in to your remote servers and get root access there, the same can be done by an attacker who manages to compromise your local machine. An attacker with access to your laptop and your user privileges can just wait until you plug in your hardware key and unlock it. Then they can log in to all your servers, gain root access and install a backdoor.
So I don't use this setup on my backup server. To authenticate there I am using timed one-time passwords as described here. If you do this, then of course don't use your Nitrokey Pro to generate the one-time passwords, as the attacker could do the same. Use a different device for that purpose, e.g. a smartphone with FreeOTP.
We assume that you have successfully deployed your PGP key on your Nitrokey or an equivalent device. If not, have a look at this guide.
The following things need to be accomplished on the local host:
- Make gpg-agent act as an ssh-agent
- In order to decrypt stuff on remote hosts using a local nitrokey we need to forward a gpg-agent to the remote host.
- In order to authenticate sudo rights on the remote host we need an ssh-agent on the remote host. So we need to forward our local gpg-agent as ssh-agent there.
- These forwardings need to be set up when the Nitrokey device is plugged in and they should be torn down when it is removed.
Add the following lines to ~/.gnupg/gpg-agent.conf:
keep-display
extra-socket /run/user/1000/gnupg/S.gpg-agent.extra
enable-ssh-support
Replace the path after extra-socket
with the output of
gpgconf --list-dir agent-extra-socket
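If gpgconf is available (it ships with GnuPG 2.1 and later), the extra-socket line can also be generated instead of typed by hand; a minimal sketch:

```shell
# Append an extra-socket line using the path gpgconf reports for the
# current user, instead of hard-coding /run/user/1000 (sketch;
# assumes gpgconf from GnuPG >= 2.1 is installed).
mkdir -p ~/.gnupg
echo "extra-socket $(gpgconf --list-dir agent-extra-socket)" >> ~/.gnupg/gpg-agent.conf
```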
Since version 6.7 OpenSSH offers a neat way of forwarding agents to remote hosts by a directive in the Host
section of ~/.ssh/config
which looks like RemoteForward remote-socket local-socket
. So in principle for every remote host an entry like the following would suffice:
Host foo.example.org
RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra
RemoteForward /run/user/1000/gnupg/S.gpg-agent.ssh /run/user/1000/gnupg/S.gpg-agent.ssh
Replace the socket paths with the outputs of
gpgconf --list-dir agent-extra-socket
and
gpgconf --list-dir agent-ssh-socket
on the local host and of
gpgconf --list-dir agent-socket
and
gpgconf --list-dir agent-ssh-socket
on the remote host. Note that the first path of each RemoteForward line is the listening socket on the remote host; the second is the local socket it is forwarded to.
This will forward the gpg-agent, and the gpg-agent acting as ssh-agent, to the remote host every time you open an ssh session to it. That is actually all we need, and for testing you could leave it like this. However, this has some drawbacks:
- If you use mosh for your interactive sessions, the forwarded sockets collapse once the mosh server takes over.
- If you connect from multiple hosts simultaneously, you kind of lose control over which host the active socket is forwarded from.
To overcome these drawbacks we use udev and systemd to set up the sockets once the Nitrokey is plugged in and to tear them down once it is removed.
To do this we need the following shell script to set up the forward sockets:
#!/bin/sh
# Open a background ssh connection (forwarding only, no remote
# command) for every Host entry in ~/.ssh/config ending in "-gpg".
for host in $(sed -n 's/^Host\s\(.*\)-gpg$/\1-gpg/p' "$HOME/.ssh/config")
do
    ssh -N "$host" &
done
# Keep the script alive; when systemd stops the service it kills all
# of the service's processes, including the ssh forwardings above.
sleep infinity
We put it, for example, into $HOME/bin/forward_gpg.sh and make it executable.
The script sets up the forward sockets for every host that is mentioned in ~/.ssh/config
as host-gpg
. So put an entry like
Host foo-gpg
Hostname foo.example.org
RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra
RemoteForward /run/user/1000/gnupg/S.gpg-agent.ssh /run/user/1000/gnupg/S.gpg-agent.ssh
into ~/.ssh/config. Again, replace the socket paths according to gpgconf --list-dir agent-*-socket (see above).
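To see which hosts the forwarding script will pick up, you can run its sed expression against a sample config (the host names below are made up):

```shell
# Build a sample ssh config and show which Host names the sed
# expression from forward_gpg.sh extracts: only those ending in -gpg.
cat > /tmp/sample_ssh_config <<'EOF'
Host foo-gpg
    Hostname foo.example.org
Host bar.example.org
Host baz-gpg
    Hostname baz.example.org
EOF
sed -n 's/^Host\s\(.*\)-gpg$/\1-gpg/p' /tmp/sample_ssh_config
# prints:
# foo-gpg
# baz-gpg
```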
We set up a udev rule, that creates a systemd device alias unit when the device is connected and removes it again when the device is disconnected. Then we can install a systemd service that starts on connection of the device and stops on removal.
This can be accomplished by putting the following content into, for example, /etc/udev/rules.d/99-nitrokey.rules:
ACTION=="add", ENV{ID_SMARTCARD_READER}=="?*", TAG+="systemd", ENV{SYSTEMD_ALIAS}="/sys/subsystem/usb/nitrokey"
ACTION=="remove", SUBSYSTEM=="usb", ENV{PRODUCT}=="20a0/4108/*", TAG+="systemd"
ACTION=="remove", SUBSYSTEM=="usb", ENV{PRODUCT}=="20a0/4211/*", TAG+="systemd"
The first line creates the systemd device unit /sys/subsystem/usb/nitrokey when the Nitrokey is connected. Unfortunately, due to a bug in systemd, this device unit remains "plugged" even though the physical device is unplugged. Therefore we need the second (and maybe third) line as a workaround: the second if you are using the Nitrokey Pro 2, the third if you are using the Nitrokey Start. Reload the rules afterwards, e.g. with udevadm control --reload-rules.
Now we set up a systemd user service by putting the following into ~/.config/systemd/user/gpg-forward.service:
[Unit]
Description=Start gpg forwards to configured ssh hosts
BindsTo=sys-subsystem-usb-nitrokey.device
After=sys-subsystem-usb-nitrokey.device
[Service]
ExecStart=/home/<your username>/bin/forward_gpg.sh
ExecStop=/bin/kill $MAINPID
[Install]
WantedBy=sys-subsystem-usb-nitrokey.device
We need to enable our new service by the command
systemctl --user enable gpg-forward.service
This will call the shell script ~/bin/forward_gpg.sh (see above), which sets up the forwarding sockets and then sleeps forever. Once /sys/subsystem/usb/nitrokey
is removed when we unplug the device, systemd stops the service and kills every process belonging to it, i.e. the script together with its children, the ssh forwarding processes.
Sometimes a gpg-agent starts automatically on the remote host and overrides your forwarded sockets. In order to prevent this you need to add no-autostart
to the remote ~/.gnupg/gpg.conf
, e.g. by the command
echo no-autostart >> ~/.gnupg/gpg.conf
Add the following line to /etc/ssh/sshd_config on the remote host:
StreamLocalBindUnlink yes
Don't forget to restart the ssh server afterwards (the service is called ssh or sshd depending on the distribution).
You need to import your public key on your remote hosts and give it ultimate trust. Your key should have only one subkey with signing capability and one with authentication capability (they may be one and the same, as in my case). Mine looks like this:
pub rsa4096 2015-07-17 [C] [expires: 2021-04-15]
C8686D50DBF1C749EE2473144ED9F2103BD15CE4
uid [ultimate] Johannes Mueller <joh@johannes-mueller.org>
uid [ultimate] Johannes Mueller <github@johannes-mueller.org>
uid [ultimate] Johannes Mueller <joh@kern.punkto.info>
uid [ultimate] Johannes Mueller <muziko@johannes-mueller.org>
uid [ultimate] Johannes Mueller <joh@punkto.info>
sub rsa4096 2015-07-17 [E] [expires: 2021-04-15]
sub rsa4096 2019-12-18 [SA] [expires: 2021-12-17]
You can test if the secret key is available by
gpg --list-secret-keys
If the Nitrokey is not connected, you should get a response like
gpg: no gpg-agent running in this session
If you get something else, there is still a gpg-agent running that you need to kill first, e.g. with gpgconf --kill gpg-agent.
Then plug in the Nitrokey and check again for secret keys; now it should show them. Mine look like this:
sec# rsa4096 2015-07-17 [C] [expires: 2021-04-15]
C8686D50DBF1C749EE2473144ED9F2103BD15CE4
uid [ultimate] Johannes Mueller <joh@johannes-mueller.org>
uid [ultimate] Johannes Mueller <joh@punkto.info>
uid [ultimate] Johannes Mueller <joh@kern.punkto.info>
uid [ultimate] Johannes Mueller <github@johannes-mueller.org>
uid [ultimate] Johannes Mueller <muziko@johannes-mueller.org>
ssb# rsa4096 2015-07-17 [E] [expires: 2021-04-15]
ssb# rsa4096 2019-12-18 [SA] [expires: 2021-12-17]
Then you can try to decrypt something:
echo successfully decrypted | gpg --encrypt -r <your key id> | gpg --decrypt
This should output something containing successfully decrypted
at the end. It will probably ask you for your Nitrokey PIN first.
Now we can check if we can sign stuff:
echo sign this | gpg --sign -u <your key id> | gpg --verify
First we need to export the public key as ssh key by
gpg --export-ssh-key <your key id>
and put the result into /root/.ssh/authorized_keys on the remote host.
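If you like, the export can be piped to the remote host in one go; a sketch, where foo.example.org is a placeholder for your remote host and which assumes your remote user may run sudo tee:

```shell
# Sketch: append the exported ssh key directly to root's
# authorized_keys on the remote host (key id and host are placeholders).
gpg --export-ssh-key <your key id> | ssh foo.example.org "sudo tee -a /root/.ssh/authorized_keys"
```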
The remote user's ssh setup needs to use our forwarded socket as ssh-agent. To do this, put a line like
export SSH_AUTH_SOCK=/run/user/1000/gnupg/S.gpg-agent.ssh
into your ~/.bashrc or wherever you set up your environment variables. We need to tell sudo to keep this environment variable when we try to sudo, by putting the line
Defaults env_keep += SSH_AUTH_SOCK
into the sudoers file using visudo.
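The SSH_AUTH_SOCK export above can be made a bit more defensive, so that sessions without an active forward keep their normal agent; a sketch (the socket path is an example, use your gpgconf output):

```shell
# Point SSH_AUTH_SOCK at the forwarded socket only if it actually
# exists; [ -S ] is true only for socket files. Example path, adjust
# it to `gpgconf --list-dir agent-ssh-socket` on the remote host.
gpg_ssh_sock=/run/user/1000/gnupg/S.gpg-agent.ssh
if [ -S "$gpg_ssh_sock" ]; then
    export SSH_AUTH_SOCK="$gpg_ssh_sock"
fi
```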
Then we need to make sure that the PAM module pam_ssh_agent_auth.so
is installed, e.g. by
apt install libpam-ssh-agent-auth
Finally we need to configure PAM to authenticate sudo via the ssh agent by adding
auth [success=2 default=ignore] pam_ssh_agent_auth.so file=/root/.ssh/authorized_keys
to the top of /etc/pam.d/sudo. The success=2 makes PAM skip the next two modules in the auth stack when the agent authentication succeeds, which on Debian-like systems bypasses the usual password authentication.
That should be it. Now sudo authentication should work without a password on the remote host. You can verify it by running sudo -k and then any sudo command; it should succeed without asking for a password.
https://mlohr.com/gpg-agent-forwarding/
https://wiki.gnupg.org/AgentForwarding
https://superuser.com/questions/1033270/how-do-i-use-envsystemd-user-wants-in-udev-rule
https://medium.com/byteschneiderei/setting-up-pam-ssh-agent-auth-for-sudo-login-7135330eb740