Discussion: Agent forwarding failure when the socketdir was autodeleted
Andre Heinecke
2016-10-04 12:03:06 UTC
Hi,

Using GnuPG 2.1.15 I'm trying to SSH into a remote machine with OpenSSH 6.7 as
described under:

https://wiki.gnupg.org/AgentForwarding

The problem is that the remote system uses systemd, so /var/run/user/<uid>
exists and GnuPG will use it.

But if I am not logged in, or there is no gnupg process running, systemd
autodeletes /var/run/user/<uid>/gnupg. This causes the remote forward of the
socket to fail, because the directory for the socket does not exist and SSH
won't create it. :-/
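
For reference, the setup follows the socket-forwarding approach from that
page, i.e. roughly something like this in ~/.ssh/config on the local machine
(a sketch only; "remotehost", the uid 1000 and the exact socket paths are
illustrative and depend on the actual setup):

# forward the local extra socket to the remote agent socket location
Host remotehost
    RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra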

Any ideas on how to solve this without requiring changes to the root
configuration of the remote machine?

I would happily update the wiki with a solution.

Regards,
Andre
--
Andre Heinecke | ++49-541-335083-262 | http://www.intevation.de/
Intevation GmbH, Neuer Graben 17, 49074 Osnabrück | AG Osnabrück, HR B 18998
Geschäftsführer: Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner
Daniel Kahn Gillmor
2016-10-04 15:26:59 UTC
Post by Andre Heinecke
Using GnuPG 2.1.15 I'm trying to SSH into a remote machine with OpenSSH 6.7 as
https://wiki.gnupg.org/AgentForwarding
The problem is that the remote system uses systemd, so /var/run/user/<uid>
exists and GnuPG will use it.
But if I am not logged in, or there is no gnupg process running, systemd
autodeletes /var/run/user/<uid>/gnupg. This causes the remote forward of the
socket to fail, because the directory for the socket does not exist and SSH
won't create it. :-/
If you're not logged in, then how does the remote forward work? aren't
you actually still logged in (via ssh) as long as your remote forward is
running?

--dkg
Andre Heinecke
2016-10-04 18:49:00 UTC
Hi,
Post by Daniel Kahn Gillmor
Post by Andre Heinecke
But if I am not logged in, or there is no gnupg process running, systemd
autodeletes /var/run/user/<uid>/gnupg. This causes the remote forward of the
socket to fail, because the directory for the socket does not exist and SSH
won't create it. :-/
If you're not logged in, then how does the remote forward work? aren't
you actually still logged in (via ssh) as long as your remote forward is
running?
Sorry for not formulating this better. You are of course right: if I'm not
logged in, the remote forward is not working.

That is not what I meant to say. The problem is that when I disconnect, the
/run/.../gnupg dir is deleted, and the next time I want to connect and ssh
tries to set up the forwarding, it fails because the /run/.../gnupg
directory in which the forwarded socket should be created does not exist.

Warning: remote port forwarding failed for listen path
/var/run/user/<uid>/gnupg/S.gpg-agent

My current workaround is to connect first and start dirmngr on the remote
machine (to get the socketdir created and used). And then connect with ssh
socket forwarding. This is a bit clunky to use.
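
For clarity, a sketch of that two-step dance (the host name, uid and socket
paths are illustrative, and the exact way dirmngr gets started does not
matter much):

# step 1: start a gnupg daemon remotely so /run/user/<uid>/gnupg is recreated
ssh remotehost 'dirmngr --daemon'
# step 2: connect again; now the remote forward of the agent socket can bind
ssh -R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent.extra remotehost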

I've tried placing files in that folder, and setting the permissions of the
gnupg folder to 000 (so that gnupg itself does not use it), but to no avail.
It's still removed when disconnecting, and the next connect will fail.

Regards,
Andre
--
Andre Heinecke | ++49-541-335083-262 | http://www.intevation.de/
Intevation GmbH, Neuer Graben 17, 49074 Osnabrück | AG Osnabrück, HR B 18998
Geschäftsführer: Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner
Daniel Kahn Gillmor
2016-10-04 19:34:25 UTC
Hi Andre--
Post by Andre Heinecke
Post by Daniel Kahn Gillmor
Post by Andre Heinecke
But if I am not logged in, or there is no gnupg process running, systemd
autodeletes /var/run/user/<uid>/gnupg. This causes the remote forward of the
socket to fail, because the directory for the socket does not exist and SSH
won't create it. :-/
If you're not logged in, then how does the remote forward work? aren't
you actually still logged in (via ssh) as long as your remote forward is
running?
Sorry for not formulating this better. You are of course right: if I'm not
logged in, the remote forward is not working.
That is not what I meant to say. The problem is that when I disconnect, the
/run/.../gnupg dir is deleted, and the next time I want to connect and ssh
tries to set up the forwarding, it fails because the /run/.../gnupg
directory in which the forwarded socket should be created does not exist.
so /run/user/<uid> exists upon ssh connection, but
/run/user/<uid>/gnupg/ does not, and therefore sshd on the remote side
of the pipe can't auto-create the remote socket -- is that the concern?
Post by Andre Heinecke
My current workaround is to connect first and start dirmngr on the remote
machine (to get the socketdir created and used). And then connect with ssh
socket forwarding. This is a bit clunky to use.
agreed, that sounds clunky and annoying.

I wonder whether ssh's remote socket forwarding ought to try to
automatically create the parent directories if they don't already exist.

This doesn't solve your problem in the near term if you can't update the
remote host, but it seems like the right place to fix this problem.
Post by Andre Heinecke
I've tried placing files in that folder, and setting the permissions of the
gnupg folder to 000 (so that gnupg itself does not use it), but to no avail.
It's still removed when disconnecting, and the next connect will fail.
right, session termination (or machine reboot, etc) should clean up
/run/user/<uid> entirely -- that's part of the explicit goal of
$XDG_RUNTIME_DIR, aiui.

--dkg
Stephan Beck
2016-10-05 12:27:00 UTC
Hi,
Post by Daniel Kahn Gillmor
Hi Andre--
Post by Andre Heinecke
Post by Daniel Kahn Gillmor
Post by Andre Heinecke
But if I am not logged in, or there is no gnupg process running, systemd
autodeletes /var/run/user/<uid>/gnupg. This causes the remote forward of the
socket to fail, because the directory for the socket does not exist and SSH
won't create it. :-/
If you're not logged in, then how does the remote forward work? aren't
you actually still logged in (via ssh) as long as your remote forward is
running?
Sorry for not formulating this better. You are of course right: if I'm not
logged in, the remote forward is not working.
That is not what I meant to say. The problem is that when I disconnect, the
/run/.../gnupg dir is deleted, and the next time I want to connect and ssh
tries to set up the forwarding, it fails because the /run/.../gnupg
directory in which the forwarded socket should be created does not exist.
so /run/user/<uid> exists upon ssh connection, but
/run/user/<uid>/gnupg/ does not, and therefore sshd on the remote side
of the pipe can't auto-create the remote socket -- is that the concern?
Post by Andre Heinecke
My current workaround is to connect first and start dirmngr on the remote
machine (to get the socketdir created and used). And then connect with ssh
socket forwarding. This is a bit clunky to use.
agreed, that sounds clunky and annoying.
I wonder whether ssh's remote socket forwarding ought to try to
automatically create the parent directories if they don't already exist.
So, ssh does not even create the socket if you set
StreamLocalBindUnlink yes
in /etc/ssh/ssh_config (or give the corresponding -o option on the
command line, client-side)?
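
(As a sketch, that would be a client-side entry roughly like the one below;
the host name is illustrative. For comparison, the wiki page referenced
above puts the same option into the remote host's sshd_config instead.)

# client side, e.g. in ~/.ssh/config or /etc/ssh/ssh_config
Host remotehost
    StreamLocalBindUnlink yes
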
If that is the case (still no socket/socketdir creation), it may help to
adjust the ~/.profile of the remote account you log into (if permissions
allow it), adding the following, as seen at (1), provided that you log in
from the console. The quote follows below.

Kyle Amon has provided the following bit for a .bash_profile:

#
# setup ssh-agent
#

# set environment variables if user's agent already exists
[ -z "$SSH_AUTH_SOCK" ] && SSH_AUTH_SOCK=$(ls -l /tmp/ssh-*/agent.* 2> /dev/null | grep $(whoami) | awk '{print $9}')
[ -z "$SSH_AGENT_PID" -a -z `echo $SSH_AUTH_SOCK | cut -d. -f2` ] && SSH_AGENT_PID=$((`echo $SSH_AUTH_SOCK | cut -d. -f2` + 1))
[ -n "$SSH_AUTH_SOCK" ] && export SSH_AUTH_SOCK
[ -n "$SSH_AGENT_PID" ] && export SSH_AGENT_PID

# start agent if necessary
if [ -z $SSH_AGENT_PID ] && [ -z $SSH_TTY ]; then # if no agent & not in ssh
    eval `ssh-agent -s` > /dev/null
fi

[Quote end]
If you re-connect to the remote machine and log in to the user account
(from the console), an ssh-agent (if not already started) is (hopefully)
started on the remote machine and creates the directory and socket in /tmp.
Another idea is to use a specific local script and execute it on the remote
server (or to pass the command directly on the ssh command line and execute
it on the remote server). Quote from (2):

Executing a Local Script on a Remote Linux Server

$ ssh [user]@[server] 'bash -s' < [local_script]
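
Applied to this case, such a local script would only have to recreate the
runtime directory before the forwarding connection is made; a hypothetical
sketch (the script name is made up, the path is the one discussed above):

# prepare-gnupg-socketdir.sh (hypothetical helper, executed on the remote side)
mkdir -p "/run/user/$(id -u)/gnupg"
chmod 700 "/run/user/$(id -u)/gnupg"

# run it from the local machine, then connect with the forwarding:
$ ssh [user]@[server] 'bash -s' < prepare-gnupg-socketdir.sh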

If none of this helps, and, consequently, systemd (itself) on the remote
machine has to be tricked into preserving (or automatically recreating) the
gnupg directory, I would be a bit lost, but at least I have read through a
bunch of very interesting docs I can make use of myself.

(1) http://mah.everybody.org/docs/ssh
(2) http://www.shellhacks.com/en/Running-Commands-on-a-Remote-Linux-Server-over-SSH


Cheers,

Stephan
Stephan Beck
2016-10-05 12:44:00 UTC
Oh, just seen Werner's answer :-)

Well, I had a good time reading the mentioned docs ;-)

Cheers,

Stephan
Werner Koch
2016-10-05 07:42:21 UTC
Post by Andre Heinecke
My current workaround is to connect first and start dirmngr on the remote
machine (to get the socketdir created and used). And then connect with ssh
socket forwarding. This is a bit clunky to use.
You may use

gpgconf --create-socketdir

to create the directory w/o running any daemon. It is a NOP if the
directory already exists.
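
(So, as a sketch, a pre-connect step could be as simple as

  ssh remotehost gpgconf --create-socketdir

run before the forwarding connection; "remotehost" is illustrative.)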


Salam-Shalom,

Werner
--
Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz.
Daniel Kahn Gillmor
2016-10-05 17:46:51 UTC
Post by Werner Koch
Post by Andre Heinecke
My current workaround is to connect first and start dirmngr on the remote
machine (to get the socketdir created and used). And then connect with ssh
socket forwarding. This is a bit clunky to use.
You may use
gpgconf --create-socketdir
to create the directory w/o running any daemon. It is a NOP if the
directory already exists.
The trouble is that the socket directory needs to be created before ssh
tries to forward the socket. When doing a forward from the command
line, the ssh channel that does socket forwarding is often established
before the channel that runs any shell or other interactive behavior.

I really think this ought to be handled in OpenSSH.

--dkg
Andre Heinecke
2016-10-05 19:35:12 UTC
Hi,
Post by Daniel Kahn Gillmor
Post by Werner Koch
You may use
gpgconf --create-socketdir
to create the directory w/o running any daemon. It is a NOP if the
directory already exists.
Yes, that works, but it's still a bit kludgy; I'd like to have it working in
a single ssh command.
Post by Daniel Kahn Gillmor
The trouble is that the socket directory needs to be created before ssh
tries to forward the socket. when doing a forward from the command
line, the ssh channel that does socket forwarding is often established
before the channel that runs any shell or other interactive behavior.
I really think this ought to be handled in OpenSSH.
Exactly. I wrote a mail to openssh-unix-dev as you suggested to ask about
that. Let's see :-)

Regards,
Andre
--
Andre Heinecke | ++49-541-335083-262 | http://www.intevation.de/
Intevation GmbH, Neuer Graben 17, 49074 Osnabrück | AG Osnabrück, HR B 18998
Geschäftsführer: Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner
Kristian Fiskerstrand
2016-10-10 00:40:30 UTC
Post by Andre Heinecke
Post by Daniel Kahn Gillmor
I really think this ought to be handled in OpenSSH.
Exactly. I wrote a mail to openssh-unix-dev as you suggested to ask about
that. Let's see :-)
For record purposes, this is
http://lists.mindrot.org/pipermail/openssh-unix-dev/2016-October/035409.html
--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"A committee is a group that keeps minutes and loses hours."
(Milton Berle)