
This may be a start:

  Host *
  IdentityFile ~/.ssh/%h
One key per host, named after the host you connect to.



I have this:

    # IdentityFile magic, should be placed at very end of file
    Host *
    IdentityFile ~/.ssh/keys/id_ecdsa_%r@%h
    IdentityFile ~/.ssh/keys/id_rsa_%r@%h
    IdentityFile ~/.ssh/keys/id_ecdsa_ANY@%h
    IdentityFile ~/.ssh/keys/id_rsa_ANY@%h
    IdentityFile ~/.ssh/keys/id_ecdsa_%r@ANY
    IdentityFile ~/.ssh/keys/id_rsa_%r@ANY
    IdentityFile ~/.ssh/keys/id_ecdsa_ANY@ANY
    IdentityFile ~/.ssh/keys/id_rsa_ANY@ANY


My understanding is that SSH will not restrict itself to the file identified by IdentityFile, and yes, that is surprising, isn't it. If the server does not cooperate with you, your proposed configuration will still offer the server all of the keys.

Edit to add: While I was testing this understanding (which is correct), mioelnir's comment added the setting you need to get the behavior which I thought was automatic.


You also need to set

    IdentitiesOnly yes
if I remember the config setting correctly. Note however that this only limits the keys offered during the authentication phase. If you use AgentForwarding, the forwarded agent still has the entire keyring available afterwards.
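Putting the two pieces together, a minimal sketch for the end of ~/.ssh/config (the key directory and naming scheme are illustrative):

```
# Place at the very end of ~/.ssh/config
Host *
    IdentityFile ~/.ssh/keys/id_ed25519_%r@%h
    IdentitiesOnly yes
```

With IdentitiesOnly yes, ssh offers only the configured identity files for that host, even if ssh-agent holds more keys.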


Does anyone know how this interacts with ProxyCommand? If I fix this in ~/.ssh/config on my laptop, and I ssh to hostB through hostA using

  Host hostB
    ...
    ProxyCommand ssh hostA -W %h:%p
in the config on my laptop, do I also need to fix it on hostA?


CVE-2016-0777 doesn't interact with ProxyCommand in any special way. That said, after connecting to hostB with your example config above you will have 2 ssh sessions:

1) from your client to hostA

2) from your client to hostB

So to answer your question: no, a vulnerable client on hostA is not a problem (or at least not in this particular use-case).
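To spell out the example: both authentications happen on the laptop, and the ssh client binary on hostA never runs. A sketch of the laptop config, reusing hostA/hostB from the question (key file names are made up):

```
# ~/.ssh/config on the laptop
Host hostA
    IdentityFile ~/.ssh/id_hostA
    IdentitiesOnly yes

Host hostB
    IdentityFile ~/.ssh/id_hostB
    IdentitiesOnly yes
    ProxyCommand ssh hostA -W %h:%p
    # Newer OpenSSH (7.3+) has a shorthand for the line above:
    # ProxyJump hostA
```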


> Vulnerable client on hostA is not a problem.

Ok, that's what I was wondering. Thanks.



which is not very helpful if you, for example, want to run git via ssh or similar commands on the target server.

There are places where there are two alternatives: agent forwarding or copying the key. Using ssh agent forwarding allows me to use a smart card for the key. It still has its problems, but it's much better than having the key on the remote machine. For example, the use of a smartcard mitigates this vulnerability, since the key never enters the process memory.


Not sure which exact scenario you have in mind with ssh+git. The ProxyCommand method works just fine for me with my private jumphost in the middle and github on the far end.

Note that for ProxyCommand to work, you don't need a full shell on the jumphost; just "AllowTcpForwarding yes" is enough. On the other hand, with the AgentForwarding method you do need a full shell on the jumphost.
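A sketch of what the jumphost's sshd_config could look like for such a restricted proxy account ("proxyuser" is a made-up name; give it a non-login shell such as /usr/sbin/nologin in /etc/passwd):

```
# sshd_config on the jumphost
Match User proxyuser
    AllowTcpForwarding yes
    X11Forwarding no
    PermitTunnel no
```

Since "ssh -W" implies no remote command, the non-login shell never gets in the way.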


I have a VM that I use as a development environment; my laptop is just a dumb terminal. I need to check out code there. It's not a jump host.


If you have a clean network path from your terminal to the dev VM, why the need for either ProxyCommand or ForwardAgent? Just ssh to the VM directly, no?

Of course a local, not forwarded, ssh-agent on the terminal would be super handy to avoid typing the passphrase time and again; but that's different from and independent of ForwardAgent.


I do ssh directly to the VM. It sits behind a VPN connection at AWS. I need to make ssh connections from there to github. The key resides on a smartcard in my laptop's USB slot. And that's where ssh-agent/ForwardAgent comes into play: I forward my local agent to the remote VM.


Fair enough. Makes perfect sense, thanks!


What if my infrastructure is over 50 hosts? How am I going to distribute key files for 50 people with 50 keys each every 14 days?


You automate it. At 50 hosts, some automation like puppet or ansible is worth the trouble, especially if it helps you make sure that all hosts have exactly the correct keys on them. That's just basic security: make sure there are no keys that shouldn't be there.

What do you do now if someone leaves? Remove that person's key from all 50 hosts one at a time?

Or, at the very least, you use tmux with synchronized panes or csshx, log into all 50 hosts at once, and then issue one rm/scp command if you are still doing things manually.
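A sketch of the "someone leaves" cleanup, done as a dry run: it prints the per-host command instead of running it, so the list can be reviewed before piping to sh. The host names and the key comment "alice@laptop" are made-up examples.

```shell
# Print the cleanup command for each host; review, then pipe to sh.
revoke_key() {
    comment=$1; shift
    for h in "$@"; do
        # sed deletes any authorized_keys line matching the key comment
        printf 'ssh %s "sed -i /%s/d ~/.ssh/authorized_keys"\n' "$h" "$comment"
    done
}

revoke_key alice@laptop web01 web02 db01
```

In practice the host list would come from a file or your inventory, not be typed inline.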


Well I have an automated credentials and authentication infrastructure in place. The point is that if I have 50 hosts (and my infrastructure has considerably more) and 3 employees with 50 keys each, there will be 2 key changes a day on a 90 day rotation.
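The arithmetic behind that claim, as a back-of-the-envelope check (3 people, one key per host, 50 hosts, 90-day rotation):

```shell
# Rough rotation load for one-key-per-host at this scale
people=3
hosts=50
rotation_days=90
keys=$((people * hosts))                                  # 150 distinct keys
per_day=$(((keys + rotation_days - 1) / rotation_days))   # ceiling division
echo "$keys keys in play, roughly $per_day rotations per day"
```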

Are my guys going to spend 5 minutes every morning making and pushing keys?

What if my hosts auto provision themselves and there are 5 new hosts every morning?

Am I going to make keys as part of infrastructure deployments and push them back to the workstations and update other peoples ssh configs?

I'm just saying, if you've worked at scale, you'll realize that a key per box won't scale. I mean, anything can scale if you put enough effort into it. But the chance of disaster, whether lack of access or a security breach, from overcomplication is way too high here.

Automating bad processes just makes it easier for them to fuck you.


You could use something like ansible for the management of keys. Or you could centralize your authentication, which would probably be a lot more sane.


Even with central management, I don't know that I want to have 2500 keys floating around even if I have a management stack in place. That seems like an attack vector all its own. Changing a key every other day on a 90-day rotation with 50 boxes. And fifty boxes isn't even that much; that's like 2 racks.

Even with config management this won't scale past about 2 or 3 people and 10-20 boxes.

Central auth is an option I guess but I think the better way to go would be a 2 factor with the key and hotp.


It's not that bad, it's part of the user data and should be provisioned the same way.

OpenSSH can also be used in a PKI fashion, where you use certificates instead of known_hosts and authorized_keys records. It works quite well, but it comes with the same problems a full PKI does: you need to keep track of when the certificates expire, and you need a way to distribute revocation lists, so you still need configuration management.
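A minimal sketch of that certificate setup; all paths, principals, and validity periods here are illustrative:

```
# One-time, on the CA machine: sign a user's public key, valid 90 days
#   ssh-keygen -s user_ca -I alice -n alice -V +90d id_ed25519.pub
#
# sshd_config on every host: trust the CA and honour revocations
TrustedUserCAKeys /etc/ssh/user_ca.pub
RevokedKeys /etc/ssh/revoked_keys
```

Expiry then happens automatically at the end of the validity window, but the RevokedKeys list still has to be distributed somehow.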


The suggestion was not PKI though. I'd be happy with that. I already have a PKI in place. The suggestion was for a 1 key per box.



