Hacker News

> adds extra unneeded overhead by encrypting everything

Baloney. :-) Or rather, please cite something in the last 5 years showing that the overhead of the symmetric encryption is a significant cost in HTTPS.




assuming you just want to browse websites and not transfer information that needs to be private, the extra rtt for establishing the ssl connection is the unneeded overhead. for something like streaming a movie, that won't matter because the ssl connection will be long lived. for something like web browsing, where most websites require a new tcp connection for each GET, ssl is painful.
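to put the rtt cost in rough numbers, here's a back-of-the-envelope sketch (not a benchmark) assuming a classic 2-round-trip full ssl/tls handshake, a 1-round-trip abbreviated resumed handshake, and an illustrative 100 ms rtt:

```python
# Back-of-the-envelope: latency before the first HTTP byte can be sent
# on a fresh connection. Round-trip counts and the 100 ms RTT are
# illustrative assumptions, not measurements.

RTT_MS = 100          # assumed client<->server round-trip time
TCP_RTTS = 1          # TCP three-way handshake before any data flows
FULL_TLS_RTTS = 2     # full handshake: hellos, key exchange, finished
RESUMED_TLS_RTTS = 1  # abbreviated handshake with a cached session

plain_http = TCP_RTTS * RTT_MS
https_full = (TCP_RTTS + FULL_TLS_RTTS) * RTT_MS
https_resumed = (TCP_RTTS + RESUMED_TLS_RTTS) * RTT_MS

print(f"plain http setup:  {plain_http} ms")
print(f"https, full:       {https_full} ms")
print(f"https, resumed:    {https_resumed} ms")
```

so on a fresh connection https roughly triples connection-setup latency in this toy model, which is the pain being described for many-short-connections browsing.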

sure, the extra rtt is preferable to javascript injection, but signing the webpage is sufficient to prevent javascript injection and it wouldn't add extra rtt delay (aside from fetching a cert in the trust chain, which https can also suffer from in the exact same way).

depending on the algorithms used, on-the-fly signing of dynamic pages might be (read: almost certainly is) more painful than ssl/tls in terms of computation time, but to the user it would still feel quicker in most cases than the rtt delay added by ssl/tls.


Note that the primary HTML resource for most web pages takes multiple roundtrips to transmit completely. If you sign it only at the end, you've made the browser feel slower. So you'd have to avoid the RTT by accepting, processing, and presenting the data immediately but only later authenticating it retroactively. The bad guy can just drop or delay the legitimate signature.

This is inherently error-prone. It gives the developers a big, convenient, and reassuring assumption which the attacker is able to violate. For a complex and evolving endpoint like a web browser, I don't think you'd ever see the end of security bugs. More: https://www.ietf.org/mail-archive/web/tls/current/msg04017.h...

Furthermore, retroactive authentication still doesn't preclude the encryption: https://www.ietf.org/mail-archive/web/tls/current/msg08722.h...


The issue of bad guys dropping or delaying the sig applies equally to ssl during the handshake. But waiting for the sig and the whole page to arrive, combined with the size of modern webpages, is a problem I hadn't fully considered. I suppose per-packet signatures might work to fix the delay issue, but then you'd have to violate abstraction layers and the added cpu time would be completely untenable. At that point you might as well just copy ssl's dh handshake followed by a block cipher but only provide integrity and not privacy, which is stupid. yeah, ok, I'm convinced: stupid idea for practical reasons. just use https with the extra rtt's.
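for a sense of scale, even ignoring cpu time, the bandwidth cost of per-packet signatures alone is easy to sketch. the signature sizes here are illustrative (256 bytes for an rsa-2048 signature, 64 bytes for a short elliptic-curve one) against a typical ~1460-byte tcp payload:

```python
# Rough bandwidth overhead of attaching one signature to every packet.
# Sizes are illustrative assumptions, not measurements.
TCP_PAYLOAD = 1460  # typical max TCP payload (1500 MTU minus headers)
SIG_BYTES = {"rsa-2048": 256, "short ec": 64}

overhead_pct = {}
for name, sig in SIG_BYTES.items():
    overhead_pct[name] = 100.0 * sig / TCP_PAYLOAD
    print(f"{name}: {sig} B sig per {TCP_PAYLOAD} B packet "
          f"= {overhead_pct[name]:.1f}% extra bandwidth")
```

that's a high-teens percent tax for rsa-sized signatures before you even count the signing cpu cost, which is the part called untenable above.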


It's not a stupid idea, it's a problem that smart folks have been banging their head against for a long time now. Take a look at the "TLS Snap Start" proposal to see the lengths to which one must be willing to go to avoid that round trip.

But some low-hanging fruit remains. Improvements to clients and servers that increase TLS session resumption rates would help too.
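The payoff from resumption can be sketched with a toy average, assuming illustrative numbers: a 100 ms RTT, 2 extra round trips for a full handshake, 1 for a resumed one:

```python
# Toy model: average connection-setup time as the session resumption
# rate rises. All numbers are illustrative assumptions.
RTT_MS = 100              # assumed round-trip time
FULL_MS = 3 * RTT_MS      # tcp (1 rtt) + full tls handshake (2 rtts)
RESUMED_MS = 2 * RTT_MS   # tcp (1 rtt) + abbreviated handshake (1 rtt)

avg_setup = {}
for rate in (0.0, 0.5, 0.9):
    avg_setup[rate] = rate * RESUMED_MS + (1 - rate) * FULL_MS
    print(f"resumption rate {rate:.0%}: "
          f"avg connection setup {avg_setup[rate]:.0f} ms")
```

Even without protocol changes like Snap Start, pushing the resumption rate up shaves a meaningful fraction off average setup latency in this model.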




