Proposal to Change the Default TLS Ciphersuites Offered by Browsers (briansmith.org)
132 points by fejr on Sept 8, 2013 | 78 comments



Is (NIST) P-256 secure enough? I might be wrong, but I think DJB is suggesting it's not, and that Curve25519 is preferable because it meets all the needed security requirements:

http://cr.yp.to/talks/2013.05.31/slides-dan+tanja-20130531-4...

And why just 128-bit AES? AES-256 is not much slower, is it? I think I read it's only about 30 percent slower than 128-bit encryption. Either way, PFS should become the default for all encrypted communications.
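For what it's worth, the gap is easy to measure yourself. A rough sketch, assuming a recent version of the third-party Python `cryptography` package (where the backend argument is optional); AES-256 does 14 rounds to AES-128's 10, so roughly 40% more work per block is the expected software difference:

    import os
    import time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_encrypt_seconds(key_len, data):
        # Time one bulk CBC encryption with a random key of the given length.
        key, iv = os.urandom(key_len), os.urandom(16)
        encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        start = time.perf_counter()
        encryptor.update(data)
        encryptor.finalize()
        return time.perf_counter() - start

    data = os.urandom(16 * 1024 * 1024)  # 16 MiB, a multiple of the block size
    print("AES-128-CBC:", cbc_encrypt_seconds(16, data))
    print("AES-256-CBC:", cbc_encrypt_seconds(32, data))

On AES-NI hardware both run fast enough that the relative difference rarely matters for TLS.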

Companies need to start setting their own minimums years ahead of what NIST is recommending. NIST recommended the use of RSA 1024-bit until 2010, so that probably means companies should've moved away from RSA 1024-bit at least 3-5 years earlier. Yet here we are, with most companies today still using RSA 1024-bit 3 years after even NIST's deadline. Time to put security ahead of profits.

Also, as Ptacek is saying, it's time to implement TACK for certificate pinning:

https://twitter.com/tqbf/status/375742847485358081


Bernstein's big problems with the NIST curves are that it turns out to be difficult to write fast constant-time point multiplication for them (bear in mind that they were chosen before this was a well-known concern), and that they're easy to screw up in implementations. Those are real concerns, but so too would be fresh implementations of Curve25519 in all new software.

If one good thing comes of this most recent debacle, though, it'll be an industry reconsideration of the concepts that didn't manage to become NIST standards, like modern stream ciphers and faster curves.


The difficulties with fast constant-time point multiplication seem to be precisely the type of problem the NSA might have known about earlier. I.e., it's possible they contrived the standards toward something they knew people could easily get wrong. Not a reason not to use them, just food for thought.


All the supported curves in TLS are most likely influenced by the US government (NIST, ANSI, and SECG). So we don't really have an option to use curves with a non-US provenance.

http://tools.ietf.org/html/rfc4492#appendix-A, or list all OpenSSL curves with: $ openssl ecparam -list_curves


The NIST curves weren't influenced by the US government so much as they were defined by one NSA cryptographer, Jerry Solinas. On the other hand, the rationale behind their generation is pretty straightforward.


This site uses the 'Globalsign Organization Validation CA - G2' certificate, which I've removed from my trust list. It's a certificate that can theoretically be used to sign any ___domain (i.e., it's MITM-capable).

I semi-regularly see it on Cloudflare sites because their customers haven't bothered to send their own keys to Cloudflare.

I thought it was ironic that an article about SSL security was doing something less secure than possible (from the end user's point of view).

Just to be clear, yes, I understand that the operator of briansmith.org isn't gaining any more security by sending CF the private key to their own cert vs. using CF's G2 cert. With the first approach the end user of briansmith.org has better security, since he can view the site without having to trust an MITM-capable certificate.


In Cloudflare's case "haven't bothered to send their own keys" implies "haven't seen the need to pay Cloudflare $200/month or $3000/month for the ability to upload a certificate+key". AFAICT only the Business and Enterprise plans allow custom certificates.


Why the fuck would anyone want to use them then? Even the shittiest shared host or VPS offers that for a pittance.


It's a CDN... comparing to a VPS is not even close to being fair.

I've never seen a CDN that will let you load an ssl cert for less than a few hundred dollars. SNI isn't supported widely enough yet, so they have to dedicate an IP to you.


So, Mr Smith wants to add a bunch of elliptic curve options (using the NIST curves, weakest P-256 first), while removing the widely used TLS_RSA_WITH_AES_256_CBC_SHA256 due to "concerns" about "performance" and an unsubstantiated argument that ephemeral key exchange is somehow always better. Hmm.


Mr. Smith joins pretty much the entire mainstream of cryptography in urging people to switch away from RSA and towards ECC. The NIST P-256 curve is the most common ECC curve used. It was generated by picking a prime that is fast to compute with and hashing a string with SHA-1.


As djb pointed out in "Security dangers of the NIST curves", SHA-1 does not prove much. If the NSA knows a weak class of curves, they can try as many strings as they want until the SHA-1 of a string hits a weak curve.


It doesn't prove much; you can always just generate a new P-256-alike with the instructions NIST provided for you in that document.


Who can generate them? Aren't the values fixed by the standards? Mustn't both clients and servers use the same ones as long as they support the given standard?


The standard provides a standard set of curve parameters and a NIST-sanctioned way of generating new curve parameters using the exact same method.


How can any parameters other than the standard ones be used in current browsers and servers? I think they can't, am I right?

And how can browsers start to use any other parameters before they standardise them? I think they can't?


This is true but not particularly meaningful to me, because you can't really do anything new with crypto at all without some kind of software update. For instance, the primes and generator for conventional number theoretic DH are also pre-generated and baked into a standard.


So maybe we should generate our own curves. I propose something as follows:

1. Locate a public string. A tweet or a quote should suffice.

2. SHA-512 the string to obtain a seed.

3. Use that seed to generate b, and calculate N = #E(Fp) = n * h, and choose a base point P. Of course we need to ensure that these parameters are safe against known attacks.

4. Mandate that the new set of parameters MUST be supported wherever NIST prime curves are supported.

The last step is probably the most difficult. You don't need it if you don't need to interoperate with other implementations, though.
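A minimal sketch of steps 1 and 2, with a hypothetical public string (the actual curve-generation step is the hard part and is only gestured at in the comments):

    import hashlib

    # Step 1: a widely witnessed public string (this one is just a placeholder).
    public_string = b"HN discussion of default TLS ciphersuites, 2013-09-08"

    # Step 2: SHA-512 the string to obtain a seed for curve generation.
    seed = hashlib.sha512(public_string).digest()
    print(seed.hex())

    # Step 3 would feed this seed into a published, verifiable procedure
    # (e.g. an ANSI X9.62-style derivation) to produce b, count points,
    # check the order and embedding degree, and pick a base point.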


The only thing you want not to happen is for software to start generating and negotiating its own curves, because that then requires all interoperable implementations to parse and validate random curves from attackers.


No, I didn't say that everyone generates their own curves. I meant the security community should generate our own curves. Somebody should email Thomas Pornin.


There are already several alternative curve sets, satisfying various degrees of paranoia:

- http://certivox.org/display/EXT/CertiVox+Standard+Curves

- http://tools.ietf.org/html/rfc5639

- curve25519 and the other djb et al curves.


Sorry, I didn't mean to imply you were saying that.


TLS_RSA_WITH_AES_256_CBC_SHA256 is not forward secret (if you have the certificate private key you can passively decrypt all past/future sessions) so removing it is a great idea. I'm only responding to what you said and not making a judgement about the rest of the guy's suggestions.
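If you want to see what a given server actually negotiates, a quick sketch with the standard library (assumes Python 3.4+; the hostname is a placeholder):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            name, version, bits = tls.cipher()
            print(name, version, bits)
            # "ECDHE" or "DHE" in the suite name means ephemeral key exchange;
            # a plain TLS_RSA_* suite (OpenSSL "AES256-SHA"-style name) is not
            # forward secret.
            print("forward secret:", "ECDHE" in name or "DHE" in name)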


Beware of random cryptographers bearing suggestions? #HUMINT


I really hope TLS crypto standards won't be set (or changes rejected) based on who proposed a particular change.

Changes should stand on their own merits. I don't see why we should trust anyone, random or not. It's not about trusting people, it's about trusting algorithms. So it doesn't really matter who in particular proposes changes.


In an ideal world, but if you get a lot of "contributions" some might slip through. Remember, no one can match the NSA's budget, manpower, and maybe brainpower.


Some of the OpenSSH and IETF mailing list archives about SSH also make interesting reading. Exactly why does RFC 4253 mandate Group 1 and Group 14 as the only required key exchange mechanisms? (They were defined by RFC 2412, written by someone who was "assigned to the DARPA Information Technology Office".) However, perhaps it would be better to fork a separate thread for such speculations.


This proposal prefers AES-GCM. Interestingly, Adam Langley (Chrome) is against AES-GCM. https://www.imperialviolet.org/2013/01/13/rwc03.html


I think Adam Langley doesn't like GCM because its normal software implementation requires secret-dependent table lookups for speed. It's thus thought to be easy to produce a naive GCM implementation that will suffer from side channel attacks. That concern is rapidly being mitigated by newer CPUs which can do the multiplications for GCM in hardware.

From a theoretical perspective, the polynomial MACs (like GHASH) are very well understood.

But, much more importantly, the TLS GCM construction is the only modern stream encryption standardized and widely available to TLS. The block constructions in TLS suffer from being created before Encrypt-then-MAC was proven secure; they do the operations the other way around, and thus require extremely fiddly code to quash side channels that have (unlike GCM) already been shown to admit plausible attacks.
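To make the ordering point concrete, a toy sketch of encrypt-then-MAC (not the TLS record format; assumes the third-party Python `cryptography` package):

    import hashlib
    import hmac
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    enc_key, mac_key, iv = os.urandom(16), os.urandom(32), os.urandom(16)
    plaintext = b"a block-aligned sixteen byte msg"  # 32 bytes, no padding needed

    # Encrypt-then-MAC: the tag covers the ciphertext, so a receiver can reject
    # forgeries before ever touching the padding/decryption code path.
    encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()
    tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()

    # The legacy TLS CBC suites do the reverse (MAC-then-encrypt), which is why
    # their decryption paths need fiddly constant-time padding and MAC checks.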

The author of this proposal probably isn't making a statement about GCM so much as suggesting that we need to deprecate the '90s-crypto parts of TLS, which is hard to argue with.


My understanding is that Adam Langley prefers AES-GCM over the MAC-then-encrypt cipher suites in TLS 1.2. He's working on AES-GCM support in NSS [1].

[1] https://news.ycombinator.com/item?id=5365601



The author seems to worry a lot about ARM performance. If this "proposal" is going to take 2-3 years to be implemented in browsers, then I wouldn't worry too much about it.

The ARMv8 architecture is supposed to come with hardware-accelerated AES and SHA-1/SHA-2 instructions, and be "up to 10x" faster than ARMv7. ARMv8 chips should start coming out next year. So if we're planning "for the future" here, then I'd start with ARMv8 in mind, rather than ARMv7 or even ARMv6 (which really should stop being supported already).

If better and more optimal security means that older phones and lower-end ones will experience a setback in performance on some sites for the next few years (until they get upgraded), so be it.


How many years until ARMv8 represents the majority of phones? I think writing off ARMv7 is a bad idea.


The author works for Mozilla (he's their security module peer), and is therefore focused on Firefox OS, which aims for low-end smartphones. Hence the focus on older ARM. Hope that helps.


I know that's one of the reasons he's saying it, but that's not really everyone else's concern, and as I said, security should come first. Firefox OS is coming to ARMv6 phones right now, so at the very least they shouldn't start with that in mind when establishing the performance minimum for encryption.

There's the ARMv7 Cortex A7 chip for low-end now, and ARMv6 chips are terrible for performance anyway. They should not be used as a starting point for very low-end phones anymore.


ARMv6 is very similar in all regards to ARMv7, and there are very fast implementations available. Are you talking about ARMv5 perhaps, which has a terrible memory model making fast multitasking implementations with memory isolation very hard to implement? ARMv7 just adds some new instructions which, while useful, do not by themselves give the performance benefits.


Just changing the order of the suites could be done quickly. Of course committees won't let that happen.


Why not adopt the goal of deprecating all ciphersuites but one? Let's have a debate about the best ciphersuite and settle on a single one. That would decrease the attack surface and would allow the crypto community to focus on getting the implementation bug free. It would also give attackers a tighter focus, so we'd better pick the right ciphersuite! To Brian Smith's list of criteria, I would add implementation simplicity as a very important requirement.


For the past 5 years or so, we've managed to take good advantage of the fact that we had a diversity of ciphersuites, even though each of the pre-1.1 ciphersuites had problems. Trying to lock down a single known good ciphersuite would be drawing exactly the wrong lesson from recent history.


If the latest Greenwald/Schneier revelations are true, then the past 5 years of TLS have been underwhelming to say the least, so I don't see what advantage has been obtained from diversity.

Going forward, how are site admins to know what ciphersuite to use? This needs to be made very clear. And how are web users to know what the green lock in their browser means, if it can mean anything at all? The current state of affairs is incomprehensible to all but a very few.

There needs to be a big public debate about what ciphersuite is the best. And there should be very careful scrutiny of the implementation. Choosing ciphersuite(s) is a political as well as a technical problem -- the goals should be as clear and simple as possible.


That's what this post was about.


Does ephemeral key exchange protect against MITM? I don't think so. Checking for that would be difficult, I think, for a standard protocol.


No, DHE/ECDHE (ephemeral key exchange) doesn't protect against MITM; it protects against passive dragnet decryption. But the RSA/ECDSA/DSS part (certificate signing) does. All TLS ciphersuites include certificate signing to protect against MITM, but not all include ephemeral key exchange.
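A toy illustration of that split (tiny parameters, illustration only, not secure):

    import secrets

    # Toy finite-field Diffie-Hellman; real DHE/ECDHE uses far larger groups.
    p, g = 0xFFFFFFFB, 5                 # small prime, placeholder values
    a = secrets.randbelow(p - 2) + 1     # client's ephemeral secret
    b = secrets.randbelow(p - 2) + 1     # server's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)    # the only values an eavesdropper sees
    assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same session secret

    # The certificate's long-term key only *signs* the server's share to stop an
    # active MITM; it never encrypts the session secret, so stealing it later
    # doesn't let a passive recorder decrypt old traffic once a and b are gone.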


> Prefer faster over slower.

Would any security expert give a critique of this section of the article? That line seems counterintuitive: faster sometimes means easier to break. Shouldn't the goal be to be as slow as possible (but no slower than a user will tolerate)?

Also, who is Brian Smith? His personal page doesn't give any info, just an email address. Searching for "brian smith cryptographer" yields a stackoverflow user page, some comments by Brian Smith on various security-related newsgroups, and a linkedin profile[1] with very strange job history -- overlapping timelines, and no entries after 2004 ... I'm assuming that's the same Brian Smith because of his interest in "Cryptography and Information Security."

If that is him, I wonder who he's worked for since 2004? (Should we even be asking these kinds of intrusive questions about people who are proposing new security standards?)

While I'm at it, I may as well ask: who is the submitter, fejr? His account is 5 days old with just one comment but 11 submissions, all related to the NSA or security. I'm curious how they came across this Brian Smith proposal in the first place, because that would give us some information about his background at least. This proposal seems new, because Wayback Machine has no record of that Brian Smith proposal URL[2] so fejr seems to be the first one to post this to any news website.

[1] http://www.linkedin.com/pub/brian-smith/6/6b3/9a7

[2] http://web.archive.org/liveweb/https://briansmith.org/browse...

EDIT: This article deserves a better top comment than mine. When I wrote this, the submission had zero comments and raised a whole lot of questions: Who is this person? Can we trust them? Why haven't we heard about this proposal till now? The answers came swiftly: Brian Smith works for Mozilla. Yes, we can trust that this proposal has no hidden agenda. We hadn't heard about it because it was originally posted to the Mozilla Crypto list two days ago.

Now that those questions are settled, I find myself in the top comment spot and entirely undeserving of the honor. It may have been necessary to at least consider the questions I raised (trust and identity of the author), but my comment addressed none of the substance of the article. I wrote it in order to get some discussion started until tptacek comes in and writes a thorough critique of the proposal's strengths and any possible weaknesses.

So please, someone, write up a good topcomment analysis of the proposal so we can upvote you. :)


Brian Smith works at Mozilla on the Security Engineering team. He's one of the brains behind NSS.


No, faster does not mean easier to break. Fast is a problem when brute force attacks are viable, as in with password hashes. There are no viable brute force attacks on 128 bit keys; that's why we use 128 bit keys.


As a noob in encryption, why is it not possible to guess the 128 bit private key by brute force?


There are 340282366920938463463374607431768211456 possible 128-bit keys. So if you had a machine that could check a trillion possible keys per second it would take over 10 quintillion years to try all possible 128-bit keys.



I could explain this, but in all seriousness, I think the better answer to this question is to urge you to pop open your Perl, Ruby, or Python prompt and work the math out.
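In that spirit, the arithmetic behind the sibling comment's figure takes only a few lines of Python:

    keys = 2 ** 128
    guesses_per_second = 10 ** 12          # a very generous attacker
    seconds_per_year = 60 * 60 * 24 * 365
    years = keys / guesses_per_second / seconds_per_year
    print(f"{years:.2e} years")            # ~1.08e+19, i.e. over 10 quintillion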


The brute force algorithm is O(2^n)


Sorry, you're just fueling counterproductive witch-hunt hysteria. Probably due to seeking safety and wanting trustworthy crypto gurus to show you the way. They don't exist. The only way to move forward is to continue to judge ideas and not people, and use what we've learned to design systems more resistant to hidden subterfuge of unknown design.

> The answers came swiftly: Brian Smith works for Mozilla. Yes, we can trust that this proposal has no hidden agenda. We hadn't heard about it because it was originally posted to the Mozilla Crypto list two days ago.

Would the NSA not have planted multiple long-term people at Mozilla? Would these people not cover for each other, and promulgate their insecurities by planting them in the minds of others who then make the on-record proposal?

The reputation of the popular cryptographers we know and love would only have suffered if their subversions were discovered, which is clearly the opposite of the NSA's goal. And if that is where their morals lie, why would they be personally worried about being uncovered? Moving to Virginia/Utah probably isn't that bad.

And BTW why hasn't Bruce Schneier released the raw documents that could shed clues on which systems are flawed? This is bona fide security-critical information with extreme relevance to the technical community, regardless of the NSA's whining. But I just have to assume he has his reasons even if I'd think they're misguided.

I apologize for any personal implications against Brian, Bruce, et al - they're meant to be completely illustrative. But any decent security person would actually tell you that you should not trust them either.


Here's some discussion on mozilla's crypto list:

http://www.mail-archive.com/[email protected]...


Thanks! I also found a presentation from the Stanford Real World Cryptography event in Jan 2013 which cites Brian's work. It's an interesting read: https://crypto.stanford.edu/RealWorldCrypto/slides/gueron.pd...


No, slower doesn't mean more secure. 3DES is a lot slower than, say, AES, but nobody's suggesting 3DES is more secure than AES.


> Also, who is Brian Smith?

Are we moving from a state that can know everything on us, but at least might be too busy, to a mob that insists on knowing everything about us, because it's too idle?

I'm pseudoanonymous on here, deleted my LinkedIn account years ago because I despise the site with an utter passion. And, fuck me, I've probably said something stupid about cryptology online because even though I love practical math, crypto is really hard, and IANAC.

I could explain that I'm pseudoanonymous here because I actually am an attorney,[1, 2] and I don't want anything I accidentally say online to be associated with any of my clients. But while that sounds very serious, it's actually pretty unlikely, so no, there's another secret reason that's utterly selfish. I'm going to reveal it here for everyone.

I don't use my true name all the time because I don't want my online reputation in every community tied to stupid shit I said online when I was 15. And when I'm 35,[1] I might not want to tie my online reputation to the stupid shit I'm saying right now. I like the freedom online communities provide to earn respect from nothing, without people relying on any preconceived notions based on how your name sounds, or even what you did last year. I take comfort in the realization that if it all doesn't work out, I can always walk away from a profile and start from scratch. It's social bankruptcy protection. Or maybe, the summer before your freshman year anytime you need it to be.

I feel like in 2002 we wasted a lot of time trying to figure out who in the sphere of public thought was a secret terrorist. I don't think we got anything out of those conversations, and I'd hate to see us repeat the same mistake digging for secret bureaucrats.

I know the recent news makes this hard, but maybe go back to "Is this guy Bruce Schneier, or someone I've never heard of? If I've never heard of him, I'll provide extra scrutiny to the proposal, not because I'm necessarily presuming malicious intent,[3] just because his reputation is not established, and it's a better use of my time to scrutinize the proposal than to scrutinize some random guy's life."

You can waste a lot of cycles trying to confirm or deny if someone has a secret life, something that's probably unknowable, and none of that work actually improves the crypto.

[1] Head start on the doxxing for anyone playing along at home.

[2] I am not your attorney, so don't get any funny ideas.

[3] Well, arguably in crypto, you should always assume malicious intent, constantly ask how any little bit could aid an adversary. But that's not all we're talking about, is it?

EDIT: Formatting (mostly footnote numbering)


> I'm pseudoanonymous on here, deleted my LinkedIn account ...

This isn't about you or any other drama queens; no one cares who you are and what you do in this context, as long as you don't propose changes to crypto standards or conventions in browsers.

The reason why people are asking these questions in the context of cryptography software is perfectly valid, it was discussed here in the past few days.


"Are we moving from a state that can know everything on us, but at least might be too busy, to a mob that insists on knowing everything about us, because it's too idle?"

Yes. See also Sunil Tripathi. Everybody complains about the lack of constraints on the NSA. Nobody complains about the lack of constraints on a mob of redditards.


Yes, my apologies. My comment was overly blunt.

The reason paranoia is important in post-2013 crypto is because the NSA's knowledge of crypto is very likely ahead of academia / public sector knowledge. They've historically used that advanced knowledge to influence past standards. In one case, they seem to have secretly enhanced the security of DES.[1] But in another case, evidence suggests they may have secretly put a backdoor into their Dual_EC_DRBG proposal.[2]

This raises the question: Should we worry about NSA tampering with proposals? If that's our goal, then it seems to me that our most powerful tool is trust: the fact that we can almost always trust tptacek, cperciva, the Mozilla Security team, and other established names. It's not that they're infallible --- rather, it's that they've always been genuine with their intentions, and have spent years building up that level of trust, so it would be a huge risk for them to be willing parties in NSA tampering.

Therefore, if someone is proposing a new standard, scrutinizing their reputation would seem to be a necessary first step. If their knowledge is secretly ahead of the public sector's, and their proposal contains a secret flaw, then no one will be able to spot it. What else is there to examine if not our level of trust in their established reputation?

So if we choose to believe that the NSA is ahead of public sector knowledge, then it seems like it's valid to be concerned about identity and reputation, because it's inherently impossible to rely on the public sector to spot any hidden influences in the proposal, unless the public sector happens to make the same sort of cryptographic breakthroughs that we presume the NSA to have made.

I apologize for bringing it up in a disrespectful way. I didn't mean to dig into Brian's life. The LinkedIn profile just happened to stand out in a quick google search.

For what it's worth, all I'm trying to do here is ask the community: should we be concerned about NSA tampering? If so, should we assume their knowledge is ahead of the public sector's? If we assume that, then what else can we do except scrutinize identity/trust (since if we assume advanced knowledge, then we can't rely solely on scrutinizing the content of their proposal)? I honestly don't know whether those are valid concerns going forward, or whether they should be taken so seriously. I'm hoping the community will decide.

[1] http://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA.27...

[2] http://en.wikipedia.org/wiki/Dual_EC_DRBG


Eh, I was probably too hyperbolic.

I agree it's a difficult situation, one with a lot of really hard challenges.

I don't envy the standards communities that will have to figure out the best next steps forward.


ECDSA verification on P-256 is slower than RSA verification with 1024-bit or 2048-bit modulus, but they are thought to achieve such vastly different security levels that comparing them at all for performance is wrong.


> While I'm at it, I may as well ask: who is the submitter, fejr?

I was wondering the same thing with his previous post. I can say he has some interest in Arabic, as his name must be فجر (fejr), as in "the dawn". This has a lot of literary and political connotations and is used by a lot of people these days.

As an Arabic speaker, this caught my attention at the ironic choice of language for the name. Time will tell.


I think "as slow as possible" is good for the initial key exchange, or for storing keys at rest. The point is to make offline key recovery (by guessing) more difficult. But once you have a good key, the ciphertext is already completely scrambled. Slowing down won't make that part any harder.


Brian Smith seems to be ignoring everything Schneier has said in the past week regarding avoiding ECC. And preferring 128-bit AES over 256 or even 512 is so counterintuitive it is beyond reason. This whole paper reeks to me of an exercise in finesse.


For the record: (a) I can't find anything Schneier has ever published on ECC; it is a notable omission in his most recent crypto book (Cryptography Engineering), (b) the ECC issue is confused by Dual-EC-DRBG, the ECC-derived CSPRNG that nobody uses but is now thought to be a deliberately weak NSA design, (c) his reasoning for avoiding ECC (the constants are suspect) is a little bit of a stretch given that the most popular curves have a relatively straightforward derivation, and (d) it's downright weird to point a finger at all of elliptic curve cryptography based on a single set of constants; surely he's not implicating the Edwards curves Bernstein and Lange have been promoting, for instance.

The recommendation is confused enough that I'm inclined to dismiss it.


The deterministically derived ECC constants are derived by seeding a hash function with an extremely high entropy input (> 100 bits) and taking the first usable result. This is effectively the same as choosing the parameter. The NSA had the freedom to specify this very high entropy seed value, and could have done so by iteratively trying seed values until they got a curve that is weak against techniques known only to the NSA; it's just a way to disguise the origin of the curve by making it sound random. The situation with Koblitz curves is only slightly better.


Check the date -- the proposal is a month old. So Bruce Schneier's comments may be relevant, but one can't really accuse Brian Smith of ignoring them.


Good point, I missed that on the first pass. I look forward to an update from him or a comment on this from Schneier.


For what it's worth, I submitted this link to HN a month ago: https://news.ycombinator.com/item?id=6187176


> Shouldn't the goal be to be as slow as possible (but no slower than a user will tolerate)?

Slowness is an idiotic goal: what is slow on slower hardware is fast on faster hardware, and what is slow on faster hardware is unusable on slower hardware. The goal should be a minimum level of security that is still practically secure, without a backdoor and not easily churnable with adequate hardware. Do you think anything that the majority use today is adequate? I don't. So I think this is all bullshit.


I'm pretty sure Brian currently works for Mozilla.


So far I haven't seen anyone question the SHA algorithms. Both SHA-1 and SHA-2 were designed by the NSA. Is everyone okay with that?

SHA-3 wasn't designed by the NSA.

http://en.wikipedia.org/wiki/Secure_Hash_Algorithm


Probably. The SHA-1 and SHA-2 hashes are straightforward extensions of hash designs from academia, and the constants in the hashes are derived from the first N prime numbers; they aren't mysterious in any way.
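For example, the SHA-256 initial hash values can be recomputed from scratch; they are the first 32 bits of the fractional parts of the square roots of the first 8 primes (the 64 round constants come from cube roots of the first 64 primes in the same "nothing up my sleeve" way). A sketch using Python 3.8+ for math.isqrt:

    import math

    for p in [2, 3, 5, 7, 11, 13, 17, 19]:
        # floor(sqrt(p) * 2^32) mod 2^32 == first 32 fractional bits of sqrt(p)
        print(f"{math.isqrt(p << 64) & 0xFFFFFFFF:08x}")
    # Matches the published values:
    # 6a09e667 bb67ae85 3c6ef372 a54ff53a 510e527f 9b05688c 1f83d9ab 5be0cd19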


SHA is just used for verification in TLS, so the security considerations are different from those when it's used to, say, hash passwords. There's not much in the way of attacks that could take advantage of SHA being broken, and most that could would involve things like MITM-replacing packets in real time with different ones that have colliding hashes. It's probably true that for some classes of crypto problems, the NSA has incrementally better cryptanalytic capabilities than the public does, but being able to find collisions in real time, even for SHA-1, seems like a very tall order.


SHA3 isn't available in browsers yet. Perhaps it should be, eventually, but until it is, SHA1/SHA2 are the best available options.


This proposal is too aggressive at deprecating ciphers that are the only option for some browsers (e.g., IE8). It would be useful to add an icon to the address bar that indicates the security level of the crypto used (PFS or not, etc.)


Removing ciphers is aggressive, but what's wrong with deprecating ciphers?


And... why would Google and Microsoft do this? They are the biggest assets on the NSA list, remember?



