The Factoring Dead: Preparing for the Cryptopocalypse (slideshare.net)
168 points by oracuk on Aug 4, 2013 | 39 comments



Pluggable crypto algorithms are one reason a lot of protocols are overly baroque and thus vulnerable (due to implementation flaws), or at least not widely used. This all started with the export controls in the 1990s -- when you needed to have a US/non-US version anyway, you could then do 10 algorithms. But if you don't have that concern anymore, it's senseless complexity.

IMO the best way to do all of this is to have one algorithm (or set of algorithms) per major version of a protocol. It's fine to change that (by adding new protocol handling) going forward, and potentially to deprecate old versions too, but cipher negotiation/etc. is otherwise madness-making. That, and shitty encodings, are most of the pain I have when working with this stuff.

(The issue raised here of "ECC should be more widely used" is good, though -- I've been looking at blinded ECC algorithms recently.)


That's a weird conclusion to draw given that negotiated key agreement algorithms are what allows us to swap RSA for ECDSA opportunistically and without jettisoning the rest of the protocol.

Maybe this would make more sense if you could describe what was madness-making about cipher negotiation.


It certainly has upsides, but I remember pain when involved in or trying to fix things with diverse algorithms. Maybe the memory of the pain was worse than the pain itself, or the pain was from unrelated causes. Mostly it's "I'm not very good at _____, so I like things to be as simple as possible".

JCE implementation (mostly related to scope, era, etc.) -- admittedly, we only needed a non-US JCE for a few specific things, as part of an overcomplicated financial transfer system.

PGP/OpenPGP; there were lots of reasons this sucked, but multiple ciphersuites didn't help at all.

Free S/WAN was painful too (and really most of IPsec), but it's an example of the higher level problem of "too many layers of indirection and abstraction", not "too many algorithm choices."

(Poking at OpenSSL internals as a consumer of the library and trying to get it to work with some hardware shit made me mostly want to die, too, but part of that is my inferior C skills, and other things specific to OpenSSL and not to the overall concept of multiple algorithms)

OTR, ecash protocols of various types, etc. which used hardcoded algorithms were comparatively painless.

ssh is probably a good counterexample -- relatively flexible choices, although the big shift from v1 to v2 fixed a lot more things than just algorithms.


That last part about openssh suggests that the problem with SSL's cipher choices is not that there are choices; rather, it's that the choices are mostly bad, and that they're hardcoded combinations rather than a free choice of primitives. MD5/SHA1 lingering for years, and RC4 crippled but limping along and even preferred in some cases because of BEAST, all because newer options are insufficient, or insufficiently deployed. Camellia is hobbled because most implementations don't offer the same ciphersuite variety for it as they do for AES (compare openssl ciphers -v 'CAMELLIA' with openssl ciphers -v 'AES'). That's right: there's no ECDHE+Camellia when talking to any server that's using openssl; it's slow DHE or no PFS at all. What's the point of even having Camellia in an SSL implementation if it's not going to be maintained with current ciphersuites? That's a great example of why hardcoding combinations of primitives is scary.


As a user of Free software consumer cryptography utilities, such as GPG and OTR instant messengers, can I configure stuff any differently to account for these developments?

What can I do to protect myself? Do you foresee projects like these responding to the concerns in the presentation in a timely manner?


So if AES were broken you'd want the TLS version bumped, some people to spend a caffeine-fueled week implementing and testing a new ciphersuite while global ecommerce slows to a crawl, and then OS maintainers and IT admins everywhere to get the new openssl/nss/schannel versions installed ASAP?

How is that better than having pluggable ciphers?

I don't really understand the combining of symmetric, asymmetric, and hash under one ciphersuite ID namespace either. It's like the SSL/TLS authors had never encountered the DRY concept before. It turns adding or removing a cipher or hash into a tedious exercise of getting particular combinations into RFCs.
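To put the combinatorial blow-up in concrete terms, here is a toy sketch in Python; the primitive lists are illustrative examples, not the actual IANA registry:

    # Toy illustration: every fixed combination of primitives needs its own
    # registered ciphersuite ID, even though the choices are largely independent.
    # The primitive lists below are examples only.
    from itertools import product

    key_exchange = ["ECDHE", "DHE", "RSA"]
    signature    = ["RSA", "ECDSA"]
    cipher       = ["AES_128_GCM", "AES_256_CBC", "CAMELLIA_256_CBC", "RC4_128"]
    hash_or_prf  = ["SHA", "SHA256", "SHA384"]

    combinations = list(product(key_exchange, signature, cipher, hash_or_prf))
    print(len(combinations), "suites from",
          len(key_exchange) + len(signature) + len(cipher) + len(hash_or_prf),
          "primitives")
    # 72 suites from 12 primitives; each combination anyone wants to use needs
    # its own ID in an RFC, which is how gaps like ECDHE+Camellia happen.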


If AES is broken, I'm willing to accept flag day hell, in exchange for not having 8908494 possible combinations to test and implement every time before that.

It's much more likely that "AES minor vulns found" happens over a period of years; the push to change algorithms then happens somewhat more reasonably, like most security patches. Taking a year to get rid of MD5, DES, SHA1, etc. through protocol version upgrades isn't any worse than the 5-10 years (and counting) we got by allowing pluggability.

In the SaaS, browser, and mobile client world, upgrading software and upgrading configs aren't that different in pain, in practice.


If the implementations of hashes and the various ciphers are done properly, and the combinations are truly pluggable rather than hardcoded as individual ciphersuites, you don't need to test every combination (other than for interop, to ensure that nobody else has stupid bugs from not implementing them in a truly pluggable manner). What's the theory: that SHA3-256, which passes basic validation tests and has been audited, still won't hash certain things properly? Or that Salsa20, properly audited and validated, will work with 256-bit ECDSA but not 384-bit?

Getting rid of MD5, DES, and SHA1 is easy with pluggable algorithms. Just remove those ciphersuites from the ssl implementations.

The reason it's a pain in the ass to get rid of those old ciphers and hashes is not that the algorithms are pluggable, it's that nobody wants to force everyone else to upgrade (and force other ssl implementations to implement enough new ciphersuites that the old ones can be removed), right?
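To make the "just remove those ciphersuites" point concrete, a minimal sketch using Python's ssl module on top of OpenSSL; the exclusion string is illustrative, not a vetted production policy:

    # With pluggable suites, dropping MD5/DES/RC4 is a configuration change,
    # not a protocol redesign.
    import ssl

    ctx = ssl.create_default_context()
    ctx.set_ciphers("DEFAULT:!MD5:!DES:!3DES:!RC4:!aNULL:!eNULL")

    # Peers that only speak the excluded suites fail the handshake;
    # everyone else negotiates something from the remaining list.
    print([c["name"] for c in ctx.get_ciphers()])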


My name is on this talk, but you should know: not only did I not go to Black Hat to help deliver the talk, but I did zero work on its preparation; this is Javed Samuel and Tom Ritter's talk all the way.

How my name came to be on it is a long, boring story.


Is this an anti-endorsement of the talk, then, or are you just trying to give credit where it's due? I thought it was pretty interesting, and I'm going to try to move to ECC over RSA where possible (for private SSL certs internally, SSH keys, etc). I know that isn't quite the point, but it seems like it might help future-proof a bit.


I endorse the talk wholeheartedly but for reasons even I don't understand my name is first on the title slide and I don't want to claim any undue credit.


Would you be willing to answer what sort of linear algebra is required in the function field sieve that the talk claims is difficult to parallelize? I write open source parallel linear algebra software [1] and would be happy to contribute.

[1] http://libelemental.org


The matrices involved are generally modulo a prime (2 in factorization, a fairly large prime for the discrete logarithm), and very sparse. The latest factorization record, RSA-768, involved finding elements of the nullspace of a 192e6 x 192e6 matrix, with an average of 144 nonzeroes per row.
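For a sense of the kernel these solvers spend their time in, here is a toy sketch of a sparse matrix-vector product over GF(2); the dimensions are made up, standing in for the 192e6 x 192e6, ~144-nonzeros-per-row matrix described above:

    # Each row stores the column indices of its nonzeros; addition mod 2 is parity.
    import random

    def sparse_matvec_gf2(rows, v):
        # rows: list of lists of column indices; v: list of 0/1 bits
        return [sum(v[j] for j in cols) & 1 for cols in rows]

    random.seed(0)
    n, nonzeros_per_row = 1000, 144      # toy size; the real record is ~192e6 rows
    rows = [random.sample(range(n), nonzeros_per_row) for _ in range(n)]
    v = [random.randrange(2) for _ in range(n)]

    w = sparse_matvec_gf2(rows, v)       # block Lanczos/Wiedemann iterate this
    print(sum(w), "nonzero entries in A*v")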


Interesting. That isn't excessively large for distributed-memory Krylov-subspace eigensolvers (e.g., SLEPc). Do you have a good reference for the current algorithms used? And how large is the nullspace typically?

EDIT: I assume this [1] is what you're referring to, which draws from the following algorithms of Coppersmith [2,3].

[1]: http://eprint.iacr.org/2010/006.pdf

[2]: https://www.sciencedirect.com/science/article/pii/0024379593...

[3]: http://thales.doa.fmph.uniba.sk/macaj/skola/seminar/Coppersm...


Yes, that is correct. Generally the block Lanczos variant used in practice is Montgomery's [1], though, not Coppersmith's.

Lanczos was popular until a few years ago, as it requires slightly fewer matrix-vector multiplications, but since Wiedemann can be partially distributed it is becoming more desirable.

[1] http://link.springer.com/content/pdf/10.1007%2F3-540-49264-X...



"Most systems are not designed for cryptographic agility" but even if they were, that's yet another vector for attack. If your software is configurable in some way around cryptography (new algorithms in a shared library, new configuration files describing padding or negotiation methods), the the attacker just has to reconfigure your device. When you distribute those changes, they have to be signed; with a configuration currently supported by the app; which probably involves the weak or broken algorithm you're trying to replace.

The Problem® as I see it is one of computational power. Everyone carries mobile devices. Everyone expects their mobile devices to be secure. But everyone appears to be more interested in the perceived speed of their devices. How do we make most algorithms more secure? Larger keys, or some other form of increased work for attackers. That slows down smartphones and now people are having a Bad User Experience™ and well, we can't have that.

It seems like every time someone makes security more user-friendly, they necessarily introduce weaknesses into the entire system. I gave up on trying to solve these kinds of problems long ago when I realized I was terrible at it.


For the most part, the concerns over breaking crypto are related to breaking public key cryptography. Because of its relative slowness, public key crypto is generally used for key exchange before the devices start using symmetric cryptography. Symmetric crypto has its own concerns, because we have no theoretical foundation for believing it is secure other than the fact that we have yet to break it. At least RSA is backed by factoring, which we have spent much longer trying to break (and also haven't proven to be secure).


Nitpick: RSA isn't known to be backed by factoring. More precisely, there is no proof that if textbook-RSA encrypted messages could be decrypted quickly that integer factorization could also be done quickly. (Of course if you can factor integers you can decrypt RSA messages quickly).

http://en.wikipedia.org/wiki/RSA_problem

There are encryption schemes where decryption of an encrypted message is provably equivalent to factoring.

http://en.wikipedia.org/wiki/Rabin_encryption
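To make the known direction concrete (factoring n does break RSA), a toy sketch with textbook-sized numbers; the parameters are tiny and there is no padding, so this is illustration only:

    # Knowing the factorization of n immediately gives the private exponent.
    # The open "RSA problem" is the converse: whether recovering m from c and
    # (n, e) alone is as hard as factoring n.
    p, q = 61, 53
    n = p * q                       # public modulus
    e = 17                          # public exponent
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)             # modular inverse; needs Python 3.8+

    m = 42
    c = pow(m, e, n)                # encrypt
    assert pow(c, d, n) == m        # factoring n let us decrypt
    print("recovered plaintext:", pow(c, d, n))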


An interesting analogy is post-quantum cryptography, which gets to more or less the same place (can't depend on factoring being hard) by a somewhat different path.

There's a nice wikipedia category for post quantum crypto.

http://en.wikipedia.org/wiki/Category:Post-quantum_cryptogra...

The slideshow (which I paged thru pretty quickly) seemed to be a good general coverage of the whole situation, followed by great detail on exactly one solution, ECC. There are also lattice-, hash-, and multivariate-based approaches. It's too bad the slides were so general and then suddenly laser-focused. I realize there are time limits, etc.

There are also system-based issues. What I mean is, let's say you decide to implement a signing-only hash algo as a "replacement" for a pure encryption algo which is now magically broken. Well, that requires major system changes to move from an encryption system to a signing system to accomplish the same meat-level goal. It's not as simple as search-and-replace on library names. Which is an example of the old saying that crypto is hard.


The author's argument is that the probability of an impact on ECC from this line of research is less than the probability of an impact on factoring. Post-quantum cryptography has to avoid ECC, since the elliptic curve discrete log problem is easily solved by quantum computers.

Regarding the possible solutions other than ECC: There are no hash-based encryption schemes, only hash based signatures. I believe there is little faith in any multivariate scheme to remain secure under serious scrutiny. Many such schemes have been broken. The outlook on lattice cryptography, and some coding theory based schemes is slightly more positive. Apparently ECC has received much more attention and study than either of these.


Question for the practitioners: is there a hardware token like the yubikey neo, Openpgp card, gemalto.net etc that can do ECC signing or encryption? In some countries, fips140 hardware is required for legal filings, but I've only found RSA tokens.


I agree with pushing elliptic curves, but I don't like the way they get there.

First, some historical corrections: the first L[1/2] algorithm for either factorization or discrete log was CFRAC, presented by Brillhart and Morrison in 1974 [1]. This was the first time an L[1/2] algorithm was formalized: descriptions of this basic method go back to the 1930s and the work of Lehmer and Powers [2] and Kraitchik. I presume 1979 refers to Adleman's subexponential discrete log paper [3].

1984 saw Coppersmith's L[1/3] algorithm for discrete logarithms in GF(2^n) [4]. The trick was that, in GF(2^n), the freshman's dream is true: A^2 + B^2 = (A + B)^2. This makes the polynomials we need to check for smoothness much smaller (for appropriately chosen polynomials), so much so that the overall asymptotic running time decreases.
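One way to see the freshman's dream concretely: represent polynomials over GF(2) as bit strings, with XOR as addition and carry-less multiplication as the product; the 2*A*B cross term vanishes in characteristic 2, even before reducing modulo the field polynomial. A toy sketch of my own, not taken from the cited papers:

    import random

    # Polynomials over GF(2) encoded as ints: bit i is the coefficient of x^i.
    # Addition is XOR; multiplication is carry-less.
    def clmul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    random.seed(1)
    A, B = random.getrandbits(64), random.getrandbits(64)

    # (A + B)^2 == A^2 + B^2: the cross term cancels in characteristic 2.
    assert clmul(A ^ B, A ^ B) == clmul(A, A) ^ clmul(B, B)
    print("freshman's dream holds in GF(2)[x]")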

Then in 1990 the number field sieve [5] appears for factorization, also at L[1/3]. You'd think that this had something to do with 1984's Coppersmith, right? Wrong. The number field sieve came from independent efforts, started by ElGamal [6], Coppersmith et al [7], and Pollard [8].

So you can see that Coppersmith's GF(2^n) trick never factored into the number field sieve. The function field sieve [9] is, however, a continuation of Coppersmith's effort in fields of small characteristic (e.g., GF(2^n), GF(3^n), etc). There are many improvements over the FFS that I'm going to gloss over now. More recently, this work has been continued by Joux and friends [10,11,12,13], who have basically annihilated sieving for some specific fields, and most of the time is spent on individual logs. Note that even for small characteristic, where such algorithms apply, well-chosen fields will still be fairly secure: for example GF(2^2039) still has about 77 bits of security, similar to RSA-1024.

There is no reason to believe that the tricks used by Joux carry over to the integers or GF(p), as the 1984 trick never carried over either. We might see an improvement to integer factorization soon, but I seriously doubt these tricks will be at its origin. My guess is that RSA will get a long, drawn-out death like Rivest's other brainchild, RC4.

There are plenty of reasons to prefer elliptic curves over finite fields. Elliptic curves are smaller, faster, safer, and shinier. But as in the finite field case, there are also classes of curves that are cheaper to attack. Did you know that all elliptic curves over GF(2^n) can be solved in subexponential time [14]? Let's move to NTRU! This kind of scare tactic is unproductive.

[1] http://www.ams.org/journals/mcom/1975-29-129/S0025-5718-1975...

[2] http://www.ams.org/journals/bull/1931-37-10/S0002-9904-1931-...

[3] http://cr.yp.to/bib/1979/adleman.html

[4] http://www.enseignement.polytechnique.fr/profs/informatique/...

[5] http://www.iai.uni-bonn.de/~adrian/nfs/lenstra90number.pdf

[6] http://link.springer.com/chapter/10.1007%2F978-1-4684-4730-9...

[7] http://cr.yp.to/bib/1986/coppersmith.html

[8] http://link.springer.com/content/pdf/10.1007/BFb0091536.pdf

[9] http://cr.yp.to/bib/1994/adleman-ffs.html

[10] http://eprint.iacr.org/2012/720

[11] http://eprint.iacr.org/2013/074

[12] http://eprint.iacr.org/2013/095

[13] http://eprint.iacr.org/2013/400

[14] http://eprint.iacr.org/2012/146


This is a self-evidently excellent comment, and thank you for posting it. I want to make a couple clarifications:

* From the conversations I've had with cryptographers, there does seem to be a sort of consensus that Joux's DLP work is unlikely to result in a viable attack on RSA any time soon, or, for that matter, on any mainstream DLP scheme.

* But: that isn't a uniformly held opinion. Dan Boneh is an example of someone who seems to believe the line of research Joux is pursuing is going to harm RSA "sooner" rather than "later".

* As you note: RSA isn't looking sturdy today. I think the RC4 example is a good one. The recent attacks on RC4 weren't based on a newly discovered vulnerability, but rather on a more diligent effort to exploit an old one. I'm not convinced those attacks appeared in 2013 for any better reason than that, until then, nobody had a strong reason to investigate them.

* A (say) 10-year time scale is still relatively short! If we can reasonably predict material weaknesses in a cryptosystem 10 years out, we should replace it now. We don't generally choose to work with cryptosystems that offer us a margin of just 10 years.

Finally, and most importantly: please understand that the audience at Black Hat isn't cryptographically literate. There are a couple crypto people that show up every year, but the attendees at Black Hat generally bring very little background in the subject with them. So, I think there's value in distilling subtle messages down to actionable information for that audience. A subset of the people in the audience at Black Hat are at some point going to try to develop their own cryptosystems, and they're much more likely to do that by trying to hack together RSA than they are ECC. The less attractive RSA looks to them, the better.

Thanks again for the corrective comment, though. I sort of wish Tom and Javed could have had you up on stage debating them. :)


Not cryptographically literate, and hacking together cryptosystems? This can only end badly.


Does this help to explain xKeyScore?

If there have been major mathematical advances in the last 6 months, what is the chance that mathematicians at the NSA made those advances 2 years ago and didn't reveal them? (How many good mathematicians know colleagues who 'disappeared' into a career with the spooks? I certainly know of some.) If so, what's the chance that somewhere in the billion-dollar budgets they found enough money to turn the advances into practical attacks? And if such practical attacks exist, wouldn't that have made it possible to turn a network-level tap (such as has been revealed) into a usable 'full take' system that bypasses the 'going dark' problem, given the prevalence of RSA-based approaches to cryptography on the net today?


The slide deck has an impressive list of authors, but slide 55 (about TLS and ECC) appears to be misleading without the context of the talk itself.

I didn't see the talk, so I don't know what they were aiming at, but ECC works the same in TLS 1.0, 1.1 and 1.2. Hybrid chains (i.e. an ECDSA public key in a certificate, signed by an RSA certificate) work fine too.

(Not all implementations of TLS support ECC of course.)


It's my understanding (again, I endorse this talk but had nothing to do with its preparation) that they addressed this on stage, just for whatever it's worth.



I learnt a little about ECC to make my bitcoin-key-for-encryption-gravatar-as-key-server proof of concept http://kybernetikos.github.io/VisualSecrecy/

I really love the way that you can visualise what is actually going on by drawing lines on a graph. Factoring is much less visual for me. It just seems very fun that you have a private factor, a public point and, when doing encryption, you have a shared point which is a literal point on a curve.
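If anyone wants to poke at that intuition in code, here is a toy sketch of the group law and a Diffie-Hellman exchange over the textbook curve y^2 = x^3 + 2x + 2 mod 17 (far too small for real use; needs Python 3.8+ for pow(x, -1, m)):

    P_MOD, A_COEF = 17, 2
    G = (5, 1)          # base point on y^2 = x^3 + 2x + 2 mod 17
    INF = None          # point at infinity

    def add(P, Q):
        if P is INF: return Q
        if Q is INF: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % P_MOD == 0:
            return INF
        if P == Q:
            lam = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P_MOD) % P_MOD
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
        x3 = (lam * lam - x1 - x2) % P_MOD
        return (x3, (lam * (x1 - x3) - y1) % P_MOD)

    def mul(k, P):
        R = INF
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    a, b = 3, 7                              # private scalars
    A_pub, B_pub = mul(a, G), mul(b, G)      # public points
    shared = mul(a, B_pub)                   # literally a point on the curve
    assert shared == mul(b, A_pub)
    print("shared point:", shared)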


Regarding DNSSEC, it should be noted that ECDSA was specified in April 2012 (RFC 6605) and resolver support started with Unbound 1.4.17 (May 2012) and BIND 9.9.2 (Oct 2012). So this was very recent and the percentage of resolvers on the Internet which have already upgraded to an ECDSA-enabled release is so small that it is absolutely pointless to use ECDSA with DNSSEC now. Most of the resolvers won't recognize the algorithm and will treat the zone contents as unsigned.

Regarding OpenSSL, I tested ECC with S/MIME in 2009 and it turned out to be unsupported. Just tried it again with 1.0.1e and it's still not supported. It works for TLS, but the problem is that vendors ship outdated releases: Debian stable comes with 1.0.1 which is good, RHEL comes with 1.0.0 which is not so good (no TLS 1.2), SLES comes with 0.9.8j (even in the recently released SLES 11 SP3) which is very bad.

So support for ECC is still very disappointing and far away from being usable in production.


The argument here is approximately this: as solving jigsaw puzzles has been dramatically improved recently, you never know, maybe manufacture of weapons of mass destruction will suddenly get really easy overnight, resulting in an apocalypse.

What Joux and others have done is both important and impressive. But it simply isn't the same as breaking secure cryptosystems. The analogy between the two is simply overplayed.

Did people have reason to believe that the systems Joux and co. can crack were hard? No, despite not knowing how to crack them. Do people believe that their more secure cousins are hard? Yes, they do.


That's a weird way to put it, because Joux and Barbulescu's DLP work has already threatened some real (but exotic) cryptosystems (hasn't it?).


Kinda late here, and unqualified to comment on math - just wanted to mention, in case ECC and the others eventually are compromised - there is something to fall back on. The "one time pad" method "has been proven to be impossible to crack if used correctly" (https://en.wikipedia.org/wiki/One-time_pad). Given a side channel that's secure for the purpose, I expect there could be a software implementation that would be usable indefinitely.
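For what it's worth, a minimal sketch of what a correctly used pad looks like; the XOR is trivial, and everything hard lives in the comments:

    # The pad must be truly random, at least as long as the message,
    # pre-shared over a secure channel, and never reused.
    import os

    def otp_xor(data, pad):
        assert len(pad) >= len(data), "pad must be at least as long as the message"
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"attack at dawn"
    pad = os.urandom(len(message))   # stand-in for a pre-shared, truly random pad

    ciphertext = otp_xor(message, pad)
    assert otp_xor(ciphertext, pad) == message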


Most systems that attempt to implement one time pads are just stream ciphers built from crappy random number generators; they're crappier versions of RC4. Real one time pads aren't particularly useful (which is why they're rarely used): they require keys as long as their plaintexts.

Schneier calls them "pretty much useless": http://www.schneier.com/crypto-gram-0210.html#7

They are, at any rate, a sturdy indicator of snake oil crypto.


If you are an organisation with deep pockets (a nation state) and are willing to send trusted (armed, diplomatic) couriers to every endpoint you want to talk securely with (embassies) every month, then it can be secure, but it will definitely be expensive.


Unfortunately it is difficult for the receiver to know the key. I don't know how this would be solved, but it's quite the conundrum.


Question for those in the know: have there been any advances on DLP in Galois fields generated by primitive polynomials?


Slide 5. Why can no one spell "Appelbaum"?



