That was an amusing read. The big takeaways are even deliciously quotable:
The irony is that the feature designed to bring more security was the one that completely broke it.
Too much complexity: having lots of blocks that say “AES” and “RSA” in your plan might impress the boss, but it just adds to the attack surface. Always go with the simplest plan that secures against your threat model.
Security is hard to get right, and retrofitting security features into existing systems is rife with very subtle traps. Add backwards compatibility into the mix and you've probably created a fragile Frankenstein.
A fact that needs to be remembered when people ask for centralized key escrows and gratuitous backdoors in encryption systems. I know this gets mentioned twice a week nowadays on HN, but it can't be repeated enough.
i suppose a handheld gaming device could be considered low consequence on the spectrum of encryption uses, which may simply answer my own question..
but being someone interested in factorisation i am baffled as to why anyone has used or is using rsa
my intended inference is that factoring numbers is deterministically achievable in sub polynomial time, and having your crypto rely on 'it's hard for me it must be impossible for you' reads so sophomoric
The 3DS is higher stakes than virtually any other cryptosystem. Nintendo spends billions bringing it to market. Game consoles live or die based on third-party developer support, and one of the important promises console makers offer to entice developers to their platforms is that they will carefully steward those developers' IP.
What's more interesting to me is that the 3DS system is nonrenewable. As the author of this post correctly points out, all DRM systems inevitably fall, but they accomplish their commercial purpose as long as that fall happens far enough into the future that the developers make their money. Smart vendors employing DRM this way build in, from the beginning, mechanisms to refresh and re-complicate their DRM systems on a title-by-title basis.
I actually agree with you "in the large" about RSA (factoring doesn't seem like a hard enough problem anymore), but in the particulars, nobody is breaking RSA-2048 any time soon. Unlike strong curves, though, it seems plausible that RSA-2048 (and beyond) could fall even before quantum computing arrives.
First, the 3DS mostly uses RSA for code signing, something I didn't really touch on since there are no serious mistakes there. Second, while RSA does not fare well against quantum attacks, as of today there is still no way of factoring a 2048-bit key that would take less than (being generous) hundreds of years, assuming no quantum computers. So if you have proof that it's achievable in polynomial time today, please email me ;)
But turning the question around, what do you think we should do to achieve asymmetric encryption? If you don't want it to break with Shor, then RSA and ECDSA are off the table. If you want small message size, you can't use Lamport.
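To make the size point concrete, here's a minimal Lamport one-time signature sketch (my own toy illustration, nothing from the 3DS): it's hash-based, so Shor doesn't apply, but one signature over a 256-bit digest already weighs in at 8 KB, versus ~256 bytes for an RSA-2048 signature.

    import hashlib, os

    # Toy Lamport OTS: two 32-byte secrets per digest bit; signing reveals
    # one preimage per bit, so a signature is 256 * 32 = 8192 bytes.
    def keygen():
        sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
        pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
        return sk, pk

    def sign(sk, msg):
        digest = hashlib.sha256(msg).digest()
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
        return [sk[i][b] for i, b in enumerate(bits)]

    sk, pk = keygen()
    sig = sign(sk, b"hello")
    print(sum(len(chunk) for chunk in sig), "bytes per signature")  # 8192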
if i had that proof an email would be unnecessary, you'd already know :p
to address your question i'll say this: crypto is only interesting to me for securing my own work, and because my interests are in number theory, which, due to rsa's popularity, would incidentally affect crypto
i lack an intimate familiarity with the optimisation requirements of rolling production crypto, meaning i can comfortably ignore any message size constraints for my personal uses
i wasn't implying that i know how crypto should be done.. with my current knowledge and interests i would be wholly unable to deduce what you did in your article.. i was simply stating that i am interested in factoring large integers and i wonder why anyone thinks open problems in number theory are a sound means of crypto
> i wonder why anyone thinks open problems in number theory are a sound means of crypto
The short answer is that the numbers are so large that with current state-of-the-art methods (which I'm sure you know a lot about), factoring a 2048-bit number takes around 1000 years, and a 4096-bit key takes about 2^32 times longer. Even as computers get faster decade over decade, you can see that this won't change the picture much. That's why we use number factoring for crypto: because it takes so long to break.
However, as you've pointed out, since the problems are open, those estimates come with no guarantees. And of course all this analysis goes out the window with quantum computers. But the unfortunate truth (the "biggest embarrassment of computer science", as Dan Boneh calls it) is that we don't actually have any proof that security exists. All crypto is built on open problems. So it's best to choose problems that have been open for a long time.
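If you want to sanity-check those orders of magnitude yourself, here's a back-of-the-envelope sketch (mine, nothing authoritative) using the heuristic GNFS running time L_N[1/3, (64/9)^(1/3)]. I drop the o(1) term, so the absolute numbers are rough; only the ratios between key sizes mean much:

    import math

    # Heuristic GNFS cost: exp((c + o(1)) * (ln N)^(1/3) * (ln ln N)^(2/3)),
    # with c = (64/9)^(1/3). The o(1) term is dropped here.
    def gnfs_log2_cost(bits):
        ln_n = bits * math.log(2)  # ln N for an n-bit modulus
        c = (64 / 9) ** (1 / 3)
        return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

    for bits in (1024, 2048, 4096):
        print("RSA-%d: ~2^%.0f operations" % (bits, gnfs_log2_cost(bits)))

    # Prints roughly 2^87, 2^117, 2^157: going from 2048 to 4096 bits costs
    # about 2^40 times more by this estimate, the same ballpark as the 2^32
    # figure above.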
I wasn't referring to factoring. I quite like lattice-based and ECC systems. I was referring to 'if it's hard for me it must be hard for you'. ALL cryptography is based on that assumption.
edit: I can't seem to reply to you again, so I'll edit. This is all a big miscommunication. I was more confused than hostile, though I recognize how it came across. I genuinely thought he had some fundamental misunderstandings, given the combination of things he said. I was mistaken, and my response came off poorly.
That's a reading of the original comment so uncharitable as to seem tendentious. The commenter was specific about factoring; they were not calling into question the entire idea of a trapdoor function.
The original poster probably meant to write "sub-exponential". The commenter to whom you're responding may or may not have known what the correct term was; we'll never know, because they chose snarky incivility instead of correction.
> factoring numbers is deterministically achievable in sub polynomial time
Am I missing something? I thought the whole point of using the difficulty of integer factorization as the basis of things like RSA was that it's not known to be solvable in polynomial time. Whether P == NP is one of the great unsolved problems of computer science.
I think that the state of the art is that we can factor in sub-exponential time.
You may not mean it this way, but for any reader who might be mistaken:
P == NP => integer factorization can be done in polynomial time on a classical computer, but not vice versa.
In fact, many people believe P != NP && integer factorization can be done in polynomial time, since FACTOR lies in an awkward space in the intersection of NP and co-NP. (Most other problems in this space are also in P, but we think P != NP, so...)
Now of course, if you put quantum computers into the mix, Shor's algorithm can break both RSA and ECDSA in polynomial time.
By "things like RSA", you pretty just much mean "RSA". A lot of progress was made on integer factorization over the last 35 years, so that key sizes we'd have thought totally reasonable in 1995 are totally inadequate today. That is not similarly true of non-factoring systems, and so it's legit to point out the relative weakness of IFP crypto compared to other approaches.
> key sizes we'd have thought totally reasonable in 1995 are totally inadequate today.
Not really. The last big jump in complexity was in 1991, when the first 512-bit modulus, 2^512+1, was factored by NFS [1]. What has progressed the most, by far, has been the available computational power.
The last big theoretical jump in complexity was NFS, but is your argument that implementations of NFS haven't much improved, apart from the raw computational power of the same sort available to general-purpose computers (i.e., non-vectorized instruction sets)?
You know much more than I do about this stuff, but I think I'm prepared to challenge you on this. :)
NFS implementations have certainly improved, and certain sub-algorithms as well, e.g. lattice sieving and polynomial selection. But these things don't significantly tilt the exponent you expect in a factorization. As in, you expected around 2^80 "operations" for RSA-1024 back in 1995, and that's still roughly what you expect today.
The practical NFS improvements since then might bring that down to 2^78 or even 2^72---and that's a lot of money saved---but when you're choosing a key size it matters far less than the fact that 2^80 computation is reasonably practical today, when it was unthinkable in 1995.
I take your point. ECC probably wouldn't cost Nintendo more to implement, but it would provide a system that probably isn't being attacked as successfully, hopefully buying more time on the market before it's cracked.
It didn't make a huge difference here, I should be clear. I don't like RSA, and my dislike for it comes almost entirely from something orthogonal to key sizes --- public key systems provide one or more of three primitives: direct encryption, signature verification, and key exchange; RSA provides all three, and the direct encryption primitive is a giant foot cannon.
If the original commenter's point was that RSA directly harmed the 3DS in some fundamental way, that would be a legit thing to push back on, I think.
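To make the foot cannon concrete: textbook RSA encryption is malleable, because E(m1) * E(m2) = E(m1 * m2) mod n. Here's a toy demonstration (my own, with deliberately tiny parameters; real RSA encryption needs OAEP padding and large keys):

    # Tiny schoolbook RSA, small enough to read at a glance (Python 3.8+).
    p, q, e = 61, 53, 17
    n = p * q                          # public modulus
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

    m1, m2 = 42, 2
    c1 = pow(m1, e, n)        # ciphertext the attacker intercepts
    c2 = pow(m2, e, n)        # ciphertext the attacker computes themselves
    forged = (c1 * c2) % n    # forged without ever touching the private key
    assert pow(forged, d, n) == (m1 * m2) % n
    print("forged ciphertext decrypts to", pow(forged, d, n))  # 84, i.e. 2 * m1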
Apologies. I mistook your typo, combined with the odd phrasing about differing difficulties, as implying that you really thought factorization could be done in subpolynomial time by some actors (i.e., the NSA or the like), as opposed to it being a misunderstanding.
I should have phrased my response differently regardless.
> deterministically achievable in sub polynomial time
it's already been stated that 'sub polynomial' was a typo; the original sentence discussed the subexp state of the art, but i changed it to discuss my research instead and failed to remove the 'sub'
'deterministic' can also fairly be called out as redundant when paired with polynomial time, since it's implied by the latter, but i wanted to be explicit
> my intended inference
inference is defined as 'a conclusion reached on the basis of evidence and reasoning'
this is a phrasing i use in place of faith-based constructions (i think, i believe, etc.) to establish a respect for evidence-based conclusions
part of my research aims at a polynomial time algo for integer factorisation, but before i conclude that such a thing exists i require evidence, hence it being the intended inference of my research instead of a direct inference
> 'it's hard for me it must be impossible for you'
this is what you will hear very often from many in the know if you tell them you have interest in a poly factorisation algo, even in spite of the problem being open
except from one of the people who fanned all this interest:
> Richard Karp (Berkeley, unsure, P=/=NP): My intuitive belief is that P is unequal to NP, but the only supporting arguments I can offer are the failure of all efforts to place specific NP-complete problems in P by constructing polynomial-time algorithms. I believe that the traditional proof techniques will not suffice. Something entirely novel will be required. My hunch is that the problem will be solved by a young researcher who is not encumbered by too much conventional wisdom about how to attack the problem.
I remember that paper. I believe P=/=NP, but man, it would be amazing what you could accomplish if they were equal! (For example, you could answer in polynomial time whether a proof of a sentence S exists with N symbols or fewer. For sufficiently large N, that's essentially a universal truth tester for a given theory T! Incredible!)
As a simplification, I did not draw any of the blocks contributing to code signing. RSA is used everywhere to ensure that the code is signed. If you consider the signature as the MAC, it does "Authenticate Then Encrypt" (the signature is over the NCCH or FIRM header. The header contains hash + size of each section.)
Help me understand a bit more about how a bogus key was able to generate a gibberish text segment that code could successfully return into if all program text was protected by RSA signatures. I may have misread the article.
So we have FIRM, the main firmware image. It has three segments: 1) main ARM11 kernel modules, 2) ARM11 kernel, 3) ARM9 kernel/ARM9 process. The FIRM header has a SHA256 hash over all the segments and the size of all segments. The FIRM header also has a RSA2048 signature over the header. Then everything (sig + header + FIRM) is encrypted with AES-CTR and placed on the NAND.
On system start, the whole chunk is decrypted, the signature is verified, and everything works as expected. Until, in the New 3DS, they decided to additionally encrypt segment 3 (the ARM9 stuff) with a separate key on the NAND. That's what led to the whole mess. So I guess their mistaken assumption was that since FIRM was signed, the encrypted ARM9 section was protected. However, they didn't take into account the fact that the key used to decrypt it can be corrupted. It's a bit subtle.
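If it helps, here's a self-contained toy model of that subtlety (my own construction with an XOR stand-in for AES-CTR, not actual 3DS code). The signed hash covers the encrypted segment, while the key itself is authenticated by nothing, so a corrupted key produces plaintext that no signature ever covered:

    import hashlib

    def xor_cipher(key, data):  # stand-in for AES-CTR
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    good_key = b"\x11" * 16
    arm9_plain = b"legitimate ARM9 code"
    arm9_stored = xor_cipher(good_key, arm9_plain)  # what sits on NAND

    # The FIRM header (RSA-signed in the real system) authenticates this:
    signed_hash = hashlib.sha256(arm9_stored).digest()

    # Boot time, after an attacker has corrupted the key material:
    bad_key = b"\x22" * 16
    assert hashlib.sha256(arm9_stored).digest() == signed_hash  # still passes!
    executed = xor_cipher(bad_key, arm9_stored)  # garbage, never re-verified
    print(executed != arm9_plain)  # True: we "boot" bytes nothing ever signed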
I follow. But, if they'd encrypted every segment with AES-OCB or AES-GCM, rather than raw AES-CTR and then relying on the RSA signature of a header with hashes, this bug wouldn't have been possible, right?
(I get that there are multiple ways to break a bug. :)
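For concreteness, here's a quick sketch of the idea using the Python cryptography package (my example, obviously not the 3DS hardware engine): with an AEAD mode like AES-GCM, decrypting under a corrupted key fails closed instead of handing back executable garbage.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    good_key, bad_key = AESGCM.generate_key(256), AESGCM.generate_key(256)
    nonce = os.urandom(12)
    ct = AESGCM(good_key).encrypt(nonce, b"ARM9 segment", None)

    try:
        AESGCM(bad_key).decrypt(nonce, ct, None)  # decrypt with corrupted key
    except InvalidTag:
        print("tag check failed, nothing executes")  # vs. CTR's silent garbage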
Of course, in a perfect world we would all be doing that. However, there is a tradeoff of boot-time/power consumption/hardware complexity (the hw AES engine only does CCM/CTR/CBC). Given their track record though, I wouldn't be surprised if things still broke if they used GCM. I think the main takeaway isn't that there's a couple of things they could have done differently to prevent this (which is always true in hindsight), but that certain major design decisions made such bugs inevitable.
A funny thing about the key generator. There was originally a project within the community to raise the funds to do a successful decap of the AES engine and key generator.
They successfully raised the $2000 before the person claiming to do a decap (Jl12) stole all the money and disappeared off the face of the earth.
Another illustration of the classic thesis "security should be built from the ground up and can't be added later". Also shows the importance of minimizing the attack surface. Would love to hear some inside stories about discussions that ultimately led to these implementation decisions.
It was broken much, much earlier. At first you needed an external device (a special cartridge) or hardware mods, but there has been a software exploit via the web browser for over a year now.
In addition to being broken earlier, it's been trivial for even a casual user to pick up the exploits and use them to run arbitrary software on the device. There's a long-running website out there you can load in the 3DS web browser to exploit your device with a single click.
Well, the first break that brought piracy happened about two years after launch. Also, the Vita, which launched around the same time, is still pretty much unhacked. It's a shame there aren't enough good games on it for it to be a commercial success. In fact, the Vita does so many things right (security-wise) that the 3DS fails at.