The original response about this type of issue [1] rubs me the wrong way.
In particular, this statement:
> That being said, we were surprised by Ledger’s announcement of this issue, especially after being explicitly asked by Ledger not to publicize the issue, due to possible implications for the whole microchip industry, beyond hardware wallets, such as the medical and automotive industries.
As I understand it, they are using a standard STM32 chip for these wallets and relying on its basic protection. Companies make real processors designed for securely storing data, so why aren't they using one? Instead they are suggesting that there is no alternative and that everyone is vulnerable to this style of attack.
Edit: I missed some of the backstory. They don't mention that option because their competitor (who found the security issues) already uses a secure element, like a sane person.
To me, this is a full admission of a complete lack of security competency. Building a hardware wallet without using a smart card or some other secure element that at least has mitigations against voltage/clock glitching, detects light, reduces the ability to measure power consumption, etc. is negligent.
Either they don't know how to design secure solutions, or they wanted to use cheaper chips since tamper-resistant chips cost more. Neither is a good look.
As most on HN know, if anyone has physical access it's game over. So when I read "critical flaw" in the title, to me that meant remote key extraction (or a similar remote flaw), and since there's nothing remote about this, I consider it a clickbait article. No, what has been written up is not "critical".
Physical, in-person key extraction after literally opening up a piece of hardware and glitching its exposed innards isn't a "critical flaw"; it's the baseline expectation.
I would rate the issue raised in the article as "not a bug, won't fix," with the explanation "Physical key extraction will always be possible regardless of anything we do."
Or are people here claiming that their "better" competitors (who are using "better" hardware, more "correctly") are immune from physical attacks?
EDIT: I am keeping this even if it gets voted to -4. I don't believe a physical, local (in person) glitching attack on the innards of a device, which requires physical access and opening it, constitutes a "critical" vulnerability on a hardware cryptographic device.
IMO there's a huge difference between invasive and non-invasive attacks. I would expect something that bills itself as "The safe place for your coins" to require a bit more effort, know-how, and tools to read out my keys than "a couple hundred dollars of equipment" and a python program.
> if anyone has physical access it's game over
It's actually not, when you use a series of common defenses that wipe the chip when tampering is detected. Of course it's still possible to determine the private keys via perfectly executed microprobing, but there's a huge difference here. Invasive attacks require significant time in very expensive laboratories per attack, and they may well fail.
Let's say you managed to steal my wallet, which leverages a secure element with tamper protection. If you're unaware that voltage/clock glitching will wipe the device, you may try it, and then you've lost. But let's say you're aware, so you want to go the microprobing route. Do you have the necessary lasers and acids to get directly to the circuitry you want to read out without accidentally compromising the integrity of the top-layer sensor meshes? Do you possess a focused ion beam station (only costs ~500k USD)? By using this mesh I've made the extraction significantly more tedious for you, requiring far higher levels of precision. You've got my smart card, but I wouldn't call it "game over" by any means. Maybe in that amount of time I'll have figured out that my wallet is missing.
This attack on the Trezor, though, requires physical access but can be automated. Here, physical access really is game over. I would rate this issue as "Trezor shows themselves to be an inferior solution, will not use to store my keys."
Interesting comment, thanks. What would you say about my other question: how good are tamper evident seals? For example, would a tamper evident seal on the enclosure show visually whether it has been opened (say, to exploit the flaw this article is about), or are tamper evident seals easy to get around or re-apply undetected?
Trezor's security features list [0] mentions firmware verification, JTAG, and welding, strongly implying that it intends at least some resistance to physical attack. This is not uncommon for hardware cryptography modules. Since 2001, the federal government has had a certification program, FIPS 140-2 [1], recognizing four different levels of physical attack resistance.
The security engineering industry is very interested in the capability to physically ship secrets to potentially hostile actors inside devices that limit their use or duplication. There are many, many applications:
- Payment cards: EMV credit/debit, transit, laundry, parking, prepaid electric meters, etc.
- DRM: Widevine for Netflix, DCP for your local movie theater, anti-piracy and anti-cheat in your Xbox.
- Privacy: the iPhone's Secure Enclave only decrypts user data given the right PIN, rate limits or caps attempts, and resists extraction of the private key, much to the FBI's disappointment.
- Root of trust: enterprise HSMs for PKI will only enable signing operations with their internal private keys after the presentation of a quorum of operator credentials [2].
Ross Anderson's Security Engineering has a great chapter on this [3].
How extreme is decapping? How much equipment and how much time does it require? How often does it destroy the chip or cause damage? How obvious is it that it occurred?
While we're at it, on the last question, tamper evidence: how good are tamper evident seals? For example, would a tamper evident seal on the enclosure show visually whether it has been opened (say, to exploit the flaw this article is about), or are tamper evident seals easy to get around or re-apply undetected?
Decapping requires dissolving the plastic with acid. Attacking the chip from there is typically done in a Focused Ion Beam workstation (about $500k), and the risk of destroying the chip depends on too many factors. Some chips have photosensitive elements that generate just enough voltage to wipe their memory if they're exposed to light via decapping.
Tamper evident seals can often be defeated with nothing more than a PTFE (Teflon) knife made from shim stock.
Trezor is designed to protect against remote/logical attacks (including a compromised host). It isn't really hardware protected in any meaningful way against local access. This lets users inspect/validate their own hardware better, though.
The issue is most users (reasonably, IMO) assume physical protection for their hardware wallets, at least against someone getting temporary access and without insane levels of resources. That is fairly safe using a Ledger today (barring an undisclosed vuln); that's why I think the Ledgers are somewhat better.
I would definitely pick Coldcard over Ledger though.
Coldcard is open source and open hardware to a much greater extent, while still using a secure element for secret storage and the PIN counter. It also offers advanced security features like proper multisig support, airgapped operation, roll-your-dice entropy input, etc.
The attack doesn't work if you are using a passphrase. I'm not sure why they let people use a PIN in the first place, but you should never be using a PIN instead of a passphrase.
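For anyone wondering why the passphrase defeats this class of attack: under BIP-39 the wallet seed is derived from the mnemonic *and* the passphrase, so the words extracted from the chip's flash are not sufficient on their own. A minimal sketch using only the Python standard library (the mnemonic is just an illustrative example phrase; NFKD normalization is omitted since the input is ASCII):

```python
import hashlib

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # BIP-39: seed = PBKDF2-HMAC-SHA512(mnemonic, "mnemonic" + passphrase,
    # 2048 iterations, 64-byte output)
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,
        dklen=64,
    )

words = "legal winner thank year wave sausage worth useful legal winner thank yellow"
print(bip39_seed(words).hex()[:16])             # seed with no passphrase
print(bip39_seed(words, "hunter2").hex()[:16])  # an entirely different seed
```

Extracting the mnemonic from flash only gets the attacker the first seed; without the passphrase, which is never stored on the device, the funds stay out of reach.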
Why don't all silicon chips have glitch and overvoltage detection?
It would seem very easy to put a pair of FETs in such a way that they detect sudden voltage changes (via their gate capacitance). That could then be used as an input to a circuit that ensures the chip is properly reset by asserting the reset line for at least one clock cycle.
This should probably be paired with brown-out detection, although that's power hungry, so I can see why people might not want it.
This wouldn't only have security benefits - lots of electronic designs might be accidentally glitching their microcontrollers due to poor design of other circuits, and having the chip reset in a predictable way is much better than undefined behaviour.
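To make the intended behavior concrete, here's a hypothetical software model of the detector described above: just the decision logic, not a circuit, and every threshold is made up for illustration:

```python
def reset_requests(samples, dv_threshold=0.3, vmin=2.7, vmax=3.6):
    """Yield True for each supply-voltage sample where reset should be asserted."""
    prev = samples[0]
    for v in samples:
        glitch = abs(v - prev) > dv_threshold   # sudden step, as the FET pair would sense
        out_of_range = not (vmin <= v <= vmax)  # brown-out / overvoltage window
        yield glitch or out_of_range
        prev = v

trace = [3.3, 3.3, 3.31, 2.1, 3.3, 3.3, 4.2, 3.3]  # two injected glitches
print(list(reset_requests(trace)))
```

Note that the model also fires on the recovery edge after a glitch, which is harmless here since reset is the safe state; in silicon that is exactly the kind of analog edge case the reply below is worried about.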
> Why don't all silicon chips have glitch and overvoltage detection?
Reliability. This is basically the microchip version of Boeing's MCAS.
The circuit you describe is not only an analog circuit, but is in fact a noise amplifier. You're now shipping a chip containing a noise amplifier that drives the device-wide reset line.
What could go wrong?
The stuff you describe is very, very difficult to get right, and beast-mode insanely difficult to troubleshoot or even diagnose when it goes wrong.
It's also very sensitive to manufacturing variations. So if there is a problem with the circuit, it'll probably only affect a few batches. Which, Murphy's Law and all, will be the batches that wind up in the hands of your most important customers.
Stuff like this can bankrupt a chip company if you get it wrong, and there's no way to be sure you got it right. At most you put it in your super-high-end ultra-secure product line, so long as that line's sales are small enough that you can afford a recall.
Have SIM cards actually been tested against these vulnerabilities? The payoff per card cracked is much lower than with crypto wallets, so maybe there's just no point in trying these attacks?
Luckily he hadn't updated the firmware, so the vulnerability wasn't patched on his device; even so, it took a long time and was not easy. And like this newer vulnerability, it would have been almost impossible if he had also used a strong passphrase, as Trezor recommends.
Every wallet is physically hackable; that is why we are building Cypherock, where we introduce a second variable, location, by applying Shamir Secret Sharing to the private keys. I would love to hear the community's opinion on it: https://cypherock.com
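For readers unfamiliar with the scheme: Shamir Secret Sharing splits a secret into n shares so that any k of them reconstruct it, while fewer than k reveal nothing. A minimal sketch over a prime field, purely illustrative and not Cypherock's actual implementation:

```python
import secrets

P = 2**521 - 1  # a Mersenne prime, comfortably larger than a 256-bit key

def split(secret: int, n: int, k: int):
    # Random polynomial of degree k-1 with the secret as the constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbits(256)
shares = split(key, n=5, k=3)
assert combine(shares[:3]) == key  # any 3 of the 5 shares suffice
assert combine(shares[2:]) == key
```

An attacker who physically extracts one share from one device or location learns nothing about the key, which is the "second variable" being described.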
This may seem bad, but how many people have actually lost crypto because of this? You are more likely to have your crypto stolen while transferring it to your Trezor during setup.
Banks store private keys for their ATMs in hardware security modules (HSMs), and lots of crypto exchanges have started doing the same. One of the features is that private keys self-destruct when tampering is detected; if you have a backup, you can recover the private key. While I agree that Trezor wasn't designed with this in mind, I think it's a good idea to include this feature. I'm not sure about the size requirements, though; it might make the device significantly bigger.
A true HSM with active self-destruct needs to be constantly powered. On the other hand, for many if not most applications a typical secure smart card is completely sufficient (in fact, a typical POS card terminal stores most of its long-term secrets on a SIM-like smart card).
Somewhere in my junk parts bin is such a PCI card, bought out of a junk bin at Akihabara. It has a Mitsubishi logo clearly printed on it and archaic construction overall, was apparently marketed by NEC somehow, and its product brochure page disappeared after I mentioned it on Twitter.
It had a pair of dead AA batteries for self-destruction. I never bothered to get it working, but IIRC it was supposed to detect removal from the PCI slot and erase itself. So it's not rare or difficult.
Size requirements shouldn't be demanding, assuming it's a single-shot system. All you need is 128-256 bits of secret key material that is physically destructible (e.g. with a high-voltage spike). You then encrypt/decrypt the rest of the secrets stored on the device with this destructible key.
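A sketch of that envelope scheme in software, assuming the pyca/cryptography package; in real hardware the "destruction" would be a physical event rather than an assignment, and the variable names are just illustrative:

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The small key that would live in the tamper-reactive cell.
destructible_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
wallet_secrets = b"master seed, PIN counter state, ..."

# Bulk storage only ever holds ciphertext under the destructible key.
blob = AESGCM(destructible_key).encrypt(nonce, wallet_secrets, None)

# Tamper event: zeroize just the 32-byte key (in hardware, a voltage spike).
destructible_key = bytes(32)

try:
    AESGCM(destructible_key).decrypt(nonce, blob, None)
except InvalidTag:
    print("bulk secrets are now unrecoverable")
```

The point is that the tamper response only has to destroy 32 bytes, not the whole storage array, so the reactive element can stay tiny.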
People who lost 100% of their coins are probably wishing they had the option to buy some kind of insurance. But no, be your own bank. (Wait, don't real banks also have insurance?)
Crypto is digital cash, not digital credit. If someone steals your physical wallet, you generally aren’t getting that cash back. Can we please dispense with this kind of hyperbolic nonsense?
[1] - https://blog.trezor.io/our-response-to-ledgers-mitbitcoinexp...