Critical flaw in Trezor hardware wallets (kraken.com)
121 points by menduz on Jan 31, 2020 | 49 comments



The original response about this type of issue [1] rubs me the wrong way.

In particular this statement:

> That being said, we were surprised by Ledger’s announcement of this issue, especially after being explicitly asked by Ledger not to publicize the issue, due to possible implications for the whole microchip industry, beyond hardware wallets, such as the medical and automotive industries.

As I understand it, they are using a standard STM32 chip for these wallets and relying on its basic protection. Companies make real processors designed for securely storing secrets, so why aren't they using them? Instead, they suggest there is no alternative and that everyone is vulnerable to this style of attack.

Edit: I missed some of the backstory. They don't mention that option as their competitor (who found the security issues) already uses a secure element, like a sane person.

[1] - https://blog.trezor.io/our-response-to-ledgers-mitbitcoinexp...


To me, this is a full admission of a complete lack of security competency. Building a hardware wallet without using a smart card or some other secure element that at least has mitigations against voltage/clock glitching, detects light, reduces the ability to measure power consumption, etc. is negligent.

Either they don't know how to design secure solutions, or they wanted to use cheaper chips since tamper-resistant chips cost more. Neither is a good look.


As most on HN know, if anyone has physical access it's game over - so when I read "critical flaw" in the title, to me that meant remote key extraction (or a similar remote flaw), and since there's nothing remote about this I consider it a clickbait article. No, what has been written up is not "critical".

Physical in-person key extraction after literally opening up a piece of hardware and glitching its exposed innards isn't a "critical flaw". It's the baseline expectation.

I would rate the issue raised in the article as "not a bug, won't fix," with the explanation that "physical key extraction will always be possible regardless of anything we do."

or are people here claiming that their "better" competitors (who are using "better" hardware, more "correctly") are immune from physical attacks?

EDIT: I am keeping this even if it gets voted to -4. I don't believe a physical, local (in person) glitching attack on the innards of a device, which requires physical access and opening it, constitutes a "critical" vulnerability on a hardware cryptographic device.


IMO there's a huge difference between invasive and non-invasive attacks. I would expect something that bills itself as "The safe place for your coins" to require a bit more effort, know-how, and tools to read out my keys than "a couple hundred dollars of equipment" and a python program.

> if anyone has physical access it's game over

It's actually not when you use a series of common defenses that wipe the chip when tampering is detected. Of course it's still possible to determine the private keys via perfectly executed microprobing...but there's a huge difference here. Invasive attacks require significant time in very expensive laboratories per attack, which very well may fail.

Let's say you managed to steal my wallet, which uses a secure element with tamper protection. If you're unaware that voltage/clock glitching will wipe the device, you may try it, and then you've lost. But let's say you're aware, so you want to go the microprobing route. Do you have the necessary lasers and acids to get directly to the circuitry you want to read out without accidentally compromising the integrity of the top-layer sensor meshes? Do you possess a focused ion beam station (only ~500k USD)? That mesh makes the extraction significantly more tedious and demands far higher precision from you. You've got my smart card, but I wouldn't call it "Game Over" by any means. Maybe in that time I've figured out that my wallet is missing.

This attack on the Trezor, though, requires physical access but can be automated. Here, physical access really is game over. I would rate this issue as "Trezor shows themselves to be an inferior solution; will not use to store my keys."
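"Can be automated" is the key point: glitch attacks are usually just a brute-force search over injection timing and pulse width. As a rough illustration (not the actual Kraken tooling), the driver loop might look like the sketch below, where `try_glitch` is a hypothetical stand-in for firing the glitch hardware and checking whether the target misbehaved:

```python
import itertools

def sweep(try_glitch, offsets, widths):
    """Brute-force glitch parameter search.

    Tries every (offset, width) combination until the injected
    `try_glitch` callback reports success. In a real setup the
    callback would arm a voltage glitcher, reset the target, and
    check whether the protected read succeeded; here it is just
    a placeholder function supplied by the caller.
    """
    for offset, width in itertools.product(offsets, widths):
        if try_glitch(offset, width):
            return offset, width  # winning parameters
    return None  # exhausted the search space
```

Once the winning parameters are found for one device of a given model, the same script typically works on every other unit, which is what makes the attack cheap to repeat.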

Read here if you want to see more on techniques for readout and known countermeasures. https://www.cl.cam.ac.uk/~mgk25/sc99-tamper.pdf


Interesting comment, thanks. What would you say about my other question: how good are tamper evident seals - for example would a tamper evident seal on the enclosure show visually whether it has been opened (for example to exploit the flaw this article is about), or are tamper evident seals easy to get around or re-apply undetected?


Depends on the seal; most are not particularly strong and the rest require thorough and well-designed inspection procedures.


Inferior to what? The only other hardware wallet with large market share?

Have a look at this: https://saleemrashid.com/2018/03/20/breaking-ledger-security...


Trezor's security features list [0] mentions firmware verification, JTAG, and welding - strongly implying that it intends at least some resistance to physical attack. This is not uncommon for hardware cryptography modules. Since 2001, the federal government has had a certification program, FIPS 140-2 [1], recognizing four different levels of physical attack resistance.

The security engineering industry is very interested in the capability to physically ship secrets to potentially hostile actors inside devices that limit their use or duplication. There are many, many applications:

- Payment cards: EMV credit/debit, transit, laundry, parking, prepaid electric meters, etc.

- DRM: Widevine for Netflix, DCP for your local movie theater, anti-piracy and anti-cheat in your Xbox.

- Privacy: the iPhone's Secure Element only decrypts user data given the right PIN, rate-limits or caps attempts, and resists extraction of the private key, much to the FBI's disappointment.

- Root of trust: enterprise HSMs for PKI will only enable signing operations with their internal private keys after the presentation of a quorum of operator credentials [2].

Ross Anderson's Security Engineering has a great chapter on this [3].

[0] https://trezor.io/security/ [1] https://en.wikipedia.org/wiki/FIPS_140-2 [2] https://www.cloudflare.com/dns/dnssec/root-signing-ceremony/ [3] https://www.cl.cam.ac.uk/~rja14/Papers/SEv3-ch18-dec18.pdf


> if anyone has physical access it's game over

That's true, but only for threat models that assume decapping and other extreme efforts.

https://en.wikipedia.org/wiki/Secure_cryptoprocessor


How extreme is decapping? How much equipment and how much time does it require? How often does it destroy the chip or cause damage? How obvious is it that it occurred?

While we're at it, for the last question, tamper evidence, how good are tamper evidence seals - for example would a tamper evident seal on the enclosure show visually whether it has been opened (for example to exploit the flaw this article is about), or are tamper evident seals easy to get around or re-apply undetected?


Decapping requires dissolving the plastic with acid. Attacking the chip from there is typically done in a Focused Ion Beam workstation (about $500k), and the risk of destroying the chip depends on too many factors. Some chips have photosensitive elements that generate just enough voltage to wipe their memory if they're exposed to light via decapping.

Tamper evident seals can often be defeated with nothing more than a PTFE (Teflon) knife made from shim stock.


thanks


Trezor is designed to protect against remote/logical attacks (including a compromised host). It isn't really hardware protected in any meaningful way against local access. This lets users inspect/validate their own hardware better, though.

The issue is most users (reasonably, IMO) assume physical protection for their hardware wallets, at least against someone getting temporary access and without insane levels of resources. That is fairly safe using a Ledger today (barring an undisclosed vuln); that's why I think the Ledgers are somewhat better.


Exactly.

I would definitely pick Coldcard over Ledger though.

Coldcard is open source and open hardware to a much greater extent, while still using a secure element for secret storage and the PIN counter. It also offers advanced security features like proper multisig support, air-gapped operation, roll-your-dice entropy input, etc.


I think people in practice buy these and use them thinking that they are secure against physical theft because of “encryption” and requiring a pin.

This shows that assumption to be totally false.


The attack doesn't work if you are using a passphrase. I'm not sure why they let people use a PIN in the first place, but you should never be using PIN instead of a passphrase.
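The reason the passphrase defeats this attack is that Trezor follows BIP-39, where the passphrase is mixed into the seed derivation itself rather than merely gating access: the seed is PBKDF2-HMAC-SHA512 over the mnemonic, with the salt "mnemonic" plus the passphrase, for 2048 rounds. A minimal sketch of that derivation:

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive a 64-byte wallet seed per BIP-39.

    PBKDF2-HMAC-SHA512, 2048 rounds, with the salt
    "mnemonic" + passphrase; inputs are NFKD-normalized
    as the spec requires.
    """
    m = unicodedata.normalize("NFKD", mnemonic).encode()
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase).encode()
    return hashlib.pbkdf2_hmac("sha512", m, salt, 2048)
```

So even with the mnemonic extracted from flash, an attacker without the passphrase derives an entirely different (empty-passphrase) wallet; the passphrase never touches the device's storage.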


Why don't all silicon chips have glitch and overvoltage detection?

It would seem very easy to add a pair of FETs such that they detect sudden voltage changes (via their gate capacitance). That could then feed a circuit that ensures the chip is properly reset by asserting the reset line for at least one clock cycle.

This should probably be paired with brown-out detection, although that's power hungry, so I can see why people might not want it.

This wouldn't only have security benefits - lots of electronic designs might be accidentally glitching their microcontrollers due to poor design of other circuits, and having the chip reset in a predictable way is much better than undefined behaviour.


Brownout detection is definitely one of those things to turn off for low-power operation. I suspect glitch detection is harder than it sounds, too.


Probably due to price. In some applications it doesn't matter whether the chip can be glitched, but getting it at a lower cost does.


> Why don't all silicon chips have glitch and overvoltage detection?

Reliability. This is basically the microchip version of Boeing's MCAS.

The circuit you describe is not only an analog circuit, but is in fact a noise amplifier. You're now shipping a chip containing a noise amplifier that drives the device-wide reset line.

What could go wrong?

The stuff you describe is very, very difficult to get right, and beast-mode insanely difficult to troubleshoot or even diagnose when it goes wrong.

It's also very sensitive to manufacturing variations. So if there is a problem with the circuit, it'll probably only affect a few batches. Which, Murphy's Law and all, will be the batches that wind up in the hands of your most important customers.

Stuff like this can bankrupt a chip company if you get it wrong, and there's no way to be sure you got it right. At most you put it in your super-high-end ultra-secure product line, so long as that line's sales are small enough that you can afford a recall.


Nothing in this flaw is a surprise considering Trezor does not even use a secure element (unlike Ledger).


It’s surprising considering how cheap SIM card chips are. It’s not hard to do secure elements these days, at least at scale.


Have SIM cards actually been tested against these vulnerabilities? The payoff per card cracked is much lower than with crypto wallets so maybe there’s just no point trying these attacks?


SIM cards are not secure either, you can glitch them. The ones in phones and credit cards, that is.


What makes you think the Ledger secure element provides any meaningful security?


Some lucky people will be able to restore lost crypto.


That really has happened. A legitimate owner used an old vulnerability to rescue $30,000 from a Trezor wallet when he forgot the PIN.

https://www.wired.com/story/i-forgot-my-pin-an-epic-tale-of-...

Luckily he hadn't updated the firmware, so the vulnerability wasn't patched on his device; even so, it took a long time and was not easy. Like this newer vulnerability, though, it would have been almost impossible if he had also used a strong passphrase, as Trezor recommends.


Every wallet is physically hackable, which is why we are building Cypherock, where we introduce location as a second factor by applying Shamir's Secret Sharing to the private keys. I would love to hear the community's opinion on it - https://cypherock.com.
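For readers unfamiliar with the scheme: Shamir's Secret Sharing splits a secret into n shares such that any k of them reconstruct it and any k-1 reveal nothing, by hiding the secret as the constant term of a random degree-(k-1) polynomial over a finite field. A minimal sketch over a prime field (an illustration of the general technique, not Cypherock's actual implementation):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k reconstruct it.

    The secret is the constant term of a random polynomial
    of degree k-1; each share is a point (x, f(x)).
    """
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

The security property an attacker faces is that extracting the key material from any single stolen device (share) is worthless; k physically separate locations must all be compromised.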


This may seem bad, but how many people have lost crypto because of this? You are more likely to have your crypto stolen while transferring it to your Trezor when setting it up.


No hardware can protect itself from absolute physical compromise, except perhaps something that burns a fuse and self-destructs when somebody tries to open it.


Banks store private keys for their ATMs in hardware security modules (HSMs), and lots of crypto exchanges have started doing the same. One of the features is that private keys self-destruct when tampering is detected; if you have a backup, you can still recover the private key. While I agree that Trezor wasn't designed with this in mind, I think it's a good idea to include this feature. Not sure about the size requirements, though - it might make the device significantly bigger.


A true HSM with active self-destruct needs to be constantly powered. On the other hand, for many if not most applications a typical secure smart card is completely sufficient (in fact, a typical POS card terminal stores most of its long-term secrets on a SIM-like smart card).


Somewhere in my junk parts bin is such a PCI card, bought out of a junk bin in Akihabara. It has a Mitsubishi logo clearly printed, with archaic construction overall, and was apparently marketed by NEC somehow; the product brochure page disappeared after I mentioned it on Twitter.

It had a pair of blown AA batteries for self-destruction. I never bothered to get it working, but IIRC it was supposed to detect removal from the PCI slot and self-erase. So it's not rare or difficult.


At this year's RWC someone fuzzed the software on the HSM. Keys came out.


Thanks for sharing this, I had to google RWC. For others that don’t know the acronym: https://rwc.iacr.org


Size requirements shouldn't be intensive, assuming it's a single-shot system. All you need is 128-256 bits worth of secret key data that is physically-destructible (e.g. with a high voltage spike). You then encrypt/decrypt the rest of the secrets stored in the device with this destructible key.
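The scheme described is standard key wrapping: everything at rest is encrypted under one small root key, so "self-destruct" only has to destroy those 128-256 bits. A toy sketch of the idea (the SHA-256 counter-mode stream here is purely illustrative; a real device would use a vetted cipher like AES-GCM inside the secure element):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream - illustration only,
    not a vetted cipher."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# One-shot destructible root key: physically wiping these 32 bytes
# (e.g. with a high-voltage spike) makes every wrapped secret
# permanently unrecoverable, regardless of how big the secrets are.
root_key = secrets.token_bytes(32)
wrapped = xor_crypt(b"wallet master seed", root_key)
```

This is why the destructible element can stay tiny: the bulk storage holding `wrapped` never needs tamper protection of its own.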


Bigger may be better.

After all, these devices are hard to use in part because of the tiny screens.

Since most of the time you don't carry them in your pocket it does not appear to be a problem if they are bigger.


Doesn't the iPhone claim to be able to protect from physical access?


Let's agree that protection against physical access is extremely difficult.


That’s what the US Department of Justice claims, at least.


With the right systems in place, you can be protected from physical compromise. For example, if my credit card is stolen, I call visa and I'm fine.


And who do you think foots the bill? You might not pay it in one lump sum, but I’m pretty sure you still pay it.


People who lost 100% of their coins are probably wishing they had the option to buy some kind of insurance. But no, be your own bank. (Wait, don't real banks also have insurance?)


Crypto is digital cash, not digital credit. If someone steals your physical wallet, you generally aren’t getting that cash back. Can we please dispense with this kind of hyperbolic nonsense?


You wouldn't carry large sums of cash on your person, so why do people carry around large piles of cryptocurrency?


People actually carry large sums in many places where credit cards aren't prevalent. Like Japan.


The analogy isn't credit. It's a bank. I doubt people carry much of a percent of their total Yen wealth in their pockets.


The merchants who accepted the fraudulent credit card transactions don't get their money from Visa. So the merchants pay.


... and factor this into their pricing, completing the circle.



