I understand the short answer is probably "Judges see it differently so the logic doesn't matter", but I don't get the difference between setting a "booby trap" to wipe a phone and the basic phone-wipe security settings that are already on phones.
In the San Bernardino iPhone case there was a lot of hand-wringing about Apple's password limits, but no one was accusing Apple of purposefully destroying evidence because it has a setting that wipes data after multiple failed login attempts.
Cellebrite does not only sell its software to the US government; one of the chief criticisms of the company is that it doesn't really care who gets its code. So the threat model to end users is the same: the same fears that would make me want to wipe my phone when someone is trying to get into it might also make me want to wipe it when someone is trying to automatically pull large amounts of data off of it.
Is the worry here that Cellebrite's vulnerability would need to be executed on a different computer, so it's in a different category? Forget technicalities and cleverness, I don't understand even the basic logical difference between Signal destroying its own data on export and iPhones wiping their data after failed login attempts. I trust the author, but I just don't get it. What security measures are acceptable to build into software?
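To make the comparison concrete, the on-device feature works roughly like this (a toy sketch, not Apple's actual implementation; the threshold, data path, and names are made up):

```python
# Toy sketch of wipe-after-failed-attempts: not Apple's code, just the shape
# of the feature. The threshold, data path, and names are invented.
import shutil

MAX_FAILED_ATTEMPTS = 10          # hypothetical limit
DATA_DIR = "/data/app_storage"    # hypothetical location of my own local data

failed_attempts = 0

def try_unlock(passcode: str, correct_passcode: str) -> bool:
    """Return True on a correct passcode; wipe local data after repeated failures."""
    global failed_attempts
    if passcode == correct_passcode:
        failed_attempts = 0
        return True
    failed_attempts += 1
    if failed_attempts >= MAX_FAILED_ATTEMPTS:
        # The data being destroyed is on my own device, set up ahead of time
        # as a defense against whoever is holding the phone.
        shutil.rmtree(DATA_DIR, ignore_errors=True)
    return False
```

Both mechanisms look, to me, like data I set up on my own device ahead of time to defend against whoever ends up holding it.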
> Is the worry here that Cellebrite's vulnerability would need to be executed on a different computer, so it's in a different category?
Yes, possibly. The legal system is all about trying to draw clear lines across the spectrum between obviously OK behavior and obviously unacceptable behavior. It is obviously OK to delete text messages off your phone. It is obviously unacceptable to break into a police station and delete evidence off their computers. Somewhere in between is the dividing line, and if this went before a judge it is entirely plausible that they would draw the line there.
> no one was accusing Apple of purposefully destroying evidence because it has a setting that wipes data after multiple failed login attempts.
That’s because the feature is designed to protect your data and phone after it’s been stolen by a criminal, not to destroy evidence lawfully sought by law enforcement.
> not to destroy evidence lawfully sought by law enforcement.
But Apple's encryption does still destroy evidence lawfully sought by law enforcement. And in fact, Apple has gone out of its way to make sure that its encryption will still destroy evidence even in instances where law enforcement is trying to access it -- that was the entire controversy behind the San Bernardino case.
Apple wasn't willing to put holes in its security even knowing that their position made it harder for police officers to execute a lawful warrant.
> is designed to protect your data and phone after it’s been stolen by a criminal
I think I already talked about this above:
> Cellebrite does not only sell its software to the US government; one of the chief criticisms of the company is that it doesn't really care who gets its code. So the threat model to end users is the same: the same fears that would make me want to wipe my phone when someone is trying to get into it might also make me want to wipe it when someone is trying to automatically pull large amounts of data off of it.
It's not clear to me that the threat model from Cellebrite is different than the threat model from a criminal. Cellebrite does not exclusively sell its software to the US government. And if Signal's devs could get their hands on it then there's no reason to believe that criminals couldn't get their hands on it as well. We already know their software has been leaked outside of law enforcement, because Moxie has it right now.
I'm not saying that the author is wrong, I believe them. And I can understand that exploiting a vulnerability might be treated differently than on-device data deletion. But the specific reasoning you give about intent to disrupt an investigation makes no sense to me. In both cases I'm defending against criminals who may have stolen my phone and may be trying to exfiltrate data. The police aren't who I'm worried about here.
It's not even that Apple's and Signal's threat models seem to be 'technically' the same under some narrow criterion or letter-of-the-law reading; they're basically identical. As a layperson on the street or as a programmer trying to build secure software, I don't know how I would tell the two threat models apart. I don't know what test I can use here to tell when I am and am not allowed to be afraid of criminals stealing my phone and data.
The fact that evidence is destroyed even though it is lawfully sought by law enforcement can be, as attorneys would say, incidental. The same goes for people like me who regularly shred documents. I don't do it to frustrate law enforcement; I do it to frustrate identity thieves.
BTW, companies destroy what could be future evidence as a routine matter under data retention policies. Take, for example, a policy that all incoming and outgoing emails are expunged after 90 days to conserve space and help contain the damage caused by corporate espionage. Courts aren't going to hold a company accountable for willful destruction of evidence if something they would have lawfully sought is gone because the policy ran as designed. However, once a company has been put on notice of a preservation order, it must suspend the policy to the extent needed to comply with the warrant or subpoena, or it will get into trouble.
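A rough sketch of what executing such a policy looks like in practice (the 90-day window, paths, and names here are made up for illustration):

```python
# Toy sketch of a routine retention job: purge archived mail older than the
# retention window. The window and directory layout are illustrative only.
import os
import time

RETENTION_DAYS = 90
MAIL_DIR = "/var/mail/archive"    # hypothetical mail store

def purge_expired(now=None):
    """Delete archived messages older than the retention window."""
    if now is None:
        now = time.time()
    cutoff = now - RETENTION_DAYS * 24 * 3600
    for name in os.listdir(MAIL_DIR):
        path = os.path.join(MAIL_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            # Routine expiry; a litigation hold would require suspending this.
            os.remove(path)
```

The job deletes whatever is past the window regardless of whether any given message would ever be evidence, which is what makes the destruction incidental until a preservation order requires pausing it.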
> I don't know what test I can use here to tell when I am and am not allowed to be afraid of criminals stealing my phone and data.
That's why we hire attorneys! Legal counsel is here to help. Ain't nothing wrong with relying on subject matter experts. For the same reason, I hire people to crawl through my attics, too.
I guess this is what it would come down to (ignoring other liabilities like the CFAA): if Signal actually implemented their feature, would they be able to argue that their intent was to stop criminals abusing Cellebrite software, or would it be possible to argue that their intent was from the start to disrupt police investigations?
With Apple that's probably going to be hard to argue: they'll come out with some basic stats about theft reduction, and they'll point at their advertising and messaging around the feature.
I can see a distinction there, even though I don't see a super-clear reason to believe from Signal's one blog post that they're specifically trying to disrupt the police.
> That's why we hire attorneys! Legal counsel is here to help.
This is obviously good advice, and people shouldn't be looking at HN musings to figure out what is and isn't legal. But at the same time, minor sidenote:
I'm not angry at you, and this isn't anyone's fault in specific, but I low-key hate this answer because a metric ton of innovative software gets built by people who do not have the resources to hire attorneys for every decision that they make, and it's really unreasonable on a societal level to expect every Open Source dev, teenager, single-founder entrepreneur, etc... to have the resources to have legal counsel on hand. The majority of people in the US can't just go out and talk to a lawyer whenever they want.
Not really relevant to what we're talking about, and again, nothing to do with you, but I'm still unable to keep myself from ranting about that whenever this topic comes up.
> if Signal actually implemented their feature, would they be able to argue that their intent was to stop criminals abusing Cellebrite software, or would it be possible to argue that their intent was from the start to disrupt police investigations?
You should read the article again if you haven't already. The author discusses the context that would make this a hard sell to a court. Moxie Marlinspike has been around a long time, and he hasn't been particularly discreet about his opinions about law enforcement.
> a metric ton of innovative software gets built by people who do not have the resources to hire attorneys for every decision that they make, and it's really unreasonable on a societal level to expect every Open Source dev, teenager, single-founder entrepreneur, etc... to have the resources to have legal counsel on hand. The majority of people in the US can't just go out and talk to a lawyer whenever they want.
Have you tried? I think there's a misconception that lawyers are resources that will only talk to you if you can verify you have $1M in the bank. Even when I was a poor student I found that I could phone up just about any lawyer and get 15 minutes of their time. This is often enough to get the gist of whether whatever I want to do is going to be legally risky. Most attorneys are nice enough to tell you what you're about to do is incredibly stupid (assuming it is, in fact, incredibly stupid) without charging you for the privilege.
And come on, we're not talking about writing a new game or a container orchestrator or some new ML algorithm here; we're talking about technology that clearly has a strong relationship to law enforcement and has a legacy of adversarial practices. Let's practice a little common sense here.
> Moxie Marlinspike has been around a long time, and he hasn't been particularly discreet about his opinions about law enforcement.
I did read the article; I don't personally see the intent that the author is attributing (although I understand how other people might, and again, this is all separate from the CFAA concerns). The author is claiming that Cellebrite's users are synonymous and interchangeable with law enforcement. But that's clearly not the case, or else Moxie wouldn't have a copy of the software, since he's not a cop.
Signal never mentions law enforcement in the original post; the only mention they make of governments is of non-US authoritarian countries, and it's not illegal in the US to build software that Turkey dislikes. The only category of users that Signal's post specifically supports by name is journalists and activists: in other words, not criminals under US investigation.
And this is part of what weirds me out about this conversation. When you start talking about how Moxie is obviously trying to hack police departments because he's criticized them in the past... my opinions about law enforcement overall shouldn't block me from writing secure code. I believe in police accountability, and I publicly backed Apple's position during the San Bernardino case, which according to multiple police spokespeople apparently means that I love terrorists and hate America. Does their framing of that position mean I'm not allowed to build secure software now?
You don't see it as problematic that being critical of the government would mean that you have less legal leeway to write secure code? "Hasn't been particularly discreet about his opinions about law enforcement" reads to me as "he's going to face increased legal scrutiny and have fewer legal protections purely because of First Amendment-protected speech."
> Have you tried? I think there's a misconception that lawyers are resources that will only talk to you if you can verify you have $1M in the bank.
It's entirely possible that I'm bad at looking around at stuff like this. I haven't seen specialist law offices that aren't charging more than a hundred dollars an hour, but... I am not going to pretend I'm an expert on this stuff, at all. I'm not an expert on anything we're talking about.
> And come on, we're not talking about writing a new game or a container orchestrator or some new ML algorithm here; we're talking about technology that clearly has a strong relationship to law enforcement and has a legacy of adversarial practices. Let's practice a little common sense here.
I'm not sure I follow; is your argument that security code is in a separate category from other Open Source software? I don't understand what you're implying. Signal is a messaging app; shouldn't ordinary people be able to build those?
There aren't a ton of consumer-facing projects I can build that won't have to care about security and privacy. You don't think that games, or music storage/tagging, or backup systems have to care about this stuff? It's not only banks that do encryption; any system that touches user data should be able to protect that data from criminals.