You're not supposed to sympathize with Cellebrite, according to the article.
> If you work at Cellebrite, on the other hand: get down off your high horse, stop it with the “we’re the good guys” shtick, quit selling to authoritarian governments, and for god’s sake, fix your shit.
> Giving defense attorneys more ammo to push back harder against the use of Cellebrite devices against their clients is Good and Right and Just. The general point that Moxie made — Cellebrite’s tools are buggy A.F. and can be exploited in ways that undermine the reliability of their reports and extractions as evidence, which is the entire point of their existence — is actually more important than the specifics of this exploit
You're kind of missing the point of the article. The article agrees with you that Signal's hack was a net positive and Cellebrite is not a good company.
I saw those parts, but my overall impression was that the author thought Signal was foolish to write up their adventure and they shouldn’t have done it (while conceding that Cellebrite aren’t angels).
I also disagree with the notion that it’s good that Cellebrite exists because without them we’d have stronger anti-encryption laws. That’s hypothetical and all we know is what we have today. I’m not thrilled that someone is peeing on my basement carpet instead of peeing in my living room; I’d rather not have someone peeing on any of my rugs.
I’m not reading the article as a criticism of the work Signal has done, but their “lol u got pwned” way of announcing it — in particular, their coy threat about exploiting the vulnerability. Specifically:
- The threat is likelier to annoy judges than garner sympathy
- Following through on it is probably illegal
- Worse, following through could put their users in legal (and/or physical) jeopardy
- More generally, Signal should consult with lawyers before doing things like this
> - Worse, following through could put their users in legal (and/or physical) jeopardy
This bit is very relevant, and I agree. It’s ethically dubious to put unknowing users at risk in that way, whether from democratic or authoritarian governments.
The other points, though, very much assume that the goal is to change the outcomes of American court processes. The focus is almost entirely on what a judge in the US would think, on evidence rules in American courts, etc. Maybe American law and law enforcement aren't as relevant as an American lawyer thinks they are, and Signal is betting that the PR and politics game is more important.
If they did make that bet, which I think is likely, then the article has some valid arguments at the end – this hack (or non-hack) may lead politicians to introduce stronger laws – and _that's_ where the focus should be. Is this a good move, politically?
And, to reiterate from the beginning: Does this put end users in danger? If it does, it’s likely not worth the price even if there was some political victory in the end.
I'm curious: how? If they announce publicly that they will place files on devices that may exploit a publicly announced vulnerability in Cellebrite, then it's on Cellebrite to fix the vulnerability. If they knowingly ignore a publicly disclosed risk, then they have only themselves to blame.
>No, intentionally spoiling evidence — or “spoliating,” to use the legal term — is definitely not legal.
>Neither is hacking somebody’s computer, which is what Signal’s blog post is saying a “real exploit payload” could do. It said, “a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.” All of those things are a violation of the federal anti-hacking law known as the Computer Fraud and Abuse Act, or CFAA, and probably also of many state-law versions of the CFAA. (If the computer belongs to a federal law enforcement agency, it’s definitely a CFAA violation. If it’s a state, local, or tribal government law enforcement agency, then, because of how the CFAA defines “protected computers” covered by the Act, it might depend on whether the Windows machine that’s used for Cellebrite extractions is connected to the internet or not. That machine should be segmented apart from the rest of the police department’s network, but if it has an internet connection, the CFAA applies. And even if it doesn’t, I bet there are other ways of easily satisfying the “protected computer” definition.)
Imagine a physical vault in your house. The vault has a mechanism such that if anyone forces it open, it destroys all its contents. It may also have defense mechanisms that act as a deterrent: it might spray a very bad odor and permanent ink on anything nearby, and that odor and color would be very hard to get rid of. Is such a vault legal?
If someone breaks into your house, steals such a vault, and suffers damage in the process of trying to open it, is the owner of the vault liable?
What's the principle being applied here? How would the same principle be applied in the case of digital property?
The parts of the article quoted above suggest that the principle is a CFAA violation – someone who distributes an exploit tailored to destroy evidence captured by Cellebrite probably “knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer.”
Difficult philosophical questions arise with the phrases “knowingly causes” and “intentionally causes damage,” but a jury can use common sense to resolve them on the evidence in a particular case. The same issues arise when trying to determine intent and causation when someone fires a gun or carries a bag full of white powder. The details matter.
I think the crucial point glossed over in the analysis of the blog post is "knowingly causes the transmission". The user of Cellebrite causes the transmission. I would really like to see a proper legal analysis of the situation, and this doesn't seem to be it.
The author also misses the point of the "show me yours, I'll show you mine" offer. Cellebrite is, from what I understand, knowingly leaving everyone's machines vulnerable in order to conduct their business.
This is something that _should_ be illegal. Not disclosing (and actively benefiting from) vulnerabilities in other people's products is what we should have laws against.
The DOJ publishes legal guidance on prosecuting computer crimes [1], which includes this relevant passage:
> An attacker need not directly send the required transmission to the victim computer in order to violate this statute. In one case, a defendant inserted malicious code into a software program he wrote to run on his employer's computer network. United States v. Sullivan, 40 Fed. Appx. 740 (4th Cir. 2002) (unpublished) [2]. After lying dormant for four months, the malicious code activated and downloaded certain other malicious code to several hundred employee handheld computers, making them unusable. Id. at 741. The court held that the defendant knowingly caused transmission of code in violation of the statute. Id. at 743.
The CFAA is notoriously broad, which is probably why Pfefferkorn didn’t feel the need to undertake a detailed analysis of exactly how it prohibits the deployment of a targeted exploit which would “undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.”
This passage describes a really different situation though.
Say I have a USB stick with important data on it. It has a warning label that says "if you plug this in, it may destroy your computer unless you have the correct password file." If you plug it in (and your OS is vulnerable), it wipes all drives it can find (including itself) unless it finds a particular password file.
Is this USB stick illegal?
Signal made it very, very clear that scanning their users' devices with Cellebrite tools might trigger some behavior. Now, if you still go ahead and use the tool, can Signal be blamed, despite having warned you that this will occur?
I find all of this far from obvious. What Signal did is purely defensive _and_ clearly labeled. It's very unlike the examples cited so far.
(And after all we are talking about a scenario where the cops can still get the evidence simply by taking screenshots of the open app, so they are not even preventing cops from getting to the evidence, merely making it more inconvenient.)
I'm inclined to think that any computer used in good faith by law enforcement duly authorized to obtain evidence ought to be considered a "protected computer" if it is specifically targeted, as opposed to being affected by ubiquitous harmful code not distributed with any expectation of harming LEO discovery machines (e.g. the authors of ransomware might reasonably expect law enforcement agencies to do bare-metal installs of known-good OSs).
What worries me most about this disclosure is the potential for abuse inside law enforcement agencies and departments. What if an evidence-gathering machine is deliberately not patched against this exploit?
If I sold software like Cellebrite, I would at least have attempted to make the cessation of licenses enforceable for any out-of-date installation.
What really confuses me is why vendors like Cellebrite don't see a commercial case for at least some level of independent testing of their wares, in order to provide a limited warranty for their operation and results.
Until now I actually thought such independent testing, and appropriate assurances to LEO, were necessary to legally sell such software in the first place.
The article concludes that the uneasy status quo permits all parties to do their best work. Unmentioned is that this at least pays lip service to the American ideal of meritocracy and endeavour, and to the ideal ultimate effect of fairness to all.
This is probably my naivety again, but why can't laws prohibiting the exploitation of 0-days work to the advantage of the law, society, and commerce alike?
If zero-day exploits had to be disclosed to a central independent organisation (made up of members from LEO and civilian life, working on a mandatory equal-resources footing so that citizen participants need no corporate sponsorship), and there were a defined window permitting the use of exploits that ended with a mandatory tested patch release and public announcement, I don't see how it would be unfair or unreasonable for anyone on either side of the law. I would even consider it reasonable for vulnerabilities identified through software engineering, but not yet discovered publicly, to be notified to federal agencies when identified. I actually think we should do this already for the protection of our diplomats and overseas representatives.
Since we already have the instrumentation to selectively patch individual devices in widespread use, why can't agencies request that devices under surveillance be excepted, to enable the security of the general public?
I realise this doesn't work for covert and unlawful intercepts, and there do exist reasons for covert intercepts to be carried out. However, every advanced society should be pushing such incidents to the margins with every force available.
Security experts are worried about this argument because the global security of the USA is increasingly and credibly threatened. Show me how a well-designed infrastructure for protecting the innocent from unwarranted invasion, as I've outlined here, can possibly be a negative for law enforcement and national security, and I'll eat my hat: the suggestions I'm making entirely reinforce the accessibility of intercept capabilities for lawful deployment, and instrumentation for device-specific code patching only enhances the potential for positive acquisition of intelligence on criminals and foreign agents. The USA should be peeling back the layers of the baseband implementations of 5G and immediately ordering the decommissioning of all 2G installations, which are trivial to abuse.
The faster the USA creates a viable OSS 5G RAN code base, the faster potentially hostile foreign competition is disabled in the race for budget handsets and deployment.
The number of people who have any interest in this field is small enough that background checks would not be prohibitive to open-source goals. However, serious consideration needs to be given to any blanket release to higher-education institutions, because the number of overseas students is simply too great to rule out hostile intentions.
Along similar lines, we need to undo academic publishing's hold on legitimate interest in research, because only hostile nations are served when publicly funded research is not made available to the public.
I mentioned that last point because I think the most important argument of the article was about the blurring of lines: genuinely sensitive concerns do exist at the national level, and they are being trivialized by a leading vendor of personal privacy communication software touting hacks in a way the author found unbecoming and, unspoken but clear to me at least, dangerous to society as a whole.
Last year I implemented so-called "content protection" software for my company, which enables restrictions on, e.g., sending emails containing sensitive words, or attaching files, along with in-depth classification and full-text inspection tools and services. This is a growth market right now, and I would strongly encourage anyone wanting interesting and well-paid consulting work to study this area, and in particular to spend time looking at how many new entrants are appearing constantly. My company doesn't expect to see much benefit from this expensive software installation; our purpose is to use the obtained metadata for things like graph-database analysis, to assist our own research and development of opportunities from customer-provided documentation and research. We're planning on linking back to raw incorporation-filing feeds on individual parties and even public LinkedIn posts and comments.
I mention that because raw captured surveillance data becomes massively more potent when combined with the associated network of correspondents and individual sources and references.
At one time, when I was young, I thought the cost of academic research papers was the cost of government surveillance of the interested parties obtaining advanced insights into technology, analysis, and systems.
The software my company purchased is in theory capable of tracking the lifetime of a document that has been passed through any number of hands.
Obviously it's trivial to air-gap your reading device. But consider the volume of individual papers and documents you consume in any given year; for the HN crowd, that's likely a large number.
Make it difficult for criminals to conceal the pathway that a very large number of information sources take to reach their devices, and the resulting black hole is a hypothetical perfect telltale snitch.
Conversely, it's perfectly possible to enable free acquisition of research documents by an intermediary for the consumption of a legitimate researcher or team. I have worked for 30 years in specialist publishing, in industry-association members' journals paid for by advertising. The Internet allegedly destroyed the viability of my business. What actually happened was that advertising agencies suddenly declared print media dead and ceased operations in my field in almost choreographed unanimity. This was 25 years ago. I actually think it was my field that Google was interested in when, as reported in Advertising Age and other trade media, it declared that its multi-year, multi-hundred-million-dollar project with a consortium of the biggest publishing houses for trading printed advertising online had failed, mentioning particular obstacles that included the very problems my company overcame just to start trading in '96. I don't think Google wanted to help anyone sell consumer-targeted advertising; even in '04 they almost certainly knew that market would be theirs alone. But highly vertical advertising within industry niches is different: what's being advertised is often incomprehensible without accompanying features commissioned by the publication to cover a niche within a niche and attract everyone in that market as advertisers. Take 200 thousand times 4 for quarterly issues, and 50 thousand average readers by name times 4 as a low "reach" estimate, and you get 1.6*10^11 pairs of eyeballs per year in this forgotten and buried business.
That's who will be only too happy to bear the infrastructure costs of the document-management system necessary for truly global-scale tracking of research dissemination.
Don't dismiss this immediately over concerns about privacy: this couldn't fly without a way to give real privacy to researchers who need to avoid giving away their direction and interests. Legally double-blind intermediary agents acting as proxies are far from difficult to implement, and I know demand exists for such a proxy among some customers of ours who want an additional layer of privacy and discretion for their work.
Because of the global economy, we've almost forgotten how far ahead the USA, with critical input from other Western nations, is compared to the rest of the world. I personally think the expansion of university campus facilities has been happening because of foreign students' demand, and potentially the profits from them, assuming that zero interest rates continue until the debts are paid, and assuming that happens before the life expectancy of the buildings erected creates a financial noose around higher education's neck. The borrowing I've looked at doesn't have principal-repayment horizons anywhere near early enough.
The purported hack here would specifically target Cellebrite, not anyone accessing these files in general.
Also, if you know someone is stealing your lunch from the shared work fridge, and you add rat poison to your lunch, do you get to walk away scot-free on the theory that it's the thief's fault?
But Cellebrite could be used by an oppressive regime, or by criminals who got their hands on it. I'm doubtful such an argument would hold up in court, but I don't think you can honestly say targeting Cellebrite is the same as targeting US law enforcement.
The issue here is not "someone" breaking into your house and stealing it; it is the authorities doing it. Destruction/sabotage of the evidence-collection process is very possibly going to be held against you.
It seems like there's a bit of a logical leap in that argument. As the article notes, Cellebrite isn't exactly discerning when it comes to their customer base. It seems like they sell their tools to just about anyone willing to pay their steep fee, not just US law enforcement. I'd argue it's more akin to a specialized crowbar or blowtorch in the safe analogy. Sure, law enforcement might use it to try to crack your safe, but so could various other bad actors. There would be a legitimate non-spoliation purpose in protecting political dissidents who have their phones seized at a foreign border or stolen, for example.
But all Signal is doing (or threatening to do) is blowing up devices that parse all files using insecure software.
Let's look at another case: I remember that some people had USB drivers that detected mouse "wigglers" and shut down the computer in response (a sketch of how such a tool might work is below). Would that also be illegal?
If I install anti-scan files and anti-mouse-wiggler protections when travelling to China, do they become legal then?
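For what it's worth, those shutdown-on-wiggler tools are usually just a small watchdog rather than an actual driver. Here is a minimal sketch of the idea, assuming a Linux machine with `lsusb` and `systemd` available; it's a hypothetical illustration, not the real usbkill project, which adds whitelists, optional RAM wiping, and other safeguards:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a "kill switch" against unknown USB devices:
# snapshot the connected USB devices and power the machine off if the
# set changes (e.g. a mouse wiggler or forensic dongle is plugged in).
import subprocess
import time


def usb_devices() -> frozenset:
    """Return the current set of USB devices, one lsusb line per device (Linux)."""
    out = subprocess.run(["lsusb"], capture_output=True, text=True, check=True)
    return frozenset(out.stdout.splitlines())


def watch(poll_seconds: float = 0.5) -> None:
    baseline = usb_devices()
    while True:
        time.sleep(poll_seconds)
        if usb_devices() != baseline:
            # Any change to the device list triggers an immediate shutdown.
            subprocess.run(["systemctl", "poweroff", "-i"])
            return


if __name__ == "__main__":
    watch()
```

Running it needs root (for the poweroff), and a serious tool would whitelist the owner's own keyboard and mouse rather than reacting to every change; the point is only that the mechanism is a simple, passive watchdog rather than anything that touches the attacker's hardware.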
The article quotes the part of the Signal blog that said “a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.” A complex exploit like that would say much more about the author’s intent than a driver that shuts down the computer when a “wiggler” is detected.
Well if it's the authorities, they can present you with a warrant and request that you disable your defenses. You should not be required to roll over and present your defenseless underbelly to everyone that wants to break in, in case some of them are "the authorities".
You don't have to help anyone collect evidence against you; you're innocent until proven guilty, and it's up to others to prove their case. Why would you help implicate yourself?
Presumption of innocence is the most fundamental cornerstone of common law.
I agree that the overall situation around evidence recovery from locked devices is not straightforward, but I don't think I referenced this in my comment; I merely provided insight into why the specific actions might be considered illegal (using the argument in the blog post, I might add).
Physical booby traps tend to be illegal if they aren't supervised. Not sure if that's only for physical damage/injuries or applies to other damage as well.
Not a lawyer, but if I were one, I would tell my client never to do that. Should something go wrong and the glitter get into someone's eyes, that's a case. Even worse, if the glitter dispenses while the person is operating a vehicle and pedestrians are injured, that's a case.
If the vault hurts police officers who had a legal warrant for opening it, through a feature that was purposefully designed for this, I would bet that yes, it would be completely illegal, and both the manufacturer AND owner (if it can be proved they were aware) may be held responsible for the injuries.
Similarly, if your app/device damages government property and tampers with legal evidence, both you and the creators would likely be held responsible. Even if the law is unclear, you will definitely face charges for this, given how defensive police departments are in these cases (there was one case where a person they beat up had extra charges brought against him for dirtying the officers' uniforms with his blood...).
Furthermore, simply creating exploit code and releasing it into the wild is illegal, so Signal, if it were ever found to have done what they let us believe they could do, could be held legally responsible, even if the code never actually exploited a live system at all.
Less clear, maybe, but as far as I understand the various regulations, they all refer to personal self-defense. So to protect yourself from harm, you may use (deadly) force.
That would not apply to protecting your vault from theft by using physical, automated violence against the thief.
>No, intentionally spoiling evidence — or “spoliating,” to use the legal term — is definitely not legal.
My point is that Cellebrite, or the Cellebrite user, would be the one spoiling the evidence. The evidence is sitting there on the device unspoiled, and it only actually gets spoiled if the user decides to charge ahead without heeding the public warning that doing so without the necessary precautions will spoil it.
Signal itself has no knowledge of which files constitute evidence (it applies this completely indiscriminately), so I don't think you could argue that it is knowingly spoiling evidence.
> Signal itself has no knowledge of which files constitute evidence (it applies this completely indiscriminately), so I don't think you could argue that it is knowingly spoiling evidence.
The article, written by a legal scholar with a specialty in precisely these issues, directly contradicts this.
Signal coyly threatened to make their app hack Cellebrite machines with the intent of spoiling evidence. It doesn't matter that they aren't targeting specific evidence. Blanket spoiling all Cellebrite evidence would apparently be enough to get them in legal trouble.
Where is this special status for Cellebrite coming from? Just because they're one of the vendors whose software happens to be used by some governments, I'm suddenly forbidden from having an arbitrary sequence of bytes on my device in case someone else happens to connect and run some arbitrary software on it?
I'm having a hard time imagining this being a viable argument. Seems like the vendor should just fix their software if they expect it to work reliably. Anything else would be too large of a transgression on civil freedom.
There’s no special status for Cellebrite: it comes down to intent and, especially, that judges are not computers rigidly executing code. If you do something which is designed to damage equipment used by law enforcement, a judge is going to ask what your intention was, not just throw up their hands and say anyone could have had those random bytes. As a real-world analogy, consider for the sake of argument how having a trap on your home safe might look if you were a) in a very high-crime neighborhood or b) engaged in criminal activities and had written of your desire to harm cops – even if the action is exactly the same (and illegal in your jurisdiction), I’d expect the latter situation to go a lot worse because you’re knowingly targeting law enforcement engaged in (at least from the court’s perspective) legitimate activities.
Since Signal would be deploying that exploit to millions of devices to combat surveillance tech, I would expect that to at least result in a suit even if they were able to defend themselves successfully. It would be especially interesting to see how Cellebrite’s use by various repressive regimes entered into that: an American court might, for example, be sympathetic to a campaign trying to protect dissidents in China which happens to impact an American police agency using the same tool.
There is still legitimate utility to this behavior in defending against actors other than United States law enforcement.
People are looking at Cellebrite wrong because law enforcement uses it. Cellebrite is a set of specialized thieving tools. Those tools can be wielded by anyone. The fact law enforcement has unwisely and blindly integrated it into their toolchain does not mean the device should be given special protection over anything else. All this does is further cement "law enforcement" as a special privileged class in the United States, to whom Constitutional boundaries apparently don't apply (the 5th Amendment, which I hold at this point should realistically cover testimony via electronic-device metadata disclosure/compromise when breaking through encryption is involved, and the 4th Amendment when the Third Party Doctrine is taken into account).
> The fact law enforcement has unwisely and blindly integrated it into their toolchain does not mean the device should be given special protection over anything else.
I'm not arguing that it should have whatever “special protection” you have in mind. This is why I mentioned the concept of intent: just as having lock picks or a gun isn't automatically a crime, I think having an exploit for Cellebrite would depend on why you were developing and installing it.
If you were, say, helping dissidents in another country I would expect a judge to be far more supportive of that than if it came up in the context of a criminal investigation with a lawful search warrant. In the latter case, you are aware of but refusing to comply with the legal system and, regardless of how any of us personally feel about it, that's just not going to end well in most cases.
> I'm not arguing that it should have whatever “special protection” you have in mind. This is why I mentioned the concept of intent: just as having lock picks or a gun isn't automatically a crime, I think having an exploit for Cellebrite would depend on why you were developing and installing it.
In that case, as long as one is not intending to interfere with a search warrant or other legal process, it should be fine for them to deliberately install a Cellebrite hack.
Constitutional boundaries and the 4th Amendment apply. They do need a warrant, but with a warrant they are allowed to go through all the electronic records on your devices just as they are allowed to go through all the written records in your drawers, safes, and envelopes.
Encryption has no special treatment that would cause the 5th Amendment to apply. The 5th Amendment may apply if they ask you to tell them something (e.g. a password), but if they can break your encryption without your assistance, then there's no difference between decrypting a physical letter and an electronic file: if the evidence (that letter or that file) was lawfully obtained, they can do it.
My read was closer to what the article stated at the end -- the issue is that it was written for a tech-geek audience when the real audience should have been judges and lawyers. So being vague and flippant is where they were foolish, not saying something at all. And that they probably should not have implied they were going to break the law, which also seems foolish.
Doesn't mean it isn't net positive, just means the details of how they did it were... maybe not the cleverest. But who knows, one person's opinion, etc.
I think it should have been written for politicians. The big problem isn’t that some American police agencies use these tools, the problem is that it’s legal to make, sell, use, and export them.
And that's not for (American) lawyers and judges to decide, it's for politicians in all democratic countries :)