These are problems other tools have solved, without having to resort to a walled garden network or having a SPOF.
Sure, maybe Signal has secured some useful (if technicality-based) -legal- protections for now for US citizens, but what happens when a state actor threatens to kill the family of a Signal employee if they don't ship a very subtle compromise in how their binaries source random numbers, or if they don't sell the metadata of who is talking to whom?
Signal is not anonymous, so that metadata alone could have real value. It is, at the end of the day, using phone numbers as identifiers. Sure, they do SGX remote attestation, but that has been demonstrated broken multiple times and won't stand up to a motivated physical attacker. Even if it -is- solid now, I would not underestimate how far a state actor will go (as demonstrated by the NSA wiretaps on Google datacenters). Can they compel the right Intel employee to CA-certify a manipulated enclave? Can they just get handed the key?
Also why would people outside the US trust the legal protections afforded to a US company to protect US citizens?
Their refusal to federate their network just creates a Lavabit-sized target... and I have yet to hear any technical reasons for doing so, particularly when, again, other projects have demonstrated end-to-end encryption and decentralization are not as mutually exclusive as Moxie claims.
The idea that only Signal can do this right, and only if they keep it centralized on their servers, with them being the only people that can sign the binaries... is pure hubris imo.
There are a lot of alternatives we should all be carefully considering for the next -standard- for ubiquitous secure messaging.
You asked "what valuable thing has Signal done" and offered to track them, and I responded with two examples. "What if someone threatens a Signal employee" is a moved goalpost. Who else has solved private contact discovery? Conversely: who else has solved Mossad as a threat model? I have repeatedly pointed out the Signal subpoena elsewhere, which is responsive to a number of your comments.
If your threat model includes Mossad, you're going to get Mossaded upon. You say "thinking only Signal can do this is hubris", but in the same breath suggest we're just a carefully considered standard away from being safe against Mossad threatening someone's kids. That's hubris.
I will look closer into private contact discovery across messengers, sorry for rushing over that point.
I did try to respond to the subpoena comment, in that it is only of limited value. With Signal's setup a blackhat will probably have an easier time getting to the servers than a lawyer will, and I do credit them with providing substantially better assurances than, say, WhatsApp... but still not good enough for my particular threat profile.
Personally I don't like using permanent non-anonymous identifiers like phone numbers, so I would not use the feature as they have implemented it, but that doesn't mean it does not have some value for some use cases worth exploring.
That is at least something that can be looked at objectively in the scope of the spreadsheet.
> but what happens when a state actor threatens to kill the family [...]
No messenger system protects you against that. You seem to be going through the full sequence of well-known poor ways to evaluate the security of something like an instant messenger, starting with the feature matrix, going through 'it can't be secure if it's not open source/self-hosted/federated' and reaching the Mossad. Which is a worthwhile and educational exercise but it's still not a good way to evaluate the security of something like an instant messenger.
If the system doesn't have a central server then it becomes more difficult. You cannot subpoena a Tox network.
Any messenger that requires and stores a phone number (read: your real-world identity and physical ___location) is neither anonymous nor private.
Also, a centralized messenger with a single server means that all traffic between all the users around the world goes through one data-center in one country that can do what? Observe the traffic and detect who is using this messenger, at least their IP address and country.
Of course, for most users this doesn't change anything because they don't do anything illegal. But if they don't do anything illegal then they don't really need protection and can use VK or Telegram.
> Also, a centralized messenger with a single server means that all traffic between all the users around the world goes through one data-center in one country that can do what? Observe the traffic and detect who is using this messenger, at least their IP address and country.
I mean, yes, a single server would have that property. But what system are we discussing that has a single server in a single data center?
Furthermore: if you actually care about metadata hiding it's a lot more complicated than "we have more than one person operating the servers".
> Of course, for most users this doesn't change anything because they don't do anything illegal. But if they don't do anything illegal then they don't really need protection and can use VK or Telegram.
Privacy is not just for people who do illegal things.
So that whole "it's all flowing though a single DC" thing for Signal - that has a name in this sphere. It's called "Don't Stand Out".
When five people call the known mob boss you follow all of them. Maybe one is just a friend from high school. Another is the mob accountant, another an enforcer, you're getting leads. But what if it's five thousand people - now you can't follow them all, it's overwhelming.
Knowing five people in your country use Signal puts them all on the watchlist. If it's five million that's pointless. Without "Don't stand out" the encryption used just makes you a target.
Barry, who uses Barry's very own private self-hosted server for a popular federated system, Stands Out. Message from Japan to Barry's system? That's for Barry. How do we know? Well Barry's the only one on that server, easy. No cryptography can fix this.
> If the system doesn't have a central server then it becomes more difficult. You cannot subpoena a Tox network.
The threat that was brought up was an actor with state-level resources, coercive capability and lack of scruples - the specific example being the threat of murdering someone's family, not subpoenas.
Journalists covering sensitive topics in sensitive areas -must- care about these questions.
If you are using something anonymous, fully end to end encrypted with open source reproducible verified builds on decentralized servers, you can greatly limit the risk of having a central third party that can be compelled to act against your interests.
Maybe in the US we don't think we need those sorts of protections, but we should consider the worst cases when designing security systems, and ensure no one single compromised person has the ability to backdoor thousands or millions of people.
Not everyone cares about this sort of thing though, and there are 70+ other options listed with various tradeoffs.
None of this makes any sense. This reads like an argument from a parallel universe where the only sane option for end-users is Linux on the Desktop.
In fact: a huge portion of commercial vulnerability research and assessment work is done on binaries of closed-source products, and it has never been easier to do that kind of work (we're coming up on a decade into the era of IR lifters). Meanwhile, the types of vulnerabilities we're discussing here --- cryptographic exfiltration --- are difficult to identify in source, and source inspection has a terrible track record of finding them.
There's no expertise evident in the strident arguments you've made all over this thread (network security work is not the same as shrink-wrap software security work, which is in turn not the same as cryptographic security work --- these are all well-defined specialties) and it concerns me that people might be taking these arguments seriously. No. This isn't how things actually work; it's just how advocates on message boards want them to.
You can of course do network analysis and say "this looks TLS encrypted with X algo" then dive into those packets and then say "this looks end to end encrypted with X algo".
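As a minimal sketch of the kind of surface-level traffic check I mean (assuming a capture file named capture.pcap and using scapy; the handshake heuristic is illustrative only, not tied to any particular messenger):

    from scapy.all import rdpcap, TCP, Raw

    # Rough pass over a capture to flag flows that look like TLS handshakes.
    for pkt in rdpcap("capture.pcap"):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            # TLS records start with content type 0x16 (handshake) and a 0x03 major version byte.
            if len(payload) > 2 and payload[0] == 0x16 and payload[1] == 0x03:
                print(f"Likely TLS handshake: {pkt[TCP].sport} -> {pkt[TCP].dport}")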
You can also look into a binary application and study jumps, where the private key gets loaded into memory etc etc.
I am with you on this, there is tons of low hanging fruit you can do on a binary application, and I do a lot of this sort of thing myself.
Making sure an application at a high level seems to do what it says on the tin is a great phase one audit of something as a baseline, and should be done even on open source projects.
Trouble with closed source tools is you either have to do this over and over again on every new binary released, -or- you can look at the diffs from one release to the next and see if any of those critical codepaths have changed.
You can also -much- more easily evaluate bad entropy in key generation if you have the source code. People think they are clever all the time, using silly approaches along the lines of random.seed(time.now()+"somestring"), and it is much less time consuming to spot those types of flaws if you have the code.
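As an illustrative sketch of that pattern (not taken from any real messenger) and what auditors expect to see instead:

    import random, time, secrets

    # BAD: seeding a non-cryptographic PRNG with guessable inputs.
    # Anyone who can narrow down the timestamp can brute-force the seed
    # and regenerate the "key".
    random.seed(str(time.time()) + "somestring")
    weak_key = random.getrandbits(256).to_bytes(32, "big")

    # BETTER: pull key material from the platform CSPRNG.
    strong_key = secrets.token_bytes(32)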
This is why I argue you need both whitebox and blackbox testing when it comes to mission critical security tools, and a clear public auditable log of changes made to a codebase so review can continually happen, instead of only at specific checkpoints.
Again, some people may not care about that level of rigor and just want to share memes with their friends, rather than being journalists communicating in the field. I tried to be pretty comprehensive with the list, and welcome recommendations for new objective criteria to include.
It is currently roughly sorted by security. Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use.
No, I don't think you're following me. You don't tcpdump and look at the packets (that would be silly; even horribly encrypted messages will look fine in a pcap). You don't even "study jumps, where private keys get loaded into memory". You take the binary and lift it to an IR and audit that.
People who do a lot of binary reversing vuln research were doing a version of this long before we had modern tooling; you don't look at each load and store, but rather the control flow graph. People were auditing C programs, by manual inspection, in IDA Pro before there even was a market for secure messaging applications.
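For anyone unfamiliar with that workflow, a minimal sketch using angr (one real lifter framework; the binary name "messenger_client" is just a placeholder):

    import angr

    # Load the target binary without pulling in system libraries.
    proj = angr.Project("messenger_client", auto_load_libs=False)

    # Recover the control-flow graph instead of eyeballing individual loads/stores.
    cfg = proj.analyses.CFGFast()
    print(f"Recovered {len(cfg.functions)} functions")

    # Lift the entry-point basic block to VEX IR and pretty-print it for auditing.
    proj.factory.block(proj.entry).vex.pp()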
And that's just manual static analysis! Professional software assessors do a lot more than that; for instance, Trail of Bits has been talking about and even open-sourcing symbolic and concolic evaluation tools for years, and, obviously, people have been iterating and refining fuzzers and fault injectors for at least a decade before that.
This isn't "low hanging fruit" analysis where we find out, like, the messenger application is logging secret keys to a globally accessible log file, or storing task-switcher screenshots with message content. This is verification of program logic --- and, I'd argue even more importantly, of cryptographic protocol security.
"Bad entropy in key generation" is a funny example. Straightforward binary analysis will tell you what the "entropy source" is. If it's not the platform secure random number generator, you already have a vulnerability.
The problem with your chart is that there is no real rigor to it. It's just a list of bullets. It's obviously catnip to nerds. We love things like this: break a bunch of things into a grid, with columns for all sorts of make-me-look-smart check-marks! But, for instance, there's no way to tell in this chart whether a messenger "audit" was done by JP Aumasson for Kudelski, or some random Python programmer who tcpdumps the application and looks for local-XSS.
You can't even neatly break these audits down by "did they do crypto or not". The Electron messengers got bit a few months ago because they'd been audited by non-Electron specialists --- that's right, your app needed to be audited by someone who actually understood Electron internals, or it almost might as well not have been audited.
Given that: what's the point of this spreadsheet? It makes superficial details look important and super-important details look like they don't exist. You didn't even capture forward secrecy.
People who pay a lot of attention to cryptographic software security took a hard bite on this problem 5 years ago, with the EFF Secure Messaging Scorecard. EFF almost did a second version of it (I saw a draft). They gave up, because their experts convinced them it was a bad idea. I see LVH here trying to tell you the same thing, in detail, and I see you shrugging him off and at one point even accusing him of commenting in bad faith. Why should people trust your analysis?
Obviously I was just attempting to make a simple and easy to reason about example here, but yes of course there are a wide array of more practical approaches and tooling today depending on the type of binary and platform we are talking about.
I also fully agree you need to get people that specialize in the gotchas of a given framework to be able to spot the vuln. In fact that kind of bolsters my point, that you need the -right- eyeballs on something to spot the vuln, and this is much easier if something is open source vs only shown to a few select people without the right background knowledge to see a flaw.
The point I was trying to make is that you need both continual whitebox and checkpointed blackbox testing, and if a tool is not open source then half of that story is gone, unless you are Google-sized and can really pay dedicated red teams to continually do both for every release... and even they sometimes miss major things in Chromium/Android that third parties have to point out!
Being open source client and server does -not- imply something has a sane implementation (and to your point things that rank well on paper like Tox have notable crypto design flaws).
This is more so you can quickly evaluate a few tools that contain the highest-level criteria you care about for your threat profile; then you can drill down into those options with less objective research into the team, funding, published research, personal code auditing, etc, to make a more informed choice.
For me as an AOSP user, something without F-Droid reproducible build support is a non-starter. Anything closed source, where I can't personally audit the critical codepaths and know that a lot of the security researchers I communicate with can do the same, is also a non-starter. Lastly, I want strong confidence that I can control the metadata end to end when it counts, so being able to self-host, or being decentralized entirely, is also an important consideration.
Many people have reviewed this list and told me they have learned about a lot of options they didn't even know existed, or didn't know were closed or open source. That is the point. Discoverability.
It would be unreasonable and even irresponsible for someone to fully choose a tool based -only- on this list imo. If that was not clear elsewhere then hopefully it is clear now.
Have you used any of these tools? Have you done a binary reversing engagement? Have you found a vulnerability in closed-source software? Can you tell us more about the tools you used to do this work? What kinds of vulnerabilities did you find? You have very strong opinions about what's possible with open- vs. closed- source applications. I think it's reasonable to ask where the frontiers of your experience are.
It's clear to me from all your comments that you are somebody who wants to sysadmin their phone. That's fine; phone sysadmin is a legitimate lifestyle choice. But it has never been clear to me what any of the phone sysadmins are talking about when they demand that the rest of us, including billions of users who will never interact with a shell prompt, should care about their proclivities. The argument always seems to devolve to "well, if we can't sysadmin this application onto our phones, you can't be sure they're secure". But that's, respectfully, total horseshit, and only a few of the many reasons that it's horseshit appear among the many informed criticisms of your spreadsheet on this thread.
You might want to put your warning about how unreasonable and irresponsible this tool is on the actual spreadsheet, instead of just advocating that it be enshrined in Wikipedia.
On the topic of phone sysadmin, I agree it's unreasonable to expect a large proportion of people to sysadmin their phones directly. But what if we advocated for a world where we tech-savvy folk volunteer to be sysadmins for our family and close friends? They would trust our motives more than they can trust the motives of platform and device vendors.
My point is not that this isn't of interest or important (especially to people who routinely handle sensitive information) but that your methodology is poor and it's poor in ways that have been reasonably comprehensively examined. As I said, it's still a worthwhile exercise and there's no law of physics that says you have to agree with existing consensus-y thought but your effort would be more serious if you're familiar with the arguments. You, on the other hand, just reinvented a threat model that has its very own funny paper.
> Not everyone cares about this sort of thing though, and there are 70+ other options listed with various tradeoffs.
This is precisely pvg's point. The problem with your methodology is a systemic one that emerges in every crowdsourced threat modeling exercise. You've enumerated every possible security attribute and security feature of every software in a specific category, then tossed them all into a matrix of boolean values. But that does not result in a threat model users can competently assess, for several reasons:
1. You're treating all features as equal - if not in intention, then at least in presentation. Even if you don't intend it, the sea of green at the top is a loud proclamation of safety; likewise the sea of red at the bottom is a siren of insecurity.
2. You're not allowing any nuance in assessing features or attributes. Boolean flags cannot capture all the nuance inherent in cryptographic security. Which specific party was responsible for an assessment? What are their credentials? What did they find?
3. You're including features which most users don't and shouldn't care about just because some minority might. Moreover you're not being opinionated enough, which is something that comes with expertise - for many of the "features" you listed, the minority that cares probably shouldn't if they only care because of a vague notion of security.
4. You're leaving out important features which should absolutely be considered for security. Where is forward secrecy? Where is authenticated encryption? Where is consideration of specific algorithms or primitives? Where is nonce misuse resistance?
5. Most importantly: you do not have any explicitly called out methodology that allows someone to audit what you've done. If you begin by pre-supposing that a given feature is worthy of inclusion because it's an important security metric, your conclusion is just going to end up magnifying that bias. Therefore it's paramount that you call out methodologies explicitly and early.
We see this time and time again in security. People try to first principles the security of an entire category of software by being exhaustive about every type of threat and security feature they can think of. But they invariably leave out important threats/features, underestimate the importance of some and overestimate the importance of others. Exercises which attempt to give users the world almost always end up "empowering" them to boil the ocean. People looking at this spreadsheet are approximately all unqualified to make an informed decision based on a critical assessment of all those features, which means they're likely to just go with the most green option (or worse, proselytize the most green option as the most secure one to their friends and coworkers).
> Also why would people outside the US trust the legal protections afforded to a US company to protect US citizens?
Oh, hello there. The short answer is, of course we don't.
It's a very obvious attack surface that Signal and Whatsapp could avoid if they wanted to. For Whatsapp, being tied to Facebook, I kind of get why they don't (it's mainly that I don't expect them to be better).
But for Signal there aren't any good reasons. I've read various threads (on GitHub and HN?) with Moxie being asked questions about this, and I've not heard reasons that satisfied me. On occasion he was even evasive. Now, when the reasons given aren't good enough to explain taking such a major security-affecting decision when there is an obvious better alternative, and Signal seems to be very meticulous about doing the right thing in almost every other area of the protocol and the systems around it, then there MUST be another motivation behind the decision that is not stated openly. Maybe it's just something benign left unsaid, who knows?
But even then, the only reasons I can imagine for choosing to become this huge a target are reasons that are good for Signal/Whisper Systems but just add pointless risk for its users.