Signal’s encryption algorithm is fine. The problem is the environment in which it runs: a consumer device connected to the general internet (and it’s hard to believe that someone who does this installs patches promptly). He’s one zero-day or one unwise click away from an adversary getting access to those messages, and potentially being able to send messages as him. Signal’s disappearing-messages feature at least helps with the former risk, but runs afoul of government records laws.
The reason the policies restrict this to government systems isn’t that anyone thinks those systems are magically immune to security bugs, but that entire teams of actually-qualified professionals monitor them and proactively secure them. His phone is at risk from, say, a dodgy SMS/MMS message sent by anyone in the world who can get his number, potentially needing nothing more than a commercial spyware license. His classified computer on a secure network, by contrast, can’t even receive traffic from them, has a locked-down configuration, and is monitored, so a compromise would be detected far faster.
That larger context is what really matters. What they’re doing is like the owner of a bank giving his drunken golf buddy the job of running the business, and the first thing he does is start storing the ledger in his car because it’s more convenient. Even if he’s totally innocent and trying to do a good job, it’s just so much extra risk he’s not prepared to handle for no benefit to anyone else.
At least in the case of the leak the culprit was the UX, no?
Suppose a user wants the following reasonable features (as was the case here):
1. Messages to one's contacts and groups of contacts should be secure and private from outside eavesdroppers, always.
2. Particular groups should only ever contain a specific subset of contacts.
With Signal, the user can easily make the common mistake of attempting to add a contact who is already in the group. But in this case the Signal UI autosuggested a new contact, displaying initials for that new contact that matched the initials of a current group member.
Now the user has unwittingly added another member to the group.
Note in the case of the leak that the contact was a bona fide contact-- it's just that the user didn't want that particular contact in that particular group. IIRC Signal has no way to know which contacts are allowed to join certain groups.
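The missing safeguard described above could be sketched as a per-group allowlist check. This is a hypothetical illustration, not Signal's actual API; all names here are made up:

```python
# Hypothetical sketch: a messenger could enforce a per-group allowlist
# so a group only ever contains a pre-approved subset of contacts,
# rejecting even bona fide contacts who aren't cleared for that group.

class Group:
    def __init__(self, name, allowed_contacts):
        self.name = name
        self.allowed = set(allowed_contacts)  # contacts permitted to join
        self.members = set()

    def add_member(self, contact):
        # Refuse anyone not on the allowlist, even a valid saved contact
        # whose initials happen to match an existing member's.
        if contact not in self.allowed:
            raise PermissionError(
                f"{contact} is not on the allowlist for group {self.name!r}")
        self.members.add(contact)

g = Group("principals", allowed_contacts={"+15550001", "+15550002"})
g.add_member("+15550001")          # permitted
try:
    g.add_member("+15559999")      # a real contact, but not allowed here
except PermissionError as e:
    print(e)
```

With a check like this, the autosuggestion mistake would fail loudly at add time instead of silently expanding the group.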
I don't know much about DoD security. But I'm going to guess no journalist has ever been invited to access a SCIF because they have the same initials as a Defense Dept. employee.
My understanding was that the journalist's phone number was accidentally added to the existing contact of a trusted user through the following process:
1. Journalist emailed trusted user seeking comment on something. This email contained the journalist's cell phone number in the signature block.
2. The trusted user forwarded this email to the fool with Signal.
3. The fool's iPhone suggested adding the journalist's cell phone number to the trusted user's contact.
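The failure mode in those three steps is that the suggestion engine attributes any phone number found in a message body to the message's sender. A minimal sketch of that naive heuristic (all function names and the regex are my own illustrative assumptions, not Apple's implementation):

```python
# Hypothetical sketch of how a naive "suggested contact" feature can
# mis-attribute a number: it scans a forwarded email for phone numbers
# and offers to attach any it finds to the sender's contact card, even
# when the number in the signature belongs to a third party.
import re

PHONE_RE = re.compile(r"\+?\d[\d\-\s]{7,}\d")

def suggest_contact_update(sender, email_body):
    """Return (contact_to_update, suggested_number) pairs."""
    return [(sender, m.group().strip()) for m in PHONE_RE.finditer(email_body)]

forwarded = """FYI, see below.
> Hi, I'd like a comment for a story.
> -- Jane Reporter, +1 555 867 5309"""

# The number belongs to the journalist, but the heuristic attaches it
# to the trusted user who forwarded the email.
print(suggest_contact_update("Trusted User", forwarded))
# → [('Trusted User', '+1 555 867 5309')]
```

A safer heuristic would at minimum distinguish quoted/forwarded text from the sender's own words before suggesting the merge.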
Definitely - that kind of context is critical. Signal, iMessage, etc. are designed to let you securely connect to people you just met and don’t share much with other than a phone number. The DoD has the opposite problem: they have a list of people they trust enough to have access and blocking anyone not on that list is a major feature. Beyond the fact that both are sending messages, these problems are less alike than they seem at first.
>> The problem is the environment in which it runs
Too deep. The problem is the physical environment: the room in which the machine displays the information. Computer and technological security mean nothing if the screen displaying the information is in a room where anyone with a camera can snap a pic at any time.
That’s valid in general, but the specific case being discussed is an official military facility with strict access control, and I would assume it’s regularly checked for bugs.