
The other members of the five eyes had better be careful about what they share with the U.S. while this is going on.

Public key encryption, like Signal uses, offers good security for most purposes. e.g. It's fantastic for credit card transactions. The problem with using it for transmitting state secrets is that you can't rely on it for long-term secrecy. Even if you avoid MITM or other attacks, a message sent via Signal today could be archived in ciphertext and attacked ten years from now with the hardware/algorithms of ten years in the future. Maybe Signal's encryption will remain strong in ten years. Maybe it will be trivial to crack. If the secrets contained in that message are still sensitive ten years from now, you have a problem.

Anything sent with Signal needs to be treated as published with an unknown delay. If you're sharing intelligence with the U.S., you probably shouldn't find that acceptable.






Signal’s encryption algorithm is fine. The problem is the environment in which it runs: a consumer device connected to the general internet (and it’s hard to believe that someone who does this installs patches promptly). He’s one zero-day or unwise click away from an adversary getting access to those messages, and potentially being able to send messages as him. Signal’s disappearing-message feature at least helps with the former risk, but runs afoul of government records laws.

The reason the policies restrict access to government systems isn’t that anyone thinks those systems are magically immune to security bugs; it’s that entire teams of actually-qualified professionals monitor and proactively secure them. His phone is at risk from, say, a dodgy SMS/MMS message sent by anyone in the world who can get his number, potentially needing no more than a commercial spyware license. His classified computer on a secure network, by contrast, can’t even receive traffic from them, has a locked-down configuration, and is monitored, so a compromise would be detected much faster.

That larger context is what really matters. What they’re doing is like the owner of a bank giving his drunken golf buddy the job of running the business, and the first thing he does is start storing the ledger in his car because it’s more convenient. Even if he’s totally innocent and trying to do a good job, it’s just so much extra risk he’s not prepared to handle for no benefit to anyone else.


An obvious issue that I noticed. He sent the exact same message to two different group chats.

I assume he copy-pasted the message on his unsecured device.

How many apps had access to that text in his clipboard?

To me this isn't a technical problem with Signal, it's an opsec problem, and that's quite a lot harder to explain to people.


Yikes. Especially if it's been near a Windows PC. If I have link-to-PC switched on then I have a shared clipboard between phone and PC...

Or if it's an iPhone & Mac pair it probably synced with iCloud.

Surely they don't have iCloud on their devices though...


Signal does have a forward feature which would look the same but I don’t know if he uses it.

> Signal’s encryption algorithm is fine.

At least in the case of the leak the culprit was the UX, no?

Suppose a user wants the following reasonable features (as was the case here):

1. Messages to one's contacts and groups of contacts should be secure and private from outside eavesdroppers, always.

2. Particular groups should only ever contain a specific subset of contacts.

With Signal, the user can easily make the common mistake of attempting to add a contact who is already in the group. But in this case, Signal's UI autosuggested a different contact, displaying initials for that new contact that matched the initials of a current group member.

Now the user has unwittingly added another member to the group.

Note in the case of the leak that the contact was a bona fide contact-- it's just that the user didn't want that particular contact in that particular group. IIRC Signal has no way to know which contacts are allowed to join certain groups.

I don't know much about DoD security. But I'm going to guess no journalist has ever been invited to access a SCIF because they have the same initials as a Defense Dept. employee.


My understanding was that the journalist's phone number was accidentally added to the existing contact of a trusted user through the following process:

1. Journalist emailed trusted user seeking comment on something. This email contained the journalist's cell phone number in the signature block.

2. The trusted user forwarded this email to the fool with Signal.

3. The fool's iPhone suggested adding the journalist's cell phone number to the trusted user's contact.

4. The fool accepted this, perhaps blindly.


You are right. Here's the discussion from a month ago:

https://news.ycombinator.com/item?id=43601213

This iPhone feature exists probably to save a few taps when people text you their new phone number.

But it fails to account for the fact that this is not the only reason phone numbers happen to be in texts.
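For illustration, the kind of pattern-matching such a feature relies on can be sketched in a few lines. The regex and the sample signature below are invented for the sketch; this is not Apple's implementation:

```python
import re

# Hypothetical "phone number spotted in a message" detector. The pattern
# and the sample signature are made up purely for illustration.
PHONE_RE = re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

signature = """\
Jane Doe
National Security Reporter
Cell: (202) 555-0173
"""

matches = PHONE_RE.findall(signature)
print(matches)
```

The detector fires on anything shaped like a phone number; it has no idea whose number it is or why it appears in the text, which is exactly the failure mode described above.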


Definitely - that kind of context is critical. Signal, iMessage, etc. are designed to let you securely connect to people you just met and don’t share much with other than a phone number. The DoD has the opposite problem: they have a list of people they trust enough to have access and blocking anyone not on that list is a major feature. Beyond the fact that both are sending messages, these problems are less alike than they seem at first.
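A minimal sketch of that difference in trust models, with an invented roster and API (neither Signal nor the DoD exposes anything like this at the code level):

```python
from typing import Optional

# Hypothetical contrast between the two models discussed above.
# Open model: any valid contact can be added to a group.
# Allowlist model: membership is checked against a pre-approved roster.
APPROVED_ROSTER = {"+15550001", "+15550002"}  # invented "cleared" numbers

def add_to_group(group: set, number: str, allowlist: Optional[set] = None) -> bool:
    """Add `number` to `group`; with an allowlist, reject anyone not on it."""
    if allowlist is not None and number not in allowlist:
        return False  # blocked: not on the cleared list
    group.add(number)
    return True

open_group, closed_group = set(), set()
journalist = "+12025550173"  # a perfectly valid contact, just not cleared

# Open (Signal-style) group: any contact can be added.
assert add_to_group(open_group, journalist) is True
# Allowlist (DoD-style) group: the same contact is rejected.
assert add_to_group(closed_group, journalist, allowlist=APPROVED_ROSTER) is False
```

The point of the sketch: in the first model the mistake in this story is one tap away, while in the second it is structurally impossible.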

Well apparently at this point it’s his phone, his wife’s phone, his lawyer’s phone, maybe anyone in the inner circle of people he cares about.

>> The problem is the environment in which it runs

Too deep. The problem is the physical environment: the room in which the machine displays the information. Computer and technological security mean nothing if the information is displayed on a screen in a room where anyone with a camera can snap a pic at any time.


That’s valid in general, but the specific case being discussed is an official military facility with strict access control, and I would assume it’s regularly swept for bugs.

well you can always retrieve the messages from the Keylogger, eh, swiftkeyAIsendsItToCloud plugin

What other type of encryption would you use for state secrets? You seem to be implying that governments and three-letter agencies use some vastly superior cryptographic scheme, whereas AFAIK Signal is as close to the state of the art as it gets.

Also, to be clear, Signal doesn't use public-key cryptography in the naive way (i.e. to encrypt/decrypt messages) as was/is possible with RSA. It uses asymmetric key pairs to first do a Diffie-Hellman key exchange, i.e. generate ephemeral symmetric keys, which are then used for encryption/decryption. This then also guarantees forward secrecy, see https://signal.org/blog/asynchronous-security/ . (Add to that they incorporate an additional post-quantum cryptographic scheme these days, and I'm probably omitting a lot of other details.)
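As a toy illustration of that flow, asymmetric key pairs used only to agree on a shared symmetric key, here is a classic finite-field Diffie-Hellman exchange over the 768-bit Oakley Group 1 prime from RFC 2409. Signal actually uses X25519 plus the double ratchet, and this sketch omits authentication entirely:

```python
import hashlib
import secrets

# 768-bit MODP prime from RFC 2409 (Oakley Group 1); generator is 2.
# A toy parameter choice: real deployments use larger groups or elliptic curves.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF",
    16,
)
G = 2

# Each side keeps a private exponent and publishes g^x mod p.
a = secrets.randbelow(P - 2) + 2   # Alice's private key
b = secrets.randbelow(P - 2) + 2   # Bob's private key
A = pow(G, a, P)                   # Alice's public key
B = pow(G, b, P)                   # Bob's public key

# Both sides compute the same shared secret without ever sending it.
alice_shared = pow(B, a, P)
bob_shared = pow(A, b, P)
assert alice_shared == bob_shared

# Hash the shared secret down to a symmetric key for the actual messages.
key = hashlib.sha256(alice_shared.to_bytes(96, "big")).digest()
```

Note that an eavesdropper who archives `A` and `B` today gets the key for free later if the discrete log ever becomes computable, which is the store-now-decrypt-later concern raised upthread.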


> Signal is as close to the state of the art as it gets.

For their use case, which requires communication between two (or more) arbitrary users who never communicated before among millions of users, running on cheap commodity hardware over wireless connectivity to the internet.

Leaving encryption aside, looking only at the network level, the DoD is capable of using a dedicated fiber line. Or rather a parallel fiber infrastructure.


Aside from that, there's base level authentication that it is Hegseth.

Is this device using DoD PKI [0]?

If not, then how is DoD managing access to it? Or is there a post-it with a local password stuck to it?

[0] https://en.m.wikipedia.org/wiki/Common_Access_Card#Integrate...


The issue isn't the encryption. It's the unsecure device it's running on. Nobody has to waste time cracking Signal if they have backdoored one of the computers at the endpoints. The US government categorically doesn't use unapproved hardware for secure communications. This is something the Secretary of Defense is supposed to know about.

I agree. Yet for some reason the top comment on HN is criticizing Signal's encryption.

> You seem to be implying that governments and three-letter agencies use some vastly superior cryptographic scheme

About a month ago there was a discussion here saying Signal is preinstalled and widely used at the CIA.

https://news.ycombinator.com/item?id=43478091

It's also recommended by the government's cybersecurity agency CISA.

https://www.cisa.gov/sites/default/files/2024-12/guidance-mo...


Poking around, it seems like pre-shared keys are used for the secure stuff, so no public keys, no RSA. It isn't that Signal isn't state of the art; it just makes compromises for usability.

Edit: I didn't state something I perhaps should have. Symmetric-key crypto is considered more secure because public-key crypto is more complicated, so there's more room for side-channel mistakes, and the computation needed to break public keys doesn't scale as fast with key size. I am not an expert, but that is what I've read.


> What other type of encryption would you use for state secrets?

Maybe it’s the servers that are the problem.


How are the servers a problem in an end-to-end encrypted scheme?

The server could be recording the traffic for later analysis, and the contents revealed if the encryption is defeated.

The encryption probably won't be broken to the point where it is practical to decrypt traffic in bulk, but it's a valid thing to look at.


But that's not what would make Signal "weaker" than other solutions. It's a myth that there is some kind of "military grade" encryption and that Signal only reaches some kind of "amateur level" that's inferior to that. Anybody with any kind of crypto background knows this very well. (With "crypto" in the traditional pre-bitcoin sense.)

That the encrypted traffic is available to third parties is exactly what makes Signal weaker than the other solutions the government uses.

Note that this is what we are discussing in the messages above, not the "strength" of Signal's encryption. I get that it is a largely hypothetical risk, but it's a real difference.


I believe much of the secret government communications are accomplished using layered secret encryption algorithms. Many of these are symmetric and have physical key loading accomplished by a guy with a gun.

Store now decrypt later still defeats diffie hellman if you capture the handshake. And quantum computers break diffie hellman as easily as RSA.

Not sure why you're getting downvoted; I do think you're bringing up a valid point against my original comment: DH is susceptible to Shor's algorithm too. That said, the question is how long it will take to break a single DH key once we have adequate quantum computers. If it's on the order of, say, a couple of months to a year, a ratchet algorithm will still protect privacy in the grand scheme of things, since it won't be feasible to decrypt more than a few select messages per computer per year. Sure, quantum computers might improve and get cheaper, but on what timescale? It's not unlikely that that will take many years, and by then no one may care about your private messages of today anymore, and we may have established a new set of cryptographic schemes that are quantum-resistant.
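The ratchet idea can be sketched as a toy symmetric hash chain. The real double ratchet also mixes in fresh Diffie-Hellman outputs, and the labels and initial secret below are made up for illustration:

```python
import hashlib

# Toy hash ratchet illustrating forward secrecy. Signal's actual design
# uses HMAC-based KDFs and interleaves DH ratchet steps; this only shows
# the symmetric chain.
def kdf(chain_key: bytes, label: bytes) -> bytes:
    return hashlib.sha256(chain_key + label).digest()

chain = hashlib.sha256(b"toy initial shared secret").digest()
message_keys = []
for _ in range(3):
    message_keys.append(kdf(chain, b"msg"))  # per-message encryption key
    chain = kdf(chain, b"step")              # ratchet forward; old state is discarded

# Each message is encrypted under a distinct key, and because SHA-256 is
# one-way, stealing the current `chain` does not let an attacker run the
# ratchet backwards to recover earlier message keys.
assert len(set(message_keys)) == 3
```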

Quantum computers don't exist. If you want to talk about a hypothetical machine which might exist in the future you should state that plainly.

Forcing the reader to parse thru the literary devices in order to get to the argument weakens the argument.


Not them but you are replying on a thread talking about how it isn't safe in the longer future. That context was already built.

Quantum computers absolutely exist and are commercially available. They're just not very useful at the moment.

It gets exponentially more difficult to add more qubits, so it's not a given that we will be able to build one large enough to be a real threat to modern cryptography.

“Quantum computers that break diffie hellman as easily as RSA”, where “easily” means “not at all”, do exist.

came here to say similar. GGP is another great example of hn people jumping in to make comments without having even a basic understanding of what they're talking about. Frustrating as it spreads misinfo about security which is the last thing we need.

You're in a comment section where people are flipping out that there exists a computer on his desk that isn't connected to any DoD network but is connected to the public internet.

Approximately 30,000 people go to work in the Pentagon every day. There are areas in the building that are SCIFs and they don't allow cell phones and laptops. But the majority of the building is an office building used for office building type stuff. Employees and contractors bring their personal cellphones and mobile devices in there every day.


Are they using those devices to discuss upcoming military operations?

Even if Signal's encryption implementation is secure, the device on which it is running probably doesn't satisfy TEMPEST requirements. Most consumer crypto is vulnerable in some way to a side-channel attack.

None of that matters if Signal is running on what is effectively a personal device connected to the internet. That device is now the weak link and is what intelligence agencies in many countries are now probably trying to get into.

Pegasus all the way down as an example.

Exactly. And Pegasus is what we know about. I'm sure there's plenty we don't know about that's used for more high profile targets, like former Fox News hosts.

It has almost certainly already been breached.

> Anything sent with Signal needs to be treated as published with an unknown delay.

Oddly they have thought of that already, to the point all encryption systems in use in the gov are thought of in these terms.

All that matters are the different assumed times to publication (weeks to years), and then treating the strength of measures involved differently based on what is reasonable for the given use.

If you absolutely need something to never be published then encryption isn't the solution, and nor are computers generally.


It's the entire mandate of the NSA's Utah Data Center. Archive all the world's encrypted data until such a time as it can be decrypted when either the algorithms have been cracked or machines are powerful enough to brute-force.

https://en.wikipedia.org/wiki/Utah_Data_Center


More like until they get the keys

Or maybe they found a way to outsource brute forcing the keys.

128 bits will never be bruteforced. There's nothing to outsource. The only actual risk is that the encryption algorithm is cracked.
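The arithmetic behind that claim is easy to check:

```python
# Back-of-the-envelope arithmetic for brute-forcing a 128-bit key. Grant
# the attacker absurdly generous resources: a billion machines, each
# testing a trillion keys per second (far beyond anything that exists).
keyspace = 2 ** 128
guesses_per_second = 10 ** 9 * 10 ** 12   # 1e21 keys/sec in total
seconds_per_year = 60 * 60 * 24 * 365

# Expected time to hit the key is half the keyspace.
years = keyspace / (2 * guesses_per_second * seconds_per_year)
print(f"{years:.2e} years")  # roughly 5e9 years, on the order of the Sun's age
```

Which is why the realistic threats are cracked algorithms, stolen keys, and compromised endpoints, not exhaustive search.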

> 128 bits will never be bruteforced

Famous last words


Concur. This is part of why Suite A ciphers (algorithms) exist, and why the second component, robust key-management practices, is so important (this includes hardening of devices to prevent leakage of signals that could compromise those keys or the cryptographic processes themselves).

https://en.wikipedia.org/wiki/NSA_Suite_A_Cryptography


I'd give different advice.

You shouldn't share state secrets with the US. They will be stored on, or transferred between, misconfigured cloud accounts. Some agency will eventually get authorization to analyze them with an eye toward financial espionage. The probable or confirmed loss of them will serve as plausible deniability for the US when it misuses them.


Isn't that true for basically everything, though? I'm not familiar with what other encrypted messaging systems security agencies use, but either (1) they store ciphertexts that can in theory be attacked later or (2) they delete their data after some time, but Signal has that option as well.

Obviously using signal here is a terrible opsec failure, I'm just not sure how what you are saying changes anything


I worked at a videoconferencing hardware/software company. We provided systems to US government offices like the NSA and State Department, including an input for which they gave us hardware specs but told us nothing else; the customer did the final testing to make sure it worked as specified. We assumed it was for some sort of encryption method, of which they revealed as little as possible; the hardware engineers who tested it saw only a large, portable black box. Otherwise our system used the standard encoding/decoding methods of the day in the 1990s.

The most secure method of communication is a one-time pad, a pre-shared private key.

"A one-time pad (OTP) is considered theoretically the most secure method of communication — when it’s implemented correctly. That means: 1. The key (pad) is truly random. 2. The key is at least as long as the message. 3. The key is used only once. 4. The key is securely shared in advance and kept completely secret.

When all these conditions are met, a one-time pad provides perfect secrecy — an eavesdropper cannot learn anything about the message, even with infinite computing power."
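The scheme is simple enough to sketch completely (toy message and stdlib randomness here; a real pad would come from a hardware RNG, be shared out of band, and be destroyed after use):

```python
import secrets

def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    assert len(pad) >= len(message), "pad must be at least as long as the message"
    return bytes(m ^ k for m, k in zip(message, pad))

otp_decrypt = otp_encrypt  # XOR is its own inverse

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))  # random, as long as the message, used once

ciphertext = otp_encrypt(message, pad)
assert otp_decrypt(ciphertext, pad) == message
```

Reusing a pad breaks it completely: XOR of two ciphertexts under the same pad equals XOR of the two plaintexts, with the key cancelled out, which is why condition 3 above is non-negotiable.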


And you need to do that on paper, not on a consumer device.


There are significantly fewer concerns about symmetric encryption, and while it doesn't scale to the size or budget of a service like Signal, it's exactly the type of thing the military is good at:

Distribute a bunch of physical artifacts (smartcards) across the globe; guard a central facility (a symmetric key exchange center) extremely well etc.

The military can also afford to run its (encrypted or plaintext) communications over infrastructure it fully controls. The same isn't true for a service provided out of public clouds, on the public Internet.


Signal's crypto is quite good. The problem with it is that it has zero authorization functionality, otherwise the government could use something like Signal internally. The lack of military-grade IM solutions is a problem.

The encryption is completely irrelevant if the information is sent directly to 3rd parties.

> Even if you avoid MITM or other attacks, a message sent via Signal today [...]

That's not the threat model. The threat model is that Signal is a tiny LLC making an app on behalf of a foundation and open source software project. It's a small group of human beings.

Small groups of human beings can be coerced or exploited by state-level actors in lots of ways that can't feasibly be prevented. I mean, if someone walks up to you and offers $2M (or blackmails you with whatever they found in your OneDrive, etc...[1]) to look the other way while you hand them your laptop, are you really going to say no to that keylogger? Would everyone?

At scale, there are auditing techniques to address this. The admins at e.g. github are (hopefully) not as vulnerable. But Signal is too small.

[1] Edit: Or, because let's be honest that's the scale we're playing at: straight up threatens to Novichok you or your family.


There’s a million threats. These are not particularly bright people. They are busy and not aware of or concerned with much beyond limiting their own accountability for when they inevitably get burned by their bosses.

You and I know that. So do the adversaries. The biggest issue for them is going to be not tripping over the intelligence collecting agencies (or corps) already on their devices.


Five eyes have been 'careful' about what they share since they got burned during the first trump presidency.

They've been careful since before Pearl Harbor.

> The other members of the five eyes had better be careful about what they share with the U.S. while this is going on.

Right, but this is nothing new: Hegseth is only the most recent example of Trump's camp mishandling sensitive docs; I'll bet there's been an inner, secret Four Eyes group since the Mar-a-Lago bathroom official-document-archive story dropped years ago.

What surprises me is that I expected Tulsi Gabbard to be the centre of mishandling allegations, not SecDef.


Tulsi is a competent mole, she knows better than to be this obvious.

It would be gauche to attack Tulsi Gabbard, because you would have to start with her connections to Russia's Assadist interests and to RT & the Russian web brigades, and we have established (we voted on it) that any connection to Russia is Old News and Not A Big Deal and Hillary's Dirty Tricks. But Hegseth? Hegseth leaked something to The Atlantic. A far greater threat than Russia.

The circus has only been in town for a few months. This thing is, as scandals go, an almost comically dumb scenario - even by Trump whack pack standards.

Tulsi is by all appearances more experienced in operating under the radar. That said, I’m sure she won’t disappoint.


Maybe Tulsi is just staying out of the spotlight while Hegseth was hired to be in the spotlight.

Even before this, Ukraine learned painfully that it shouldn't share every plan and every detail with the US. It kind of looks like a sad, self-fulfilling prophecy. Ukraine makes a plan, some details get leaked stateside, and the plan goes disastrously. Ukraine plans another operation, doesn't say anything, and the plan goes off OK. The US feels betrayed, Ukraine looks like an ungrateful ally abusing trust, and the relationship is strained. The election happens and Trump points at how they're a bad partner, yadda yadda. Ukraine is blamed for the outcome of what is originally an American problem: the US leaking like a sieve.

This is silly, many countries use consumer messaging for internal communications. The UK government famously uses whatsapp for example.

Signal has been used widely in US intelligence for many, many years. Nothing about this is new, though perhaps people that never paid attention are just now becoming aware of it. As for the rest of Five Eyes, they use WhatsApp the same way. I’m not sure that WhatsApp would be considered an improvement.

It is clear there is a gap between how people imagine this works, or should work in theory, and how it actually works.


> Signal has been used widely in US intelligence for many, many years.

For lunch orders and office softball schedules. Not top secret information.


This is a factually incorrect and very naive take. The same topic has been in the news in European countries too about the widespread use of WhatsApp when discussing secret information. It isn’t just the US government, everyone is doing it.

Do you have any sources? Because I don't see any information about secret information normally going through Signal or WhatsApp.

They're paying attention to Signal now because Hegseth doesn't know his ass from his elbow when it comes to tech and secrecy, instead acting like someone who has watched too many action films and thinks those are just like real life. The problem is not Signal. The problem is incompetence, plain and simple. Because he blindly added persons to the group who probably didn't belong there, we now have the infamous "we have OPSEC" line. But instead of questioning why this idiot still has a job anywhere near the intelligence agencies, we're wasting our breath scrutinizing what is easily one of the best options for secure comms, if the user understands how it works.

> why this idiot still has a job anywhere near the intelligence agencies

Because competence is a disqualifying attribute in the kakistocracy known as the Republican Party.



