
Why would that be a good thing? You'd still be left with a system that leaks immense amounts of metadata (which is often all government adversaries really care about), where the majority of the installed base isn't encrypted and you have to interoperate with it, and where the vast majority of users get access to messages through a web browser and so are hamstrung on secure delivery of the encryption in the first place.

Just stop using email to deliver secrets.




>Just stop using email to deliver secrets.

At the least, I'm not sure this is legally feasible under ERISA electronic delivery requirements, to name just one amongst a host of other often incredibly complex regulations. It might be ok under the UETA, but either way that wouldn't be my call.

I am 100% sure that it wouldn't be ok with coworkers, customers, and in turn bosses, so demanding that we cease all usage of email would get me fired. I mean, even the fact that email is entirely plain text isn't sufficient to get anyone to stop using it to deliver secrets. It's been a real improvement just to have the MUA<>server connection be encrypted, and even that is probably still not universal!


I think the GP's desire is not email per se, but rather an open and federated protocol like email so we aren't dependent on a single commercial vendor. I'm not really sure how practical that is since it's really hard to get broad adoption of a new protocol on today's internet.


Sure, email leaks meta-data.

But at its core, the promise of pgp/gpg is the promise of encrypted and authenticated file transfer.

The file might come from ftp, samba/cifs, Dropbox/Google Drive/etc, from a tape or HD backup - or come attached to an email.

It may come from yourself, or from a friend.

But the promise is that between the encryption and signing - and the verification and decryption - the file remains the same. The zip file is as (un)safe to extract, the font as (un)safe to render, the installer or executable as (un)safe to run.
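
To make that concrete, here's a minimal sketch of that sign-then-encrypt / decrypt-then-verify round trip, using PyNaCl rather than pgp itself (the keys and the file contents are made up, and real key management is the hard part I'm glossing over):

    # Illustrative only: authenticated file encryption with PyNaCl, not pgp/gpg.
    from nacl.public import PrivateKey, Box
    from nacl.signing import SigningKey

    sender_sign = SigningKey.generate()     # sender's signing identity
    sender_enc = PrivateKey.generate()      # sender's encryption keypair
    receiver_enc = PrivateKey.generate()    # receiver's encryption keypair

    plaintext = b"contents of some file: a zip, a font, an installer..."

    # Sender: sign, then encrypt the signed blob for the receiver.
    signed = sender_sign.sign(plaintext)
    ciphertext = Box(sender_enc, receiver_enc.public_key).encrypt(signed)

    # Receiver: decrypt, then verify against the sender's known verify key.
    recovered = sender_sign.verify_key.verify(
        Box(receiver_enc, sender_enc.public_key).decrypt(ciphertext))

    # The promise: what comes out is byte-for-byte what went in.
    assert recovered == plaintext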

If you have (authenticated) file encryption, you have encrypted email.

If you want to use authenticated file encryption as a secure messaging platform, you should probably invest in a saner format for your plaintext than email, and a better wrapping transport than email.

The fundamental problem with email+pgp as a secure messaging platform isn't pgp alone - it's the intersection of email-the-format and email-the-protocol, mixed with the mail user agents and their 80s-style trusting file handling (ok, that's unfair to the 80s, we're still too trusting when it comes to files in general and transclusion in particular).

In short, I don't think we should give up all of pgp as such (open, secure, authenticated file encryption and Web of trust).

But it's probably true that newer protocols are better for "real-time" messaging, and perhaps usenet is better for hold-and-forward.

[ed: I'd love to hear some current informed discussion about https://saltpack.org/, mentioned elsewhere in this discussion. From the previous hn discussion it seems reasonable: a sane format (MessagePack) and authenticated encryption primitives - but I worry a bit about public key handling. Is there Web of trust/certificate support - and if so, is it sane? I'm not saying the pgp/gpg wot is sane, but neither are ssh certs in practice. I'd much prefer a wot where for some applications I could choose arbitrary keys/certs as authoritative CAs and give out short-lived "certs" (signed public keys with valid from/to dates).]
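
For what it's worth, here's a toy illustration of what I mean by "a sane format" - an explicit, typed, binary-safe MessagePack envelope instead of ad-hoc ASCII armor. This is NOT saltpack's actual wire format; the field names and payload are made up.

    import msgpack
    from nacl.signing import SigningKey

    sender_sign = SigningKey.generate()            # hypothetical sender identity
    ciphertext = b"...already-encrypted payload..."  # placeholder bytes

    envelope = msgpack.packb({
        "format": "toy-envelope",                  # made-up format name
        "version": 1,
        "sender_verify_key": sender_sign.verify_key.encode(),
        "payload": ciphertext,
    }, use_bin_type=True)

    decoded = msgpack.unpackb(envelope, raw=False)
    assert decoded["format"] == "toy-envelope"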


> a system that leaks immense amounts of metadata

Doesn't Signal require you to publish your actual phone number, which reveals your real-life identity in a network that's pretty much completely controlled by the government?


No.


Explain? I Googled it and it seems that Signal still doesn’t support having an account that isn’t tied to a phone number. Some of the results suggest working around the limitation by obtaining a phone number just for Signal from third parties, but that’s not a particularly reasonable alternative.



This article explains how to create a second phone number for use with Signal, which confirms my assertion that you need a phone number and makes your statement of "no" completely false. Among the options offered:

The desk phone at your office.

A free Google Voice phone number, if you live in the United States (this is what I do).

Any phone number from any online calling service, like Skype.

A cheap pre-paid SIM card for a few dollars a month (and temporarily put it on your phone to register your second Signal number).

Twilio, a cloud service that allows developers to write software that makes and receives phone calls and SMS messages.

Obviously, each of those still requires registration within the government-controlled phone network, and may additionally require you to entrust your security to a third party - such as your employer, Google, Microsoft, or Twilio. So in fact, your refutation of my assertion "you need to publish a phone number" is "you can have two phone numbers". This is not serious.


As an added bonus, some of these creative solutions also make MITM-ing the whole scheme as trivial as "our corporate phone tree has changed, please use the new number 1-555-666-666 to contact me from now on". You have no way to verify it; nobody knows what corporate phone trees and number prefixes look like. And of course if the person is fired, reassigned, or leaves the job (which never happens!), their account is now under the control of some unknown third person.


Signal doesn't entrust security to carriers. Messages are sent E2E encrypted to the registered device over the internet. If someone MITMs the connection, the safety numbers won't match on either end.

Signal, and the Signal server operators, malicious or not, do not know the phone numbers of people you are communicating with (presuming the secure enclave on the server isn't cracked). Signal does know your phone number, so someone could figure out whether you use Signal. If you're worried about that, or about personally sharing your number, you could falsify information to create an anonymous phone number, or you can just use Matrix or Wire.


My experience of Signal is that people don't reliably follow up on safety number changes, yet keep using it in a MITM-vulnerable way. Including myself, honestly, because people change or reset phones frequently. Is your experience different?

At least with GPG I can factory reset all of my computers and phones and not have to re-establish trust if I take the right steps to preserve the secret key information.
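
(For anyone wondering what "the right steps" look like, a hedged sketch - driven from Python only to keep the examples in one language; the key id and file name are hypothetical, and gpg will prompt for the passphrase:)

    # Back up and later restore GPG secret key material using stock gpg.
    import subprocess

    KEY_ID = "alice@example.com"        # hypothetical key id / uid
    BACKUP = "secret-key-backup.asc"    # store this somewhere safe, offline

    # Export the secret key (the trust db can be saved separately
    # with `gpg --export-ownertrust`).
    with open(BACKUP, "wb") as f:
        subprocess.run(["gpg", "--armor", "--export-secret-keys", KEY_ID],
                       stdout=f, check=True)

    # After the factory reset / on the new machine:
    subprocess.run(["gpg", "--import", BACKUP], check=True)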

Even if I don't preserve that correctly, people change computers less often than phones.

On the other hand, I'll admit that my GPG key is newer than my Signal number (which I've owned for 15 years), due to upgrading crypto algorithms.


It isn't. I rarely send sensitive messages, however, so I feel that some surveillance potential is acceptable to save time and effort. The few times I did, I first verified both ends.

I perform the safety number verifications in exchange for forward and backward secrecy. GPG isn't enough to establish a truly confidential communication channel, I think. (Unless you erase keys after every sent message, maybe.)
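
Roughly what I mean by forward secrecy, as a toy sketch (a simplified hash ratchet, not Signal's actual Double Ratchet): each message gets its own key derived from a chain key, the chain advances, and old keys are deleted, so compromising today's state doesn't expose yesterday's messages.

    import hashlib
    import hmac

    def advance(chain_key: bytes):
        """Derive (next_chain_key, message_key) from the current chain key."""
        next_ck = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        msg_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return next_ck, msg_key

    # Hypothetical shared secret from some earlier key agreement.
    chain_key = hashlib.sha256(b"shared secret").digest()
    for i in range(3):
        chain_key, message_key = advance(chain_key)
        # ... encrypt message i with message_key, then delete both the
        # message key and the previous chain key ...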

I'm not happy with Signal's dependence on a phone. Ideally I'd like to use a pocket-sized SBC for secure messaging. Come to think of it, that sounds rather like a phone. Just, uh, without the cellular hardware, and with a user-installed OS.


It entrusts the identity to carriers. What use is perfect peer-to-peer security if you can't be sure who you are talking to? So you will send the data to Eve in a perfectly secure way while thinking you're talking to Bob - and that's better than Eve breaking the encryption? I say it's much less work for Eve: instead of employing vast resources to exploit a tiny vulnerability in the encryption, she would just need to take over a phone number. And if she works for the government, that's not even a hard thing to do.


Again, you can be sure, by comparing the safety numbers. It's the same as comparing SSH or GPG key fingerprints. If someone else masquerades as Bob, the numbers won't match. See section III-D3, key fingerprint verification: https://www.ieee-security.org/TC/SP2015/papers-archived/6949...
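
Concretely, "comparing fingerprints" boils down to something like this (a simplified sketch; Signal's safety numbers are actually derived from both parties' identity keys, and the key bytes below are placeholders):

    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        # First 32 hex chars, grouped so humans can read them aloud.
        return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

    key_my_app_shows_for_bob = b"...bob's identity key as I see it..."
    key_bob_reads_to_me      = b"...bob's identity key as I see it..."

    # Compared over a separate channel (in person, a phone call, ...);
    # if Eve is in the middle, these won't match.
    assert (fingerprint(key_my_app_shows_for_bob)
            == fingerprint(key_bob_reads_to_me))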


That would be true if people routinely verified the fingerprints of their contacts. I don't think it happens any more often than other commonly ignored security precautions. Also, what happens if the phone is lost, damaged, or replaced with a newer model? I assume a new key and thus a new fingerprint?


Signal, WhatsApp, Matrix et al. show notifications when the other participant's device keys change. You're right that most users don't verify, but an opportunity to detect or prevent common forms of surveillance is better than none at all.
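
That detection is essentially trust-on-first-use key pinning; a minimal sketch of the idea (not any messenger's actual implementation):

    import hashlib

    pinned = {}   # contact -> fingerprint seen on first use

    def check_key(contact: str, identity_key: bytes) -> bool:
        """Pin a new contact's key; warn if a pinned key changes."""
        fp = hashlib.sha256(identity_key).hexdigest()
        if contact not in pinned:
            pinned[contact] = fp
            return True
        if pinned[contact] != fp:
            print(f"Safety number with {contact} changed - verify before trusting!")
            return False
        return True

    check_key("bob", b"old identity key")             # pinned on first use
    check_key("bob", b"new key after a phone reset")  # triggers the warning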



