The PGP Problem (latacora.micro.blog)
483 points by wrench4916 on July 17, 2019 | 368 comments



I was an engineer at PGP from 2004 to 2011 and ended up running the server team as lead engineer. I wouldn't disagree with most of the points brought up by the author; both the code base and the standard have accreted over time, and it's incredibly complex. There were only a couple of people on the team who really understood all the facets of either the OpenPGP or S/MIME/X.509 standards. It's made worse in that it was a hack on top of another system that had accreted over time (email), and that system was never intended to be secure. We had a massive database of malformed or non-compliant emails from every email client since the beginning of time. The sub-packets that the author mentions were primarily used for dealing with forwarded chains of encrypted email messages, where each previous message had its own embedded MIME-encoded encrypted/signed messages and attachments.

The problem is that no one else has gone through the process of establishing a community/standard that's capable of replacing it. Each system has built its own walled garden that doesn't interoperate with anyone else, and none of them has a user base large enough to make encryption easy and pervasive.

My own secret hope is that Apple is forced to open iMessage as a part of an anti-trust action and that acts as a catalyst for interoperability.


Forcing iMessage to open will immediately result in MITM iMessage proxies that users can use to store iMessages that are meant to auto-delete, so that they can violate the wishes of the other party. These do not exist today because Apple binds iMessage to your hardware and bans your entire device when anyone is found to be operating such a service, either for themselves or others.

Do you want open source clients that can be altered to ignore all privacy criteria — or do you want closed-source clients that make a good faith effort to adhere to auto-deletion protocols?

Pick one. There is no middle ground.


You can violate the wishes of the other party by taking a screenshot or, in the extreme, a photo of the screen. You're only preventing the very lazy/unmotivated from retaining messages.


Correct, screenshots are a viable attack against both closed- and open-source platforms. Preventing casual retention is the best you can hope for, and it's a worthy goal even though it falls short of the perfection of a Faraday-caged clean room.


So, your threat model includes MITM servers, but not cameras? It seems a little silly to worry about the MITM problem when you can simply snap a photo already.


They are both valid threat models, but ones which for me have different meanings.

Screenshotting or photographing the screen of a device owned by my intended message recipient is a reasonably small problem to me. If my recipient wants to expose a message I've sent them, they're going to be able to do that. I never expected any more privacy for that message than I'd have accepted based on my trust in that person.

MITM servers are a whole other thing: large-scale surveillance of all users of a specific server, "full take" collection, and searchable databases of messages available effectively forever to unknown current and future opponents.

Different threats. Yeah, I'm happy enough to accept the risk of cameras in the hands of my correspondents. Way happier than I'd be with MITMable servers (or services that can add "ghost users" as the UK seems to be proposing).


I might be missing something, but how would large scale surveillance with searchable databases be possible with e2e encryption? They could save the messages, but they would still be encrypted.


If you have to get into a legal battle with someone about misuse of information, it's much better that you are able to focus on the sender and the recipient of the information as potential sources for that information instead of also having to go after every potential network hop as well.


Actually, Apple's approach results in a MUCH lower level of retention than other providers', even if someone can screenshot all conversations.


Just like with Snapchat, auto-delete is a fantasy and is not worth sacrificing security or privacy for.


iMessage doesn't have any kind of auto-deleted messages; it's a feature that messages are persistent across all your devices.


Incorrect. Audio messages are deleted two minutes after playback by default.


Which is a receiver-side setting and can be set to one year. Your point is moot.


For me, the two choices in settings are "after two minutes" and "never", nothing in between. As you said, this is not a security setting, it's a storage space-saving setting.


This is unrelated to the current discussion and not meant to contradict your "Your point is moot", but instead just as a hopefully useful anecdote: In my experience, this requires the sender to choose 'Keep' too. I have been bitten several times by me sending an audio message to my wife because I was in a situation in which typing was complicated for me (outdoors, plenty of sunshine, I don't have the best eyesight), only to find that she never even got to see it because it got self-deleted after a few minutes.

My conjecture from looking at how this has worked for me is that the sender must choose 'Keep' so that the audio message stays on the receiver's phone until it is listened to, and the recipient must choose 'Keep' so that the audio message stays on their phone after listening.

I, of course, have no proof of this other than my own experience on devices a few years old (iPhones 5 and 6).


I also had some odd occurrences like this, and I simply stopped using voice messages over iMessage. It hasn't really penetrated the local phone culture so to speak, so it isn't a problem. Those that I do use it with happen to be on WhatsApp, which retains messages forever or something.


That's a client side feature meant to save disk space.


> The sub-packets that the author mentions were primarily used for dealing with forwarded chains of encrypted email messages, where each previous message had its own embedded MIME-encoded encrypted/signed messages and attachments.

If you ("you" here being the PGP team) knew going into the design that the use-case of ASCII-armored-binary (.asc) documents is specifically transmitting them in a MIME envelope... then, instead of making .asc into its own hierarchical container format, why didn't you just use MIME, which is already a hierarchical document container format?

I.e., if you're holding some plaintext and some ASCII-armored-binary ciphertext, why not just make those into the "parts" of a mime/multipart container, and send that as the email?

Then all the work of decoding—or encoding—this document hierarchy would be the job of the email client. The SMIME plugin would only have to know how to parse or generate the leaf-node documents that go into the MIME envelope (and require of the email client an API for retrieving MIME parts that the SMIME parts make reference to.)

And you'd also get the advantage of email clients showing "useful" default representations for PGPed messages, when the SMIME extension isn't installed.

• Message signature parts would just be dropped by clients that don't recognize them. (Which is fine; by not having SMIME installed, you're opting out of validating the message, so you don't need to know that it was signed.)

• Encrypted parts would also be dropped, enabling you to send an "explanation" part as a plaintext of the same MIME type to the "inner" type of the encrypted part, explaining that the content of the message is encrypted.

I guess this wouldn't have worked with mailing lists, and other things completely ignorant of MIME itself? But it would have been fine for pretty much all regular use of SMIME.
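
(For what it's worth, PGP/MIME per RFC 3156 eventually standardized something close to this: a multipart/encrypted container whose parts the mail client handles like any other MIME structure. A rough Python sketch of that shape, using the standard email package with a placeholder payload:)

    # Illustrative only: an RFC 3156-style multipart/encrypted envelope.
    from email.mime.multipart import MIMEMultipart
    from email.mime.application import MIMEApplication

    armored = b"-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----\n"

    msg = MIMEMultipart("encrypted", protocol="application/pgp-encrypted")
    msg["Subject"] = "An encrypted message"
    msg.attach(MIMEApplication(b"Version: 1\n", "pgp-encrypted"))  # control part
    msg.attach(MIMEApplication(armored, "octet-stream"))           # the .asc body
    print(msg.as_string())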


Have you looked at DIME?

https://darkmail.info


I can't see that happening: there are multiple other messaging apps that are clearly successful and popular on iOS and macOS, and neither iOS nor macOS is the majority OS.


Until a better solution is brought forward, my team implemented a PGP packet library to make everyone's lives a little bit easier: https://github.com/summitto/pgp-packet-library

Using our library you can generate PGP keys using any key derivation mechanism for a large variety of key types! When using it right, this will greatly improve how you can generate and back up your keys!


So what do I use for encrypted messaging that can, like, replace email, then? Nobody seems to have provided any sort of satisfactory answer to this question. To be clear, an answer to this has to be not just a secure way of sending messages; it also has to replicate the social affordances of email.

E.g., things distinguishing how email is used from how text-messaging is used:

1. Email is potentially long-form. I sit down and type it from my computer. Text-messaging is always short, although possibly it's a series of short messages. A series of short emails, by contrast, is an annoyance; it's something you try to avoid sending (even though you inevitably do when it turns out you got something wrong). Similarly, you don't typically hold rapid-fire conversations over email.

2. On that point, email says: you don't need to read this immediately. I expect a text message will probably be read in a few minutes, and probably replied to later that day (if there's no particular urgency). I expect an email will probably be read in a few hours, and probably replied to in a few days (if there's no particular urgency).

3. It's OK to cold-email people. To text someone you need their phone number; it's for people you know. By contrast, email addresses are things that people frequently make public specifically so that strangers can contact them.

So what am I supposed to do for secure messaging that replicates that? The best answer I've gotten for this so far -- other than PGP which is apparently bad -- is "install Signal on your computer in addition to your phone and just use it as if it's email". That's... not really a satisfactory answer. Like, I expect a multi-page Signal message talking about everything I've been up to for the past month to annoy its recipient, who is likely reading it on their phone, not their computer. And I can't send someone a Signal message about a paper they wrote that I have some comments on, not unless they're going to put their fricking phone number on their website.

So what do I do here? The secure email replacement just doesn't seem to be here yet.


Your point 1 especially speaks to me. Phone-based messaging in general isn't appropriate for the things I would most like to be kept private between me and a recipient, because those sorts of things can't be created on a phone. I've found PGP pretty good for making that happen, when I'm working with someone who a) uses PGP also and b) exercises some caution when using it. I haven't found an option that I can trust that will work for me.


I strongly agree with this; email is not instant messaging, and there is not yet any secure replacement for email.

We need a modern design for a successor protocol to email, and no one is working on it because they prefer instant messaging (or think other people do).


Google tried this with Wave but failed. I don't think it can be done.

Everything would have to support SMTP as a fallback for the many people who just don't care, and thus it couldn't actually improve anything.


I think we're more likely to "accidentally" end up there via E2EE document collaboration tools that are in development now.

One day, a while after they become usable and common, people will just realize they've been sharing documents E2EE in place of sending email, and they'll be using it for basically everything that matters.

It would be a proper restart and allow for significant improvements in usability and security and everything else.


Can you point me to examples of such tools?


Still under active development; nothing is really ready to use yet.


Do you feel like PGP is a good way to cold email people in practice? (I'm not trying to put words in your mouth, but that sounds like what you're saying.)


> Do you feel like PGP is a good way to cold email people in practice?

Not OP but I can definitely say that's a yes from me after doing this repeatedly. I cold email people once a month or so, and if it is to do with anything sensitive I'll check to see if a public key is available for them (on their website is best, else I check a public key server and use that key as long as there is only one listed).

I get a better response rate from PGP/GPG users too; I can only recall one person not responding to an encrypted message, and when I sent a follow-up message unencrypted, they responded.

I think it's important to send PGP messages for ordinary communications whenever possible, because this normalizes it and may increase the workload for those trying to defeat it.


Good question. Not sure. Although I don't see why I wouldn't, if they have a PGP key listed? (I guess there is some question over whether the listed key is actually them?) But my point is that, well, email is a good way to do that, and Signal isn't, so I'm going to use email rather than Signal.

Honestly, I wouldn't focus on (3), because as I see it, if you can replicate the feel of email, things like (1)-(2), so that it can replace email in contexts without (3), then (3) will just come naturally as it slowly replaces email.

Edit: All this is assuming it isn't tied to a phone number or something similar, of course!


How does one list a public PGP key? Is there a verified central listing service?


One of the major features of PGP is that you don't have to rely on -- trust -- a "verified central listing service".

The "Web of Trust" [0] fills that role:

> As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.

[0]: https://en.wikipedia.org/wiki/Web_of_trust
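
Mechanically, in GnuPG the "trusted introducer" role is just a key certification you make and publish; a sketch, with illustrative addresses and key ID:

    # After verifying Alice's fingerprint out of band:
    gpg --sign-key alice@example.com   # certify her key with your own
    gpg --send-keys 0xDEADBEEF         # publish the certification
    gpg --list-sigs alice@example.com  # anyone can now see who vouched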


In practice a web of trust is only trustworthy 1 degree out from you. Just because you trust someone doesn't mean you should trust the people they trust. The web of trust is a difficult to use misfeature. In theory it's great. In practice it's unusable.


The problem is nobody uses this right


If you control your own ___domain, Web Key Directory [0] is a good option too.

[0]: https://wiki.gnupg.org/WKD
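
Mechanically, WKD is just your key served at a well-known HTTPS path derived from the address, which GnuPG can query directly (address illustrative):

    # Key discovery via WKD:
    gpg --locate-keys user@example.com
    #
    # Under the hood the key is fetched over HTTPS from a well-known
    # path on the ___domain, roughly:
    #   https://example.com/.well-known/openpgpkey/hu/<encoded-local-part>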


You can put it on your website or anywhere really. Some people use keybase.io for this.


You can't put it anywhere really, otherwise anyone could tie their key to your identity. Keybase.io is a good solution.


You're technically correct of course.


I guess they mean putting the key on a webpage or using Web Key Directory or a centralized service such as https://keys.openpgp.org


There used to be a bunch of those in the 90s and it was a mess.


"mostly cold" email to a security email address listed on a website is probably the only use I've ever had to PGP encrypting email to someone I hadn't already been communicating with... (But I can imagine other scenarios. I bet Snowden's cold email to Greenwald was encrypted...)


Trust on first use is not an uncommon security practice. It's imperfect, but often the best alternative, and a good solution while we wait for a replacement to gain traction.


Why wouldn't it be?


Point 3 is incorrect for encrypted email: you cannot email someone out of the blue, without their prior consent, and expect them to willingly and ably participate in decrypting.


Why not? I mean, that's what publicly listing your public key is for, right?


Nope. I have at least three public never-expiring keys that I am unable to revoke and that remain listed as valid, because the keyservers don't occasionally revalidate proof of ability to decrypt.


Well, the keyservers also don't validate if it's your key instead of a key submitted by me with your email address on it, so for any secure messaging you need some other, authenticated channel for the potential recipient to assert which is their key.


Oh, that's a good point. Heh, I have one of those, too, which even caused a problem once[0]. I wouldn't expect people to find it first, though, because I wouldn't expect people to go to a keyserver first; I'd expect them to find my key on one of the places I have it listed on the web. I've never tried blindly entering someone's email address into a keyserver and just hoping they have a key; I've only sent PGP-encrypted email to people who list their keys on the web.

[0] How it caused a problem: I added an email address to my public key (or maybe it expired or something, I forget), and asked people to refresh their copy of my key. One person instead downloaded it entirely anew from a keyserver and got the old one. Oops. (Admittedly I didn't explicitly use the word "refresh".) Anyway, yeah -- though this problem had happened to me, it hadn't occurred to me that it might be common; maybe this is more of a problem than I thought...


GPG chooses the key to use based on alphanumeric ordering of the short key ID, last time I experimented anyways. Best of luck overcoming that!


>never-expiring keys

While this is bad, the keyserver issue is still valid.


I don't get what's insecure about normal unencrypted email. It's sent over https, isn't it? It's not like I can read your emails unless I break into Google's servers, no? And even if I do, they probably aren't even stored in plaintext.

I just don't get the encrypted email obsession. It's impossible for an individual to withstand a targeted cyber attack, so it seems pointless to go above and beyond to ultra-encrypt every little thing.


> It's not like I can read your emails unless I break into Google's servers, no?

Well, first of all, "breaking in" isn't the only way someone might get access to data on Google's servers. There are such things as subpoenas, not to mention that it is possible a Google employee might abuse access to servers. And then I would be _very_ surprised if Google doesn't use the content of your emails for advertisement and tracking purposes.

Furthermore, unless both parties are using gmail, the email will be stored at least temporarily on other mail servers, which may be less secure (and you might not even know who controls them).


> And then I would be _very_ surprised if google doesn't use the content of your emails for advertisement and tracking purposes.

That would go against their own privacy policy. But they are one change away from doing it.


Really? When gmail came out they were explicitly up front about using the content of the email to deliver targeted ads. Has that changed?


There is a lot of misinformation around, and the Google-hating crowd has plenty of pitchforks.

https://safety.google/privacy/ads-and-data/

> Google does not use keywords or messages in your inbox to show you ads. Nobody reads your email in order to show you ads.


It's not misinformation. Gmail only stopped scanning messages for ad targeting in 2017:

https://www.nytimes.com/2017/06/23/technology/gmail-ads.html


It's just the passage of time that has rendered it misinformation. Until not so long ago, Gmail messages were actually scanned for ads -- IIRC, Google was actually pretty upfront about it when they first launched Gmail, and explained that it's how they could afford to give users 1G of inbox space in a day and age where 25 MB was pretty good and 100 MB was pretty hard to get for free.

They eventually stopped, although the phrasing of the privacy policy is vague enough that, as wodenokoto mentions above, I wouldn't be surprised if email messages were still scanned for some advertising purposes. The fragment on the page you link to is only about ads shown in Gmail, doesn't exclude using keywords and messages for tracking, classifying etc. (it just doesn't use them "to show you ads") and doesn't actually exclude using programs to process messages (i.e. you can still reasonably say that "nobody reads" messages if you just feed them into a program). It's also not very obvious if "messages in your inbox" also includes messages you send.

FWIW, I think the policy is deliberately open-ended so as to be future-proof, but I doubt emails are an important source of advertising data today, so I think it's likely that Google doesn't rely on it that much anymore. Most sources of legitimate email (i.e. non-spam) that are useful for advertising -- e.g. online shops and the like -- already track you and show you ads, and Google is already deeply embedded there. Millions and millions of personal accounts are a useful strategic asset to have, but I think there are better sources of data.


> I doubt emails are an important source of advertising data today, so I think it's likely that Google doesn't rely on it that much anymore.

I completely disagree with the first part of your assertion. Email is still the main medium for all organizations, especially private companies taking your money for something, to communicate with you with plenty of details. Be it ordering some product online or booking a flight or other travel ticket or ordering a service or anything else.

The richness and amount of information conveyed over email dwarfs that of SMS notifications. So email is still a treasure trove of what people are doing and have done.


> Be it ordering some product online or booking a flight or other travel ticket or ordering a service

That's true, but all these places already track the living hell out of you. Even the newsletters they send over email have tracking information. By the time they've sent you an email after your first purchase, they know everything they need to show you relevant ads (in fact, that's probably why you made the first purchase...). I doubt bulk analysis of emails can show anything that is not already known way before the emails got sent.


Depends on what they mean by messages. The whole raw message, incl. all headers? The text the user sees?

Google can probably serve nice ads just based on metadata it gathers at the SMTP level, without even using the raw message. Someone mails their bank: maybe show some banking-related ads, etc.

And it would still stay true to the proclamation on that page.


Are we sure about this? When I downloaded my Google data, I fished around and found information related to my Amazon purchase history and so on. The only possible way I can think of for Google to get my purchase data is from my email.


That's not misinformation - it's just mildly dated information. Google stopped scanning emails for ads very recently.

It was never a conspiracy theory. Google used to be very open about the fact that they were scanning emails.


Yes, it changed.

What changed was G Suite: they want to move away from the "we are harvesting your emails" image in order to better sell paid accounts.

“Consumer Gmail content will not be used or scanned for any ads personalization after this change.”

https://blog.google/products/gmail/g-suite-gains-traction-in...


If you pay Google for email they will not use the content for ads. Free accounts I think they still do.


In modern practice, email is sent over TLS sockets already. Any good email client should prohibit you from using SMTP, POP, or IMAP without TLS, and for the past few years even the MX-to-MX transfers in the backend have started to become protected with TLS (albeit mostly opportunistically at this point, I believe).

So the only people who can read email are you, your counterparty, your ESP, and your counterparty's ESP, assuming the email providers are following good practice.
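
If you're curious, you can watch this yourself; openssl's test client speaks the STARTTLS dance (hostnames here are illustrative):

    # Watch the STARTTLS upgrade on an SMTP connection:
    openssl s_client -starttls smtp -connect mx.example.com:25

    # Client-facing protocols are typically TLS-wrapped from the start:
    openssl s_client -connect mail.example.com:993   # IMAPS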


This is an excellent explanation overall, I do however think that it's important to note that opportunistic STARTTLS is vulnerable to downgrade attacks by mitm. Since this would have to be a mitm of e.g. Gmail it's not trivial by any means, but neither is it completely out of reach (see for example the periodic rerouting of the internet caused by odd BGP advertisements).

One further note is that you can know post-hoc if an email was delivered to Gmail via TLS by the presence or absence of a red lock in the Gmail app or web UI.


> STARTTLS is vulnerable to downgrade attacks by mitm.

Not only that, but intentionally downgrading STARTTLS commands has at times been the default configuration for some Cisco routing gear.

(Buy me a beer one day and I might tell you about the time I charged one of Australia's big four banks about $70k to debug that in a router on their internal network that nobody there knew existed...)


Which routers did that? I'm well aware that the ASA firewalls did it ("ESMTP inspection") -- I've disabled that dozens of times -- but I've never heard of a router that did it (by default).


This is often explained in a needlessly confusing way.

STARTTLS isn't a problem, the problem is if a user checks a box (or moral equivalent) saying it's optional obviously bad guys will opt for "No". If your client is set to _require_ STARTTLS, then an adversary blocking STARTTLS has the same effect as if they just blocked the IP address of the mail server, you get an error and it doesn't work.

There's no reason to use "opportunistic" STARTTLS for your own mail servers (ie IMAP or SMTP to an outbound relay). Nobody should be configuring their own gear, or corporate gear, to just let somebody else decide whether it uses encryption.

Opportunistic encryption still has a place in server-to-server SMTP relaying, but if you're choosing options like "if available" in a desktop/phone mail client that's wrong.
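
To make that concrete, in Postfix this is a one-line policy knob; the parameter and values are real Postfix settings, and the split shown just mirrors the recommendation above:

    # /etc/postfix/main.cf -- a sketch, not a complete TLS policy
    smtp_tls_security_level = may      # opportunistic: fine for MX-to-MX relaying
    #smtp_tls_security_level = encrypt # mandatory: for relays you control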


It may be better than nothing, but it's far from a sure thing: If you can BGP announce an IP, you can get a certificate from letsencrypt.

This is a trivial attack vector not just for state-actors, but also stupid kids: in the early 2000s, I announced Microsoft's AS from my own network (AS21863) to see what would happen and got a significant amount of microsoft.com's traffic. There was no security, and there still isn't: Most multihomed sites that change links frequently inevitably find themselves unfiltered either through accident or misplaced trust.

For this reason, TLS without key-pinning (even with IP filtering, as is popular with a lot of banks/enterprise) is far less secure than people realise, and on unattended links (server-to-server relaying) it offers only some casual confidentiality (since detection is unlikely) at best.

If you use MTA-STS, you have a good chance of detecting this kind of attack though. I've not seen anyone use a long policy on a distant but popular network to require someone BGP hijack two big networks to beat it, but I suspect such a disruption would be felt across the Internet.


Let's Encrypt has supposedly deployed a system that makes the connections from different locations around the world to make this attack more difficult. Also, you can't get a Let's Encrypt certificate for gmail.com or microsoft.com (or gmail.* or microsoft.*, for that matter); there's a block list for high-value targets.


I would hope Let's Encrypt has a number of heuristic safeguards, but I can guarantee they do not make connections from multiple routing paths: my ad server requests a certificate during the SNI hello (but before the certificate is presented), and I get a certificate after a single ping.


> I do however think that it's important to note that opportunistic STARTTLS is vulnerable to downgrade attacks by mitm

See SMTP MTA Strict Transport Security (MTA-STS):

* https://tools.ietf.org/html/rfc8461

And STARTTLS Everywhere:

* https://starttls-everywhere.org/
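
For reference, a complete MTA-STS deployment is just a TXT record plus a tiny policy file served over HTTPS (domains illustrative):

    ; DNS: advertise that a policy exists (id is an arbitrary version tag)
    _mta-sts.example.com. IN TXT "v=STSv1; id=20190717T000000"

    # Served at https://mta-sts.example.com/.well-known/mta-sts.txt
    version: STSv1
    mode: enforce
    mx: mail.example.com
    max_age: 604800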


MTA-STS prevents downgrade attacks.


> Any good email client should prohibit you from

Most e-mail servers require a login (regardless of fetching or sending), and it would take a really incompetent sysadmin to allow that to happen in the clear.


> It's sent over https, isn't it?

It's actually more complicated than that.

If you're using a web mail, your connection to the mail provider most likely uses HTTPS. That is, HTTP over TLS. When the mail is sent, it depends whether the recipient uses the same provider or not. If it's the same provider, well, protocols are irrelevant. If not, it will usually be SMTP over TLS (minus any potential problems with STARTTLS).

The main problem with that is that the mail is not encrypted on the various servers it goes through. Only the server-to-server connections are encrypted. So your provider can access your email, and so can the recipient's. When that provider's business model is reading your emails so it can send you targeted ads, this is less than great. (Yes, Google reads your emails. They try to reassure you by telling you their employees don't read them, but the fact the process is automated actually makes it worse.)


Also, it might surprise some people just how many servers an email travels through to get to its destination. I just grabbed a random mail from a mailing list I'm on (generally a worst case scenario) and it had 7 Received headers. Every mail server is supposed to add a Received header when the mail passes through but there's no way to enforce that, so all I can really say is that mail probably passed through at least 7 servers on its way to my inbox.

Each one of those hops may or may not have talked TLS to the next hop. Each one probably wrote the mail out to a disk based mail queue in plaintext. There is nothing preventing any of those 7 servers from keeping around that mail even though they forwarded it on. There is nothing preventing them from indexing the mail for spam or marketing purposes.


Any sysadmin can read your email, in general. There's no holistic "this email can't be read by anyone other than the recipient" as a solution, which is what a lot of us are aiming for. Things like protonmail and tutanota get really close, but they're proprietary solutions and don't work for "the many" (such as yourself) who use a hosted solution such as Gmail, who seem to have no interest in providing an open solution.


I don't want my emails to be readable by Google, yet they will be when the people I communicate with are using Gmail.


Mail doesn't use HTTPS. And even if TLS is enabled for a hop, you can't know that it was used for every hop.


The old ‘security is hard so let’s not do it’ argument. Emails are not properly encrypted in transit and are available for access at the provider if a court decides to grant a warrant. That might not be enough protection for everyone.

It is possible for a determined individual to withstand targeted attacks if he’s careful and willing to make the sacrifices that come with the territory.


Are you assuming everyone uses Gmail or do you not know how SMTP works?


You might be surprised just how many mail providers support STARTTLS for email, at least opportunistically.

https://www.fastmail.com/help/technical/ssltlsstarttls.html


And opportunistic STARTTLS is vulnerable to downgrade attacks by MITM.

The problem with email is that, no matter how sure you are that the connection between you and your mail server, and your local and server storage, are secure, the parties you may be interacting with are not. And then, as is talked about in the article, your recipient forwards the mail as plaintext...


And downgrade attacks are mitigated by MTA-STS: https://www.hardenize.com/blog/mta-sts

Not supported by everyone just yet since this is a new standard, but Gmail at least supports it.


I did some "OpenPGP Best Practices" work for a client recently. They don't have a choice, because a third party requires it. The goal was to make sure it was as safe as possible. One thing that struck me is that I have a simplified mental model for the PGP crypto, and reality is way weirder than that. The blog post says it's CFB, and in a sense that's right, but it's the weirdest bizarro variant of CFB you've ever seen.

In CFB mode, for the first block, you take an IV, encrypt it, and XOR it with the plaintext. Second block: you take the first ciphertext block, encrypt that, and XOR it with the second plaintext block, and so on. It feels sorta halfway between CBC and CTR.

Here's the process in OpenPGP, straight from the spec because I can't repeat this without being convinced I'm having a stroke:

   1.   The feedback register (FR) is set to the IV, which is all zeros.

   2.   FR is encrypted to produce FRE (FR Encrypted).  This is the
        encryption of an all-zero value.

   3.   FRE is xored with the first BS octets of random data prefixed to
        the plaintext to produce C[1] through C[BS], the first BS octets
        of ciphertext.

   4.   FR is loaded with C[1] through C[BS].

   5.   FR is encrypted to produce FRE, the encryption of the first BS
        octets of ciphertext.

   6.   The left two octets of FRE get xored with the next two octets of
        data that were prefixed to the plaintext.  This produces C[BS+1]
        and C[BS+2], the next two octets of ciphertext.

   7.   (The resynchronization step) FR is loaded with C[3] through
        C[BS+2].

   8.   FR is encrypted to produce FRE.

   9.   FRE is xored with the first BS octets of the given plaintext,
        now that we have finished encrypting the BS+2 octets of prefixed
        data.  This produces C[BS+3] through C[BS+(BS+2)], the next BS
        octets of ciphertext.

   10.  FR is loaded with C[BS+3] to C[BS + (BS+2)] (which is C11-C18
        for an 8-octet block).

   11.  FR is encrypted to produce FRE.

   12.  FRE is xored with the next BS octets of plaintext, to produce
        the next BS octets of ciphertext.  These are loaded into FR, and
        the process is repeated until the plaintext is used up.
Yeah so CFB except your IV isn't your IV and randomly do something with two bytes as... an... authenticator? And then everything after that is off by two? This isn't the only case where OpenPGP isn't just old, it's old and bizarre. I don't have a high opinion of PGP to begin with, but even my mental model is too charitable.
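
Here's a minimal sketch of those encryption steps in Python, assuming AES-128 and the `cryptography` package (ECB below is just the raw block-cipher call that any hand-rolled CFB needs):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def openpgp_cfb_encrypt(key: bytes, plaintext: bytes) -> bytes:
        BS = 16                                # AES block size
        E = Cipher(algorithms.AES(key), modes.ECB()).encryptor().update

        rand = os.urandom(BS)
        prefix = rand + rand[-2:]              # BS random octets, last two repeated

        fr = bytes(BS)                         # 1. the "IV" is all zeros
        fre = E(fr)                            # 2.
        ct = bytearray(xor(fre, prefix[:BS]))  # 3. C[1..BS]
        fr = bytes(ct)                         # 4.
        fre = E(fr)                            # 5.
        ct += xor(fre[:2], prefix[BS:])        # 6. the two odd "check" bytes
        fr = bytes(ct[2:BS + 2])               # 7. resync: off by two from here on
        for i in range(0, len(plaintext), BS):
            fre = E(fr)                                # 8./11.
            block = xor(fre, plaintext[i:i + BS])      # 9./12.
            ct += block
            fr = block                                 # 10. feedback is ciphertext
        return bytes(ct)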

(Disclaimer: I'm a Latacora partner, didn't write this blog post but did contribute indirectly to it.)


I think this is called Plumb CFB. It was invented by Colin Plumb back in the day. I first saw it in their FDE products which didn’t have a good IV generation process (kind of acting like a replacement for XTS or a wide block cipher) and no, I don’t know what it’s for.


There's a few places where this engages in goalpost shifting that seems less than helpful even though I end up agreeing with the general thrust. Let's focus on one:

> Put a Signal number on your security page to receive bug bounty reports, not a PGP key.

We can reasonably assume in 2019 that this "security page" is from an HTTPS web site, so it's reasonably safe against tampering, but a "Signal number" is just a phone number, something bad guys can definitely intercept if it's worth money to them, whereas a PGP key is just a public key and so you can't "intercept" it at all.

Now, Signal doesn't pretend this can't happen. It isn't a vulnerability in Signal, it's just a mistaken use case, this is not what Signal is for, go ask Moxie, "Hey Moxie, should I be giving out Signal numbers to secure tip-offs from random people so that nobody can intercept them?".

[ Somebody might think "Aha, they meant a _Safety number_ not a Signal number, that fixes everything right?". Bzzt. Signal's Safety Numbers are per-conversation, you can upload one to a web page if you want, and I can even think of really marginal scenarios where that's useful, but it doesn't provide a way to replace PGP's public keys ]

Somebody _could_ build a tool like Signal that had a persistent global public identity you can publish like a PGP key, but that is not what Signal is today.


The safety number is only partly per-conversation. If you compare safety numbers of different conversations, you'll discover that one half of them is always the same (which half that is changes depending on the conversation). This part is the fingerprint of your personal key.

The Signal blog states that "we designed the safety number format to be a sorted concatenation of two 30-digit individual numeric fingerprints." [1]

The way I understand it, you could simply share your part of the number on your website, but Moxie recommends against it, since this fingerprint changes between reinstalls.

[1] https://signal.org/blog/safety-number-updates/
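
In other words, given your safety numbers from two different conversations, the shared half is yours. A toy sketch of that idea, assuming the 60-digit display form:

    def my_fingerprint_half(sn_a: str, sn_b: str) -> str:
        """Given my safety numbers from two different conversations,
        return the 30-digit half common to both (i.e., mine)."""
        a = sn_a.replace(" ", "")
        b = sn_b.replace(" ", "")
        halves_a = {a[:30], a[30:]}
        halves_b = {b[:30], b[30:]}
        (common,) = halves_a & halves_b   # assumes exactly one shared half
        return common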


Ah! Yes, I see. You'd need to figure out which is "your" half, which the application as it exists today doesn't help you to do since that's not what they're going for. The person initiating would need to send something to establish a conversation, like "Er, hi?" and then once that conversation exists they can verify the number shown on your web page matches half of the displayed safety number as expected and actually proceed.

It's clunky, but less so than I feared. I can actually imagine a person doing this. I mean, they won't, but like PGP this is something a person _could_ do if they were motivated and competent.


> persistent global public identity

Certificate Transparency could be reused/abused to host it. If, for example, you issued a cert for the name <key>.contact.example.com and the tooling checked CT logs, this could be a very powerful directory of contacts. Using CT monitors, you could see if/when someone tampers with your ___domain's contact keys.

Mozilla is planning something similar for signing software: https://wiki.mozilla.org/Security/Binary_Transparency


This is basically the keybase.io log system. Although they too rely on PGP currently


Minus the network of independent logs: from what I remember, only keybase.io runs their own log system. Although they do timestamp their Merkle tree roots into Bitcoin.


> > Put a Signal number on your security page to receive bug bounty reports, not a PGP key.

Does anyone actually do this? Even Signal developers themselves don't! (see https://support.signal.org/hc/en-us/articles/360007320791-Ho...). Instead there is a plain old email address where you are supposed to send your Signal number so that you can chat.


We manage bug bounties for a bunch of different startups, and I can count on zero fingers the number of times I've had to use PGP in the past year for that. In practice, people just send bugs with plain ol' email.


I used to get about 1 or 2 PGP-encrypted emails with security bug reports per year when I managed this for my employer. There's a dedicated team that receives security reports now, with email feeding into an automated ticketing system with automatic acknowledgements, reminders, spam filters, PagerDuty alerts, etc. There's a huge amount of tooling and workflow built around email, with a lot of integrations into all kinds of enterprise software. Often the only sane way to trigger all this stuff is to send an email.

So I think the result of removing PGP will be even more plain ol' email than anything else.


It sounds like you’re saying “and that’s why GPG is good”, but I read that as an argument why there’s a very high probability that one of those things is going to spill the beans, plaintext, in an email anyway.


No, I'm not defending PGP. Even without the automation, every PGP-encrypted email almost certainly results in a bunch of internal plaintext emails between employees that could easily accidentally cc the wrong person, etc. I'm just pointing out that the chances of replacing PGP with something genuinely secure for these kinds of use-cases are close to zero.


So even Latacora-advised startups use plain old email for bug bounties. Why then does the blog post recommend using Signal for that?


Because Signal would be better than the PGP theater. In practice, though, it doesn't matter; people are just going to use plain old email no matter what. They're not going to encrypt their findings to you.


Anecdote about said startups: in 2y of the one big bounty that did have a PGP key, we got one PGPd report, and it was “session takeover”: if I copy the cookie out of Burp and into a new Incognito session, I will be logged in. Bounty plz?

We also got super clever reports on that same bounty program. They just sent email.


Maybe all PGP users are morons; that's beside the point. My point is that if someone recommends something but doesn't follow their own recommendation, it is most likely that the recommendation is not well thought-out and can be ignored. In this case the recommendation to use Signal looks more like a refutation of the point brought up by PGP advocates and not something that anyone would actually do.


That’s a fair criticism and I will happily admit that’s what it should say: that all PGP users are morons. (Just kidding. You’re right re: bug bounty advice.)


> PGP theater

Hmm I don't see it as theater if you are unable to intercept and decrypt my message. Or forge my signature, etc.


If the people from Signal start a conversation with you on the number you emailed, how do you know it’s actually them? Couldn’t it be a third party who intercepted your email?

You need to check their “safety number”, and now we’re back to the same idea as with PGP with web of trust and key sharing parties.

At some point you still need some kind of pub-key identity check if you don't want to accidentally report your vulnerability to the PRC instead.


Right, that's insecure. Maybe they should, you know, put a PGP key on their website? :)


Keybase.io supports this use case.


When talking about alternatives, Signal and WhatsApp get mentioned because they're easy to use. They are. Signal is pretty secure. WhatsApp probably is as well, but we can't be sure. That is, until it isn't anymore.

WhatsApp already has a key extraction protocol built right in for its Web interface. Signal has a web (Electron) interface as well, and a shitty one at that, where the messages also get decrypted. For WhatsApp, this means you're one line of code away from Facebook extracting your private keys.

Signal is different, in that they're not a for-profit company. However, they've shown in the past that they are under no circumstances willing to allow support of any unofficial client or to federate with another server. In fact, they've taken steps against alternative clients in the past, making it clear that only their client is allowed to use the Signal system. The moment the Signal servers go down, Signal becomes unusable. This also leaves Signal in the same position as WhatsApp, where we are dependent on one party compiling the app and publishing it on whatever app store you prefer. If Signal has any Australian contributors and their code review fails sufficiently, this means you're basically toast the moment the Australian government gets annoyed enough at a particular Signal user.

Very few real alternatives to PGP exist. PGP is not just a message encryption format, it's a _federated_ message encryption format. There are very few actual federated message standards that come close to the features PGP supports. There's S/MIME, but that's only of any use after paying an expensive company, because it's validated like a normal TLS certificate and the free cert providers don't do S/MIME.

If all of these "real cryptographers" disagreeing with PGP's design would design a new system that can be used the same way PGP is used, I'm sure we'd see that getting some good usage figures quite quickly. But all alternatives seem to focus on either signing OR encrypting OR the latest secure messaging app instead of a PGP replacement.


>WhatsApp already has a key extraction protocol built right in for its Web interface.

I don't believe this is correct. WhatsApp (and Signal AFAIK) web works by decrypting the original message on your phone, re-encrypting it with a different key that is shared with your web interface (this is what is being shared via the QR code when connecting to WhatsApp Web), sending it to the web client, and having your web client use the second key to decrypt. This is why your phone must continue to be powered on/connected to the network for the web service to work. The original key is never "extracted", and AFAIK can't be extracted by normal means.

There are a few apps that attempt to exploit a few security vulnerabilities to recreate your key for you if you lose it and need to access backups, but that isn't the same as what you're describing.


WhatsApp always requires your phone to be around, whereas Signal needs it only when you link it. After linking, the desktop client is independent of the phone (being online or in your vicinity or the number being in your possession).


Yep, you're right. I just looked more into it and WhatsApp and Signal operate differently. WhatsApp works as I described, but Signal actually does share the original key between all devices through some sort of key sharing mechanism.


Fair enough, I suppose it's more of a plaintext extraction protocol.

Still, it would take just one decision by Facebook to completely disable e2e or add an actual key extraction method to WhatsApp and there's nothing you can do about it. While WhatsApp is the most secure of all conventional chat apps, it's certainly not a replacement for PGP in most use cases.


Signal-Desktop works even when your phone is turned off.


I think the real problem is that nobody has ever created a decent PKI, and I doubt a sufficiently secure PKI is even possible.

CAs require you to trust people that aren’t supposed to be party to the communication (trust both not to be hostile, and not to be insecure themselves).

All other forms of PKI offer entirely impractical authentication mechanisms. With signal and the like, your options are

1) Verify keys by being in the same room as the other party before communication, and after every key rotation

2) Just hope that the keys are genuine...

The only thing that you can trust is that the party you’re communicating with is one of potentially many holders of the correct key.


I would regard the Web PKI as the only decent global public PKI, but sure, whatever.

You don't seem to have understood what's going on in Signal. Ordinary key rotations, which happen automatically, do not change the verified status. What can happen is that another participant changes phone or wipes it, and so obviously trust can't survive that change.

The problem isn't that somebody else may know the correct key, the Double Ratchet takes care of that. The problem is that a Man-in-the-middle is possible. Alice thinks Mallory is Bob, and Bob thinks Mallory is Alice. Mallory can pass messages back and forth seamlessly, reading everything. Only actually verifying can prevent this.

You don't verify the encryption keys, that's useless because those change constantly, the verification compares the long term identity value ("Safety Number") for the conversation between two parties, which will be distinct for every such conversation. Mallory can't fake this, so if Alice and Bob do an in person verification step Mallory can't be in their conversation.


The implementation details have some UX benefits, but all they do is kick the can down the road, not solve the problem. You need a secure channel to authenticate the keys (or “safety numbers”, or whatever you want to call them). This can only practically be done face-to-face (or by getting somebody you trust to do it face to face - to act as if they were a CA). You need to do this prior to first communication, and additionally every time somebody loses their key material.

Some people will be motivated enough to do this, most won’t, and this absolutely can’t scale.

All known PKI systems are either impractical, or require a level of trust that undermines the system entirely. You can say your threat model doesn’t require that much security, but in that case it probably doesn’t require a PKI either.


Signal PKI doesn't really work for me, conceptually. I mean, Signal is great work, but the approach to key management and federation seems like it undermines the regular security of the approach.

The problem is the key servers are run by the same people who control the app. This helps if the key server specifically gets compromised and the target is verifying, but for many attacks people worry about it's actually not the key servers specifically that get popped, it's an employee laptop or the employee themselves via subpoena, policy change etc. And for those cases nothing stops the app itself being changed to show you a false safety number, possibly by Apple without the app vendor even knowing.

So we end up with a rather curious and fragile threat model that only really helps in the case of a classical buffer overflow or logic error that grants an adversary the ability to edit keys and not much else. It's very far from "you don't have to trust the providers of Signal" which is what people tend to think the threat model is.

And honestly, a technique that combats very specific kinds of infrastructure compromise is too low-level IMO to bother advertising to users. The big tech firms have all sorts of interesting security techniques in place to block very specific kinds of attacks on servers, but they generally don't advertise them as primary features. If you have to trust the service provider, and with both Signal and WhatsApp you do, then are you really getting much more than with bog-standard TLS? After all, forward secrecy achieves nothing if the provider routing messages is diligently deleting them after forwarding them to the receiving device -- the feature only has value if you assume the provider is recording all messages to disk and lying about it, in the hope of one day being able to break the encryption of ... their own app. Hmmm.


With Signal, I believe you can have already-trusted contacts vouch for new contacts.


You're thinking of PGP web of trust, Signal doesn't have that


You can have contacts send you another contact. There is no way to set up a server of public identities, but you should be able to share your own contacts.


Signal won't validate the session key via that mechanism, each pair of communicating users have to do that themselves


TIL; does this also mean that you can "fake" forwarded messages?


> But all alternatives seem to focus on either signing OR encrypting OR the latest secure messaging app instead of a PGP replacement.

OP mentions exactly this point in the "The Answers" (https://latacora.micro.blog/2019/07/16/the-pgp-problem.html#...) section.


I'm a little confused as to why you mention Signal and WhatsApp but not Telegram?


Most cryptographers do not see Telegram as a secure encrypted protocol. This is for two reasons: the first one is that Telegram doesn't do end-to-end encryption by default (and if you enable it, functionality is limited). And secondly, they roll their own cryptographic protocol.


Telegram's crypto is based on their own, contested protocol and is disabled by default. Telling someone to use that for secure communications is difficult, because you also need to remind people to turn on encrypted communications.

Furthermore, Signal and WhatsApp do E2E in group chats, where Telegram doesn't.

Don't get me wrong, I use Telegram daily (its desktop clients far outperform any of its competitors'), but it's not as secure as WhatsApp or Signal.

I'd classify Telegram as "maybe secure" but I wouldn't recommend it to people depending on the security of their messenger application.


Telegram invented its own crypto; without an audit, it's untrustworthy. Only Signal and Keybase have been audited, so WhatsApp should be excluded from the list of trustworthy IM apps as well.


WhatsApp uses the exact same technology as Signal. If you consider that Signal is fine based on an audit of what is clearly an older version (do audits come out every day with new Signal releases? No, so the code you're running wasn't covered by the audit), then WhatsApp is fine based on being the same protocols with different branding.


An audited piece of software with relatively minor changes is definitely far more trustworthy than a piece of software that differs much, much more. WhatsApp is garbage compared to Signal.


"without an audit"

It would be more accurate to say that they have failed every attempted audit of the protocol.


I'd love to read more about this, could you please provide a few links?


Just FYI, Actalis will mint you a free S/MIME cert.

https://www.actalis.it/products/certificates-for-secure-elec...


Where they generate your private key :(. I'd rather have them sign my own key.


I understand that there are better tools for encryption, but is there anything that replaces the identity management of PGP? Having a standard format for sharing identities is necessary in my opinion. If I have a friend (with whom I already exchanged keys) refer me to some third friend, it would be nice if he can just send me the identity. Sending me the signal fingerprint isn't a solution for two reasons:

- I don't want to be manually comparing hashes in 2019

- it locks me into Signal; I won't be able to verify a git commit from that person, as an example

Is there a system that solves this? Keybase is trying but also builds on PGP, we can use S/MIME which relies on CAs but is not better than PGP. Anything else?


Keybase builds a lot on top of saltpack, which works like a saner PGP: https://saltpack.org

The underlying cryptography is NaCl, which is referenced in the original post.


I don’t get it. How does Saltpack solve the issue of identity management?


saltpack does not solve identity management itself, it is merely the cryptography and the physical format. Keybase, however, is all about identity: https://keybase.io.


I think that currently Keybase.io is the only thing trying to be universal, with their transparency log plus links to external profiles along with signed attestations for them.

But even that's still not quite what I'm looking for. There's no straightforward way to link arbitrary protocol accounts / identities to it, outside of linking plain URLs.

We need something a bit smarter than keybase that would actually allow you to maintain a single personal identifier across multiple protocols.


Also, as I looked a little bit into Keybase I learned that they don't support any protocols that don't have public profile-like pages. So a pure messenger wouldn't be supported by Keybase.


They're opening up the protocol so that any website can provide the authentication, and Mastodon already implements it: if you have an account on Mastodon, you can have an additional proof on keybase.

See the blog post and the spec that details the changes to implement: https://keybase.io/blog/keybase-proofs-for-mastodon-and-ever...

I presume a pure IM system would have to implement some web gateway at the server level.


The CA system is strictly better than PGP for identity management in every respect.

People often think it must be the opposite but this is essentially emotional reasoning: the Web of Trust feels decentralised, social, "webby" un"corporate", free, etc. All things that appeal to hobbyist geeks with socialist or libertarian leanings, who see encryption primarily through the activist lens of fighting governments / existing social power structures.

But there's nothing secure about the WoT. As the post points out, the entire thing is theatre. Effectively the WoT converts every PGP user into a certificate authority, but they can't hope to even begin to match the competence of even not very competent WebTrust audited CAs. Basic things all CAs are required to do, like use hardware security modules, don't apply in the WoT, where users routinely do unsafe things like use their private key from laptops that run all kinds of random software pulled from the net, or carry their private keys through airports, or accept an email "From" header as a proof of identity.

I wrote about this a long time ago here:

https://blog.plan99.net/why-you-think-the-pki-sucks-b64cf591...


In an otherwise good write-up, I disagreed with this line:

" There is no scope for difference between a “big corporate” CA and a “small politically active” CA because the work they do is so mechanical, auditable and predictable."

There is room for a politically-active CA, like there is for anything else. In each market, there are players that get business for doing better things for privacy, being eco-friendly, being more inclusive, etc. -- things that get business from vote-with-your-wallet types. My idea, inspired by Praxis doing Mondex's CA, was a non-profit or public-benefit company that had built into its charter and legal agreements many protections for the customers, in a country without secret laws/courts like the U.S. Patriot Act. The CA would also be mandated to use high-security approaches for everything it did, instead of just HSMs. They might also provide services like digital notary.

In short, I can imagine more trustworthy and innovative CAs being made. I'd easily pay one like that over the rest, and I'm sure there's some number of people and businesses out there that think the same way. I wouldn't try it as a main business, though, since the market is too cut-throat. My idea was that a company like Mozilla would try it to see what happens. Let's Encrypt confirmed that the non-profit, public-benefit part is feasible.


> But there's nothing secure about the WoT.

I haven't read your blog, but this sentence unfairly paints WoT with PGP/GPG's problems.

It's completely reasonable to have a WoT that operates correctly when at least a single participant isn't completely incompetent. That's how git works.

I haven't looked closely but I'd be willing to speculate that PGP is to WoT what C++ is to fast compile times.


Just to drive the point home, compare:

* the amount of time some pedants waste at a PGP "key party"

* the time it takes me to accept a merge request from someone who made a commit in the gitlab UI using Internet Explorer on their grandparents' malware-laden Dell desktop

Both examples leverage WoT.

Edit: hehe I wrote PGP "key party" instead of "key signing party."


Are we talking about extended validation or ___domain validation?

With ___domain validation it is likely better to use DANE in the context of email. The sender looks up the key and MX record and acts accordingly, and for Postfix there are plugins that already do it. Very few current users, however.
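
For the lookup side, GnuPG can already fetch keys published in DNS as OPENPGPKEY/DANE records -- a sketch, assuming the recipient's ___domain actually publishes such a record (which requires DNSSEC tooling on their side):

  gpg --auto-key-locate clear,dane --locate-keys alice@example.com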


I am not against CAs, but even then, what tool do you use? Personal SSL certificates for signing? Except for S/MIME I haven't seen this used anywhere.


The CA system as set up today is a bit fragile and much too limited, though. If it was all we needed, everybody would be using S/MIME with signed certs.

We need something more expressive than the current CA system, where you can make the choice to define your own trusted roots.


You can always edit the trust store to add or remove certs your local computer trusts. That's easy, there are even GUI tools available to do it. Heck on MacOS there's even a GUI wizard to create a local CA from scratch!

Nobody does it because the hard part of being a CA isn't the protocol part, it's convincing everyone that you're going to do a good job of issuing certificates. The WoT just ignores that problem entirely - and it's ultimately a social issue.


But you can't trivially define your own scopes, each with its own independent set of trusted CAs. That's part of what's missing. By default it's universal or per program.

Just look at every kind of umbrella organization out there, like industry-specific auditors with a scope limited to a field (medical, finance, food safety), or even hobby organizations with a parent organization auditing local chapters.

You don't go to the social security office to look up your neighbor's phone number when you need to talk to them. The attributes people care about are often more local, more narrow.

People first go to local trust anchors to get information about things (and their software clients could then traverse various directories up to a root and back down, if necessary). I need my client to be able to understand an assertion from an entity far more personal to me than a distant CA. CAs are most useful for ephemeral connections, not long-term ones.

This is what I mean when I say the CA system isn't expressive enough.


I would suggest using restic over TarSnap for encrypted backups -- it gives you more flexibility with where your backups will be stored, since TarSnap is pretty tightly integrated with Colin Percival's online service and is also, unfortunately, not free software. But it's also an as-simple-as-possible cryptosystem. Filippo did a quick look through it and said that it seemed sane from a crypto perspective[1].

[1]: https://blog.filippo.io/restic-cryptography/
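
For anyone curious, a minimal restic workflow against S3-compatible storage looks something like this (bucket name and paths are placeholders):

  export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=...
  restic -r s3:s3.amazonaws.com/my-backup-bucket init       # creates the repo, asks for a password
  restic -r s3:s3.amazonaws.com/my-backup-bucket backup /home/me
  restic -r s3:s3.amazonaws.com/my-backup-bucket snapshots  # list what you have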


I'm using Restic, and it mostly works. Unfortunately, it uses an absurd amount of memory, proportional to the size of your backup store. So, don't use it to back up a small VPS.

If you do, "export GOGC=20" can help a little, but it'll still use a lot of memory.


Restic is also fine.


I wish that restic supported asymmetric keys. I'm uncomfortable storing the key alongside the backup tool, even if it just gets injected at runtime. If a nefarious party gets the key all my backups from that key are vulnerable.

I suspect that it's probably hard to add that functionality because you can't do the deduplication without decrypting the prior backups (or at least an index). That would also explain the memory usage JoshTriplett mentions.


Any opinions about rclone?

It seems to be fine for my mediocre backup needs


rclone is fine (in fact I use rclone with restic to synchronise my restic backup repository on BackBlaze B2), but it doesn't encrypt or deduplicate your backups -- it's just a synchronisation tool like rsync.
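
For reference, restic's rclone backend makes the combination look roughly like this (remote and repo names are whatever you configured):

  rclone config                            # set up a remote, e.g. "b2"
  restic -r rclone:b2:my-restic-repo init
  restic -r rclone:b2:my-restic-repo backup ~/documents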


What about its 'crypt' encryption backend?

AFAIK it uses scrypt, which was designed for TarSnap.


Ah, I wasn't aware it had an encryption backend. Just looking at the documentation I'm quite worried -- why on earth is there an "obfuscate" mode?

I would suggest using restic. It doesn't have any weird modes, it simply encrypts everything and there isn't any weird need to specify (for instance) that you want things like metadata or filenames encrypted. Also backups are deduplicated using content-defined-chunking -- giving you pretty massive storage savings. If you really need rclone since it supports some specific feature of your backend -- restic supports using rclone as a synchronisation backend (or you can do what I do, which is to batch-upload your local restic backup every day).


It's a choice between plaintext filenames or no plaintext filenames.

You might want to crypt your nudes, but still access normal pictures unencrypted through the provider's web interface.


The missing answer in there is how to avoid PGP/GnuPG for commit signing. I've asked about this in another similar thread[0] but didn't get a hopeful answer.

Every time I look at git's documentation, GPG seems very entrenched in there, to the point that for things that matter I'd use signify on the side.

Is there a better way?

[0] https://news.ycombinator.com/item?id=20379501


It seems pretty clear that, with the current tools available, there is no way to do this (at least with git). There's nothing in principle difficult about it; it's just that (say) git+signify hasn't been implemented.

I'm getting the strong sense (see also my toplevel comment, and maybe someone will correct me and/or put me in my place) that there's an enormous disconnect between the open source + unix + hobbyist + CLI development communities, and the crypto community. The former set has almost no idea what the state of art in crypto is, and the latter (somewhat justifiably) has bigger fish to fry, like trying to make it so that non-command-line-using journalists have functional encryption that they can use.

I think this is a sociological problem, not a technical "using command-line tools makes Doing Crypto Right impossible".


Signing tags (or, somewhat less usefully, commits) can be done the same way packages are signed. It might not be directly integrated with git, but it wouldn't be hard to make a good workflow.

The article mentions Signify/Minisign [1] as a PGP alternative.

[1] https://jedisct1.github.io/minisign/
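
A sketch of one possible workflow, signing a release tarball produced from a tag with minisign (key paths and tag name are placeholders):

  minisign -G -p minisign.pub -s minisign.key         # one-time key generation
  git archive --format=tar.gz -o release-1.0.tar.gz v1.0
  minisign -S -s minisign.key -m release-1.0.tar.gz   # writes release-1.0.tar.gz.minisig
  minisign -V -p minisign.pub -m release-1.0.tar.gz   # what your users would run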


> It might not be directly integrated with git

That's the problem I see. I have signingkey in .gitconfig, together with [commit] gpgsign = true. This way, set & forget, all my commits are signed (it's my employer's requirement, probably some "compliance" stuff). You can see it right away, nicely displayed as "Verified" on GitHub. I didn't know about GPG's supposedly weak security until now, but I've always considered it not very convenient to use.
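
For reference, that whole set-and-forget configuration is just (key ID is a placeholder):

  [user]
      signingkey = ABCD1234DEADBEEF
  [commit]
      gpgsign = true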


Ah, well if your employer mandates PGP signatures on every commit, that's that.

FWIW, the creator of git argues that signing every commit is essentially pointless. [1] I agree.

[1] http://git.661346.n2.nabble.com/GPG-signing-for-git-commit-t...


So the suggested solution for more secure email is just to give up on the concept of email entirely? Apparently anything that does not do perfect forward secrecy is just pointless, so there is no point in trying to keep significant discussions around to refer to later. We are expected to return to a sort of virtual pre-writing stage.

This is not really helpful. For all its shortcomings, PGP is pretty much all we have. If used in a straightforward way, it actually can protect email from nation-state-level actors for a significant time. That's gotta count for something.


I know, right - they are all recommending Signal, Wire, WhatsApp, etc., but these aren't alternatives. They are all centralized, controlled by a single entity even if the underlying protocols are open. And you're right, they are instant messaging - i.e., alternatives to Messenger, Hangouts, etc.

We need a modern email replacement that is decentralized, federated, and so on. Something that keeps all the modern cryptographers happy, while facilitating the same kind of long-form conversations and federated self-hostability that email provides.

I think Matrix is getting there, but even that is still focused on instant messaging.


As I've mentioned in another thread, I think we're more likely to get there via the route of E2EE documents collaboration tools. Something that cleanly breaks away from the email model, and adds enough value to make the switch worth the effort.


Telling people to treat email as insecure, and thus not use it for anything serious, is terribly bad advice.

I am reminded of BGP (Border Gateway Protocol). Anyone who has even glanced at the BGP RFCs could write an essay about its horrible mess of compatibility hacks, extensions, and non-standard design. It also lacks any security considerations. The problem is that it is core infrastructure of the Internet.

Defining something as insecure, with the implied statement that we should treat it as insecure, is unhelpful advice in regard to critical infrastructure. People are going to use it, will continue to use it for the foreseeable future, and will continue to treat it as secure. Imperfect security tools will be applied on top, imperfectly, but it will see continued use as long as it is the best tool we have in the circumstances. Email and BGP and a lot of other core infrastructure that is hopelessly insecure will continue to be used with the assumption that they can be made secure, until an actual replacement is made and people start to transition over (like how IPv6 is replacing IPv4, and we are going to deprecate IPv4 if you take a very long-term view of it).


People that use email to convey sensitive messages will be putting themselves and others at risk, whether or not they use PGP, for the indefinite future. That's a simple statement of fact. I understand that you don't like that fact --- nobody does! --- but it remains true no matter how angry it makes you.


I would say that people who use any critical infrastructure not designed with security in mind are putting themselves and others at risk if they convey sensitive information. This is why plaintext protocols should be considered insecure.

It would be great if we could replace the whole Internet with modern technology rather than relying on ancient systems like BGP and email.


> It would be great if we could replace the whole Internet with modern technology rather than relying on ancient systems like BGP and email

I've occasionally thought of starting a long-term project that could eventually do that, assuming that politicians screw things up the way it looks like they're going to over the next few decades.

The idea is that a group of interested people would develop these new systems with no requirement whatsoever for backward compatibility or interoperability with the current systems.

Of course these new systems would not get widespread adoption. They'd probably only be used by the developers and a few others willing to essentially run two completely different systems in parallel: the new stuff for communications among themselves, and the current stuff for everything else. That's fine. It means no pressure to compromise to get something out faster.

Lack of adoption is not a problem. That's where politicians come in. What we are counting on is that those idiots are going to manage to cause or fail to prevent some apocalyptic event(s) that will sufficiently destroy the current systems that when the survivors get around to rebuilding the Internet and communication infrastructure they are starting from a clean slate.


How do you write this after writing that previous comment, which says that what I just wrote is "terribly bad advice"?


Telling people to stop using the Internet because it is insecure is bad advice. It is extremely unrealistic, like telling people to stop using cars and trucks because driving kills people every year.

However, suggesting that we should change things to eliminate the risk is good. We could eliminate car accidents completely if everyone switched over to automatically driven cars that communicated as a mesh network. The Swedish "zero vision" could be achieved, maybe even with today's technology, but it would be a massive undertaking.

Replacing BGP would be a similarly massive undertaking. Just switching from IPv4 to IPv6 has so far taken 20 years, and we have no date in sight when we can start deprecating IPv4. From what I have heard/seen, a lot of people are somewhat reluctant to issue backward-incompatible replacements of core infrastructure because they look at IPv6 and fear that kind of process. I have even seen some pessimistic talks arguing that it is impossible, and that the only way to achieve changes in core infrastructure is with incremental changes that are fully backward compatible. I am not really of that view, but I do understand the fear.

My advice to people is not to abandon email, even if I doubt many people would heed the warning that email is unsafe for government, business, people and their families. People will risk it regardless. Thus I focus on what may help, imperfect as it may be. In the past that was PGP in the form of the Enigmail plugin. Today I am keeping an eye on the new pretty Easy privacy, which hopefully can outsource the security to a library that attempts opportunistic encryption whenever possible.


The PGP team openly and enthusiastically discusses how they've advised dissidents in places like Venezuela, about which the NYT just recently ran an exposé of death squads sponsored by the Maduro administration. What they're telling dissidents to do has to work. It demonstrably doesn't. Pretending otherwise, because everyone else does it, is malpractice. I don't understand where the wiggle room people are finding on this issue is.


The only advice you can semi-safely give to dissidents who face state-organized death squads is to hide and get new identities, and never ever reveal the old ones to anyone.

Signal will not make people immune to death squads, nor will any other technology. It was not that long ago that members of Anonymous went after the cartel and we got pictures of people tortured and killed. It only takes one trusted person who knows a dissident's real identity or family or friends or community for things to get very ugly very fast.

If the PGP team promised security against state-organized death squads, then that's their fault. Pretending that technology will protect you against that kind of threat can be a very costly mistake.


I am a combination of honored and terrified that signify is the leading example for how to sign packages. The original goals were a bit more modest, mostly focused only on our needs. But that's probably what's most appealing about it.


I think you should get comfortable with that, because all the opinions I've collected seem to be converging on it as the answer; Frank Denis Frank-Denis-i-fying it seems to have cinched it. Until quantum computers break conventional cryptography, it would be downright weird to see someone design a new system with anything else.


We over at Sequoia-PGP, which gets an honorable mention from the OP, are not merely trying to create a new OpenPGP implementation, but to rethink the whole ecosystem.

For some of our thoughts on the recent certificate flooding problems, see https://sequoia-pgp.org/blog/2019/07/08/certificate-flooding...


The homepage has some meaningless marketing bullet points about the greatness of PGP itself. Where would I find the ways in which you rethink the whole ecosystem? It seems like Sequoia is just a library, not even a client. I'm wondering how this could change PGP much, if at all.


The PGP problem isn't going away until there is a stable alternative. Under 'The Answer' there are several different, ___domain-specific tools, with their different features and UIs and limitations. And for the general case (encrypting files -- or really, 'encrypting data'), "this really is a problem". If I want to replace my use of GnuPG in production with the things on that list, I need to write my own encrypt/decrypt wrappers using libsodium and hope that future travellers can locate the tool or documentation so they can decrypt the data. So I stick with the only standard, GnuPG, despite acknowledging its problems.


What specific problem are you trying to solve with PGP? If it's "encrypting files", why are you encrypting those files? What's the end goal? I acknowledge that there are cases that boil down to "encrypt a file", but believe they are a lot narrower than people assume they are.


We encrypt files:

- For offsite backups (disaster recovery), mirroring object stores and filesystems to cheap cloud storage.

- For encrypting secrets needed for maintaining IT systems (e.g. all those shared passwords we never seem to be able to get rid of)

- For encrypting sensitive documentation for transfer (email attachment, shared via filesystem, shared via HTTP, shared via pastebin even)

Despite the awful UI, GnuPG does all of that in a standard way. We have tested disaster recovery with no more instructions than 'the files are in this S3 bucket'.

And the same tool is also useful for other tasks too:

- public key distribution (needs care to do it securely, but functional)

- commit signing, signed tags

- package signing (per Debian)

We could use custom or multiple tools for all this, but a single tool to learn is a big advantage.

I think all use cases boil down to 'encrypt and/or sign a file' for one of the stages. In the article, 'talking to people', 'sending files', 'encrypting backups' are all really just 'encrypt/sign a file' followed by transmission. And some sort of keyring management is needed for usability. A tool that can pull keys from a repository and encrypt and/or sign a file to a standard format could be used to build all sorts of higher-level tools. I imagine it would be quite possible to build this on top of libsodium, and, if it gained mindshare, it could replace uses of GnuPG.
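
For concreteness, the kind of single-tool invocation I mean (recipient and filenames are placeholders):

  gpg --encrypt --sign -r backups@example.org -o dump.sql.gpg dump.sql
  gpg -o dump.sql --decrypt dump.sql.gpg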


> I think all use cases boil down to 'encrypt and/or sign a file' for one of the stages. In the article, 'talking to people', 'sending files', 'encrypting backups' are all really just 'encrypt/sign a file' followed by transmission.

But they aren't the same thing. That's the whole point the article is making. Yes, if all you have is a tool that does "encrypt+sign a file", then all crypto problems will look like "encrypt+sign a file" problems.

For backups, tools like restic provide deduplication and snapshots as well as key rotation (and restic works flawlessly with dumb S3-like storage). You can't do that with PGP without reimplementing restic on top of it. Same with TarSnap. For talking to people, you want perfect forward secrecy and (usually) deniability. PGP actively works against you on both fronts. For sending files, there are also other considerations that Wormhole handles for you (though to be honest I haven't used it in anger).

While you can "solve" these problems with one tool, the best way of solving them is to have separate tools. That's the point the article is making.


For securely talking to people, often you may want non-repudiation, which is the exact opposite of deniability and anonymity.

There are very different, incompatible needs for slightly different usecases.


Signal -- and all other OTR-like protocols -- has deniability (or, if you prefer, repudiation rather than non-repudiation). Neither conversation participant can prove to a third party that the other party said something in a conversation. Moxie wrote a blog post about this in 2013[1].

The only circumstance in which you want non-repudiation is if you are really sure that you are okay with the recipient of your message later posting cryptographic proof that you said something in a chat with them. I bet most people (if you asked them) would effectively never want that "feature" for private chats.

[1]: https://signal.org/blog/simplifying-otr-deniability/


Sure, you usually don't want that feature in a private setting, but you almost always want it in a commercial setting, and lots of communication happens in that context.

E.g. vendor-customer helpdesk chat, internal workplace communication, including "less internal" things like different subsidiaries of international companies, etc. Half of the financial world runs on the Thomson Reuters messenger, which is essentially glorified chat. What if your boss sends you a message "hey, do that risky thing right now" - do you want that (likely informal) means of communication to have deniability? Does the company want deniability in the app in which random middle managers message their subordinates? It makes sense for companies to mandate that teams choose only communication platforms that support authentication and non-repudiation.

As soon as money, any kind of disputes, and the smallest chance for future legal proceedings are involved, anonymity and deniability are flaws and not features - as I said above, superficially similar use cases can have opposing and incompatible requirements.

Even going back to the commonly discussed use case of Signal for journalism. Let's say a journalist interviews a whistleblower over a mobile messaging app - you'd want anonymity and deniability there. And five minutes later that same journalist asks a clarifying question to the official head of that agency, likely also using a mobile messaging app, possibly the same one. Do you want the answer of that official to have deniability, or do you want that journalist to be able to cryptographically prove that the official lied?


I think our main disagreement is the usage of the word "usually". There are chat systems that have non-repudiation that aren't PGP -- I don't think there's much more to elaborate.

For personal communications, usually people want deniability. For business-related communication, you might want non-repudiation.


Probably the point I'm trying to make is that, for me, the communication scenarios seem similar enough, and the line between business and consumer communication is so blurry (with people using private mobile devices in business and expecting the same set of tools to cover all their communication) that saying "oh, there's another tool that does it the opposite way" isn't really a sufficient answer. Perhaps we need to treat it as essentially a "flag" in the same tool: this user account/chat group/etc. is authenticated and has no deniability whatsoever, but in the same app over there I have a pseudonymous contact, marked as such, that's effectively anonymous with full deniability and OTR communications.


Magic Wormhole is neat, but the happy path is to use a third-party rendezvous server, which is susceptible to traffic analysis. (I also wish that the codes had more than 16 bits of entropy, but that is partly cargo-culting on my part.)

Signal is also vulnerable to server-side traffic analysis, and is strangely keen on both demanding a selector (a phone number) for identity and on trusting Intel's 'secure' enclave (I strongly suspect that it's suborned).

One thing I do like about PGP is that it has been around awhile: I can still decrypt my old files & verify old signatures just fine, something I don't trust the flavour of the month to do.

I think that rather than a Swiss Army knife tool & protocol like PGP, we should have a suite of tools built around a common core, probably NaCL. That way we can have compatibility when we need it, but also aren't cramming square pegs into round holes.

Finally, the Web of Trust was a bright, shiny idea — and wrong. But Trust on First Use is also pretty bad, as is the CA ecosystem. We need something else, something decentralised and also secure — and I don't think that's impossible. I've had a few ideas, but they haven't panned out. Maybe someday.


> Finally, the Web of Trust was a bright, shiny idea — and wrong

Yeah, this whole part is some 90s cypherpunk way of modeling human relations, which has never mapped onto any real world relationships. As soon as people had real world digital identities outside of their gokukillerwolfninja666 logins, this didn't help.

CA ecosystem might be fundamentally flawed, but WoT was a complete failure. So PGP users end up trusting some key server which is probably sitting under someone's desk and has been owned by every serious intelligence service since forever.


Re: common core: isn't that already happening? age and signify are both based on those 2, and magic-wormhole arguably is too (though point addition is not always exposed, so SPAKE2 is a little harder to implement than signify).


Yeah-ish, but what I mean is an actual set of different tools (so, not one-size-fits-all) but all part of the same suite, rather than a bunch of different implementations of mostly the same idea — i.e., *BSD rather than Linux.

One of my numerous hobby projects is exactly that, but … I simply don't have enough Round Tuits.


OK, so you're saying something like:

  magic send <FILE>
  magic receive <CODE>
  magic encrypt <FILE>
  magic sign <FILE>
... that ideally all have NaCl at the base but are otherwise one binary that you have to remember?

The tricky one there is probably chat.


I think they meant something like the OpenSSL binary that obviously builds on the library and provides everything through the command line.

The problem is that the same thing for NaCl/libsodium would be lower level, and probably still not enough as exhibited in the article: a typical use case is not "I want to encrypt this file", it's "I want to send this file to that person such that no one else can read it" or "I want to send a message to that person such that no one else can read it, and if they can they shouldn't be able to read other messages from the same conversation". No cli tool can properly solve this, it has to be incorporated in the application or even protocol.


Incidentally, the real goal of magic-wormhole is to provide the initial secure introduction between two people's communication tools. Get your public key into my address book safely, and then all those other modes have something to work from. Keybase.io is kinda in the same direction except they're binding key material to de facto identity services (github, twitter, etc) rather than pairwise introduction.


Well, they don't all have to be one binary, but I'd like them to use consistent file formats, command-line arguments &c.

And yeah, chat is very much not like the rest — but I'd still like my chats to be somehow validated with my magic ID.


Couldn't we instead just cut the shared password in 2 (or use 2 passwords, it's the same), so we don't require point addition? I really don't feel like implementing key exchange in Edwards space just so I can have the complete addition law… unless maybe I don't need the law to be complete?

Here's how it could work (unless you tear it apart):

  Alice and Bob share two passwords out of band: Pa and Pb
  Alice and Bob generate two key pairs ka/KA, and kb/KB
  Alice sends KA, Bob sends KB
  Alice and Bob compute ss = HASH(DH(ka, KB)) = HASH(DH(kb, KA))
  Alice responds to KB with Ha = HMAC(Pb, KB || ss)
  Bob   responds to KA with Hb = HMAC(Pa, KA || ss)
  Alice verifies Hb
  Bob   verifies Ha
  The session key is HASH(ss) or something
The main disadvantage is that to achieve the security of a true PAKE, passwords here must be twice as long. A 4-digit PIN here would only have the security of two digits (1/100). You'd need 8 digits to get to 1/10,000 security. On the other hand, it's extremely simple, doesn't require point addition, and if there's any flaw you've probably already spotted it.
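
A rough sketch of that handshake in Python, assuming X25519 for the DH step (via PyNaCl) and SHA-256 for HASH/HMAC -- illustrative only, with both sides collapsed into one process:

  import hashlib, hmac
  from nacl.public import PrivateKey
  from nacl.bindings import crypto_scalarmult

  Pa, Pb = b"12345678", b"87654321"                      # shared out of band

  ka = PrivateKey.generate(); KA = bytes(ka.public_key)  # Alice sends KA
  kb = PrivateKey.generate(); KB = bytes(kb.public_key)  # Bob sends KB

  # both ends derive ss = HASH(DH(...)) from their own secret and the peer's public key
  ss = hashlib.sha256(crypto_scalarmult(bytes(ka), KB)).digest()
  assert ss == hashlib.sha256(crypto_scalarmult(bytes(kb), KA)).digest()

  Ha = hmac.new(Pb, KB + ss, hashlib.sha256).digest()    # Alice -> Bob
  Hb = hmac.new(Pa, KA + ss, hashlib.sha256).digest()    # Bob -> Alice
  # each side checks the peer's tag with hmac.compare_digest(...)
  session_key = hashlib.sha256(ss).digest()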


Re: magic-wormhole's 16 bits: I don't think you should be worried about that, because SPAKE2 will give you proof positive if the attacker attempts to guess. Are you saying 2^-16 success isn't good enough?


> Are you saying 2^-16 success isn't good enough?

I really don't think it is, because it might be worthwhile for a particular sort of attacker, say one who runs the default rendezvous server: observe global activity, attempt to MitM every connexion for 30 seconds, then write up a spurious blog post about a 'network issue' or 'bug' or whatever which caused a brief outage. N:2^16 is okay against targeted attacks, mostly (hence my 'cargo-culting' comment), but with a large enough N …

The nice thing about 1:2^128 is that you just don't have to care.


(magic-wormhole author here)

It's probably worth pointing out that the 2^-16 chance is per invocation of the protocol -- it's not an offline attack. So you'd have to be reeeealy patient to run it enough times to give the attacker a decent chance of success.

The best attack I can think of would be for me (or someone who's camped out on my rendezvous server) to make an MitM attempt on like one out of every 100 connections. Slow enough to avoid detection, but every once in a while maybe you get a success. Of course you don't get much control over whose connection you break (if you did, you'd be back in the detectable category again).

FWIW, some numbers. The rendezvous server that I run gets reports from clients about the success/failure of the key establishment phase. Over the last year, there were 85k sessions, of which 74% resulted in success, 22% in timeouts, and 2.5% in bad key-confirmation messages (meaning either a failed attack, or someone typoed the code). So in the worst case where every one of that last category was really a failed attack, there's roughly a 2130/2^16 = 3% chance that someone managed a single successful attack last year.

But I tried to make it easy to choose a different tradeoff. `alias wormhole-send='wormhole send --code-length=4'` gets you to 2^-32 and gives codes like "4-absurd-almighty-aimless-amulet", which doesn't look too much harder to transcribe.


Yeah, I think that the current default sails too close to the wind. A 2^-16 chance on a stochastic MitM attack feels fine statistically - and then you think about that one person it worked on. For them it wasn't 2^-16, it was binary: it didn't work.

They just used your "secure" file transfer mechanism and their data got stolen.

You're on the same side of this as the airline industry. The person driving a car understands intellectually that their drowsy half-attention to the road is statistically going to kill them, whereas travelling in coach on a mid-range two-engine jetliner is not - but emotionally they consider driving to be fine because they're doing it, and air travel is trusting some random stranger. As a result, the industry needs to make air travel so ludicrously safe that even though emotionally it still feels dangerous, the passengers will put that aside.

2^-16 is intellectually defensible, but my argument above is that that shouldn't be what you're going for. So that's why I'd suggest a longer code by default.


Okiedokie: you should just use wormhole then: wormhole receive 4-gossamer-hamlet-cellulose-suspense-fortitude-guidance-hydraulic-snowslide-equation-indulge-liberty-chisel-montana-blockade-burlington-quiver :-)


Isn't the code length also selectable?


Yep, the default python implementation lets you pick your own code or generate one with more than 2 words. I'm just cautious in telling people to twiddle knobs that don't need twiddling :-) But you can absolutely go up to 2^-64 or whatever if that makes you happy!


Really, all I did here was combine posts from Matthew Green, Filippo Valsorda, and George Tankersley into one post, and then talk to my partner LVH about it. So blame them.

(also 'pvg, who said i should write this, and it's been nagging at me ever since)


The elephant in the room is "what to do about email", and a significant part of the issues are related to the "encrypt email" use case: part of the metadata leakage, no forward secrecy, ...

The closest advice to this in the article would be "use Signal", which has various issues of its own, unrelated to crypto: it has the Signal Foundation as a SPOF, and its ID mechanism is outright wonky, as phone numbers are IDs that are ___location-bound, hard to have several of as one person, hard to share among several persons, and hard to roll over.

To me that seems to be a much bigger issue than "encrypting files for purposes that aren't {all regular purposes}".


Is it wrong to use openssl to encrypt files?

0. (Only once) generate key pair id_rsa.pub.pem, id_rsa.pem

1. Generate random key

  openssl rand -base64 32 > key.bin
2. Encrypt key

  openssl rsautl -encrypt -inkey id_rsa.pub.pem -pubin -in key.bin -out key.bin.enc
3. Encrypt file using key

  openssl enc -aes-256-cbc -salt -in SECRET_FILE -out SECRET_FILE.enc -pass file:./key.bin
-- other side --

4. Decrypt key

  openssl rsautl -decrypt -inkey id_rsa.pem -in key.bin.enc -out key.bin 
5. Decrypt file

  openssl enc -d -aes-256-cbc -in SECRET_FILE.enc -out SECRET_FILE -pass file:./key.bin


from https://www.openssl.org/docs/man1.1.1/man1/openssl-enc.html

> The enc program does not support authenticated encryption modes like CCM and GCM, and will not support such modes in the future.

> For bulk encryption of data, whether using authenticated encryption modes or other modes, cms(1) is recommended, as it provides a standard data format and performs the needed key/iv/nonce management.

So don't use `openssl enc` to encrypt data.

`openssl cms` that is recommended above is S/MIME. Don't use S/MIME.

I can't wait for Filippo Valsorda's `age` to be done so I would have an answer to the question of "what should I use to encrypt a file?".


`enc` doesn't support and will never support _any_ authenticated ciphers. Consider it a red flag when you see it in the future.

https://github.com/openssl/openssl/commit/c03ca3dd090c6eb739...


To start with, none of that encryption is authenticated.


So if I understand you correctly (Noob here), Alice would need to sign the pair (key.enc, file.enc) to authenticate that those files originated from her.

Without that, Bob could potentially receive any pair of (key,file), which would just decrypt into garbage data.

BTW, variations on that sequence appear all over the internet when searching for "openssl encrypt file with public key"...


This is one of the problems with cryptography: with a little knowledge, you can end up making yourself completely insecure while believing yourself to be very secure.

People generally imagine that "encrypt this block of data" is a simple primitive that does everything you want it to. But naive encryption doesn't work like that. In the worst case, where you use ECB for the block cipher [1], you end up with the ECB penguin: https://blog.filippo.io/the-ecb-penguin/. Your secure crypto becomes a pretty trivial substitution cipher, just on a larger alphabet. Other modes (such as the CBC mode you used) aren't so bad, but if you have some hint of the structure of the underlying data, you can start perturbing the ciphertext to manipulate the encrypted data.

The modern solution to that problem is "authenticated encryption," which means that you add in an additional guarantee that the ciphertext hasn't been tampered with. Even then, there is still room for doing things incorrectly (padding is a real pain!).

[1] This is so bad it shouldn't ever be an option in any tool.
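
To make "authenticated encryption" concrete, here's a minimal sketch with PyNaCl's SecretBox (XSalsa20-Poly1305): flip a single bit of the ciphertext and decryption fails outright, instead of silently yielding corrupted plaintext.

  import nacl.secret, nacl.utils
  from nacl.exceptions import CryptoError

  key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
  box = nacl.secret.SecretBox(key)
  ct = box.encrypt(b"attack at dawn")           # random nonce handled for you
  assert box.decrypt(ct) == b"attack at dawn"

  tampered = ct[:-1] + bytes([ct[-1] ^ 1])      # flip one bit of the ciphertext
  try:
      box.decrypt(tampered)                     # Poly1305 tag check fails
  except CryptoError:
      print("tampering detected, nothing decrypted")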


> [1] This is so bad it shouldn't ever be an option in any tool.

And yet it's effectively the default.


No, the problem is that the AES encryption step uses bare AES-CBC. An attacker can flip bits in the ciphertext to make targeted changes to the plaintext. What you want is an authenticated encryption mode, but I don't know that the OpenSSL cli supports any of them.


I've followed and enjoyed your commentary on PGP and cryptography in general, so I thought I'd post it.

Any idea when Filippo's `age` will be done, or how to follow its development, other than the Google doc?


Filippo should get to work! The design part of age is the hard part; the actual programming is, I think? maybe one of the easier problems in cryptography (encrypt a single file with modern primitives).

I am a little bit giving Filippo shit here, but one concern I have about talking "age" up is that I'm at the same time talking up the problem of encrypting a file more than it needs to be, so that people get the impression we'd have to wait, like, 5 years to finally see something do what operating systems should have been doing themselves all this time.


What are your thoughts on Keybase as a secure Slack replacement?


Personally, I love Keybase and it is my #1 choice for communicating with a few people, but bugs are far too frequent for me to consider it for a business.

It's getting better, but not close to being business-ready imo.


I've been using Wire on iOS, web, desktop (Electron), and Android, and Keybase on Android and desktop (CLI). Neither is great, but Keybase is definitely the buggier. Wire on Android, on the other hand, is also quite unusable due to its battery drain. And both are pretty much unusable on desktop: Keybase is a CLI (not even a TUI) and Wire is Electron. I'd prefer a TUI over Electron, but it's not even a TUI, so I guess Wire wins this round. Keybase also doesn't have a web client, which is why I have experience with the command-line client. I think that says enough in and of itself.

What I'm trying to say is: definitely also try Wire, as it's similar but slightly better. I also haven't figured out how to verify someone over Keybase, so it's basically unauthenticated or opportunistic encryption. By comparison, Wire is considered secure enough by my company after we did a pentest on it, which says quite something (we would run from most customers' stuff if we were thinking of using it), and we use it as our main communication platform within the company.


> Keybase is a CLI

Keybase has a relatively functional electron app on Mac and PC that I've used, and presumably also Linux.

> I also haven't figured out how to verify someone over Keybase

Isn't the goal with Keybase to see "this person is accountiknow on Github" and do verification that way?


> Keybase has a relatively functional electron app

Oh, then I am mistaken, thank you for correcting me and sorry for talking nonsense.

> Isn't the goal with Keybase to see "this person is accountiknow on Github" and do verification that way?

Yes, but that doesn't help me because my chat account is not cryptographically tied to my Keybase account.

So let's say I sign this statement that I am lucb1e on Github as well as on HN as well as on Keybase. You know and can verify that I have all these identities nicely tied to my PGP key. The person who controls those identities clearly also controls the PGP key. Now comes chat. You start a chat with me, and... magic? I don't know, but there is no verification code that I can use to check that I am really talking to lucb1e on Keybase; nothing that ties it to a PGP key; nothing that ties it to the HN user. My device might be encrypting the data for an entirely different key (whose private part is known either by Keybase or some other (W)MitM ((Wo)Man in the Middle)) and there is no way to check as far as I have been able to find.


Keybase isn't centered around a PGP key, each account is like a web of devices and cross-verifications of different proofs. All of this is stored and verified locally. When you open a chat with "rakoo on keybase" you can be sure that all messages are encrypted to me because the keybase client will have used the equivalent of my public key. The same process allows us to exchange files, or share a git repository hosted on keybase. There is no specific "chat" account, it's just part of the account.


> you can be sure that all messages are encrypted to me because the keybase client will have used the equivalent of my public key

But it's not end-to-end encryption if you can't verify it. Heck, even WhatsApp can do this. Let me cite Wikipedia on end-to-end encryption:

> Because no third parties can decipher the data being communicated or stored, for example, companies that use end-to-end encryption are unable to hand over texts of their customers' messages to the authorities.

So if Keybase got a court order to intercept your messages, they totally could. Just because Keybase sends you an encryption key doesn't mean you got the one belonging to your conversational partner: you'd have to do funky stuff (read and interpret the app's memory, or decompile and patch it to show fingerprints) to be able to verify that. That's unauthenticated encryption. It's like clicking past an HTTPS warning without ever seeing the warning.


That's not how Keybase works. Keybase doesn't send you encryption keys you blindly trust. It only sends you a bunch of data structures allowing you, the sender, to verify that your correspondent really is who they claim to be. All verification of the proofs is done on the client. That means your client is going to fetch all the sites allegedly linked to that account and do the verification locally. Because of this, if Keybase ever receives a court order, there must also be court orders for Twitter, GitHub, Mastodon, and all the other websites linked to the account.

This verification happens every time you receive new information about your peer; if something has changed, it must be new, otherwise a rollback or a change in history is detected. Once your client has made this automatic check, you can get a list of the linked accounts and manually check whether they are the correct ones; at this point your client can take a snapshot to prove even more strongly that it's the correct one. Those snapshots are shared, so if an account has multiple followers (the name for someone who has manually verified an account) it's that much harder to crack.

If that's not enough, any change to any account is stored in a Merkle tree whose root is thus monotonically incrementing, and that tree can be retrieved at any time to verify that nothing has been tampered with. And that root is stored in the Bitcoin blockchain so that any fork is easily traceable. You really have to go out of your way to distribute a compromised key to a client. In the meantime the encryption is end-to-end.

Here's an article about how they protect against malicious attacks against the server: https://keybase.io/docs/server_security

Here's how they store that Merkle root in the Bitcoin blockchain: https://keybase.io/docs/server_security/merkle_root_in_bitco...


Oh gotcha - the lack of visibility into the verification codes could be an issue, but IMO it's like Signal or any other encrypted chat app: the complexity is hidden from the user, but the identities are still verified. All of the client code is open source, so you could dive in to see how verification is handled - but why should you need to? And what code could the app show you that a malicious app running a MITM attack couldn't?


> what code could the app show you that a malicious app running a MITM attack couldn't?

I trust the app itself, since you can indeed verify that code, but the idea is that you don't have to trust the (Keybase-managed) server.

So what you'd verify is the encryption key. If we do a Diffie-Hellman key exchange and our shared key is abcxyz, then both phones should show that key. If an attack is going on, the key would have to be one known to the attacker rather than your conversational partner.

Simplified, DH is quite easy: you pick a base number and a modulus (public knowledge), both parties generate a random number (let's call it Rand), each computes base^Rand mod the modulus and sends the result to the other person, who then raises the received number to their own Rand (again mod the modulus). The resulting number is only known to both parties, even though anyone could have observed the public parameters and the numbers that were sent to the other side. If a person wants to intercept this, they need to pick a Rand themselves and do the operations with that, replacing the numbers that get sent over with their own. Because they can't know the Rand of either legitimate party, they will necessarily end up with different results, and so the resulting encryption keys are different. Both parties would (upon verifying their key out of band, for example by holding their phones next to each other in real life) see different encryption keys.
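
In Python, a toy version of that exchange is only a few lines (illustrative only -- real systems use vetted groups or elliptic curves, not parameters like these):

  import secrets

  g, p = 5, 2**127 - 1                                # public: base and (toy) prime modulus
  a, b = secrets.randbelow(p), secrets.randbelow(p)   # each party's private Rand
  A, B = pow(g, a, p), pow(g, b, p)                   # the numbers sent over the wire
  assert pow(B, a, p) == pow(A, b, p)                 # both ends derive the same secret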

It's not about verifying the source code, but verifying that the server which I'm talking to is not malicious (for example if someone compromised it). That's the one property which makes end to end encryption "end to end" :)

Similar schemes can be made with different types of encryption (Diffie-Hellman is one of a few good methods), but with end to end encryption, the end devices are always the ones that verify each other.


This is great, thanks for writing it!

Brings to mind the words of renowned Victorian lifehacker Jerome K. Jerome:

“I can't sit still and see another man slaving and working. I want to get up and superintend, and walk round with my hands in my pockets, and tell him what to do. It is my energetic nature. I can't help it.”


The problem with the alternatives is that they are product-specific and baked into that product. I need a tool that is a separate layer that I can pipe into whatever product I want, be it files, email, chat, etc. Managing one set of identities is hard enough, thank you very much, and I also want to be able to switch the communication medium as needed.

I use gnupg a lot and I'm certainly not very happy with it, but I guess it's the same as with democracy: the worst system except for all the others.


The problem with this is that a tool that is too generic is itself dangerous, because it creates cross-protocol attacks and confusion attacks like https://efail.de for PGP email.

I think that a better approach is to bind identities from multiple purpose-built cryptographic protocols.


If you blame GPG for Efail, then you can really blame any form of encryption for a virus on one of its endpoints.

Next you will hold tech responsible for social engineering, and mandate that users should not know their own secrets because that causes vulnerabilities in protocols :p


By that logic we shouldn't recommend cars with high safety ratings, because we can always train users to drive motorized unicycles at 200 MPH. Clearly there's no fault with the unicycle regardless of how many people crash, it behaved as specified.

Except the specification is trash.


Except cars with 5 star rating exist but a better solution than PGP doesn't.



I don't understand how these represent "mistakes", let alone "serious mistakes". But I'm glad he liked it.


It's over my head, but I thought that a reference would be useful, given that the mailing list thread references HN.


Oh, no, sorry, I'm glad you posted this; I wouldn't have known where to look. Thank you!


So the recommendation here is just to use chat clients to communicate and forget about email? Well, that is hardly a good solution.


Yes. Email is designed to be stored forever, and all attempts to do otherwise fail due to the design of email clients. Chat can be designed to self-destruct. Anyone can screenshot either, so there's no use worrying about that.


Yes! E-mail is fundamentally terribly positioned to do secure messaging. You can use e-mail, or you can have cryptography that works and that people actually use, but you can't have both.


> E-mail is fundamentally terribly positioned to do secure messaging

E-mail is fundamentally a way to send a sequence of bytes somewhere (untrusted) so they can be picked up later by someone (trusted).

That’s also literally what Signal is built on so I think you’re overstating the difference.


Secure messaging is much more complex, but here’s a simple example of how that’s not true: TCP is bidirectional and email is one message, fire and forget. That immediately affects your ability to have forward and backward secrecy.


I do not think the OSI model is very useful, but you seem to, so let me put it this way: e-mail is bidirectional too, just at layer 7 instead of layer 4 (I hope I remembered my layers right!)

E-mail is store-and-forward just like TCP is; how do you think an IP router works? TCP is fully duplex; a tx doesn’t wait behind an rx, exactly like an e-mail reply not waiting behind an e-mail receive. The only difference is that a router will typically use volatile memory to store messages before they are sent but e-mail will typically use disk.

If your security model relies on this difference then your security model is broken. It’s worth noting that Signal does NOT rely on this difference. It relies on participants being mostly online to permit frequent rekeys and not having to retain old keys indefinitely.


No, email is not bidirectional. You send an email, the recipient later opens it. Sure, the recipient's SMTP server might respond right away with an ephemeral key you can use to enjoy forward secrecy, but that server has to store the message for the recipient to retrieve later.

You can't have full forward secrecy with email as it is used today. If you want forward secrecy with email, you need three emails sent in rapid succession: Alice sends a request to Bob, Bob sends a response to accept the request, and Alice sends the actual encrypted email. That would work. But you basically need Bob to be online.


> you need three emails sent in rapid succession

This is partially correct, but they do not need to be in rapid succession, and therefore Bob does not need to be online.


Alice's ephemeral private key must be kept as long as the whole handshake. Bob's is a bit shorter (between the last two messages).

If the messages are slow to come, those ephemeral keys become less and less ephemeral, and could actually be stolen.


That is exactly correct. As I'm sure you know, it is Alice that retains her DH key not the email server or anyone else. As I said:

> If your security model relies on this difference then your security model is broken. It’s worth noting that Signal does NOT rely on this difference. It relies on participants being mostly online to permit frequent rekeys and not having to retain old keys indefinitely.

Signal does not depend on TCP being "bidirectional" as lvh said, it depends on participants being mostly online. This has nothing to do with the transport properties of e-mail vs. TCP.


“Bidirectional”. So peers can mostly talk to each other. Do you really want to die on that particular semantic hill? “These Ethernet frames have source and destination addresses eventually”?


> Do you really want to die on that particular semantic hill?

Sure. The world of cryptography software is already muddled by misinformation, poor practices and misguided appeals to authority. We shouldn't need to spread misinformation about technologies such as e-mail to get people to stop using it.


> Long term keys are almost never what you want. If you keep using a key, it eventually gets exposed. You want the blast radius of a compromise to be as small as possible, and, just as importantly, you don’t want users to hesitate even for a moment at the thought of rolling a new key if there’s any concern at all about the safety of their current key.

Interestingly, some protocols such as roughtime use the same tactic as OpenPGP: one long-term identity key that can be kept offline, plus rotation of online (short-term) keys signed by the long-term key. Details here: https://roughtime.googlesource.com/roughtime/+/HEAD/PROTOCOL...
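
A bare-bones sketch of that delegation pattern, assuming Ed25519 via PyNaCl (illustrative only, not roughtime's actual wire format):

  from nacl.signing import SigningKey

  identity = SigningKey.generate()                 # long-term key, kept offline
  online = SigningKey.generate()                   # short-term key, rotated freely
  cert = identity.sign(bytes(online.verify_key))   # offline key certifies the online key

  # a client that trusts only the identity key verifies the chain:
  online_pub = identity.verify_key.verify(cert)    # recovers the certified online key
  signed = online.sign(b"signed response")
  online.verify_key.verify(signed)                 # day-to-day signatures use the online key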


If you squint enough, that’s how CAs work too.


> there’s a simple meta-problem with it: it was designed in the 1990s, before serious modern cryptography

SSL was designed in 1994, but it has been properly maintained, and today no one argues that TLS should be replaced by Noise/Strobe etc. OpenPGP's problem no. 1 is that there are no parties using it on a wider scale who are interested in improving it.


SSL 2.0 was overhauled (by cryptographic experts) to create TLS, and, after something like 10 years of effort, support for SSL 2.0 was scourged off the Internet. And still, we had the DROWN attack a couple years ago, which manages to exploit cross-protocol attacks between SSL 2.0 and TLS on different servers. In the last few years, we got TLS 1.3, which again made significant, breaking changes with the previous versions of the protocol (such as getting rid of the RSA handshake), and presumably over the next 10 years we'll be shedding support for all previous versions of TLS.

That's what didn't happen with PGP.


> That's what didn't happen with PGP.

Yes. And your software suggestions are excellent for 2019. I just wonder whether in 10 years it would be better to have had a standard improved/developed, instead of a collection of one-vendor tools where the code is the specification. That said, I don't have high hopes for PGP given its maintenance problem.

Thanks for writing on the subject, even if it should already be clear to the majority of technical people!


I’d argue for replacing TLS if it were plausible to replace TLS for mainstream users. HTTPS is a dumpster-fire for a lot of the same reasons PGP is.

For example, the fact that there’s a grab bag of different ciphers, compression options, and other toggles makes properly picking settings an exercise in copy-pasting from a site you trust or guessing and then running an SSL Labs test until it comes back green. If you miss something, congrats, somebody can MITM and trick your users into downgrading.

Things like this are why the most notable features of TLS 1.3 are the things it removed, more so than what was added.


I guess one difference here is that the major implementations of HTTPS (operating systems, major browsers, major web server software, etc.) often make the best choices for you, whereas with something like PGP, everyone is using GPG, effectively the only implementation, which is known to be terrible.


First of all, if you are signing something and want to prove that you are the author, then PGP will allow you to do that.

If you want to encrypt something and prove you are the author, PGP will still allow you to do that.

Does the author mean that PGP is bad for email specifically?

Excel has many of the mentioned properties, such as backwards compatibility and inefficiency, but it gets the job done and you bet it will pay your bills.

It feels to me like these posts are like the 80:20 problem, but rather with 99:1 and it's all about that 1%. I understand that software developers should use libsodium. But I'll sign the words "U R A >on" right now in GPG and wait for you to break my key and sign "U R 1 2" with my private key...


Yeah, most (all) of these arguments seem silly. The primary argument not to use it in emails is that someone MIGHT improperly quote your email in a reply and then NOT encrypt their response? Well, someone might screenshot your Signal app, or have malware on their phone, or a million other things. It seems like such an absurd corner case to me.

For my use case I don't have any concerns about using GPG. I encrypt files with it, and if anyone wants to put up any money that they can access my files, let me know what escrow service you want to use.


Not because it "MIGHT" happen, but because it does happen.


I mean "might happen" in any particular instance. Not that it has never happened and only "might" in the future. Because 0.000000000000000000000000001% of messages have been accidentally exposed does not make PGP inherently bad.


With "Johnny you're fired", it's clear that many clients don't correctly validate PGP signatures


So my question is this: is PGP itself to blame, or are people basically saying, "Don't use PGP unless you know what you are doing"?

I don't really know what concrete advice the article gives for me personally. (The only thing I take away from this is to learn libsodium as well, rather than not using PGP.)


With PGP I can encrypt/decrypt/sign files and text. It is widely available on every platform. The day a single tool comes around that does all that, better, I will switch.
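
To be concrete about what that single tool would have to cover, these are the day-to-day operations (stock gpg flags; the names are examples):

  gpg --encrypt --recipient alice@example.com notes.txt   # writes notes.txt.gpg
  gpg --decrypt notes.txt.gpg > notes.txt
  gpg --detach-sign notes.txt                             # writes notes.txt.sig
  gpg --verify notes.txt.sig notes.txt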


I don't understand why the use of Yubikeys for a non-exportable key isn't valid for folks that care about security. I mean, I get that not everyone will use it. The vast majority won't. However, the vast majority don't care about security at this level. So... what is the actual criticism? If you care about security, use the keys, right? That feels no different from "use some other product."


The only thing in this world more complicated than setting up GPG is setting up GPG with Yubikey.

The fact that I have a `fix-gpg` script somewhere in $PATH that restarts gpg-agent, which I run whenever it inexplicably can't find my YubiKey, tells me this isn't a viable solution for 99% of people.
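
(For the curious, the script is essentially this; a sketch, and the exact incantation varies by gpg version:)

  #!/bin/sh
  gpgconf --kill gpg-agent                  # kill the stuck agent; it respawns on next use
  gpg-connect-agent updatestartuptty /bye   # re-bind the agent to the current tty for pinentry
  gpg --card-status > /dev/null             # force a card read so the YubiKey is rediscovered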

PS. Actual command from GPG:

  > help
  ...
  sex     change card holder's sex
  ...


This is more an issue with GnuPG though, not OpenPGP.

Also, you may want to try using an actual OpenPGP Card (https://www.floss-shop.de/en/security-privacy/smartcards/13/...). (You can get a small one inside a USB token too)


On older machines, I agree. On recent installs, it just worked.

Getting it to work with my phone is slightly dumber. But still not super hard.


Sure. We do that for e.g. SSH. I don't think it's a great idea for our standard audience (startups) to implement.


I'm not sure what you mean. :(


I'm suggesting that the Yubikey experience isn't exactly flawless. I use Qubes too, but I don't think everyone should have it as a daily driver. If you want to use Yubikeys for SSH, that's fine. If you want to use Yubikeys for GPG, you have many of the problems outlined in the post. Does that clarify?


Ah, I can somewhat agree with that. But I have doubts any experience is flawless.

Though I did mean that my YubiKey use is for GPG. I have considered OTP uses of it, but primarily use it to support pass, which is GPG-based.


First, thanks enormously for writing this -- and for all the other recent articles that have appeared here in the vein of "PGP is as bad as it is unpleasant to use". It's a point I didn't really appreciate (at least as much as I (sh/c)ould have) and I'm sure I'm not alone.

It seems that the state of package distribution for many distributions is poor, security-wise. (OpenBSD, to nobody's surprise, is an exception.) For instance, archlinux (I'm loyal) signs packages with PGP[1] and, for source-built packages, encourages integrity checks with MD5. My recollection is that, about 5 years ago, MD5 was supposed to be replaced with SHAxxx. Am I misinterpreting this? Is this actually Perfectly Okay for what a distro is trying to accomplish with package distribution?

(I'm particularly suspicious of the source-built package system, which consists of a bunch of files saying "download this tarball and compile it. The MD5 of the tarball should be xyz." I'm pretty confident that's not okay.)
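
(The pattern I mean looks like this in a PKGBUILD; the package URL and hashes are placeholders, but md5sums and sha256sums are the real field names:)

  source=("https://example.com/foo-1.0.tar.gz")
  md5sums=('d41d8cd98f00b204e9800998ecf8427e')    # weak: still common in the wild

  # what it should be, at minimum:
  sha256sums=('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855')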

Okay, now moving from package distribution to messaging, and again looking at the state of my favorite system. How am I supposed to message securely? The best *nix messaging tools are all based around email. Even when I can get the PGP or S/MIME or whatever toolset to work (let's face it, that's at least 45 minutes down the drain), it's clear that I'm not in good shape security-wise.

I should use signal, apparently. Great. Just a few problems: (1) no archlinux signal package, (2) I'm guessing I can't use it from the terminal, and (3) most severely, it seems signal has incomplete desktop support. In particular, I need to first set up signal on my phone. Well, let's face facts: I have a cheap phone from a hard-to-trust hardware vendor, and I think there's a >5% chance it's running some sort of spyware. (The last phone I had genuinely did have malware: there were ads showing in the file manager, among other bizarre behaviors.) So in order to use signal on my desktop, I need to buy a new phone? That's even worse, usability-wise, than PGP.

Is... is it really this bad? I'm getting the sense that the desktop linux community has completely dropped the ball on this one. (And perhaps more generally desktop mac/windows... I wouldn't know.)

[1] Perhaps not so bad, since the keyring is distributed with the system -- but how was the original download verified? Options are: PGP, MD5, SHA1, with the choice left up to the user. That can't be right.


Arch Linux's pacman is not a good example of a secure package distribution system (especially not the AUR, where you are downloading all the bits from the internet and building them yourself as your own user or even sometimes as root). They didn't do any package signing or verification at all until (shockingly) recently -- less than 10 years ago IIRC. I am a huge fan of Arch's philosophy but am definitely not a fan of the design of their package distribution system.

If you look at systems like Debian-derivatives or RPM-based distros (openSUSE/SLES, and Fedora/CentOS/RHEL) the cryptosystems are far better designed. In the particular case of openSUSE, the AUR-equivalent (home: projects on OBS) are all signed using per-user keys that are not managed by users -- eliminating almost all of the problems with that system. Yeah, you still have to trust the package maintainer to be sure what they're doing, but that should be expected. I believe the same is true for Fedora's COPR.

[Disclaimer: I work for SUSE and contribute to openSUSE.]


If you have to trust the package maintainer anyway, what's the difference between having the package signed by them or not?

For the record AUR packages can use GPG keys, see e.g. GPGKEY variable: https://wiki.archlinux.org/index.php/Makepkg#Configuration

Arch also uses a Web of Trust to introduce Trusted Users: https://www.archlinux.org/master-keys/ so I wouldn't call "less than 10 years" a disadvantage, but rather an advantage: they have seen the problems with alternative designs (e.g. Debian's curated keyring) and came up with something better.


AUR is a system; you can of course make your own packages without sharing them if you so choose.

Responsibility is assumed when using others' packages, since anyone can submit there. That said, there are signatures available that can be verified.


I don't know where you found MD5, most PKGBUILDs have `sha256sums` in them.

Package signing is (hot take!) overrated and can be something of security theater. It helps if your package manager connects to third-party mirrors, but otherwise the only threat it protects against is "the https server is compromised but the package build farm is not". I don't know why anyone would worry so much about that.


> the only threat it protects against is "the https server is compromised but the package build farm is not"

Looking at an /etc/apt/sources.list, it doesn't look like Ubuntu is using HTTPS for package distribution. Since you don't need both package signing and transport security, and I suspect that the list of packages you're downloading can be trivially recovered by length analysis anyway, I don't think the setup of signed packages over HTTP is meaningfully less secure than unsigned packages over HTTPS.
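
For reference, a stock sources.list line really is plain http (the mirror and release names here are just the common defaults):

  deb http://archive.ubuntu.com/ubuntu bionic main restricted
  # apt verifies the signed Release/InRelease file against keys in /etc/apt/trusted.gpg.d/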


Signal does a great job of supporting activists. That's basically its intentional product focus. Everything an open source proponent engineer might want to promote is secondary. The focus on activism and trying to deal with large actors definitely looks like #1. Everything else about their product is secondary to that.

Signal's product focus has been, at best, discouraging to those who want to use it for anything else. Federation, I'm looking at you. But you've also got to take it in the sense that every single vendor that embraced the only realistic messaging federation standard of the 21st century went on to embrace and extinguish it in less than a decade.

This speaks to a few problems: 1. Messaging is hard. 2. Security is hard. 3. Reasoning about security is hard. 4. Historically, anyone paid enough to care about this space hasn't had any sort of public interest at heart.

Any application that combines any of these falls squarely into what the people at Latacora and the like would call a high-risk application. I might disagree with much of their analysis, but through the lens of risk control, they are perfectly correct.

If you're trying to figure out how we got here, you've also got to realize that there was an avalanche of government and commercial entities whose goals are not in alignment with, say, those who think the optimal solution is a home-rolled, provably trustless security system.

For myself, and I'd bet many engineers, that's where we thought things should go in the '90s and early aughts. Some things are better now, but most are much worse.

Society and encryption's implications have, I would say, caught up with each other, and there's definitely something found wanting. There's definitely a market opportunity there, but there's also another big challenge, one I read about while reviewing a discussion on package signing lately: "Nobody in this space gets paid enough to care."

That's what separates people like Signal, even if some of the engineering crowd doesn't like the way they delivered.

This is a bit of a ramble, so there's two afterwords:

1. Much of the morass about PGP is explicitly due to the environment, space, and time in which it was developed. This does not boil down merely to 'it wasn't known how to do it better.' There were decisions and compromises made. I think the writer at Latacora is not doing the history justice. That's OK though, because it's not the crux of their argument. Still, it would be good if they explained how things like the byzantine packet format actually came about, rather than calling them impossible to explain, even if that explanation were only a footnote and a reference. (Writing the history of how it got there is absolutely doable, but it would make for a dryly humorous history, at best.)

2. The open source and (Linux/other?) distro community has tried hard, more than once, to make this work. The development, design, and implementation burden, though, is gargantuan. The overarching goal was basically to be compatible with every commercial system and maybe do one or two things better. What the article casts as purely a liability was the only way to get practical encryption over the internet well into the early '00s.

Regardless of all that though, PGP is still a technical nightmare. If you dismiss it though, even when we have better components, I worry that we'd only repeat these mistakes. If you work in any sort of crypto/encryption dependent enterprise, please find and study the history. Don't just take the (well considered) indictment of PGP at face value. There's important lessons to be learned there.


The byzantine packet format in PGP buys PGP nothing. There is no engineering reason for PGP to work that way, and PGP continues to suffer security issues because of it. I'm not arguing that PGP's original designers were incompetent, nor am I considering that argument, because it is totally uninteresting to me. What is relevant to me is the fact that today, that design is costly and bad. What more is there to think about? We should get rid of bad cryptography and replace it with good cryptography.


You make a good point. The design is costly and bad, I hope I wasn't arguing to retain it for historical reasons or anything like that. I do believe that the history of how PGP got there, and how its ecosystem became the sort of hydra that it is should be kept in mind for anyone trying to build a replacement.

The commercial factors that drove the engineering there I think is a real risk for any cryptosystem implementor, especially if they are trying to build or retain a userbase.


I can probably agree on the packet format, it's excessively flexible and easy to implement insecurely.


You can probably agree about the PGP packet design?


Yes, it depends on what you mean. The packets themselves aren't a big deal to me, it's more the ability to construct them in arbitrary ways that concerns me.


> Signal does a great job of supporting activists.

I don't think it practically achieves this. Even now, Signal is an unreliable platform to communicate on. One can't be sure whether a message will arrive in a timely manner (within a few seconds rather than several) or arrive at all. The UX and feature set are also far behind something like Wire.

I'll accept that Signal has a strong and reputed protocol, and has taken some strong measures on security and privacy. But everything else about it is truly meh, to put it mildly.

Please don’t dismiss these points about reliability saying it has never failed for you or someone you know. It routinely fails for people I know, and that’s all that matters when recommending a messenger platform to others.

Signal has made it seem like security is easy to focus on (though a lot of thought and work has gone into it), but has shown that UX is pretty hard, and that running a platform is even harder (even at Signal's scale, which I presume is a fraction of other platforms').


Signal is beholden to the whims of Google due to its reliance on the Google Cloud Messaging stack. All messages are delivered based on something outside of Signal's control: whether Google decides they have enough priority to wake the modem. If you are on your phone all the time and have Signal open, delivery is usually instantaneous, but if your phone is usually dormant it can be hours before a notification pops up. I'm not sure how different the situation is on iOS, or whether this is even avoidable with the current mobile duopoly.

I find that the unreliability of Signal precludes exclusive use of it, so I tend to route a lot of communication through SMS via Signal, or double-send. This is not high-security stuff. It does seem like a relic of our age that chat is considered the preeminent form of communication. I think long-form and long-lasting texts like email are valuable, but maybe I'm just long-winded.


For backups, I would recommend Restic. The author mentions Tarsnap, which is probably great if you don't need to back up terabytes, but after a few gigabytes it's just not economical for private persons. If you're on Hacker News (i.e. the 'engineer' the author was talking about), odds are that a hard drive connected to a Raspberry Pi at your parents', with Restic as the client, is an extremely cheap way of backing up many terabytes securely.
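
A minimal version of that setup, assuming the Pi is reachable over SSH as pi@parents-pi with the drive mounted at /mnt/backup:

  # one-time: create the encrypted repository on the remote drive
  restic -r sftp:pi@parents-pi:/mnt/backup/repo init

  # recurring: deduplicated, encrypted backup of your home directory
  restic -r sftp:pi@parents-pi:/mnt/backup/repo backup ~/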


For Windows users that were wondering, Restic doesn't back up files that are opened exclusively.

VSS support on Windows is an open issue:

https://github.com/restic/restic/issues/340


I have a few gigabytes on tarsnap for pennies a day.


Hence "after a few gigabytes".


So, as a technical person (I can write and debug software, deploy it to production, etc.) but only an end-user when it comes to cryptography:

What are the biggest weakenesses of using PGP for local file encryption?

My use case is I have a 'vault' of secrets that I store in a gpg-encrypted org file, with a key binding in emacs to let me easily decrypt this 'vault'.

The encryption is done with gnupg's symmetric encryption, i.e., just me typing a passphrase, not using my private key. This encrypted file resides in a Filevault-encrypted drive, and is backed up to an encrypted Time Machine backup. It never leaves my local drive or the external USB drive I use for backups (at least not that I know of).
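
(Concretely, what emacs is doing under the hood is roughly the following; the file name is mine, the flags are stock gnupg:)

  gpg --symmetric --cipher-algo AES256 vault.org   # prompts for a passphrase, writes vault.org.gpg
  gpg --decrypt vault.org.gpg                      # prints the plaintext after the passphrase prompt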

What are my biggest risks here? I did not get much info from the article on this use case other than "perhaps wait for age". I understand I could be using an encrypted volume that I mount on demand, which I already do for other use cases, but that would get rid of the convenience of having this "vault" just one emacs shortcut away. I'm willing to give that up if the risks are big enough, but I'm kindly asking the knowledgeable HN audience for advice on whether what I'm doing is really that bad.


There's nothing wrong with this. Public/Private keys are for sharing secrets and proving identity.

Assuming you are using a sane cipher and mode, your risks boil down to any of a myriad of methods of stealing/bruteforcing your key or grabbing the unencrypted file.


Thanks!


Messengers are not easier than email; they are only easier when two or more people communicate on one topic.

But with email you communicate with one or more people about many topics.

To achieve the same structure in a messenger, you must create several discussions. So messengers are not the holy grail of communication; that is why people still use email.

We need an email user interface with open messenger protocols under the hood, for secure communication and usability. None of the current messengers offer that.


As a proof of concept, a couple of years ago I made a "gateway" which launched a local IMAP server in my XMPP client. This way you could use any MUA you like while taking advantage of the XMPP infrastructure (JIDs, i.e. XMPP identifiers, are similar to email addresses) and its encryption (OMEMO can be used with this, for instance). In other words, you could send messages from Thunderbird (or Gajim) to KMail using only the XMPP protocol (and thanks to IMAP Push, you got notifications immediately).


Good point, private channels are the new discussions. But they tend to never die so over the years we end up with a cluttered channel list.


> Encrypting Files
>
> This really is a problem. If you're /not/ [...] then there's no one good tool that does this now. Filippo Valsorda is working on "age" for these use cases, and I'm super optimistic about it, but it's not there yet.

I've been (ab)using ansible-vault for this. It's apparently[1] now using reasonable primitives (PBKDF2+AES-CTR+HMAC-SHA256), but not necessarily the latest and bestest ones: https://github.com/ansible/ansible/blob/v2.8.1/lib/ansible/p...

The implementation is sub-optimal though, with some silly issues that haven't been fixed, like https://github.com/ansible/ansible/issues/12121
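
Day-to-day it's at least pleasantly boring (these are the standard ansible-vault subcommands):

  ansible-vault encrypt secrets.yml   # prompts for a vault passphrase
  ansible-vault view secrets.yml
  ansible-vault edit secrets.yml
  ansible-vault decrypt secrets.yml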

[1] http://toroid.org/ansible-vault-strange-crypto


I have some notes on why I (and some of my colleagues) use gpg instead of Ansible vault: https://dotat.at/prog/regpg/doc/rationale.html

I would dearly like a replacement for gpg in this context; `age` by Filippo Valsorda and Ben Cartwright-Cox looks like it will be nice, though I would like it to be easier to audit the recipients of encrypted files. Note that this kind of auditing relies on an information leak that cryptographers usually try to suppress...


With the rapid churn of various chat clients it's hard to get something to stick, and I attempted to settle on XMPP+OMEMO (or OTR), since it's a protocol, not an "app" (eww). Not a single person I know still uses XMPP, and it's easier to just tell people "Download Signal". However, that requires a phone number, and is useless on a desktop if your phone's off. Despite being Free Software, it's fairly locked down.

Secure communication with most people is nigh impossible these days, because nobody wants to not use SMS.

There has to be a better solution, and despite email+PGP's flaws, it's the only system I've seen that's stood the test of time. There are pretty good plugins for Mail.app (macOS) and Outlook (Windows), and if you use Linux you probably can deal with gpg. We use it at work for email, and pretty much everyone can work with it at any level of technical expertise. Anyone else have any solutions that have worked in their circles, including non-technical family members?


Signal Desktop has never required your phone to be on. It's completely independent once set up (apart from the occasional contact list sync). It has been this way since the first version of Signal Desktop. You're probably thinking of WhatsApp Web, which works the way you describe. I've seen this incorrect information spread on HN too many times, and I'm not sure how so many people came to think that. It's been 100% wrong from the beginning.


You are correct. Thanks for pointing that out; I tested it out this morning and verified I was able to send/receive messages on the desktop program with my phone in airplane mode.


I'm running a "beta test" of sorts of XMPP+OMEMA with non-technical people in my circles, and while Conversations (Android) and Monal (iOS) are getting there, there are still functionality and compatibility gaps.

Gajim is a usable client for desktops, but it's certainly picky about which users it is friendly with (that is, I can't recommend it to non-technical folks).

On the other hand "it uses IDs like email" is a concept my friends could understand, and the promise of end-to-end encryption over a server I control (as compared to some faceless organization somewhere else) was appealing to them.

And so I'm first level support for my peers and report issues upstream in the hope of improving the ecosystem.

And while it's closer to an email replacement (in that it avoids the ID issues of the WhatsApp clones and provides multiple clients for different purposes), it's still only a complement to email due to the ephemeral nature of messages, not a replacement.


As far as usable desktop XMPP+OMEMO clients go, Dino is much friendlier to non-technical users, but paradoxically, it's only available on Unix-like OSes. It runs on WSL and is probably buildable for X on MacOS, but by that point, you've lost non-technical users in both cases.


Wow, that one looks good. Thanks for the pointer!

I wonder how much is in there that precludes cross-compiling to Windows.


Why not Keybase as secure messaging? It's easy to get people to use it and it provides much more than just XMPP+OMEMO.


Matrix works, and has fairly easy-to-use clients (and I've switched a few non-technical folks without too much trouble). It's also an open protocol, not just an app.

The main criticism (and I'm preemptively responding to tptacek here) is that they haven't yet made E2EE the default (though this should happen in a few weeks now that cross-signing appears to be done). I also think the key backup system should be much better explained -- there is a usability bug open for it and I've posted some suggestions.


Use Matrix if you want to contribute to Matrix or are an enthusiast about what Matrix is trying to do. But don't use it as a secure messenger, or tell at-risk people to use it. It may someday be a serious option for secure messaging, but it is not that today.

I'm not a Matrix hater, but I think Matrix's cheering section gets the project in trouble, since their answers about privacy and security are demonstrably worse than those of other secure messengers. All Matrix is trying to do is build a modern, generalized IRC with optional built-in encryption, which is a project congenial and relevant to my own interests (my only funded startup tried to do the same thing!). That's great, as long as you're not trying to get people to use it to hide things from governments.


Is there a specific issue other than "it's not the default" that precludes it from secure messaging? This is the thing I don't understand about your position -- you have been saying for a very long time that "it's not ready yet" but as far as I can see the default-to-unencrypted setup is the main issue you have with it? I get that asking a journalist to use it right now is a bad idea, but if E2EE was the default today what other issues do you see?

From my PoV, Matrix has many features that might actually end up increasing security over Signal's design. Just as an example, you cannot blacklist or even get alerted to new devices being added to an E2EE conversation with Signal (and if you look at things like the Assistance and Access legislation here in Australia, that is a serious concern). With Matrix you do detect it and can blacklist the other device (and with cross-signing being done very soon, you can also be sure that verification of devices will be a rare event). I also think the new emoji-based verification is a massive improvement over Signal's "safety numbers" setup.


I'd also be interested to hear Thomas clarify this. I saw a recent thread on Twitter where he and bascule were talking about it and it still wasn't super clear, but one specific point I recall is that Matrix has a significant amount of metadata stored on the server side which constructs a social graph. As opposed to something like Signal which has close to nothing stored on the server.

To me this seems like an issue of use case. If my goal is to be able to talk to my family and friends, and I don't care that it's known that I'm talking to them as long as the contents of the messages are private, that is fine for me. For a case with more stringent requirements, I can see Matrix not being a good recommendation in its current design.


> Use Matrix if you want to contribute to Matrix or are an enthusiast about what Matrix is trying to do. But don't use it as a secure messenger, or tell at-risk people to use it.

Also replying to this (likely too late!) in the hopes of a little clarification. In particular, "don't use it as a secure messenger" seems like strange advice in view of the fact that there don't seem to be any better options. That is to say, there are no options that fulfill all three of these requirements:

1. Full support for group chat with end-to-end encryption between all participants. (Matrix even includes cryptographic controls on how much history is shared, though that's not a requirement.)

2. Completely open source with no centrally controlled servers.

3. Does not require any PII (including a phone number) to begin using.

As far as I know, every messenger fails on at least one of these counts, and many fail at all three. It's one thing to criticize Matrix (there are a lot of things about it that suck right now, to the point that I'd never recommend it to a casual user), but to do so without any alternative doesn't seem helpful.

Have I missed something better?


You can try recommending Quicksy. It launched fairly recently and is an effort from the Conversations developer to reach a user-friendliness similar to other phone-number-based messengers, while still being open to users who decided to go with regular accounts (e.g. you). You can even bind your phone number to your Jabber account, so you show up as a contact without them having to add your Jabber ID to their phonebook.


to be fair, i see signal as an alternative presented as a more secure means of texting. of course it requires data, so there may be better alternatives for sure, but it's what people are used to.


I use my PGP key (with a Yubikey) for two things, the pass password manager and as an SSH authentication key. Is there a replacement for those two use cases where I can store my private key on a hardware token?


I use the GPG compatibility goo to get an SSH key out of my Yubikey 5 and that's fine, it's fine, go ahead and do that.
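
For anyone who wants to replicate it, the goo amounts to roughly this (a sketch; socket paths vary by platform):

  # ~/.gnupg/gpg-agent.conf
  enable-ssh-support

  # in your shell profile: point SSH at gpg-agent's socket
  export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
  ssh-add -L   # should list the key held on the YubiKey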


Given that you have set this up, and that both of these use cases are for your own use only, I wonder if there is any reason to change. Most of the criticism in the article is about complexity at a larger scale.


PGP was a game changer when it was introduced and the idea of a chain of trust was neat.

Unfortunately, its implementations are difficult to use, it was never directly supported by operating systems or major applications, and its crypto agility makes it impossible to write minimal implementations.

I wrote and use Piknik for small file transfers (especially with the Visual Studio Code extension), Encpipe for file encryption, and Minisign to sign software. What they have in common is that they are very simple to use, because they only do one thing, with very few parameters to choose from.

The main reason for having written these in the first place was that PGP is too complicated to use, not that it couldn't have done the job.
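
For comparison with the PGP equivalents, this is roughly the entire minisign surface area (standard flags; file names are examples):

  minisign -G                                   # generate a keypair (minisign.pub + a password-protected secret key)
  minisign -Sm release.tar.gz                   # sign, writing release.tar.gz.minisig
  minisign -Vm release.tar.gz -p minisign.pub   # verify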


Has there been any effort to integrate minisign into git?


None that I know of, but it's never too late :)


The post is well-written and summarizes I think most major pain-points with PGP.

Comparison of all these tools to GnuPG is valid and it clearly shows not only implementation problems but design ones as well in gpg.

What I fear is a future riddled with all these incompatible tools. Even if they're written by brilliant engineers and cryptographers, they are not standards (e.g. IETF standards). Why is that important? Because, for example, rewriting libsignal from scratch (to publish it under permissive licenses, say) can be problematic [0].

[0]: https://news.ycombinator.com/item?id=12056673


My minisign implementation in Rust, compatible with signify and minisign keys: https://crates.io/crates/rsign


And based on your implementation: https://crates.io/search?q=rsign2


Glad to see wormhole listed here, I hadn't seen it recommended by security experts before. Is there also a similar consensus about Syncthing and whether it is secure enough for long-term file-sharing?


PGP has issues, but for many use scenarios there is effectively no ready alternative at the moment.

Until there is, whoever already uses or knows how to use GPG is still better off using GPG than not using anything.

Signal is especially not a good replacement for anybody who doesn't want to have "secure" communication connected to his (or any) phone number and to the central server.

The major issue with GPG is that in some (many?) cases a user could believe themselves to be more protected by using it than they actually are. But the same can be said of other technologies.

And the article rightly notes that the GPG defaults are often bad.

The right question to ask first is always:

https://www.usenix.org/system/files/1401_08-12_mickens.pdf

Are you "dealing with Mossad or not-Mossad"? (Also worth noting is that somehow Edward Snowden managed to "deal with" NSA and still remain out of prison. At least he knew from the start the answer to the question.)

Still, anybody who wants to replace PGP in some PGP use-case should provide a really good alternative for that use-case.

As of right now, there isn't anything better even for such a simple need as encrypting a file. Let's discuss it once we have that one, at least. And even then, we'll need to be able to open our old files, meaning we'll still need GPG for that.


Another use case I don't know a replacement for is offline team+deployment secrets. Requirements:

- you need team members to read/write the secrets

- you need the deployment service to read the secrets

Without using an online system managing the secrets via ACLs + auth, I don't know how to replace PGP here.


what's wrong with an online system managing the secrets? KMS is great, and makes it easy to separate decrypt from encrypt permissions.

(KMS is not the only option! I'm just trying to eke out why you think that's valuable. For example, I think age, mentioned in the blog post, is a direct replacement?)


It has to be online. And reliable. And you can run into rate-limiting. And services using it need extra network configuration to allow access. And if you can't narrow it down to a single IP/block, you have a service with full network access just to reach a KMS.


KMS's historical uptime is pretty great; rate limits have been bumped quite a bit, to the point where I seriously doubt you're hitting them for secrets management; I'm not sure you can do a lot of useful things with most secrets while you're offline; and KMS has both a VPC endpoint and an Internet endpoint, so you can do either variant of the tight network scoping you want.

(And, again, age once that's around :-))


How do you turn on the online system?


KMS is a service, it’s already on.


Stop parroting these criticisms of OpenPGP without offering concrete, working replacements that are widely adopted and have open standards.

Some kid recently tried to have OpenPGP support deprecated from Golang's x/crypto package. And that's fine, but don't pull stunts like this without offering a concrete, working and widely adopted replacement. Otherwise they are just that: publicity stunts, with a lot of sound and fury but no solution. That's not helpful to anyone.

A more mature thing to do would be to suggest deprecation in 3 to 5 years and offer a plan of how to get there with other specific tools (some of which do not exist today).


I was going to say something about how the article does mention several alternatives, some of which far more widely deployed than OpenPGP.

But first: hold up. "Some kid"? You mean Filippo Valsorda? Google's Go Crypto person? The same person who is writing the replacement for that one use case?


Hi Thomas et al.! Good write-up and with several software suggestions that I didn't know. Excellent! I think one thing you are missing is an "alternative" for the identity problem. I somewhat agree that all long-term keys are a problem but how do we build an identity out of a bunch of keys for different systems, covering privacy concerns, with some sort of key transparency to detect attacks, ... I think this would be a very interesting post from you guys that I would love to read. ;)


I read these advice columns like "You should stop using hand saws because we now have electric saws which are better in every way that I care about." Great! You use them then. I'll keep using crufty old hand saws wherever I want, you use and proselytize your electric saws and we can get on with our lives. I have my own networks and threat models and my own evaluation functions for these. Hand saws are pretty good.


If your threat models have you using PGP, your threat models are badly engineered.


That's pithy. I have used PGP to sign (via an air gap) release tarballs on a public server for clients that have individually verified my org's public key. It made sense in this context, and everyone already had the tools. My point is we each have our own contexts.


If you used signify/minisign to do the same thing, your workflow would be simpler, your keys shorter, and your entire stack more secure. Since clients are individually verifying your key, none of PGP's identity goo is even ostensibly winning you anything. Your context isn't the issue.
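
For comparison, the entire signify workflow looks like this (standard OpenBSD signify flags; file names are examples):

  signify -G -p release.pub -s release.sec      # one-time keypair generation
  signify -S -s release.sec -m release.tar.gz   # writes release.tar.gz.sig
  signify -V -p release.pub -m release.tar.gz   # what your clients run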


i work with a manufacturer that's a multi-billion-dollar behemoth and they move at the slowest possible pace. every decision takes weeks, a hundred emails, 20 phone calls, and 10 pdfs. we needed to add encryption to data we exchanged. they said PGP and gave me a pgp key. i encrypted a test file and gave my key back. within 20 mins we had an answer and moved on. this was the easiest part of anything we have ever integrated with them


Long story short: signed/encrypted emails are hard. S/MIME doesn't really exist any more. Sending secure email is hard.


So far there has been a lot of hate on Hacker News for PGP. I'm sure most of it is true; maybe we shouldn't use PGP. However, every single one of these pieces that offers suggestions cops out at the end: for encrypted files, they leave people with... PGP.


But... it's the only thing we have. Until some "pgp alternative" for signing and encrypting emails comes along, and is adopted by "most" email clients, it's the best thing we've got, even if it's "bad".


Something I'm curious about: if GPG uses such old/not-recommended encryption standards, is it still secure in the sense that if I gpg-encrypt something and post it online, a three-letter agency will still be unable to decrypt it?


There are two ways that breaks:

- your key won't be private forever, and future compromise does mean past disclosure (there's no forward secrecy)

- on long enough timescales, said TLAs can mount offline attacks against it

So, maybe? But it's definitely the safest way to use GPG; most of GPG's problems are unforced interaction errors.


> Every well-known bad cryptosystem eventually sprouts an RFC extension that supports curves or AEAD, so that its proponents can claim on message boards that they support modern cryptography.

I'm being attacked! :P


They mention encrypted volumes for offline backups and for storing files which must be encrypted. For cross-platform use, I'm guessing VeraCrypt is the recommended software now, right?


So, how do I sign git commits if I don't want to use PGP (GPG)?


The pass password manager uses PGP, what would be a better design?


Pass is an interesting case, because of how it's implemented. The easy, obvious answer would be "a version of pass that uses libsodium (or Jason's own crypto library) instead of pgp", but there's no command line for those tools, and the only other command line widely available for cryptography is OpenSSL's, which is horrible.
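
(To see why the command line matters here: per entry, pass does little more than the following; a sketch of the mechanism, not the actual script:)

  # one gpg file per secret under ~/.password-store
  echo "hunter2" | gpg --encrypt -r you@example.com > ~/.password-store/github.gpg
  gpg --decrypt ~/.password-store/github.gpg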


Nothing at present, but the article did talk about work being done on a project that will be able to work as a direct replacement for pgp in this use case. Near the end, look for the mention of 'age'.


at first i thought, oh boy a rant from someone that can’t see past a pretty GUI ... but actually he gets it. well worth your time to read.


Until a better solution is brought forward, my team implemented a PGP packet library to make everyone's lives a little bit easier: https://github.com/summitto/pgp-packet-library

Using our library you can generate PGP keys using any key derivation mechanism for a large variety of key types! When using it right, this will greatly improve how you can generate and back up your keys!


Does this improve on `gpg2 --list-packets` for practical debugging/auditing of extant GPG setups, or is it just useful for people who want to do alchemy on top of the OpenPGP packet format?


It just parses the packets with a relatively easy API, for those which want to do alchemy or which want to build tools for practical debugging/auditing purposes in C++ instead of using shell commands.


While waiting for "age" to get written, can I use libsodium for file encryption?


https://download.libsodium.org/doc/secret-key_cryptography/s...

Battle-tested by ransomware already (sigh).


So what should we use for public/private asymmetric encryption then?



Pretty shocking, thanks for such a detailed analysis


It's a large number of minor points. Nothing shocking, no huge issues, and of course nothing we didn't already know. The crypto is good; if you manage it well, it stands up to the NSA as far as we know... Sure, it has flaws that make it hard to use in general, and hard to use securely (not having long-term keys, for example, would make me less paranoid about my private key), but it's still fine despite having huge backwards compatibility.

The author is overly dramatic about it in order to make a point, to hopefully get people looking for alternatives, so that a good one might take it from pgp in the future (and continues to suggest whatsapp and signal, like, really? That's your replacement for pgp?).


> The crypto is good

It's not. This is detailed in the piece. What do you think it gets wrong?

> That's your replacement for pgp?

One of the key points is that by now we know PGP is a bad idea conceptually. There can't be a replacement for PGP. This is a bit like asking what's to replace mummification now that we know it doesn't really grant access to the afterlife.


As usual, all articles of this OpenPGP-is-bad kind keep failing to acknowledge OpenPGP's most important feature: it has a way (as clumsy as you wish, but existing) to attach identities to keys, and to verify them. I have a rather well-connected OpenPGP key, and that means anyone can verify that signed emails claiming to be from me are actually from me, even if we never met. Of course you can mention XKCD and people saying they will never check a signature, but that's their problem, not mine. If you do not bother to check your keys, any encrypting/signing system is untrustworthy. If you have a lot of keys, one for each of your crypto applications, and no way to cryptographically tie them to an identity, you will eventually mess them up, especially if they have to be deployed on more than one machine.

I am perfectly fine with saying that GnuPG uses old algorithms, or that different applications should use different keys, algorithms or techniques. But, please, when designing such systems keep in mind that you want to check where your keys come from and which identity they are attached to. And TTBOMK only OpenPGP is currently able to do that. To me it would be great if all crypto applications done the right way had a means to tie their keys to the OpenPGP web of trust, in the same way Monkeysphere tried to do for SSL and SSH keys.


And as for how PGP gets used in practice, https://xkcd.com/1181/ did a good job of demonstrating how useless that is.

Seriously, I'm a developer. I have never once felt motivated to verify a PGP signature. Of anything.


Identity is an unsolved problem. Even outside of digital use cases.


PGP has one big problem:

Lack of a proper email infrastructure.

And by "email infrastructure", I mean proper support by most email clients contact apps.

The most basic use case seem to have never been addressed. You receive an email with a public key attached. Clicking on it should automatically open your contact app and ask if you want to add it to your contact's details. It should also be synchronizable with cardDAV or any other protocol of your choosing.


Poorly thought out and argued.


Is this blog post satire? or an ad of sorts for some vaporware?

The alternatives being talked about either use PGP or are a botnet (signal uses PGP with a CA type of thing, so much for "stop using pgp"), and WhatsApp is owned by Facebook.

Tarsnap, as I understand it, requires that I lock into a service to secure my backups. F- that. Someone already talked about wormhole.

And how is essentially forking pgp with 'age' really going to solve things? Wow thanks another forked app! :^)

It would be easier to just make a wrapper for gnupg that sets the settings for everything the author is talking about. (well, most of the things the author is talking about)

Wouldn't it be easier to just inform maintainers of the package to change the default standards of packages like gnupg? Has the author even attempted to change some of these things?

Don't get me wrong, I get where the author is coming from. Unix philosophy should be followed... but certain systems cannot be compartmentalized; they unfortunately have to interact with one another.

If you, for example, encrypt data without signing it, how would you know that someone isn't trying to poison the ciphertext to extract a data leak, or worse yet, hasn't already found a way to decrypt your data and is manipulating sensitive data?

An encryption program by DESIGN should also have a method to sign data.
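
(In gpg terms, that's the difference between running --encrypt alone and combining it with a signature, e.g.:)

  gpg --encrypt --sign -r bob@example.com data.txt   # sign-then-encrypt in one pass; verified on decrypt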


> (signal uses PGP with a CA type of thing, so much for "stop using pgp")

I legitimately have no idea what this means.

> And how is essentially forking pgp with 'age' really going to solve things? Wow thanks another forked app! :^)

A big part of the criticism we've gotten when we tell people "PGP bad" is that we're not providing alternatives. age is one of those alternatives, for one of those use cases.

> It would be easier to just make a wrapper for gnupg that sets the settings for everything the author is talking about. (well, most of the things the author is talking about)

> Wouldn't it be easier to just inform maintainers of the package to change the default standards of packages like gnupg? Has the author even attempted to change some of these things?

As we mentioned repeatedly in the blog post: no, the PGP format is fundamentally broken, it is not a matter of "just fixing it".

> If you for example encrypt data without signing it, how would you know that someone isn't trying to poison ciphertext to extract data leakage or worse yet, they already found a way to decrypt your data and manipulating sensitive data?

I think you're making an argument against unauthenticated encryption here. That's true! You should not have unauthenticated encryption, the way PGP makes it easy for you to have. age is not unauthenticated encryption, so the criticism does not apply.


Signal uses the same public key/private key cryptography. Hopefully that makes more sense?

In any case, after reading the google doc more carefully, I see the idea isn't to be against public/private key authentication as a whole; it's just against how PGP does it.

>age is not unauthenticated encryption, so the criticism does not apply.

yeah, I see. its pretty good actually.


Signal does not use "the same public key/private key cryptography", nor does it use a CA, or anything like a CA.


Double ratchet using Curve25519 and Ed25519 signatures is not at all the same asymmetric cryptography as PGP.


> If you’d like empirical data of your own to back this up, here’s an experiment you can run: find an immigration lawyer and talk them through the process of getting Signal working on their phone. You probably don’t suddenly smell burning toast

Well, currently extricating myself from the Signal mess, I really am expecting my smoke alarm to go off any minute.

On installation a couple of years back, the Signal app desperately wanted to curate my SMS traffic, but forgot to inform me it would be holding my message history hostage. Forgot to inform me it would kill my instance should I ever have the audacity to try setting up on a second phone. Forgot to inform me that yes, there is a desktop client, but it is useless Electron crap. Forgot to inform me it would be livestreaming my usage to everyone on my contact list: when I installed, when I reinstalled or changed devices, or when I mistook an ambiguous list feature for a personal book-keeping thing. The last item didn't happen to me, but to an acquaintance who thus had a somewhat embarrassing list of contacts spilled out into the open. Not to mention the whole phone-number-based ID disaster, and the lack of any web interface. Sorry, but I'm out.

Fully aware of serious concerns about the crypto and the privacy, but for UI, consistently well designed client apps, data export, and lack of nasty surprises, I have seen nothing to rival Telegram. Which may help explain why that seems to be where everyone is heading.



