I'm really excited about PAKEs! But for a totally different reason.
PAKEs are great for low-entropy secrets, but they don't have to be static: you can generate them on the fly too. Every time you have a human-mediated channel that eventually needs to agree on a strong secret, PAKEs are good. So, if you have a device that you want to pair with an existing device in the presence of an untrusted third party, PAKEs are super great. You can use this to safely send a file across the ether by just sharing a short string like "2-bonobo-magic"; see [magic-wormhole].
I therefore disagree with the notion that a PAKE being symmetric is a downside: I just think that they're different tools for different tasks, and we should talk about sPAKEs and aPAKEs specifically. Symmetric PAKEs are worse for password auth: I just think PAKEs are not a high priority for password auth. There are two problems with passwords: cred stuffing and phishing (fundamentally the same attack: an attacker gets a password "out of band" and then replays it) and PAKEs solve neither.
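To make the symmetric-PAKE idea concrete: both sides blind an ordinary Diffie-Hellman exchange with the shared low-entropy string, and only matching strings yield matching keys. A toy sketch follows; the group parameters (`p`, `g`, `M`, `N`) are stand-ins for illustration only, not a real instantiation. Real implementations, such as the spake2 library magic-wormhole uses, work over elliptic curves and hash a full transcript into the final key.

```python
import hashlib
import secrets

# Toy SPAKE2 over the multiplicative group mod a Mersenne prime.
# These parameters are illustrative, NOT secure.
p = 2**127 - 1
g, M, N = 3, 5, 7   # generator and two fixed "blinding" elements (assumed)

def to_scalar(password: bytes) -> int:
    return int.from_bytes(hashlib.sha256(password).digest(), "big") % (p - 1)

def spake2_demo(pw_alice: bytes, pw_bob: bytes):
    pw1, pw2 = to_scalar(pw_alice), to_scalar(pw_bob)
    x, y = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
    # Each side sends its DH share blinded by a password-derived factor.
    X = pow(g, x, p) * pow(M, pw1, p) % p   # Alice -> Bob
    Y = pow(g, y, p) * pow(N, pw2, p) % p   # Bob -> Alice
    # Each side strips the blinding it *expects* and finishes DH.
    # Only matching passwords leave a clean g^(xy) on both sides.
    K_alice = pow(Y * pow(N, -pw1, p) % p, x, p)
    K_bob = pow(X * pow(M, -pw2, p) % p, y, p)
    return K_alice, K_bob
```

An attacker who guesses the wrong code ends up with garbage on their side, and the honest participant immediately sees the key-confirmation step fail.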
Unfortunately this won't help while we continue to trust JS sent to us by the server.
Their threat model is "server is compromised by an attacker, who then vacuums up all the cleartext passwords". Assuming that under normal operations there's some fancy UI that takes the password and handles the PAKE stuff (which it will have to be, until browsers support it), there's nothing stopping that attacker from simply adding malicious code that fires off your cleartext password.
If you want user authentication security, use client side TLS certs. 1) the UI is built into all major browsers, and can't be faked, and 2) the server doesn't require the CA's private key, only the server generating new certs needs it. In the above threat model, the worst that can happen if an attacker compromises the server is he can MITM the traffic at the time... but the attacker will never capture user credentials that can be used in the future, or be used to decrypt past communications.
You're assuming the client-side PAKE implementation must be provided by JS sent from the server.
Why not implement a client-side PAKE API in the browser? It could be designed to disallow any server-sent code from accessing client credentials in plaintext, while still allowing styling and layout of credentials fields (this might be impossible, though). Or alternatively, let the browser handle the login/credentials UI, and build in a seamless password manager integration.
The problem that exists with TLS certs, and not with PAKE, is usability.
Client certs suffer from questions like "as a user, how do I handle multiple devices? Logging in from my phone? Logging in from the biz center at some hotel / my friend's phone / etc?"
The UX for client certs in browsers makes it a pain to use even as a tech-savvy person; if we want it to actually become popular as a web auth method, it has to start with better UX.
> Client certs suffer from questions like "as a user, how do I handle multiple devices? Logging in from my phone? Logging in from the biz center at some hotel / my friend's phone / etc?"
I see this as a perfect case for "logging into your browser" (Chrome Sync/Firefox Sync/iCloud Safari Keychain sync/etc.) Why not have any issued client certs be global to your "browser account" rather than local to your device? The cert bundles can also be transferred and stored using end-to-end encryption (like iMessage keybags.)
Sure, now you can't tell which of a person's devices they're on from just the cert you're seeing. That's what cookies or localStorage would be for, since they explicitly don't sync between devices. Client cert for AAA + session cookie for holding the server-side connection ID.
To be fair the UX for client side certs is pretty idiotproof once you have the cert installed. You get a "X site is requesting an identity certificate" box. Installing certs is pretty painless too -- you navigate to a certain page and get a "this site is attempting to install a security identification certificate in your browser" box. Securely getting to that page is a different matter.
As for logging in on untrusted devices (biz center, friend's PC, etc)... you shouldn't be doing that under any circumstances in a high security situation.
The problem with certificates is that A) they must be installed on every device, B) they are a bitch to manage and distribute, and C) protection around certificates is fairly weak, so any client compromise will lead to the exposure of all credentials.
You also end up delegating the authentication to your transport layer which is never that good of an idea.
I think most linux distros' protection of certificate private keys is pretty weak (just storing a .pem file somewhere) and I can't say anything about OSX, but on Windows you can prevent the private key from being disclosed even if the user wants to export it, and enforce entering a password or PIN each time the certificate is used. Getting access to the physical hardware is about the only way to bypass the CryptoAPI, which is pretty far from 'any' client compromise.
You can't actually "prevent the private key from being disclosed even if the user wants to export it" out of box on Windows [if you have an HSM this may be an option although it may also make the keys unsuitable for some purposes]. You can make it _annoying_ which is a stock in trade for Microsoft designs, but attackers don't care about it being annoying. The reason is that the CryptoAPI hands over the keys to an application for it to use (it does not function only as a proxy like say ssh-agent), and so application can export the keys. People even make ready-to-use apps to do this, because, as described above, the CryptoAPI behaviour is annoying.
You can require a password, but that's true for text files on a Linux system too. So why don't most people use a secret password to protect a file with a secret key inside it? Because it's yet more pointless busy work, again a stock in trade of Microsoft environments.
Also, because this is so very annoying you don't need physical access but only console access, which you can of course get remotely, as otherwise administrators of the mountains of Windows servers set up this way couldn't type in all the passwords needed over RDP every day. (If you're lucky they are using an encrypted RDP session to do this...)
> You can't actually "prevent the private key from being disclosed even if the user wants to export it" out of box on Windows [if you have an HSM this may be an option although it may also make the keys unsuitable for some purposes].
All modern laptops ship with a TPM that basically allows provisioning private keys in a non-exportable way. Additionally, the server can remotely attest that the key is stored in the TPM. (your HSM remark probably covers this though)
I agree that CryptoAPI is nasty, but if your key doesn't have the CRYPT_EXPORTABLE flag set when it was created then it can not be exported by CryptoAPI, and can only be accessed maliciously by breaking the CryptoAPI (by patching it in memory, which requires full admin access to a running system) or accessing the OS disk offline.
CryptoAPI does not hand over the key and delegate signing and encryption to the application. Instead, it provides APIs for signing and encryption and does not disclose the private key.
> CryptoAPI does not hand over the key and delegate signing and encryption to the application. Instead, it provides APIs for signing and encryption and does not disclose the private key.
There is no proxy; the keys live in your process. Because Windows is proprietary they can do the moral equivalent of:
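Roughly the following, sketched here as hypothetical Python rather than the actual C API:

```python
# Hypothetical stand-in (not real CryptoAPI code) for the objection being
# made: the "non-exportable" key still lives in the calling process's own
# memory, so the flag is policy, not protection.
class CryptoApiKey:
    def __init__(self, key_bytes: bytes, exportable: bool = False):
        self._key = key_bytes          # raw key material, in our own heap
        self.exportable = exportable

    def export(self) -> bytes:
        if not self.exportable:
            raise PermissionError("key is marked non-exportable")
        return self._key

key = CryptoApiKey(b"supersecret", exportable=False)
try:
    key.export()                       # the sanctioned API says no...
except PermissionError:
    pass
stolen = key._key                      # ...but the bytes are right there
```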
If you did this in OpenSSH, everybody would laugh because they could see it in the source code. "This doesn't make the key unexportable Beavis, you idiot". But in Windows they can solemnly announce that the key isn't exportable and as you've demonstrated, people believe them even though some introspection tells us this can't be true (Where is the key? Like, really, where - it isn't in a proxy because there isn't one... when we load it _our_ process gets bigger and in a debugger we can see it's in our memory, just CryptoAPI tells us we're not supposed to export it...)
You don't need to patch anything; the keys are stored in %APPDATA%\Microsoft\Crypto\Keys for user certs. They are "encrypted" if the protection flag is set, but there are plenty of tools to recover the key material from those files; the protection material is stored in %AppData%\Roaming\Microsoft\Protect :)
It seems unlikely. The differences between OPAQUE and SRP aren't what kept SRP from being adopted. The fact that SRP doesn't really solve a problem that most organizations care about for their web applications is a bigger issue.
Remember, virtually all mainstream applications accept logins over TLS (exclusively) at this point, so being able to authenticate and encrypt a channel with a passphrase isn't all that valuable. As a security engineer working for a startup, there aren't a lot of new features I'd be able to build after having adopted SRP, and essentially none of my real current challenges (credential stuffing, phishing) would get any easier.
That's not to say OPAQUE isn't useful; it's just not that useful in SaaS-type application settings.
The killer use case I see for PAKEs is that they verify not only that the client has the password, but also the server.
When modern browsers trust 150 Certificate Authorities [0] (and the untold subordinate/wildcard certs that were further issued by them), it's reassuring to know that I've verified that the server actually had my password (or a derivative), which makes it even harder to phish or impersonate a server with a stolen/malicious signed TLS key (this is all assuming the browser has a special non-spoofable dialog when authenticating via a PAKE).
For example, picture all of the people who have lost their banking passwords to phishing sites. If PAKEs were used, then the attacker gets nothing except a single guess at the password. If the attacker already knew the user's password, then the user already lost.
> When modern browsers trust 150 Certificate Authorities
That's not what the link you provided shows.
It lists 150 CA roots. 20 of those roots lack a "websites" trust bit, which means the web browser does not trust those roots (Mozilla also has an email client, even if it's not much loved).
Furthermore, the 130 of those roots that are trusted in a web browser are operated by only 52 distinct Certificate Authorities. CAs operate several roots either because of business consolidation or segmentation, for example DigiCert operates 23 of the web trusted CAs you linked because they purchased Symantec's business in 2017 and are in the process of replacing Symantec's soon-to-be-untrusted hierarchy.
It's very strange to merge subordinates and wildcards which aren't the same kind of thing at all. Unconstrained subordinates are, in fact, tallied by Mozilla and they're on the adjacent page:
The vast majority of these intermediates are just for issuance and remain entirely under both physical control and legal authority of the CA, so they make no difference to the trust picture; they just reduce the exposure if things go wrong.
This is true, but can't you address the same problem without a PAKE by simply remembering the server certificate public key (or at least its issuing CA)? Remember: browsers don't support PAKEs now. So consider: what's the least required mechanism to solve the problem you're posing?
I agree the PAKE theoretically does a better job on this problem. But does it do that job so much better as to justify its inclusion in a protocol?
The difference seems to be that server certificate public keys and CAs will change, so browsers will have to include an "Ignore this change?" prompt, which means a certain number of people will click it.
On the other hand, a PAKE failure due to the server not having its half of the authentication data is not possible to ignore, so people will be forced to get it right.
But certificate authentication precedes user authentication, so the people who don't click it are sufficient to detect misissuance and kill the offending CA. You'd rather not have the problem at all, but if every exploit burns an entire CA, that's not the worst place to be.
I think I generally sympathise with this way of thinking about the value of a PAKE here, and you're right that only one person has to see a smoking gun - but just to be quite clear here on how fragile such manual checks would be:
The certificate used for the HTTP transaction where your credentials are submitted may be completely different from the one used to present the web form you filled them into.
A sneaky bad guy can let you talk to the real Bob for every single HTTP transaction aside from the one they want to capture, and intercede just for that single case, then drop back out silently, leaving you talking to Bob directly as before.
The web browser's own checks (including matching the FQDN against the certificate) run every single time - if they spot Eve taking Bob's place they'll freak out, but any human checks like "Look at the certificate in Certainly Something" (the nice certificate view for Firefox) only happen when a human gets to intervene which may not happen at all at key moments.
When did you get to examine that certificate for login.example.com? The answer in every browser I've seen is not at all, because it responded with a 30x so the page never finished and displayed.
I don't believe that's a security concern that motivates entire new protocols; further, I think it's one of those security issues that is talked about more than it is actually felt. So far as I know, none of the HIBP datasets have come from logs? I'd be interested in corrections.
The "real" issue PAKEs address is long-term compromise, where an attacker can observe passwords out of memory. For that matter: for attackers to get log files in the first place, 9 times out of 10 they had to get RCE somewhere in prod, and that is already game over. And there's a lot of things you'd do to address game-over before replacing your TLS login POST with a PAKE handshake.
It happened at Twitter, was discovered internally at Twitter, remediated at Twitter, and disclosed by Twitter. It was talked about but not in any apparent way felt. And, again, if that's your only concern, there are simpler ways to mitigate that threat than a whole PAKE.
PAKEs don't address Heartbleed-like issues; in fact, they create more opportunities for them.
It still feels marginally better because it requires making a change that is visible to users, vs the normal case where a server can surreptitiously collect passwords. It also works nicely for the mobile app login case.
Usually this blog is very practical and easy to understand. This post, though, just talks about how great PAKE is and what this new proposal achieves, but not the how of it. It sounds better than client side hashing, but it doesn't explain how (or whether) that's achieved. Actually looking into PAKE is left as an exercise for the reader (other than one screenshot full of math at the bottom).
Wikipedia is actually more clear in this case:
> an eavesdropper or man in the middle cannot obtain enough information to be able to brute force guess a password without further interactions with the parties for each (few) guesses. This means that strong security can be obtained using weak passwords.
Looking into SRP, it seems that a server can still brute force a stored login. While cool, TLS achieves the same goal in popular login systems: eavesdroppers cannot learn anything (in fact, they cannot even make a single attempt at the password, unlike with PAKE). The other advantage of PAKE, the server never receiving the password, can be done with client side hashing (for which I've been advocating for years, as it would have prevented many issues).
Client side hashing solves only one aspect of the problem. Effectively, this means a password breach of your service will give the attacker a “password equivalent” that they can subsequently use to log into the service without further cracking. This may or may not be a problem, depending on whether you’re willing to lock all user accounts.
Whereas a proper asymmetric PAKE means that even if your password file is breached, the attacker can’t use the leaked hashes to log into the service without first cracking each password.
The problem you point out: that password hashes can be brute-force cracked, is unfortunately not a problem we can solve. Consider that a server can always “attempt a login to itself”, so given a password database you can always run a dictionary attack. The only thing truly protecting you in that case is the combination of strong passwords and a hard password hashing algorithm. Passwords are fundamentally a drag.
Using a hardware one-way function you can ensure that even if your database is breached or disclosed, attackers cannot brute-force outside of your environment (e.g. the hashes are “anchored” to your systems).
You can construct such a function using something like CloudHSM (pity that KMS cannot be coaxed to be deterministic or you could use that). All you need is an encrypt operation with no corresponding decrypt or way to extract the key.
I’m surprised that more large online services don’t use anchoring.
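The anchoring idea can be sketched by simulating the HSM's keyed, encrypt-only operation with an HMAC. In a real deployment the key would be generated inside CloudHSM and never be extractable; the names and parameters below are assumptions for illustration.

```python
import hashlib
import hmac
import os

# Stand-in for the HSM: in production this key lives inside the hardware
# and only the keyed operation is exposed -- no decrypt, no key export.
_HSM_KEY = os.urandom(32)

def hsm_prf(data: bytes) -> bytes:
    # The "encrypt with no corresponding decrypt" primitive: a keyed
    # one-way function.
    return hmac.new(_HSM_KEY, data, hashlib.sha256).digest()

def anchored_hash(password: bytes, salt: bytes) -> bytes:
    # Memory-hard hashing first, then anchor the result to our environment.
    # A leaked database can't be brute-forced offline without the HSM.
    h = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, maxmem=2**25)
    return hsm_prf(h)
```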
It's not all-or-nothing. You can hash passwords on the client side, and hash it some more on the server side before committing the final result to permanent storage. A breach of your database won't give the attacker a password equivalent in that case.
Given how common SSL interception is becoming, I don't consider TLS sufficient protection for passwords. Your security appliance may have a legitimate need to see my network traffic, but not my password. And if someone pwns the appliance, they pwn everything. Client-side hashing helps a little, but not enough; you're still vulnerable to pass-the-hash attacks. Using PAKE inside a TLS connection avoids all of these vulnerabilities.
How does client-side hashing solve the "server shouldn't have the password" problem? Doesn't that just make the hash into the real password? It may solve the problem of password reuse across different sites, but it doesn't solve the problem of the server's password database is leaked and the attacker can now try and break the passwords in order to use them to log back into the server and impersonate the user.
Maybe this is naive, but if the server is compromised such that an attacker can circumvent your hashing function, can't the attacker just send some JS down to the client that records the password before it's hashed?
Yes, you can't securely deploy a PAKE using browser javascript. If it's web applications you want to secure, your users will have to install something to make a PAKE secure.
This still sounds the same as client side hashing to me.
At some point I tried talking to people about making client side hashing an option in <input type=password hashmethod=X>, but nobody cared more than saying "yeah sounds good" when prompted. If this had been implemented and become mainstream, browsers could warn if a site (suddenly) doesn't use it, much like warning for http (or even pinning a la hsts). So as it is, it indeed cannot be done securely in browsers.
Apart from the fact that I think client-side hashing is silly, I agree with you; we're saying the same thing about the feasibility of PAKEs in browser applications.
I don't agree. PAKE is useful to set up a secure channel. But if you already have a secure channel that doesn't need client authentication, and you want to augment it with client authentication, this is easier to implement.
The blog post points out that client-side hashing exposes you to ahead-of-time dictionary attacks, because you have to reveal the salt in the initial phase of the challenge.
No, it's not. You can capture the password-equivalent authenticator, but (assuming a strong KDF) not easily reverse it back out to a password that could be used at other sites, which is the only reason anyone cares about hiding passwords from servers in the first place.
Since you're not worried anymore about burning the server-side resources, you prevent the dictionary attack in this scenario with a punitively expensive KDF.
(Note: I am not saying this is a good idea! It's OPAQUE but worse.)
In the draft spec I wrote it would be salted by default with the origin (proto+___domain+port), and could be overridden (in case a site changes domains) by a parameter. That would act as salt for the site, so that you have a unique hash for each site without having to keep (and sync to other browsers and computers) a database of salts.
That something can be minimized to "a browser" when SPAKE+2FA support is finalized in Kerberos. Admittedly, only really useful for enterprise clients who already have krb5 deployed, but still it is something better than JS.
Yes you can. You just need to download the page and use SRI to verify the javascript. Of course it's harder to update, but now you're emulating native apps and their "added" security.
No you can't. The security improvement of PAKE over traditional HTML forms + TLS is that the server never knows the password plaintext; the threat model is a compromised first-party server. If the server is compromised, it can just change the SRI hashes that it sends to the client. SRI is only an effective protection against compromised third-party services.
I can't believe I'm defending save-an-HTML-file-to-disk as a model, but, in their defense: they did say _download_, so you can't mess with the SRI hash.
(I don't think this is a good idea, I'm just saying technically your criticism doesn't apply, though you are correct in how SRI works. I'm so sorry :()
I mean, tptacek's claim was that "your users will have to install something to make a PAKE secure." If your interpretation of the_clarence's comment is correct, then he didn't refute tptacek's claim, he just suggested a rather silly value of something; something="an HTML file".
Sri was just an example of how to make it light. Hell you don't need to use sri. Download the whole page and keep using it. But again that's like downloading native apps.
This is technically true, but the UX for this is so horrendous it's hardly worth taking seriously. If people are willing to put up with bad UX we could instead use TLS client certificates, which are still _much better_ than saving a page that has JS SRI in it (at least it's in the browser!) and has _much better_ cryptographic properties. (And that's assuming that you could somehow get the origin issues right.)
It's worse than that. In the web app use case, if the browser is sending cleartext passwords to the server, all an attacker has to do is hijack the server IP with BGP, get a DV SSL cert and slurp up all the passwords.
The reason it's not used more often is that it's only incrementally better than salted and memory-hard hashed passwords. It has obvious and important benefits, but they're still only incremental.
It's more important that bad implementations are removed immediately. Personally I'd rather get rid of all of my passwords and use FIDO2 or perhaps even something through touch-id on my iOS/Mac. At this point every password I have is unique thanks to great password managers.
USB/NFC tokens and phones/computers are all "loseable". There are cool ideas about backup tokens https://dmitryfrank.com/articles/backup_u2f_token but IMO they can't be the only way to sign in, there should be some way to restore access from nothing (I guess my specific paranoia is about being locked out :D)
There are many specific auth "factors":
- valid signature from an external hardware token (U2F/WebAuthn via e.g. YubiKey)
- valid signature from a device's embedded token/TPM (something like Touch ID / Windows Hello / Android keystore thing)
- … as a WebAuthn implementation
- … in response to a push notification on another device (Twitter, if they still have that)
- … in response to a QR code (SQRL, Yandex.Key)
- one-time code via push notification to another device (Apple)
- one-time code generated from a shared secret key (TOTP)
- one-time sign-in link via email (Tumblr)
- passwords
- secret questions (like extra passwords but worse)
- behavior pattern analysis (IIRC Google showed an Android demo that looked at keyboard typing patterns and whatnot)
A good auth system should combine multiple factors in a correct way to minimize both impersonation and lockout chances, and maximize usability. How? That's the challenge…
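As a concrete instance of one factor in the list above: a TOTP code is just an HMAC over a time-derived counter. A minimal RFC 4226/6238 sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over a big-endian counter, then "dynamic
    # truncation" picks 4 bytes at an offset taken from the last nibble.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    # RFC 6238: HOTP where the counter is the number of 30-second steps
    # elapsed since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)
```

The shared secret is provisioned once (usually via a QR code), after which both sides can compute the same short-lived code independently.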
>Personally I'd rather get rid of all of my passwords and use FIDO2
FIDO2 should be implemented in many more areas but it shouldn't be exclusively relied on for authentication. Authentication should require proof of person/entity (something you have, a FIDO2 key) and proof of volition (using a password stored in their brain that their will decides to produce/disclose)
Channel bindings allow Kerberos to piggy back on top of an existing encrypted channel (e.g., TLS). When doing 2FA with SPAKE in a browser, this means that we can take advantage of the security guarantees of the TLS protocol by ensuring the value of the final handshake is the same. This is most helpful for SSO-type scenarios.
I wonder if "Don't do password-authenticated key exchange." is still a "Cryptographic right answer"; does OPAQUE address the reservations that tptacek had back in the day?
WPA-3 is using a PAKE, ignoring the fact that there are some concerns about the specific PAKE used, do you think a PAKE buys anything in that use case?
It's pretty decent for the WiFi use case, but you could actually do something cooler with a symmetric PAKE if your WiFi device had a display. Pick a short random password, copy it over to the client, do SPAKE2. You now have a unique, high entropy secret -- and you have strong guarantees no-one else has that secret (an attacker can guess, but there's a tiny time window for them to do so, and the honest participants are immediately convinced foul play occurred).
As it stands, the PAKE is better than what they had, but not ridiculously better in the way U2F is over TOTP for example.
Without digging into this post too much ... is the problem that we don't know the solution, that servers aren't implementing it, that clients (browsers) are limited, or more than one of the above?
To the degree that it's the browsers' limitations, couldn't the browser makers make a standard authentication client (or at least an API) with whatever tech and features are needed? A sandboxed mini-application, hardened, that passes a 'yes', 'no', or 'timeout' to the browser? Not every website would utilize it, but certainly the well-resourced ones could at first (FAANG, banks, etc.) and web server engines, frameworks, content management systems, etc. could incorporate the server side over time. It doesn't seem expensive, and the ROI would seem to be enormous - safe authentication over the Internet. But maybe I'm solving the wrong problem?
One place where PAKEs are widely deployed is in enterprise wifi networks that use PEAP/MSChapV2 password auth. MSChapV2 is really terrible and basically requires you to store passwords with weak hashes (or in cleartext).
It's nice to hear that there are PAKEs that are not tied to specific password hashing functions.
The WPA2 4-way handshake isn't great, but a PAKE actually doesn't do too much --- besides breaking the (slow) dictionary attack --- to operationally improve its security. PAKEs are a good fit for wifi handshakes, if you're starting from zero.
Something I've never understood is why we don't hash the password on the client before sending it to the server, and again on the server. This prevents the server from knowing your password (assuming it's a good one) with no real drawbacks that I can see.
Because then the hash of the password becomes equivalent to a password - a malicious actor that learns the hash of the password can just use that to log in to the service (since they can just choose to send it to the server).
You're absolutely right. But it means that they can't use the password to build a dictionary, to log in to other services, etc. (Assuming the password is complex enough that the hash isn't reversible)
No, it doesn't. It does however provide defense against taking over the server, stealing everyone's passwords, and using them on other servers. I'm assuming, and I suppose I should have stated above, that the first hash is salted with something like the ___domain name. Even if not it provides the same level of protection for password generation schemes.
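A minimal sketch of that two-layer scheme: the client-side hash is salted with the ___domain (so the result is site-specific), and the server re-hashes with a random salt before storage (so a leaked row is not itself a usable login token). Function names and parameters here are illustrative.

```python
import hashlib
import hmac
import os

def client_hash(password: str, ___domain: str) -> bytes:
    # Runs on the client; the server never sees the raw password.
    # Salting with the ___domain stops cross-site reuse of the result.
    return hashlib.scrypt(password.encode(), salt=___domain.encode(),
                          n=2**14, r=8, p=1, maxmem=2**25)

def server_store(client_hashed: bytes):
    # Runs on the server: hash again with a random salt before committing
    # to storage, so a database breach doesn't yield a password equivalent.
    salt = os.urandom(16)
    stored = hashlib.scrypt(client_hashed, salt=salt, n=2**14, r=8, p=1,
                            maxmem=2**25)
    return salt, stored

def server_verify(client_hashed: bytes, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(client_hashed, salt=salt, n=2**14, r=8, p=1,
                               maxmem=2**25)
    return hmac.compare_digest(candidate, stored)
```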
We have been looking at something very similar (using scrypt and Ed25519) at work for LDAP (SASL) authentication. The primary motivation for us is to move costly password hashing off the LDAP servers.
Our current design is basically the same, but we generate and store a random salt server-side and present that to the client along with the challenge. Using a deterministic salt based on the username and ___domain is a nice idea - we actually do something similar if the client presents an incorrect username to avoid leaking whether the account exists (we send salt = HMAC(secret, username) in that case).
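That fake-salt trick can be sketched like so (names are illustrative, not the actual LDAP implementation):

```python
import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(32)            # long-lived server-side secret
USER_SALTS = {"alice": os.urandom(16)}    # real per-user salts from the DB

def salt_for(username: str) -> bytes:
    # Known users get their stored salt. Unknown users get a fake salt that
    # is *deterministic* per username, so a prober can't detect nonexistent
    # accounts by watching the "salt" change between requests.
    if username in USER_SALTS:
        return USER_SALTS[username]
    return hmac.new(SERVER_SECRET, username.encode(),
                    hashlib.sha256).digest()[:16]
```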
> The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure channel.
Maybe there's something I'm not understanding, but I thought the point of DH was you could exchange something even when other people are listening. You can see in the example in the article that it doesn't matter what Eve can see.
Without auth, you know you're doing DH, you just don't know who you're doing it with.
If somebody is passively observing/recording the traffic, they won't know what the negotiated shared secret was. However, if they actively insert themselves in the middle, they can negotiate a shared key with you, and another one with the final destination. Then they can decrypt messages from you with the shared secret they negotiated with you, re-encrypt with the shared secret they negotiated with the other party and forward them on.
The other comments are all right, I just want to rephrase this a bit. Neither quote is wrong, per se.
This part:
> Maybe there's something I'm not understanding, but I thought the point of DH was you could exchange something even when other people are listening
DH does protect you against people listening. It doesn't help if they can actively change what you hear. Passive eavesdroppers are thwarted, but an active MitM attack is not, which is where authenticating the other end comes in.
The common example of DH I've seen is that you can establish a secret key with someone in a crowded room, with everyone overhearing your conversation. In that example, our "authentication" is that we can visually see the other person and we recognize them (or at least, we want to communicate with that person).
If the exchange isn't authenticated, then Eve can read everything by modifying the traffic (not just observing it). Eve just needs to insert herself in between Alice and Bob, intercept and stop all direct messages between them, and then do her own DH handshake with each one:
Bob starts handshake with Alice:
Bob -> Alice
Eve intercepts and finishes handshake, claiming to be Alice:
Bob <-> Eve Alice
Eve starts and completes handshake with Alice, claiming to be Bob:
Bob <-> Eve <-> Alice
And now, Bob and Alice think they're sending messages to each other, but actually they're sending messages to Eve, which she can read and then forward-on so that they don't notice the difference.
Unless Bob and Alice can authenticate the person they exchange keys with, the whole exchange can be subverted.
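The interception above can be demonstrated in a few lines of finite-field DH (toy parameters, illustration only): Eve substitutes her own share in both directions and ends up holding both session keys.

```python
import secrets

# Toy finite-field Diffie-Hellman parameters (illustrative, NOT secure).
p = 2**127 - 1
g = 3

def dh_share(priv: int) -> int:
    return pow(g, priv, p)

def dh_key(their_share: int, priv: int) -> int:
    return pow(their_share, priv, p)

a = secrets.randbelow(p - 1)   # Alice's private key
b = secrets.randbelow(p - 1)   # Bob's private key
e = secrets.randbelow(p - 1)   # Eve's private key

# Eve intercepts both shares in transit and substitutes her own, so each
# honest party unknowingly completes the handshake with Eve instead.
key_alice = dh_key(dh_share(e), a)   # what Alice thinks is "with Bob"
key_bob = dh_key(dh_share(e), b)     # what Bob thinks is "with Alice"

# Eve computes both session keys and can decrypt, read, and re-encrypt
# every message as she relays it.
eve_with_alice = dh_key(dh_share(a), e)
eve_with_bob = dh_key(dh_share(b), e)
```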
This and similar attacks are why cryptographers now strongly believe that authentication is inseparable from encryption and that encryption should basically never be offered independently (which wasn't properly understood back in the 80s and early 90s, hence crappy protocols like PGP-email where authentication is an optional plugin).
Other people have explained why this doesn't stop Eve on its own.
It's maybe instructive to walk through what happens in TLS 1.3:
Alice sends somebody her random DH key share. She addressed it to Bob but has no guarantee it isn't intercepted.
Somebody sends Alice their key share back - so now Alice and whoever this is have a shared key. All further traffic is encrypted.
Whoever it is sends Bob's certificate. A cert is a public document so this proves nothing on its own, but...
Then comes the important step: they use Bob's private key to prove that they saw this whole conversation so far by signing a transcript of the whole thing including the key shares.
Once she receives the signature Alice can conclude this is Bob.
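The transcript-signing step can be miniaturized with textbook RSA to show why only Bob can produce the proof. The key here is the classic tiny p=61, q=53 example: nothing like real TLS, which uses RSA-PSS or ECDSA with proper key sizes and hashing of a structured transcript.

```python
import hashlib

# Textbook-RSA toy key: n = 61*53 = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753

# Everything both sides said so far, including both key shares, so a MITM
# who swapped in her own share would change the transcript.
transcript = b"ClientHello|alice_key_share|ServerHello|bob_key_share|bob_cert"
h = int.from_bytes(hashlib.sha256(transcript).digest(), "big") % n

sig = pow(h, d, n)   # only the holder of Bob's private exponent can do this
```

Alice verifies with Bob's public key: `pow(sig, e, n) == h`. If Eve had interposed her own key shares, the transcript (and thus `h`) would differ, and she cannot forge a valid signature without `d`.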
Alice and Bob think they are talking directly to each other. Instead, they are both talking to Eve (the "man" in the middle).
Alice and Eve do DH exchange and agree upon a secret. Eve and Bob do a DH exchange and also agree upon a secret. Now Eve knows both these secrets and can relay messages between Alice and Bob. Both think they are talking securely to each other, but they are actually talking to Eve instead
You can exchange things and establish a shared secret with someone, but it doesn't tell you who that someone is. You need something else to be sure you really did establish a shared secret with Bob, and not Eve. This could take the form of the other end providing a certificate and signing something with their matching key.
[magic-wormhole]: https://github.com/warner/magic-wormhole