
The server does not generate the tokens, the client generates the tokens. The server is supposed to be able to verify that they were generated by a client who was granted the authority to generate them, but not which client did so. At least, not without side-channel information.

> The main building block of our construction is a verifiable oblivious pseudorandom function (VOPRF)

I am not sure how well tested that primitive is, but it definitely appears to be more than the server handing clients tokens and then pretending not to know who it gave them to.

The referenced paper: https://petsymposium.org/popets/2018/popets-2018-0026.pdf




What I find confusing is: Given that the server is essentially authorizing each subsequent client request (e.g., for Kagi: search queries after a user has already been authenticated) in a way that keeps the client anonymous, what is the difference between Privacy Pass and simply providing a common authorization token to every user (and thus skipping all this cryptography)?

Update: On some thought: with the common-authorization-token approach, the client has no guarantee that the server is actually handing out a common token rather than quietly giving each user a unique identifier. Privacy Pass's cryptography ensures the client knows it is still anonymous.

Update 2: But what guarantee exists that the server doesn't generate a unique key pair for each user and defeat anonymity that way?

Update 3: They use zero-knowledge proofs to prove that all tokens are signed by the same private key. From their paper: "The work of Jarecki et al. [18] uses a non-interactive zero-knowledge (NIZK) proof of discrete log equality (DLEQ) to provide verification of the OPRF result to the user. Their construction is hence a ‘verifiable’ OPRF or VOPRF and is proven secure in the random-oracle model. We adapt their construction slightly to use a ‘batch’ DLEQ proof allowing for much more efficient verification; in short this allows a user to verify a single NIZK proof that states that all of their tokens are signed by the same private key. This prevents the edge from using different key pairs for different users in an attempt to launch a deanonymization attack; we give more details in Section 3.2."
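To make the DLEQ idea concrete: a Chaum-Pedersen-style proof lets the server show that the same exponent k links its published key to the signed token, without revealing k. The following is a toy sketch (tiny safe prime, illustrative names; real Privacy Pass uses elliptic curves and batches this proof over all tokens):

```python
import hashlib
import secrets

# Toy group: quadratic residues mod the safe prime p = 2q + 1.
# Illustration only -- a real deployment uses a large elliptic-curve group.
p, q = 467, 233
g = 4  # generator of the order-q subgroup (4 = 2^2 is a square mod p)

def challenge(*elems):
    """Fiat-Shamir: hash the transcript down to a challenge in Z_q."""
    h = hashlib.sha256(b"|".join(str(e).encode() for e in elems))
    return int.from_bytes(h.digest(), "big") % q

def dleq_prove(k, h, y, z):
    """Prove log_g(y) == log_h(z) (both equal k) without revealing k."""
    t = secrets.randbelow(q)
    a1, a2 = pow(g, t, p), pow(h, t, p)  # commitments
    c = challenge(g, y, h, z, a1, a2)
    s = (t - c * k) % q                  # response
    return c, s

def dleq_verify(h, y, z, proof):
    """Recompute the commitments from (c, s) and re-derive the challenge."""
    c, s = proof
    a1 = pow(g, s, p) * pow(y, c, p) % p  # g^s * y^c = g^t when honest
    a2 = pow(h, s, p) * pow(z, c, p) % p  # h^s * z^c = h^t when honest
    return challenge(g, y, h, z, a1, a2) == c

# Server key k, public key y = g^k, one token point h, and its signature z = h^k.
k = secrets.randbelow(q - 1) + 1
y = pow(g, k, p)
h = pow(5, 2, p)  # some square mod p, i.e. a subgroup element
z = pow(h, k, p)
assert dleq_verify(h, y, z, dleq_prove(k, h, y, z))
```

The batching the paper describes replaces one such proof per token with a single proof over a combination of all the user's tokens, so the user verifies one NIZK for the whole batch.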


This is my understanding of how it works, without knowing the actual maths behind the functions:

    # Client
    r = random_blinding_factor()
    x = client_secret_input()
    x_blinded = blind(x, r)

    # Server
    y_blinded = OPRF(k, x_blinded)

    # Client
    y = unblind(y_blinded, r)

So you end up with y = OPRF(k, x), but the server never saw x and the client never saw k.

This feels like the same kind of unintuitive cryptography as homomorphic encryption.
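The schematic above can be made concrete with a toy DH-style blinded evaluation: hash the input into a group, blind it with a random exponent r, let the server exponentiate by its key k, then unblind with r^-1. This sketch uses a tiny safe prime and illustrative names, not the real protocol's curve arithmetic:

```python
import hashlib
import secrets

# Toy group: quadratic residues mod the safe prime p = 2q + 1.
# Real deployments use elliptic curves and constant-time code.
p, q = 467, 233

def hash_to_group(x: bytes) -> int:
    """Map the client's input to a subgroup element by hashing then squaring."""
    t = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(t, 2, p)  # squares mod a safe prime lie in the order-q subgroup

# Client: blind the secret input x with a random exponent r.
x = b"client secret token seed"
h = hash_to_group(x)
r = secrets.randbelow(q - 1) + 1
x_blinded = pow(h, r, p)

# Server: evaluate the PRF on the blinded point with its key k.
k = secrets.randbelow(q - 1) + 1
y_blinded = pow(x_blinded, k, p)

# Client: strip the blinding by exponentiating with r^-1 mod q.
r_inv = pow(r, -1, q)
y = pow(y_blinded, r_inv, p)

# The client now holds y = h^k = OPRF(k, x), yet the server never saw h
# and the client never saw k: r * k * r^-1 = k (mod q).
assert y == pow(h, k, p)
```

The unblinding works because the group has prime order q, so exponents compose mod q and r cancels against r^-1 exactly.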


Could a client provided by Kagi itself be a side channel, then? Should we rather be using an independent, standard client?





