Hacker News
Concrete: A fully homomorphic encryption compiler (zama.ai)
108 points by zacchj on May 6, 2023 | 22 comments



Concrete is really impressive and permissively licensed. The ML library has an FHE version of (a subset of) scikit-learn, which I honestly thought I’d see in another 5+ years. Like look at this example:

    # Concrete ML's drop-in replacement for scikit-learn's LogisticRegression;
    # X_train, y_train, X_test come from an ordinary train/test split earlier
    # in the README example
    from concrete.ml.sklearn import LogisticRegression

    # Now we train in the clear and quantize the weights
    model = LogisticRegression(n_bits=8)
    model.fit(X_train, y_train)

    # We can simulate the predictions in the clear
    y_pred_clear = model.predict(X_test)

    # We then compile on a representative set 
    model.compile(X_train)

    # Finally we run the inference on encrypted inputs!
    y_pred_fhe = model.predict(X_test, fhe="execute")

    print("In clear  :", y_pred_clear)
    print("In FHE    :", y_pred_fhe)
    print(f"Similarity: {int((y_pred_fhe == y_pred_clear).mean()*100)}%")

There’s still some way to go on performance, but the ergonomics of using FHE are already pretty good!


Thank you! The Python version is quite clear as well; still from the README:

```
from concrete import fhe

def add(x, y):
    return x + y

compiler = fhe.Compiler(add, {"x": "encrypted", "y": "encrypted"})
inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1), (3, 2), (6, 1), (1, 7), (4, 5), (5, 4)]

print(f"Compiling...")
circuit = compiler.compile(inputset)

print(f"Generating keys...")
circuit.keygen()

examples = [(3, 4), (1, 2), (7, 7), (0, 0)]
for example in examples:
    encrypted_example = circuit.encrypt(*example)
    encrypted_result = circuit.run(encrypted_example)
    result = circuit.decrypt(encrypted_result)
    print(f"Evaluation of {' + '.join(map(str, example))} homomorphically = {result}")
```

Here, that's more for non-ML computations.


Isn't this basically just unnecessary overhead if you can do a forward pass with encrypted weights? Like, what are you still protecting with encryption at that point?


You're protecting your inputs and outputs. If you have a model that's designed to run with sensitive data but you don't have the compute power to run it locally, what do you do? Putting the model on a cloud provider means their system would see your sensitive data, which may be unacceptable for contractual or legal reasons. This lets you send the inputs encrypted, receive the outputs encrypted, then decrypt the outputs in your weak but trusted environment.
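To make the data flow concrete, here's a rough sketch reusing the toy add circuit from the other comment (everything in one process just to show who sees what; in a real deployment the encrypt/decrypt side and the run side live on different machines):

    from concrete import fhe

    def add(x, y):
        return x + y

    compiler = fhe.Compiler(add, {"x": "encrypted", "y": "encrypted"})
    circuit = compiler.compile([(2, 3), (0, 0), (7, 7), (4, 5)])

    # Client side: generate keys and encrypt the sensitive inputs
    circuit.keygen()
    encrypted_args = circuit.encrypt(3, 4)

    # Server side: computes on ciphertexts only, never sees 3, 4 or the result
    encrypted_result = circuit.run(encrypted_args)

    # Client side: decrypt in the weak but trusted environment
    print(circuit.decrypt(encrypted_result))  # 7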


That sounds like fairy tale engineering. Even for the IoT-with-a-potato-MCU use case, you'd be much better off offloading that computation to a trustable device (such as the user's desktop computer or home gateway) instead of shipping it off to a cloud environment and paying the (absolutely massive) FHE tax.


In a case where you are offloading the computation to another device because of compute limitations, it would indeed probably make more sense, at least at the moment, to send it to a trusted device.

But there is always the case where the server side with the model does not want to disclose the model itself, while the client does not want to disclose its data either (as in many healthcare applications, for example, or the recent OpenAI/Samsung incident). In this case the FHE tax might be a decent price to pay.

If you want to read more on the topic, there is a blog post about the cost of running an LLM in FHE: https://www.zama.ai/post/chatgpt-privacy-with-homomorphic-en...

The main improvements in terms of speed will come from dedicated hardware accelerators but some models (those that run on tabular data for example) already have acceptable runtimes.


Sure, sometimes, if you have a trusted device, great. However, in other use cases, there will be no device which is trusted by both the user _and_ the model owner, and FHE will help here. We have to remember how valuable the models are.


Too bad for the model owner, then.


In the example above the parameters are in the clear and only inputs and outputs are encrypted!

That being said, you could probably do the reverse and encrypt the parameters of the model rather than the inputs/outputs, if you are deploying the model directly to the client.
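A rough sketch of what I mean with the low-level compiler (a made-up toy "model", and I'm assuming "clear" is accepted as an argument status here; I haven't actually tried this):

    from concrete import fhe

    # Toy linear "model": two weights and a bias
    def score(w0, w1, b, x0, x1):
        return w0 * x0 + w1 * x1 + b

    # Parameters encrypted, user inputs left in the clear --
    # the reverse of the Concrete ML example above
    compiler = fhe.Compiler(
        score,
        {"w0": "encrypted", "w1": "encrypted", "b": "encrypted",
         "x0": "clear", "x1": "clear"},
    )
    circuit = compiler.compile([(1, 2, 3, 4, 5), (0, 1, 1, 2, 3), (2, 2, 0, 1, 1)])
    circuit.keygen()

    # Encrypt, run and decrypt in one go, just to check the result
    print(circuit.encrypt_run_decrypt(1, 2, 3, 4, 5))  # 1*4 + 2*5 + 3 = 17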


Not OP, but I think I'm too dumb on this topic to understand what you mean. Could you explain further? (To me it sounds like you're suggesting encrypted weights while they're suggesting encrypted inputs, which solve two different use cases.)


Aren't you running the second prediction on unencrypted data, contrary to what's said in the comment?


Actually the input is encrypted inside the 'predict' function here. There are also separate functions to encrypt, run, and decrypt.


I started working on a CPU that was designed for FHE about 10 years ago, inspired by the ShapeCPU paper around that time [1] [2]. I've been waiting for someone to make a better gate-to-FHE compiler for some time.

[1] https://github.com/mmastrac/oblivious-cpu [2] https://hcrypt.com/shape-cpu/

FHE becomes a lot more interesting when you can hide the structure of your computation behind a VM.
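The basic trick for hiding structure is making every step data-oblivious: instead of branching on an encrypted value, you compute both sides and select between them arithmetically. A minimal sketch with concrete-python (this assumes multiplying two encrypted values is available, which I believe it is, though it's costly; an FHE CPU/VM effectively applies this pattern to every instruction):

    from concrete import fhe

    # "a if cond else b" without a branch: the same operations run whatever
    # the (encrypted) condition is, so the computation's shape leaks nothing
    def select(cond, a, b):
        return cond * a + (1 - cond) * b

    compiler = fhe.Compiler(
        select, {"cond": "encrypted", "a": "encrypted", "b": "encrypted"}
    )
    circuit = compiler.compile([(0, 10, 20), (1, 31, 0), (0, 0, 31), (1, 5, 9)])
    circuit.keygen()

    print(circuit.decrypt(circuit.run(circuit.encrypt(1, 10, 20))))  # 10
    print(circuit.decrypt(circuit.run(circuit.encrypt(0, 10, 20))))  # 20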


Compare to Google's here:

https://jeremykun.com/2023/02/13/googles-fully-homomorphic-e...

It's a really fun write-up. I prefer Google's syntax, but Zama is doing great work.

I really enjoyed the podcast with them here. It clarified a lot for me about the intersection of FHE and ZK.

https://zeroknowledge.fm/248-2/


Would you mind elaborating what you prefer in Google's syntax, please?


This reminds me of one of those software protection libraries. I think it was by Syncrosoft, the company that used to protect software like Cubase before it was acquired by Steinberg, Cubase's manufacturer.

Basically you'd write your algorithms in C++ but instead of using the built-in types like int or float you'd use custom types that had all of their operators overloaded. Your code would look pretty similar to what you'd have before (modulo the type definitions) but when compiled your algorithm would turn into an incredibly inscrutable state machine where some parts of the state machine would come from some kind of protection dongle. Pretty effective.
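The general mechanism is easy to sketch (here in Python rather than C++, and obviously nothing like their actual implementation): a wrapper type overloads the arithmetic operators, so ordinary-looking code ends up building a trace of operations instead of computing anything directly. Concrete traces Python functions in much the same spirit:

    # Toy tracer, just to show the operator-overloading idea
    class Traced:
        def __init__(self, name, ops):
            self.name = name
            self.ops = ops  # shared list recording every operation

        def _emit(self, op, other):
            result = Traced(f"t{len(self.ops)}", self.ops)
            rhs = other.name if isinstance(other, Traced) else other
            self.ops.append((op, self.name, rhs, result.name))
            return result

        def __add__(self, other):
            return self._emit("add", other)

        def __mul__(self, other):
            return self._emit("mul", other)

    ops = []
    x, y = Traced("x", ops), Traced("y", ops)

    # Looks like ordinary arithmetic, but only records the computation
    z = x * y + x
    print(ops)  # [('mul', 'x', 'y', 't0'), ('add', 't0', 'x', 't1')]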


Does anyone know of a good reference to get up to speed on FHE in ML?


If you just want to dive right in, this example from Concrete ML's repository is very clear:

https://github.com/zama-ai/concrete-ml#a-simple-concrete-ml-...


Ah I should have been a bit more clear. I'm interested in how FHE actually works and the steps needed to transform general computation to its FHE equivalent.


I'm looking for similar - I'm having a hard time wrapping my head around this


We are planning several other blog posts to explain all the details.

In the meantime, if you want a good introduction to the FHE scheme we use behind the scenes, you can take a look here: https://www.zama.ai/post/tfhe-deep-dive-part-1
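Very roughly: additions and multiplications by constants map directly onto TFHE ciphertexts, while anything non-linear is turned into a table lookup that gets evaluated during programmable bootstrapping. In concrete-python that shows up as LookupTable; a small sketch (from memory, so the details may be slightly off):

    from concrete import fhe

    # Non-linear functions become table lookups evaluated during bootstrapping
    table = fhe.LookupTable([x ** 2 for x in range(16)])  # 4-bit squaring table

    def square(x):
        return table[x]

    compiler = fhe.Compiler(square, {"x": "encrypted"})
    circuit = compiler.compile([i for i in range(16)])
    circuit.keygen()

    print(circuit.decrypt(circuit.run(circuit.encrypt(5))))  # 25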


It was explained, e.g., in this Google TechTalk: https://m.youtube.com/watch?time_continue=1917&v=-lhn2GdHhGc...





