Concrete is really impressive and permissively licensed. The ML library has an FHE version of (a subset of) scikit-learn, which I honestly thought was 5+ years away. Like, look at this example:
# Concrete ML's drop-in replacement for scikit-learn's LogisticRegression
from concrete.ml.sklearn import LogisticRegression

# Now we train in the clear and quantize the weights
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# We can simulate the predictions in the clear
y_pred_clear = model.predict(X_test)

# We then compile on a representative set
model.compile(X_train)

# Finally we run the inference on encrypted inputs!
y_pred_fhe = model.predict(X_test, fhe="execute")

print("In clear:", y_pred_clear)
print("In FHE  :", y_pred_fhe)
print(f"Similarity: {int((y_pred_fhe == y_pred_clear).mean()*100)}%")
There’s still some way to go on performance, but the ergonomics of using FHE are already pretty good!
Isn't this basically just unnecessary overhead if you can do a forward pass with encrypted weights? Like, what are you still protecting with encryption at that point?
You're protecting your inputs and outputs. If you have a model that's designed to run on sensitive data but you don't have the compute power to run it locally, what do you do? Putting the model on a cloud provider means their system would see your sensitive data, which may be unacceptable for contractual or legal reasons. FHE lets you send the inputs encrypted, receive the outputs encrypted, and then decrypt the outputs in your weak but trusted environment.
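The round trip can be sketched with a toy additively homomorphic scheme (Paillier, with tiny and deliberately insecure parameters). Real FHE schemes like Concrete's are lattice-based and support arbitrary circuits, but the client/server shape is the same: the server computes on ciphertexts it cannot read.

```python
import math
import random

# Toy Paillier cryptosystem (tiny, INSECURE parameters; illustration only).
# Additively homomorphic: the server can add encrypted numbers without
# ever seeing the plaintexts.

p, q = 293, 433                      # "secret" primes; real keys are huge
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Client: encrypt sensitive inputs and send them off.
c1, c2 = encrypt(17), encrypt(25)

# Server: compute on ciphertexts only.
# Multiplying Paillier ciphertexts adds the underlying plaintexts.
c_sum = (c1 * c2) % n2

# Client: decrypt the result in the weak-but-trusted environment.
print(decrypt(c_sum))  # -> 42
```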
That sounds like fairy tale engineering. Even for the IoT-with-a-potato-MCU use case, you'd be much better off offloading that computation to a trusted device (such as the user's desktop computer or home gateway) instead of shipping it off to a cloud environment and paying the (absolutely massive) FHE tax.
In a case where you are offloading the computation to another device because of compute limitations, it would indeed probably make more sense, at least at the moment, to offload the computation to a trusted device.
But there is always the case where the server side does not want to disclose the model while the client does not want to disclose its data either (as in many healthcare applications, or in the recent OpenAI/Samsung incident).
In that case the FHE tax might be a decent price to pay.
The main speed improvements will come from dedicated hardware accelerators, but some models (those that run on tabular data, for example) already have acceptable runtimes.
Sure, sometimes, if you have a trusted device, great. However, in other use cases there will be no device that is trusted by both the user _and_ the model owner, and FHE will help there. We have to remember how valuable these models are.
In the example above the parameters are in the clear and only the inputs and outputs are encrypted!
That said, you could probably do the reverse and encrypt the parameters of the model rather than the inputs/outputs if you are deploying the model directly to the client.
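To make the reverse direction concrete, here is a sketch with the same kind of toy additively homomorphic scheme (Paillier, tiny and insecure parameters; this is not Concrete's lattice-based scheme). The model owner encrypts the weights and ships them out; the client evaluates a linear model on its own plaintext inputs without ever seeing the weights, and only the model owner can decrypt the score.

```python
import math
import random

# Toy Paillier scheme (tiny, INSECURE parameters; illustration only).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Model owner: encrypt the weights and bias, ship them to the client.
weights, bias = [3, 1, 4], 7
enc_w = [encrypt(w) for w in weights]
enc_b = encrypt(bias)

# Client: evaluate the linear model on plaintext inputs x without
# ever seeing the weights.  Enc(w)^x = Enc(w*x), and multiplying
# ciphertexts adds the underlying plaintexts.
x = [2, 5, 1]
score = enc_b
for cw, xi in zip(enc_w, x):
    score = (score * pow(cw, xi, n2)) % n2

# Model owner: decrypt the returned score.
print(decrypt(score))  # -> 7 + 3*2 + 1*5 + 4*1 = 22
```

Note the asymmetry: here the client learns nothing about the weights, but the model owner sees the decrypted score, so this fits deployments where the score is meant to flow back to the owner anyway.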
Not OP, but I think I'm too dumb on this topic to understand what you mean. Could you explain further? (To me it sounds like you're suggesting encrypted weights while they're suggesting encrypted inputs, which solve two different use cases.)
I started working on a CPU that was designed for FHE about 10 years ago, inspired by the ShapeCPU paper around that time [1] [2]. I've been waiting for someone to make a better gate-to-FHE compiler for some time.
This reminds me of one of those software protection libraries. I think it was by Syncrosoft, the company that used to protect software like Cubase before it was acquired by Steinberg, the maker of Cubase.
Basically you'd write your algorithms in C++ but instead of using the built-in types like int or float you'd use custom types that had all of their operators overloaded. Your code would look pretty similar to what you'd have before (modulo the type definitions) but when compiled your algorithm would turn into an incredibly inscrutable state machine where some parts of the state machine would come from some kind of protection dongle. Pretty effective.
Ah, I should have been a bit clearer. I'm interested in how FHE actually works and the steps needed to transform a general computation into its FHE equivalent.
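At a very high level, the computation is lowered to a circuit of low-level operations, and each operation is then evaluated directly on ciphertexts. A toy version (a DGHV-style "somewhat homomorphic" scheme over the integers, with tiny and insecure parameters) shows why this works: adding ciphertexts gives XOR of the plaintext bits and multiplying gives AND, which together can express any boolean circuit, as long as the noise stays small (real FHE resets the noise via bootstrapping).

```python
import random

# Toy DGHV-style scheme over the integers (tiny, INSECURE parameters).
# Enc(bit) = p*q + 2*r + bit, where p is the secret key and r small noise.
# Decrypt with (c mod p) mod 2.  On ciphertexts:
#   addition       -> XOR of the plaintext bits
#   multiplication -> AND of the plaintext bits
# so any boolean circuit can be evaluated gate by gate, until the noise
# term grows past p/2.

p = 1_000_003  # secret odd key

def encrypt(bit):
    q = random.randrange(1, 1_000)
    r = random.randrange(0, 4)      # small noise
    return p * q + 2 * r + bit

def decrypt(c):
    return (c % p) % 2

def XOR(c1, c2): return c1 + c2
def AND(c1, c2): return c1 * c2

# "Compile" a tiny computation to gates: a half adder (sum, carry).
a, b = encrypt(1), encrypt(1)
s, carry = XOR(a, b), AND(a, b)
print(decrypt(s), decrypt(carry))  # -> 0 1
```

Concrete's compiler does the analogous lowering automatically (to table lookups and linear operations over TFHE ciphertexts rather than raw gates), which is what makes the scikit-learn-style API above possible.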