While I’m excited about the launch, I’m concerned that your data policies are extremely vague and seem to contain typos and missing parentheticals. As of 12:30p ET they say:
> We have nuanced privacy controls on minusx. Any data you share, which will be used to train better, more accurate models). We never share your data with third parties.
What are these nuanced controls? What data is used to train your models? Just column names and existing queries, or data from tables and query results as well that might be displayed on screen? Are your LLMs running entirely locally on your own hardware, and if not, how can you say the data is not shared with third parties? (EDIT: you mentioned GPT-4o in another comment so this statement cannot be correct.)
https://avanty.app/ is doing something similar in the Metabase space and has more clarity on their policies than you do.
Frankly, given the lack of care in your launch FAQs about privacy, it’s a hard ask to expect that you will treat customer data privacy with greater care. There is definitely a need for innovation in this space, but I’m unable to recommend or even test your product with this status quo.
I totally share your concerns about data (especially data that may be sensitive). We have a simple non-legal-speak privacy policy here: https://minusx.ai/privacy-simplified.
> Are your LLMs running entirely locally on your own hardware, and if not, how can you say the data is not shared with third parties? (EDIT: you mentioned GPT-4o in another comment so this statement cannot be correct.)
We're currently only using API providers (OpenAI + Anthropic) that do not themselves train on data accessed through their APIs. Although they are technically third parties, they're not third parties that harvest data.
I recognize that even this may just be empty talk. We're currently working on 2 efforts that I think will further help here:
- open-sourcing the entire extension, so that users can see exactly what data is used as LLM context (and can extend the app further)
- supporting local models, so that your data never leaves your computer (ETA for both is ~1-2 weeks)
We are genuinely motivated by both the excitement and the concerns you've raised. We want to give an assistant-in-the-browser alternative to people who don't want to move to AI-native, data-locked-in platforms. I regret that this was not clearer in our copy.
Thanks for pointing out the error in the FAQs; we somehow missed it. It is fixed now!