Hi, this is a fair concern. We're super early and working on a proper privacy policy as we speak. But we also provided some color about how we handle your data on our Discord. Copying it here:
```
The privacy section on the landing README remains true. We just send your requests to OpenAI and store them for debugging purposes, but we don't fetch or store anything other than what is required to process your requests, of course. XP1 being open source, you can also look at the code if needed, but we're happy to answer any questions.
In short:
- Requests (including the text dump of tabs you select) go to the Dust main platform
- They are processed as part of a Dust app whose Run object is stored
- The LLM query is sent to OpenAI (retention policy 30 days, not used for training)
- The response is stored as part of Dust's Run object
- The response is streamed back to the client
```
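To make the concern below concrete, the flow described in those bullets can be sketched roughly like this. This is a hypothetical illustration, not Dust's actual code: `Run`, `handle_request`, and `call_llm` are made-up names, and the point is simply that the stored `Run` retains the full request, tab dumps included.

```python
from dataclasses import dataclass

@dataclass
class Run:
    # Stored server-side "for debugging purposes" -- the raw request,
    # including any text dump of selected tabs, persists here.
    request: dict
    response: str = ""

STORED_RUNS: list[Run] = []  # the debugging repository in question

def call_llm(query: str) -> str:
    # Placeholder for the OpenAI call (30-day retention on their side,
    # not used for training, per the quoted message).
    return f"echo: {query}"

def handle_request(request: dict) -> str:
    run = Run(request=request)                 # request stored as part of the Run
    STORED_RUNS.append(run)
    run.response = call_llm(request["query"])  # response stored in the Run too
    return run.response                        # then streamed back to the client
```

Note that nothing in this flow redacts the tab dump before the `Run` is persisted, which is exactly the question raised below.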
This project looks great, but it's going to be a no from me until your formal policy clarifies this point. Do those requests still "include the text dump of tabs you select" when stored? It's not that I don't trust you folks; it's that I can't trust the entire wider world not to eventually break into or subpoena your debugging repository.
Further up in the thread, people asked your extension about its privacy practices, and at least one assumed that the response's remark about storing requests for debugging must have been an AI hallucination.
"store them for debugging purposes" is a bit concerning if they then become available if law enforcement requests data, or if you guys are hacked and everything leaks.