The context here is running third-party LLMs, not running arbitrary things in the cloud.
> the existing cloud tos and infrastructure fulfill all the legal and practical requirements
No, because the practical requirements are set by the users, not by the TOS. Some companies, for practical confidentiality and security reasons, DO NOT want their information on third-party servers [1].
Top third-party LLMs usually sit behind an API, with data retention happening on those third-party servers for content-policy/legal reasons. An on-premise offering that keeps that retention on premise, still available for any needed retrospection (say, after some violation threshold), would let a lot of deep-pocketed companies actually use these services.
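
To make the "retention on premise, retrospection after a threshold" idea concrete, here is a minimal sketch. Everything in it (the path, the threshold, the function names) is an assumption for illustration, not any vendor's actual mechanism:

    import json
    import time
    from pathlib import Path

    # Hypothetical on-premise retention log: prompts/completions never leave
    # local, access-controlled storage, but stay available for after-the-fact
    # review ("retrospection") once a user crosses a violation threshold.
    RETENTION_DIR = Path("/var/llm/retention")   # assumed local path
    VIOLATION_THRESHOLD = 3                      # assumed policy knob

    def log_exchange(user_id: str, prompt: str, completion: str, flagged: bool) -> None:
        """Append one exchange to the user's local retention file."""
        RETENTION_DIR.mkdir(parents=True, exist_ok=True)
        record = {
            "ts": time.time(),
            "prompt": prompt,
            "completion": completion,
            "flagged": flagged,
        }
        with open(RETENTION_DIR / f"{user_id}.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")

    def needs_retrospection(user_id: str) -> bool:
        """True once the user's flagged exchanges exceed the policy threshold."""
        path = RETENTION_DIR / f"{user_id}.jsonl"
        if not path.exists():
            return False
        flagged = sum(json.loads(line)["flagged"] for line in path.open())
        return flagged >= VIOLATION_THRESHOLD

The point being: nothing about content-policy retention requires the logs to sit on the provider's servers.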
[1] Companies That Have Banned ChatGPT: https://jaxon.ai/list-of-companies-that-have-banned-chatgpt/
edit: whelp, or go this route, and treat the cloud as completely hostile (which it should be, of course): https://news.ycombinator.com/item?id=40639606