> If you agree to the notion that LLMs trained on copyrighted work are a form of IP infringement and not fair use, then training on their output is just data laundering and doesn't fix the issue.
It's fuzzy. I could imagine a situation where a primary LLM trained on copyrighted material is considered too big a hazard to release, but its carefully monitored and filtered output is declared copyright-safe, and that output is then used to train a copyright-safe secondary LLM.