That's what I've been predicting and fearing: LLMs learn from online Q&A platforms, but people have already begun to stop posting questions and answers there. The remaining knowledge sources will be poisoned with inaccurate LLM-generated data, so the entropy available to LLMs will be damped by the LLMs themselves, in a self-reinforcing feedback loop.
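
The loop can be illustrated with a toy simulation (a sketch, not a model of real training): treat "human knowledge" as a uniform distribution over tokens, then repeatedly refit a new distribution from a finite sample of the previous one, standing in for each model generation training only on the previous generation's output. The entropy of the fitted distribution drifts downward over generations, which is the damping effect described above. All names and parameters here (50 tokens, 100 samples, 200 generations) are arbitrary choices for illustration.

```python
import math
import random

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def retrain_on_own_output(p, n_samples, rng):
    """Fit a new empirical distribution from finite samples of the current one,
    standing in for a model generation trained only on its predecessor's output."""
    counts = [0] * len(p)
    for i in rng.choices(range(len(p)), weights=p, k=n_samples):
        counts[i] += 1
    return [c / n_samples for c in counts]

rng = random.Random(0)
p = [1 / 50] * 50            # generation 0: uniform "human" data over 50 tokens
h0 = entropy(p)
for _ in range(200):         # each generation sees only the previous one's samples
    p = retrain_on_own_output(p, 100, rng)
hN = entropy(p)
print(f"entropy: {h0:.2f} bits -> {hN:.2f} bits")
```

Because each refit only ever loses rare tokens to sampling noise and never reintroduces them, the distribution tends toward concentration on a few tokens; with enough generations the entropy ends well below the initial log2(50) ≈ 5.64 bits.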