Twitter is just one example, though; this problem is going to affect every online community. If the LLM bull case is correct, the internet is going to be absolutely flooded with sophisticated misinformation.
Sophisticated being the key word: quantity * quality, almost indiscernible from mediocre human input.
Currently we tend to model bad information on the stream as a function where quality is linear and quantity is exponential, and individuals or human filters can still identify and reject the bottom 99% as spam. Every point the quality curve moves closer to resembling human-made content represents an exponential increase in confusion about base facts. This isn't even considering whether AI develops its own will to conduct confusion ops; as a tool for bad actors it's already there, and that says nothing of the scale it could eventually operate at.
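The function described above can be put in rough numerical terms. A minimal toy model, with all parameters invented purely for illustration: volume grows exponentially, per-item quality grows linearly toward a human baseline, and the filter's catch rate falls off as quality approaches that baseline.

```python
# Toy model of the argument above (all numbers are invented for
# illustration, not empirical): misinformation volume grows
# exponentially over time while per-item quality grows only
# linearly; filters catch an item in proportion to how far its
# quality falls below the human baseline.

def undetected_items(t, base_volume=100, growth=2.0,
                     quality_slope=0.1, human_baseline=1.0):
    volume = base_volume * growth ** t                    # exponential quantity
    quality = min(quality_slope * t, human_baseline)      # linear quality, capped
    catch_rate = 1.0 - quality / human_baseline           # filters fail as quality rises
    return volume * (1.0 - catch_rate)                    # items that slip through

for t in range(0, 11, 2):
    print(t, round(undetected_items(t)))
```

Early on, nearly everything is caught despite the huge volume; once quality nears the human baseline, the exponential volume term dominates and the amount slipping through explodes, which is the "mass multiplier" effect described above.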
The sophistication of the misinformation is exactly the point: that's the mass multiplier, not the volume.
[edit] An interesting case could be made that the general demand for opinionated information, and the individual capacity to absorb and adjudicate the factuality of it, were overrun some years ago already... and that every misinformation effort since has been fighting for a share of an information space already capped by attention demand. In that paradigm, all social networks have been fighting a zero-sum game, and LLMs are just a new weapon for market share in an inflationary environment where each piece of information propagated becomes less valuable as volume increases while consumption stays static. But I think this is the least worrisome of their abilities.
It was always a good idea to ignore the cesspool that is Twitter. No matter whether we are talking about bots or lynch mobs.
Btw, I think you mean Berenstain Bears.