
> No no, don't try to say that yes, there's a very real danger that people might generate propaganda and influence others. You know as well as I do that this is extraordinarily difficult in practice for multiple reasons, and that no one has ever seen an interactive, adaptive AI that can dynamically deliver the most persuasive propaganda to Reddit and fool everybody into thinking it's a genuine grassroots movement. And you know as well as I do that that would be roughly equivalent to inventing AGI, and that AGI is still nowhere in sight.

I generally agree with your point, but you're setting the bar way too high. Repeating an idea over and over is a depressingly effective way to bring something into the public discourse; if it happens to align with the aims of some opportunists (slandering a political opponent, casting doubt on an inconvenient fact, etc.), they will happily spend the manual effort to make less robotic-seeming versions. At that point it gets into the news, and well-intentioned individuals are burdened with proving why the batshit-nonsense-du-jour is batshit nonsense.

Whilst this sort of thing is easily dismissed individually, it can serve as ammunition for Gish gallops and JAQing off, and can nudge the Overton window (even if only subconsciously).

You give Reddit as an example; there are probably many people trying to spam it with astroturf at the moment, though it's debatable how effective that is.

If we lower our standards a bit and apply the same reasoning to 4chan, it seems even more plausible (due to the ephemeral, anonymous, disjointed nature of its threads).



