e.g. If you see a photo or video of a politician in circumstances that might affect your support for them - wouldn't you want to know if what you are seeing is true or not?
Look at what happened with Q-Anon - just a slow stream of text messages issued by some guy in his basement, but enough to rile up millions into believing something totally ridiculous (baby-eating politicians, etc). Now imagine what a smart disinformation campaign might look like: an unlimited number of messages across all types of social media, potentially customized for the individuals who have shown interest and are being targeted ... Of course disinformation isn't anything new, but technology is a force-multiplier, and with AI a very sophisticated campaign of this nature could be run by a very small group of people, even just one.
> Look at what happened with Q-Anon - just a slow stream of text messages issued by some guy in his basement, but enough to rile up millions into believing something totally ridiculous (baby-eating politicians, etc).
That's not really the whole story, though. The reason a ridiculous thing like that gets legs is that there isn't pushback from the Republican party. They are happy to let these things go on, and they even involve themselves in it. They even elect people who believe in these theories to office, who then go on to perpetuate them.
Remember back when a gunman invaded a pizza parlor because he thought the Democratic party was running some sort of child-trafficking ring in the basement? The Republican party could have, at that time, mounted a full-throated defense of Hillary Clinton - said that of course she is not doing that, and that to think so is completely insane. But they didn't do that, because then they would have had to defend Hillary Clinton, or any other Democrat. So they let the lie hang out there, unaddressed, because it helps them politically, and it metastasizes.
So really, yes the Internet is a problem. But the real problem is that people in power are using it for this kind of thing on purpose, and it works.
A verified human can still post lies. I don't see how knowing that a real person posted something makes it any more or less accurate or truthful.
Even without an AI force multiplier (we already have farms of human content makers for propaganda purposes), we are wading in a digital mess. I don't see that knowing whether a real person made something does anything except make that verification valuable for misuse.
Flipping it on its head: what if a farm of AIs is used to spread fact-checked, "correct" information? Is that devalued because a real person didn't hit the keystrokes?
AI or person, it doesn't matter to me. I still need to engage critical thinking and work under the assumption it's all garbage.