Content that’s cryptographically signed by its creator would (hopefully) carry more credence than unsigned, AI-generated fake content purporting to be from someone else, e.g. deepfakes.
Anonymity would not be heavy-handedly prohibited; rather, anonymous content would simply appear less trustworthy than authenticated content. It would be up to the viewer to decide.
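To make the idea concrete, here's a minimal sketch of that sign-and-verify flow using Ed25519 via Python's `cryptography` library. The key names and content are illustrative, and the hard parts (key distribution, binding a public key to a real identity) are deliberately out of scope:

```python
# Sketch: a creator signs content with Ed25519; a viewer verifies it
# against the creator's published public key before trusting attribution.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Creator side: generate a keypair once, publish the public key somewhere
# viewers can find it (the identity-binding problem lives here).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"original photo bytes or article text"
signature = private_key.sign(content)

# Viewer side: verify the signature; failure means "unauthenticated",
# not necessarily "fake" -- it's up to the viewer how much to trust it.
try:
    public_key.verify(signature, content)
    print("Signature valid: content is attributable to the key holder.")
except InvalidSignature:
    print("Signature invalid or missing: treat as unauthenticated.")
```

The cryptography itself is the easy part; the real difficulty is tying the public key to an actual person or organization, which is exactly where the anonymity trade-off above comes in.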
It would be good to have a way of checking whether information came from a verifiable human, but I very much doubt it would make much of a difference in the proliferation of machine-generated fake photos, videos, tweets, etc. It requires both content providers and consumers to care, and at least on the consumer side people seem to believe what they want to believe (e.g. QAnon) even when it's extraordinarily obvious that it isn't true.
Maybe if misinformation gets too far out of hand (there has already been an AI-generated fake video used in a political campaign), verification will become required by law for anything published on the internet.