
In some of my volunteer moderation work, this was a question I tried to answer, so that we could have a justification for censorship beyond "we need to do this, or the forum will continue to be a dumpster fire of hate."

The argument came from a closer look at the "marketplace of ideas" analogy.

There are actors in the market selling bad content: content designed to addict, to poison, or to overturn the fair functioning of the marketplace itself.

In earlier eras, information spread far more slowly, so this was not as pressing a threat.

With social media and virality, it is a clear and present danger to the functioning of the digital meeting places we enjoy.

Therefore, action is needed to prevent these market-perverting behaviors.

<This ignores the class of locutionary acts that cause direct harm, such as hate speech, but the same argument can be made for them, precisely because of the clear harm they cause.>

Nathan Matias at Cornell writes about this, while also helping run citizen-led experiments to figure out what works in content moderation.

He wrote an interesting article that makes a better version of this argument: https://www.theguardian.com/technology/2016/apr/18/a-toxic-w...

Another interesting article: https://citizensandtech.org/2020/02/can-public-infrastructur...
