If I'm hosting an IPFS node and I'm accidentally hosting some content I'd rather not host, I should be able to remove that content from my node and let other nodes know: 'hey, this stuff seems illegal/unethical/unwanted'. Other nodes could then configure themselves to automatically listen to mine and remove the tagged content, with parameters like 'at least x people tagged this content' and 'of those people, y should have a trust level of at least z', where a node's trust level is derived from how many others listen to that node. Add blacklist/whitelist behaviour for specific nodes and that should do the trick, but maybe I'm missing something.
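Roughly, as a minimal sketch of that policy (the names, thresholds, and the "whitelisted peer is always trusted" rule are all made up for illustration; none of this is a real IPFS API):

```python
from dataclasses import dataclass

@dataclass
class TagReport:
    node_id: str   # peer that tagged the content
    trust: float   # trust score derived from who listens to that peer

def should_remove(reports, *, min_reports, min_trusted, trust_threshold,
                  blacklist=frozenset(), whitelist=frozenset()):
    """Return True if enough (trusted) peers have tagged the content."""
    # Whitelisted peers are assumed to be trusted unconditionally.
    if any(r.node_id in whitelist for r in reports):
        return True
    # Reports from blacklisted peers are ignored entirely.
    valid = [r for r in reports if r.node_id not in blacklist]
    # "At least x people tagged this content"
    if len(valid) < min_reports:
        return False
    # "...of those people, y should have a trust level of at least z"
    trusted = sum(1 for r in valid if r.trust >= trust_threshold)
    return trusted >= min_trusted
```

A node operator would feed that decision into whatever mechanism unpins or blocks the content locally; the hard part is computing the trust scores, not the threshold check.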
Sure, it works if your starting point is "If I'm hosting an IPFS node." There's a level of baseline tech-savvy that's implied by even knowing what that is.
Understand that most of the general population operates on the level of "Somebody said something on the Internet that offends me; how could this have happened?" And that the maximum amount of effort they're willing to put in to rectify this situation is clicking a button. The realistic amount is that they wish it never happened in the first place. That's the level of user-friendliness needed to run a mass-market consumer service.
You bring up a good point; I was looking at it purely from the technical perspective. The method I describe handles content moderation 'after' it's already available. It seems to me there should also be a system in place for content 'before' it becomes available, to handle the cases you mentioned.
That being said, I don't think such a system is impossible to build in a decentralized way; there are incentive structures you could design to handle this.
This is a naïve and dismissive perspective on the issue.
Moderation is a fundamental requirement of any social system, and it's one of the two things that Web3 can't yet address meaningfully -- the other being addressability.