> The sheer volume of spam or even just angry/abusive users on public platforms these days is hard to describe if you haven't been on the spam/abuse prevention side of a popular website.

I haven't been on one of those teams, so I hope you'll forgive the naivety, but:

These seem like solvable problems with decentralized systems. In both cases, someone has to go through the work of manually identifying the bad content, right? In a centralized system, that's someone working for the system - in a decentralized system, that's a random user.

From a technical perspective, then, the centralized "please delete this content" message is pushed out to the entire system, while the decentralized message/action can be put into a blocklist/banlist that other users can subscribe to. I believe that this is how the Fediverse works, for instance, and it's definitely how adblocker blocklists work - so this kind of system is already in effect, and seems to be working decently.

If the volume of spam is truly extreme, then what's to prevent you from having distributed blocklists that are fed by automated processes (as opposed to manual additions), and users just subscribe to the ones that they trust?
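
To make that concrete, here's a rough sketch of what I mean - all the names (Blocklist, Client, the content ids) are made up for illustration, not any real API:

    # Rough sketch: a client subscribes to several independently maintained
    # blocklists (manual or automated) and filters content locally.
    from dataclasses import dataclass, field

    @dataclass
    class Blocklist:
        name: str                                      # who maintains it
        blocked_ids: set[str] = field(default_factory=set)

        def update(self, new_ids):
            # In practice this would be fetched from the maintainer over the network.
            self.blocked_ids.update(new_ids)

    class Client:
        def __init__(self):
            self.subscriptions: list[Blocklist] = []

        def subscribe(self, blocklist: Blocklist):
            self.subscriptions.append(blocklist)

        def is_blocked(self, content_id: str) -> bool:
            # Content is hidden if any subscribed list flags it.
            return any(content_id in bl.blocked_ids for bl in self.subscriptions)

        def filter_feed(self, content_ids):
            return [cid for cid in content_ids if not self.is_blocked(cid)]

    # The user's only job is picking which curators to trust.
    spam_filter = Blocklist("automated-spam-detector", {"post-123", "post-456"})
    abuse_filter = Blocklist("community-abuse-list", {"post-789"})

    client = Client()
    client.subscribe(spam_filter)
    client.subscribe(abuse_filter)

    print(client.filter_feed(["post-123", "post-789", "post-999"]))  # -> ['post-999']

The user's only decision is which curators to trust, which is exactly the adblock-list model.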

From a social perspective, users seem willing to do this work themselves: look at how driven the users of sites like Reddit are, posting high-effort content for no more reward than a bunch of imaginary internet points, and at how effective community-curated adblock lists are.

To summarize - what prevents a decentralized system from taking the same approaches that a centralized system would employ, packaging them into blocklists, and then allowing users to choose which of those they employ? Same tech, different level of control.




This is basically what killfiles were in the days of Usenet. It worked well for the demographic that was on Usenet at the time (tech savvy and dedicated).

I think the problem is that by and large, users are not willing to do this work themselves. When faced with a social platform that has a lot of jackasses on it, rather than individually curate their experience to remove the jackasses, most of them just leave the platform and find another one where this work is done for them already.

And this is why social networks have abuse teams. If it were totally up to them, they'd rather save themselves the expense, but users have shown that they will leave a platform that doesn't moderate, and so all social platforms are eventually forced to.


If I'm hosting an IPFS node and I'm accidentally hosting some content I'd rather not host, I should be able to remove that content from my node and let other nodes know: 'hey, this stuff seems illegal/unethical/unwanted'. Other nodes could then be configured to automatically listen to mine and remove the tagged content, with parameters like 'at least x people tagged this content' and 'of those people, y must have at least a trust level of z', where the trust level is calculated from how many others listen to that specific node - plus blacklist/whitelist behaviour for specific nodes. That should do the trick, but maybe I'm missing something.
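
Roughly like this - the names and the trust calculation are my own invention to show the shape of the rule, not how IPFS actually works:

    # Minimal sketch of the "remove if enough trusted peers tagged it" rule.
    # Trust is a crude proxy here: how many other nodes listen to a node's tags.

    # Who listens to whom: listeners[node] = set of nodes subscribed to its tags.
    listeners = {
        "nodeA": {"nodeB", "nodeC", "nodeD"},
        "nodeB": {"nodeA", "nodeC"},
        "nodeC": set(),
    }

    def trust(node: str) -> int:
        return len(listeners.get(node, set()))

    def should_remove(taggers: set[str],
                      min_taggers: int = 3,       # "at least x people tagged this"
                      min_trusted: int = 2,       # "of those, y ..."
                      trust_threshold: int = 2,   # "... have at least trust level z"
                      blacklist: frozenset = frozenset(),
                      whitelist: frozenset = frozenset()) -> bool:
        taggers = {t for t in taggers if t not in blacklist}
        if whitelist & taggers:
            return True  # always honour explicitly whitelisted nodes
        trusted = [t for t in taggers if trust(t) >= trust_threshold]
        return len(taggers) >= min_taggers and len(trusted) >= min_trusted

    # Content tagged by three nodes, two of which (nodeA, nodeB) are trusted enough:
    print(should_remove({"nodeA", "nodeB", "nodeC"}))  # -> True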


Sure, it works if your starting point is "If I'm hosting an IPFS node." There's a level of baseline tech-savvy that's implied by even knowing what that is.

Understand that most of the general population operates on the level of "Somebody said something on the Internet that offends me; how could this have happened?" And that the maximum amount of effort they're willing to put in to rectify this situation is clicking a button. The realistic amount is that they wish it never happened in the first place. That's the level of user-friendliness needed to run a mass-market consumer service.


You bring up a good point; I was just looking at it from the technical perspective. The method I describe handles content moderation 'after' it's already available. It seems to me there should also be a system in place for content 'before' it becomes available, so it handles the cases you mentioned.

That being said, I don't think it's impossible to have such a system in a decentralized way; there are incentive structures you could build to handle this.


This is a naïve and dismissive perspective on the issue.

Moderation is a fundamental requirement of any social system, and it's one of the two things that Web3 can't yet address meaningfully -- the other being addressability.


A type of shared killfile might work, kind of like how individuals or groups curate the lists of ad domains used by ad blockers.


> These seem like solvable problems with decentralized systems. In both cases, someone has to go through the work of manually identifying the bad content, right? In a centralized system, that's someone working for the system - in a decentralized system, that's a random user.

It's solvable in the same sense that email spam is solvable. Much like adblockers, spam is "solved" by re-centralizing.

> If the volume of spam is truly extreme, then what's to prevent you from having distributed blocklists that are fed by automated processes (as opposed to manual additions), and users just subscribe to the ones that they trust?

Most users are unequipped to evaluate that and uninterested in putting in a bunch of work to defend themselves against the flaws of the system at hand. Like adblockers or email, most members of the general population want it to be easy, automatic, and require minimal effort beyond clicking the button that gets them going.

Users generally want things to work for them. Investing deeply in protecting themselves because the system's designers didn't consider abuse is rarely near the top of the priority list. People like things that just work, and the further a thing is from that, the more its adoption will struggle.


> what prevents a decentralized system from taking the same approaches that a centralized system would employ, packaging them into blocklists, and then allowing users to choose which of those they employ? Same tech, different level of control.

Because the incentives are misaligned: bad actors have, if anything, more incentive to put in the work than the good actors do.



