Yeah, Bill Gates proposed that solution to spam. Another one is having people register with proven credentials of their legal name and physical address.
Yet Hacker News isn't using these.
Because shadow banning, for example, works. Having someone's voice go unheard while they don't know about it (they keep living in their bubble) works.
The lesson here: don't assume that your solution is the only solution.
Usenet, e-mail, and forums have already taught us that the solution to this problem is advanced, silent filtering (which has been refined in the anti-spam arms race [1]), essentially akin to shadow banning. Enforcement of the law also tends to help.
[1] RBLs, Bayesian filtering, and the already mentioned shadow banning
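The Bayesian filtering mentioned in [1] can be sketched in a few lines. The word counts below are made-up training data, not a real corpus; a production filter would train on thousands of labeled messages:

```python
import math
from collections import Counter

# Hypothetical per-word counts from labeled spam and ham messages.
spam_counts = Counter({"free": 20, "winner": 15, "meeting": 1})
ham_counts = Counter({"free": 2, "winner": 1, "meeting": 25})

def spam_probability(message: str) -> float:
    """Naive Bayes spam score: combine per-word log odds with
    Laplace (+1) smoothing so unseen words don't zero things out."""
    log_odds = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts[word] + 1) / (sum(spam_counts.values()) + 2)
        p_ham = (ham_counts[word] + 1) / (sum(ham_counts.values()) + 2)
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))  # squash log odds to [0, 1]
```

A message full of spammy tokens ("free winner") scores high; an ordinary one ("meeting") scores low. The "silent" part is simply that the sender is never told which bucket they landed in.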
Shadow banning doesn't work against any adversary that's more sophisticated than an annoying person on the internet. As you observe, email spam filters are basically just "shadow bans" and spammers aren't confused by them at all, they just send themselves test messages.
Spam filters work for lots of reasons but not because spammers can't figure out if they're caught or not.
Others have pointed out that shadowbanning doesn't really work against sophisticated and coordinated adversaries, and that HN has probably never been attacked by an army. But there's also another problem: someone has to pay for the decision, and for the system that makes sure the decision is made fairly, that moderation powers aren't abused, and so on. HN is subsidized by YC, and it's not very important in the scheme of things, so the injustice of a few bad ban decisions is not the world's most pressing problem. As networks scale up, it becomes much more expensive to moderate them and much more important to get it right. If abusers of a system pay for their abuse, they can also pay for careful and fair enforcement.
If most people's willingness to pay for non-abusive communications media is $0, then... I expect they will continue to get abuse.
Why evil? Or more specifically, why any more evil than any other form of punishment in online communication? If one bans a user outright, as opposed to shadow banning them, the end result is that the user either gives up or just makes another account. In rare instances they reform, but usually it's not reform, it's a change in methodology to avoid whatever condition got them banned in the first place. This isn't rehabilitation, it's learning not to get caught.
In either case, straight-up ban or shadow ban, someone is silenced as a punishment. In either case, the user likely isn't learning their lesson. In either case, the tools are ripe for abuse.
What is the exceptional cost of shadow bans besides them being opaque?
I find it interesting that shadow banning can be evil (or not). Shadow banning is comparable to someone who hears bad words and ignores them, replying only with instinctive filler ("yeah", "hmm", "so?", and so on). Of course, as a moderation measure, shadow banning is a lot broader and more systematic than filler replies, but as long as moderators disclose that shadow banning is in effect (without disclosing who is subject to it), it doesn't seem to be a matter for moral judgement, because any user can always choose to ignore you in exactly that fashion.
I don't think that's a fair comparison; the one choosing to shadow ban another isn't simply choosing to ignore the target, they are choosing to disallow others from hearing what the target has to say, and in a way that does not allow the target an opportunity to change and improve themselves.
> they are choosing to disallow others from hearing what the target has to say
HN allows you to turn on shadowbanned posts.
In the several years I've been here, I've seen maybe 3 or 4 shadow banned posts that were of value. In those cases, other users replied to the shadow banned user and told them to message the mods about getting the shadow ban lifted.
Heavy moderation of the tone of messages works. In a similar vein, the neutral politics sub on reddit is successful because they ban people who are rude.
I think that if we moderated tone more often, a lot of the toxic communities wouldn't even form.
E2E silent ignore does not work in online conversations where multiple people participate ("online group communication"), because it mangles the context: others don't share your ignore list and keep responding to the people you've filtered out. Try it for yourself: join an active IRC channel and ignore the first few people who chat. You won't be able to follow the conversation anymore. Spam filters work on 1:1 communication targeted specifically at you. Shadow bans work on 1:1 communication and on group communication, provided they're centralised or server-side. They're more akin to (SMTP) tarpits [1].
I've been using the IRC method since the early 90s; it works fine. Sure, there's some appearance of discontinuity, but that's a feature, and it allows one to personally gauge whether the choice to exclude was a good one, without having to be exposed to the comments of the one ignored.
Shadowbans aren't akin to SMTP tarpits insofar as the target of force is less likely to be a bot. Behind the silence is a human being who is trying to engage others.
I've been using IRC since the early 90s as well, and it doesn't work "fine" when you ignore active members of a community who get quoted and such.
SMTP tarpits are like shadow bans because the perpetrator isn't informed about the ban/tarpit, and it's meant to slow them down by wasting their time, letting them believe in their bubble that they're getting "work" done.
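The tarpit trick can be sketched in a few lines: drip the SMTP banner out one byte at a time so a connecting spam bot burns minutes per connection where a normal exchange takes milliseconds. The delay value and banner text here are made up for illustration:

```python
import time

def tarpit_reply(banner_lines, delay=10.0):
    """Yield an SMTP reply one character at a time, sleeping before
    each byte. A spam bot holding the connection open wastes its time;
    from its side, everything looks like a (very slow) normal server."""
    for line in banner_lines:
        for ch in line:
            time.sleep(delay)  # stall before each byte
            yield ch
        yield "\r\n"
```

With `delay=0` the generator just reassembles the reply, which is handy for testing; with a real delay, a single "220" greeting can take minutes to arrive.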
If they're active members of the community that are regularly quoted then banning them is unlikely to be the best course of action; I prefer to publicly announce that I'm ignoring someone, and perhaps why, before I do. Banning active and valuable community members should, at least, be public and transparent, lest the community react negatively.
SMTP tarpits are more likely to involve a perpetrator who is operating a bot, and less likely a perpetrator who is personally engaged in the conversation and community.