Mastodon and the challenges of abuse in a federated system (nolanlawson.com)
192 points by dredmorbius on Sept 2, 2018 | 177 comments



I don't think the problem is federation. The problem is media where participants have nothing at stake, so that abusers can't be made to pay the costs of their abuse. Changing this isn't incompatible with anonymity - for example you could require anonymous participants to post bonds in a cryptocurrency. But e-mail spam, for example, would never have been a problem if the recipient of an e-mail could push a button and destroy $1 of the sender's money. Getting the incentives right for a social network is undoubtedly harder, but you can't even start until people have something to lose.


Yeah, Bill Gates proposed that solution to spam. Another one is having people register with proven credentials of their legal name and physical address.

Yet Hacker News isn't using these.

Because shadow banning, for example, works. Silencing someone's voice without their knowledge (they keep living in their bubble) works.

The lesson here: don't assume that your solution is the only solution.

Usenet, e-mail, and forums have already taught us that the solution to this problem is advanced, silent filtering (refined over the course of the anti-spam arms race [1]), essentially akin to shadow banning. The enforcement of law also tends to help.

[1] RBLs, Bayesian filtering, and the already mentioned shadow banning


Shadow banning doesn't work against any adversary that's more sophisticated than an annoying person on the internet. As you observe, email spam filters are basically just "shadow bans" and spammers aren't confused by them at all, they just send themselves test messages.

Spam filters work for lots of reasons but not because spammers can't figure out if they're caught or not.


Others have pointed out that shadowbanning doesn't really work against sophisticated and coordinated adversaries, and that HN has probably never been attacked by an army. But there's also another problem: someone has to pay for the decisions, and for the system that ensures they're made fairly, that moderation powers aren't abused, etc. HN is subsidized by YC, and it's not very important in the scheme of things, so the injustice of a few bad ban decisions is not the world's most pressing problem. As networks scale up, it becomes much more expensive to moderate them and much more important to get it right. If abusers of a system pay for their abuse, they can also pay for careful and fair enforcement.

If most people's willingness to pay for non-abusive communications media is $0, then... I expect they will continue to get abuse.


> Because shadow banning, for example, works. Silencing someone's voice without their knowledge (they keep living in their bubble) works.

Shadow banning is evil and opaque. It may work, but at what cost?


Why evil? Or more specifically, why any more evil than any other form of punishment in online communication? If one bans a user, as opposed to just shadow banning them, the end result is that the user either gives up or just makes another account. In rare instances they reform, but usually it's not a reform, it's a change in methodology to avoid whatever condition got them banned in the first place. This isn't rehabilitation, it's learning not to get caught.

In either case, straight-up ban or shadow ban, someone is silenced as a punishment. In either case, the user likely isn't learning their lesson. In either case, the tools are ripe for abuse.

What is the exceptional cost of shadow bans besides them being opaque?


It's interesting to me that shadow banning can be considered evil (or not). Shadow banning is comparable to someone who hears bad words and ignores them, replying only with instinctive filler ("yeah", "hmm", "so?" and so on). Of course, as a moderation measure shadow banning is a lot broader and more systematic than filler replies, but as long as moderators disclose that shadow banning is in effect (without disclosing who is affected), it doesn't seem to be a subject of moral judgment, because any user can always choose to ignore you in exactly that fashion.


I don't think that's a fair comparison; the one choosing to shadow ban another isn't simply choosing to ignore the target, they are choosing to disallow others from hearing what the target has to say, and in a way that does not allow the target an opportunity to change and improve themselves.


> they are choosing to disallow others from hearing what the target has to say

HN allows you to turn on shadowbanned posts.

In the several years I've been here, I've seen maybe 3 or 4 shadow banned posts that were of value. In those cases, other users replied to the shadow banned user and told them to message the mods about getting the shadow ban lifted.

Heavy moderation of the tone of messages works. In a similar vein, the neutral politics sub on Reddit is successful because they ban people who are rude.

I think that if we moderated tone more often, a lot of the toxic communities wouldn't even form.


E2E silent ignore does not work in online conversations where multiple people participate ("online group communication") because it mangles the context: the others still see, and respond to, what you've filtered out. Try it for yourself: join an active IRC channel and ignore the first few people who chat. You won't be able to follow the conversation anymore. Spam filters work on 1:1 communication targeted specifically at you. Shadow bans work on 1:1 communication, and on group communication if they're centralised or server-side. It's more akin to (SMTP) tarpits [1].

[1] https://en.wikipedia.org/wiki/Tarpit_(networking)


I've been using the IRC method since the early 90s; it works fine. Sure, there's some appearance of discontinuity, but that's a feature, and it allows one to personally gauge whether the choice to exclude was a good one, without having to be exposed to the comments of the one ignored.

Shadowbans aren't akin to SMTP tarpits insofar as the target of force is less likely to be a bot. Behind the silence is a human being who is trying to engage others.


I've been using IRC since the early 90s as well, and it doesn't work "fine" when you ignore active members of a community who get quoted and such.

SMTP tarpits are like shadow bans because the perpetrator isn't informed about the ban/tarpit, and it's meant to slow them down by wasting their time while, in their bubble, they believe they're getting "work" done.


If they're active members of the community that are regularly quoted then banning them is unlikely to be the best course of action; I prefer to publicly announce that I'm ignoring someone, and perhaps why, before I do. Banning active and valuable community members should, at least, be public and transparent, lest the community react negatively.

SMTP tarpits are more likely to involve a perpetrator who is operating a bot, and less likely a perpetrator who is personally engaged in the conversation and community.


Every solution needs to be robust in the presence of an adversary who is more technically savvy, motivated, organised, numerous and dedicated than an average user. That's why it's really hard. Any tool you give an individual to protect themselves can be turned against them by an army.


That's why white/blacklisting and similar measures will never work. What you need is proper delegation with an incrementally built web of trust.

Instead of your email address being a human-readable public address, it should be a revocable, cryptographically unguessable address that's unique to each person to whom you're introduced. Introductions happen via a protocol which generates a new address given to that person, and you then assign a purely local pet name you use to refer to them [1].

Given any generated address, you can trace back the history of introductions that led to it.

This very clearly solves spam problems since it completely upends the traditional approach with a bottom-up solution. I'm not sure anything else could be immune to abuse to the same extent though.

[1] https://en.wikipedia.org/wiki/Petname
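For illustration, a minimal sketch of how such per-introduction addresses could be minted and revoked (Python; all names are hypothetical, and a real protocol would also need to carry the introduction chain between nodes):

    import base64, hashlib, hmac, secrets

    class Mailbox:
        def __init__(self):
            self.secret = secrets.token_bytes(32)  # never leaves this node
            self.petnames = {}   # address -> (local petname, who introduced them)
            self.revoked = set()

        def introduce(self, petname, introduced_by=None):
            # Mint a fresh, unguessable address for this one contact. The HMAC
            # tag lets this node recognise its own addresses; the nonce makes
            # each address unique.
            nonce = secrets.token_bytes(16)
            tag = hmac.new(self.secret, nonce, hashlib.sha256).digest()[:16]
            addr = base64.urlsafe_b64encode(nonce + tag).decode().rstrip("=")
            self.petnames[addr] = (petname, introduced_by)  # introduction history
            return addr

        def accept(self, addr):
            return addr in self.petnames and addr not in self.revoked

        def revoke(self, addr):
            self.revoked.add(addr)  # burns one relationship, not the mailbox

Spam arriving on one address then identifies (via the stored introduction history) who leaked it, and revoking that single address costs the spammer everything they harvested.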


> I'm not sure anything else could be immune to abuse to the same extent though.

There are multiple solutions that work if you design the system that way from the outset (which is the same issue yours has). One was suggested above. You can have a human-readable public address but the first time a message is sent the sender has to commit $1 (or however much is necessary as a deterrent). If the recipient approves the message you get it back right away, if they ignore it you get it back in 30 days, if they mark you as spam you lose the money (and you know this and know that sending them more messages will be expensive).

Then if you send a message (including a reply) to someone, it automatically (but revocably) whitelists them, and whitelisted peers don't have to commit money to send to you.

This also works for mailing lists, because then you subscribe by sending a request message, which whitelists the mailing list, and then they don't have to hold a huge reserve as long as they only send messages to people who actually asked for them.

But none of that works for email because it wasn't implemented that way originally and it's hard to change it now.
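A rough sketch of those mechanics (Python; the Escrow class is a stand-in for whatever payment rail would actually hold the money, so everything here is hypothetical):

    import time

    BOND = 1.00          # the $1 deterrent from above
    HOLD = 30 * 86400    # 30 days, in seconds

    class Escrow:
        def __init__(self):
            self.held = {}
        def hold(self, sender, amount):
            self.held[sender] = self.held.get(sender, 0) + amount
        def refund(self, sender):
            self.held.pop(sender, None)   # money returns to the sender
        def forfeit(self, sender):
            self.held.pop(sender, None)   # sender loses the money

    class Inbox:
        def __init__(self, escrow):
            self.escrow = escrow
            self.whitelist = set()
            self.pending = {}             # sender -> time the bond was posted

        def receive(self, sender):
            if sender not in self.whitelist:
                self.escrow.hold(sender, BOND)     # first contact: commit $1
                self.pending[sender] = time.time()

        def approve(self, sender):
            self.escrow.refund(sender)             # refunded right away
            self.whitelist.add(sender)             # future messages are free
            self.pending.pop(sender, None)

        def mark_spam(self, sender):
            self.escrow.forfeit(sender)            # sender loses the bond
            self.pending.pop(sender, None)

        def sweep(self):
            # Messages simply ignored: the bond quietly returns after 30 days.
            for sender, t in list(self.pending.items()):
                if time.time() - t > HOLD:
                    self.escrow.refund(sender)
                    del self.pending[sender]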


> You can have a human-readable public address but the first time a message is sent the sender has to commit $1 (or however much is necessary as a deterrent).

Requiring payments destroys the anonymity though, that's why I don't like it.

> But none of that works for email because it wasn't implemented that way originally and it's hard to change it now.

Actually, my system can be adapted to email but you have to give up typing in email addresses manually. Most people don't do that anyway, so no big loss.


> Requiring payments destroys the anonymity though, that's why I don't like it.

Isn't that what all of this Bitcoin and Ethereum is supposed to be good for?

> Actually, my system can be adapted to email but you have to give up typing in email addresses manually. Most people don't do that anyway, so no big loss.

Don't they? People put email addresses on business cards and billboards and things like that. That's where you would need people to upgrade -- the fix would be to use QR codes but then you need everybody to support that (and understand what to do with it).

Of course, people who have their own domains have long been doing something similar but with human-readable names. You have example.com, so you can create service1@example.com, service2@example.com: a hundred names for a hundred services.

It would also be possible to do this with subdomains. So if you were you@gmail.com, Gmail could theoretically let you create aliases like you@foocorp.gmail.com and use that for your business with Foo Corp.

They do let you do something similar by ignoring dots in the address, so john.smith@gmail.com is automatically an alias for johnsmith@gmail.com, but if you use dots in different places for different activities, then when one of them starts getting spammed you discontinue it and add a rule to trash everything sent to that alias. The inconvenience there is that it's harder to remember what you used each variation for (and which ones you've already used) when it's time to rotate.
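The per-service alias scheme is simple enough to sketch (Python; example.com and the routing rules are purely illustrative):

    aliases = {}      # alias -> service it was handed out to
    retired = set()   # aliases that started attracting spam

    def alias_for(service, domain="example.com"):
        addr = f"{service}@{domain}"   # one name per service
        aliases[addr] = service
        return addr

    def route(to_addr):
        if to_addr in retired:
            return "trash"                         # leaked alias, auto-binned
        if to_addr in aliases:
            return f"inbox ({aliases[to_addr]})"   # spam here names the seller
        return "reject"                            # never minted this address

    def retire(to_addr):
        retired.add(to_addr)   # rotate away an alias once it leaks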


I think distributing the work of verifying new accounts as non-spammers across all existing users is the only way a social network can grow without losing to the spammers or overloading their human moderators, so I like your idea.

But how do you solve the problem of people wanting to join who have no existing contacts in the network to invite them? Most likely they would use some other channel to request an invitation from a stranger and hope that they'll grant them access.

If that happens, then it dilutes the power of the web of trust. A spammer could manually create a bunch of accounts in the same way, and then use them to grow botnets at the edges of the web of trust. If only a few bots are used to spread spam, it would take a while to accurately determine which part of the network is controlled by the spammer.


But spamming has no power in this system. You can collect a huge list of addresses, but as soon as you abuse them once they would be revoked. One-time spamming isn't a viable business strategy.

The bootstrap problem is indeed challenging with this approach. Normally you could connect with people you know in real life using an NFC or QR code exchange, but social networks have created an expectation of being able to find people by searching an open directory. That's a spammer's dream. As soon as you can send a friend request, you've just reintroduced open addressing.

I haven't spent much time thinking about this since I rarely use social media, but my first thought is that you voluntarily join some groups and find people that way, e.g. high school groups, etc. Perhaps friend recommendations, by randomly asking existing members, could be used to verify the validity of newcomers.


The quantity of physical spam I get through my letterbox suggests charging doesn't cure spam -- although I admit it reduces it.


Metafilter charges a one-time $5 fee to post. It also has extensive human moderation. It’s worked well for them but they’ve never tried to scale up.


Yeah, that's certainly one way to improve the signal/noise ratio of a social networking site, but it simply can't scale to the size of say, Facebook or Twitter. Human moderation is simply too slow and expensive to be done on an 'industrial' scale, and charging money will always significantly limit a site or service's audience.

Then again, perhaps the unfortunate truth is that something on the scale of today's mainstream social networks is simply unworkable. Perhaps at a certain size a community site simply falls apart, and nothing short of human level AI will be able to keep it intact.


By some informal measures, Metafilter has stupidly high S/N ratios.



Immunity, impunity, and ignorance are superpowers enabling some good, and far more harm.

Costs do not have to entail financial transactions, though that's one option. Devising a hard-to-establish reputation, which can be lost on signs of clear anti-social behaviour, is one approach. Accounting for the inevitability of Sybil attacks is a challenge.


It's not so much that the problem is federation as that the solution is quite clearly not federation alone.

Tools, policies, values, coordination & cooperation, and the systems to create, specify, and use these, are necessary.


I was originally thinking that putting this in monetary terms would give the rich an inordinate advantage in free speech. Then I realized that the larger their speech's reach, maybe the larger their financial penalty (taking the example of spam). But then in that case, it still simply becomes a case of who can afford it. It's like defending against a DDoS attack: it depends on who has the bigger pipe.

Agree that getting the mechanics right for social networks would be hard. Not sure if financial mechanisms are the answer (not saying you're saying that). It puts a price on speech. There has to be a better way.


I'm not sure how that makes sense. Wouldn't that just mean that trolls could attack targets financially in addition to silencing them?


Presumably it would still require manual human moderation to ensure that a slew of reports isn't a coordinated attack.

But also, presumably fewer trolls would exist if you had to put up a "$x" bond before signing up in the first place.


So fewer trolls, but the ones who remain will be trolling the system and bankrupting people. Perfect.


Is my spam filter allowed to push that button?


This was the first test of federation's claim that bad actors can be dealt with by instance-banning. They failed the test, and without this key innovation performing to spec, I no longer have faith that Mastodon is any more than a technically complicated open-source Twitter.

IMO once the account was verified as Wheaton's the entire federation should have blacklisted every instance harboring a false accuser until they demonstrated they can be a functioning member of the fediverse.

Of course, there are better ideas in this thread such as removing the social proofing mechanisms which fuel tweet/toot abuse in the first place. But if we're just speaking in the nouns and verbs that are part of Mastodon's sales pitch they had a tool to address this and demonstrated they can't wield it.


> This was the first test of federation's claim that bad actors can be dealt with by instance-banning. They failed the test, and without this key innovation performing to spec, I no longer have faith that Mastodon is any more than a technically complicated open-source Twitter.

It should have been clear to anyone applying basic analysis from the very beginning that decentralization enables abusers. The parallels to, and example provided by, email are simply too strong.

Decentralized moderation, where a group of essentially unconnected moderators works independently on a problem, has no ability to coordinate or shift load. Centralized moderation systems have both of these abilities, as well as, often, financial incentives to keep working.

The article hits it right on one point: this is a basic structural problem.

What the article misses is that it assumes there are good solutions while remaining decentralized. There's a reason the approach email settled on was, effectively, centralization.


How is email 'effectively' centralized? There are tons of providers; you can even host it yourself. The only thing that might be somewhat centralized are spam filtering algorithms that are shared, but many aren't.


> How is email 'effectively' centralized?

By having big providers turn up the spam detection for small actors. It's currently common to have a perfectly set up mail server, with verification and reputation and longevity… and still have Gmail flag the emails that come from it as spam.

Also, forget about sending email from home. Email sent from residential IPs is instantly deleted, and not even sent to the spam folder, by big providers (Hotmail makes it an explicit policy, and bounces the email right back). You have to at least relay the damn mail through a non-residential IP.
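The usual workaround is to submit through an authenticated relay on port 587 instead of talking to the recipient's MX directly; a sketch with Python's standard library (host and credentials are placeholders):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "you@example.net"
    msg["Subject"] = "Hello"
    msg.set_content("Sent from a home server, relayed via a non-residential IP.")

    # Authenticated submission to a relay with a clean, non-residential IP;
    # the relay then delivers to the recipient's MX on our behalf.
    with smtplib.SMTP("smtp.example.com", 587) as relay:
        relay.starttls()
        relay.login("me@example.com", "app-password")
        relay.send_message(msg)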


I once spent a week on this problem spinning up a QA server; our application had an email function for sending receipts. Our company used GApps, and the QA team was complaining about not getting the generated email receipts. I pored over every configuration and every key file; mail still failed from the server, citing spam/verification problems on the GApps side.

A week of troubleshooting and we discovered GApps didn't like emails coming from Linode IP addresses.

A sprint and a half later we had rewritten our email functions to use SendGrid. An extra expense, extra dev time burned. Because we had done everything right, but upstream it still wasn't good enough.


> Also, forget about sending email from home. Email sent from residential IPs is instantly deleted....

Sorry, but this hasn't been my experience at all. I see this claim all the time, and, obviously, ymmv, but I've been using residential IPs for ~10 years with no issues.


Noted. Of course, the provider must know the IP is residential in the first place. From https://help.yahoo.com/kb/SLN26154.html

> 553 5.7.1 [BL21] Connections not accepted from IP addresses on Spamhaus PBL

> Dynamic or residential IP addresses as determined by Spamhaus PBL (Error: 553 5.7.1 [BL21])

> The Spamhaus PBL is a DNSBL database of end-user IP address ranges which should not be delivering unauthenticated SMTP email to any Internet mail server except those provided for specifically by an ISP for that customer's use. The PBL helps networks enforce their Acceptable Use Policy for dynamic and non-MTA customer IP ranges.

---

I guess you're lucky enough that your IP address has not been tagged as "residential" by Spamhaus.
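Easy enough to check, by the way: a DNSBL lookup is just a DNS query with the IP's octets reversed (a Python sketch, standard library only; any answer means "listed"):

    import socket

    def listed_on_pbl(ip, zone="pbl.spamhaus.org"):
        # DNSBL convention: 1.2.3.4 is looked up as 4.3.2.1.<zone>.
        name = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(name)   # got an A record back: listed
            return True
        except socket.gaierror:          # NXDOMAIN: not listed
            return False

    print(listed_on_pbl("127.0.0.2"))    # Spamhaus's documented test entry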


This suggests that the authorization/reputation system needs to be consolidated (vs. centralized) but that the services could still be decentralized.

The world of commerce has a variety of mechanisms to ensure credibility and accountability (signed contracts, security deposits, identification systems, and so on). Extended validation certificates are one manifestation of this in the digital world.

A decentralized service that relied on a consolidated reputation system might be able to provide a framework for managing bad actors.


I'm curious how you distinguish a consolidated reputation system from a centralized one. I would expect that a consolidated system, in which one can get the same answer to the same reputation query from multiple places and perhaps multiple parties, to effectively be a centralized, distributed system. It seems to me like a distinction without difference - what am I missing?

Once you have a strong centralized reputation system, you are pretty close to having re-centralized your decentralized system once again. Especially in a context where a relatively small number of systems send and receive most of the email to begin with.


By consolidated I meant a small number of authorities vs a single authority.


You're completely correct! There are lots of email providers, and you can run a server yourself. Having previously done so myself, it's possible I might be keenly aware of this fact.

I found the fine article we are discussing to contain a great link explaining emergent centralization, including of email. Perhaps you had a different experience.


Has such a famous person ever come onto Mastodon? Maybe it just needs to be battle tested. Realistically if Twitter designed their system to handle abuse from start to finish before the first famous person showed up, what are the chances it would have scaled?


They hadn't even designed their system to handle traffic from start to finish before the first famous person showed up:

https://theoutline.com/post/4147/in-twitters-early-days-only...


It's become clear, as some, myself included, have been saying for well over a year, that federation and decentralisation is no silver bullet, and will not by itself prevent abuse. It does in fact facilitate certain types.

This is not a specific failing of Mastodon, however, and has affected other platforms as well. Many aspects of wilw's experience on Mastodon resemble my own on Imzy two years ago.

There's no substitute for effective administration of instances.

User-level blocking tools are not, of and by themselves, sufficient.

Admin tools are insufficient. This is a fairly frequent occurrence, and on virtually every social media platform I've seen, including staff-level access to several, this has been the case.

The early stage of system development is typically about user features. Then performance issues start to hit. Then abuse.

Mastodon has been encountering the performance and abuse problems, and no, it hasn't handled the first encounter particularly well. This doesn't mean that the system as a whole has an invalid model. There are numerous people, many with years and decades of online experience, taking part in that conversation.

Throwing in the towel at this point is highly premature.


Right, because Twitter had everything, from load to abuse figured out right away.

There's already a solution under discussion: block incoming reports for a user (with a time limit), or block reports from a user (with a time limit). It was just something that wasn't expected. Next time, there will be better tools to deal with it.


> They failed the test, and without this key innovation performing to spec, I no longer have faith that Mastodon is any more than a technically complicated open-source Twitter.

Maybe instead the question you should be asking is: who has the incentive to undertake such a resource-intensive attack? This seemed more like a response targeted at Wil for promoting his logic for leaving Twitter, aimed at him directly after he reasoned why the good people of the world need to quit Twitter.

Making new users feel unwelcome on Mastodon is, unfortunately, the easiest path for them to fend off the competitor. It would not surprise me in the least to discover this attack was, in fact, coordinated by some entity with strong ties to Twitter. Of course the attackers want it to look like what instead happened was a "famous person verification" problem within the Fediverse; bad and negative press about Mastodon is _exactly_ what the attackers wanted here. And, unfortunately, it worked. There is always a financial motive.


Do you have anything to support that speculation? We have terrible people on all social media platforms. The simplest explanation is that there are genuinely terrible people on Mastodon too.


They do: the reasoning they already put forth.

On the other hand your explanation is no explanation at all. No possible motivation. Just "people are bad everywhere m'kay".


They did show their theory. They didn't really show supporting facts. If you claim "it's caused by something different than we see everywhere else" you really need more than "it's true, because money".


> In mastodon.cloud’s case, it appears the moderator got 60 reports overnight, which was so much trouble that they decided to evict Wil Wheaton from the instance rather than deal with the deluge.

In my opinion, what's missing are tools for instance operators to ignore problematic accounts. Unless there's huge churn, 60 reports per day would quickly become just a few. And this would also help hugely with the spam issue. And instance operators could share blocklists, just as for email spam.

Given all I've read so far about Mastodon, the only viable solution seems to be running single-user instances. That way, all blocking can only be user-specific. Instance operators can't (obviously) be forced to delete users' accounts. And if that's currently too technical for most users, then it needs to be streamlined and simplified.

Edit: spelling


My analysis of abuse in social media: it's an unavoidable side-product of buzz mechanics. Buzz exists in social media because the ad-based companies that invented it needed a slot-machine-like mechanism to keep users addicted (this has already been discussed here recently). As such, for me the roots of both buzz and abuse are features like the shared global news feed and real-time trending topics (or, in general, network-wide search ranked by some popularity-like criterion).

Abuse mostly works in medium to large groups, and it crucially depends on dehumanization of the target. Afaik a classical way to defuse abuse IRL is to organize small isolated confrontations, exactly the opposite of giving an epic moral speech in front of a crowd, because one cannot have reasonable and logical arguments in a big group. So in the end, a global search ranked by recent popularity is exactly the opposite of what is needed for peacemaking: any heated argument will attract more and more bystanders, and there is no way to reverse the momentum.

So what's the solution? My guess is that only whitelisting makes sense and with a granularity lower than instances. One could also provide tools for instance moderators to make a particular thread private to isolate it. We should also not provide any popularity-based ranking, leaving that to specific structures dedicated to it, with their own rules and a well-defined editorial policy (like Hacker News or others). Of course we cannot just ask people to stop ranking things by popularity; we have to actually make it impossible. One way to do that is to make links truly directed: retooting something should not inform the target, making the retoot count a hard edge-reversal problem (which would require crawling the full network). We could make crawling even harder with light rate limiting or proof of work in the query protocol (e.g. just making the query expensive for the client).
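To illustrate that last idea, a hashcash-style proof-of-work sketch (Python; the 20-bit difficulty is an arbitrary illustration). The server hands the client a challenge, the client burns roughly 2^20 hashes finding a nonce, and verification stays a single hash:

    import hashlib
    from itertools import count

    def solve(challenge: bytes, bits: int = 20) -> int:
        # Client side: search for a nonce whose SHA-256 (prefixed by the
        # challenge) falls below the target. Expected work: ~2**bits hashes.
        target = 1 << (256 - bits)
        for nonce in count():
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(challenge: bytes, nonce: int, bits: int = 20) -> bool:
        # Server side: one hash per query, so honest use stays cheap while
        # bulk crawling (e.g. to reverse retoot edges) becomes expensive.
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - bits))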

Does this sound sensible? edit: actually i somewhat just expanded on this comment below: https://news.ycombinator.com/item?id=17895159


Yes but I think there are two problems with your ideas, at least when it comes to Twitter:

> organize small isolated confrontations, exactly the opposite of giving an epic moral speech in front of a crowd, because one cannot have reasonable and logical arguments in a big group.

That won't work. Open any Trump tweet and click on any person, whether they're for or against the president. You will see a never-ending stream of tweets back and forth, one party trying to convince the other that Trump cheated on his wife, while the other tries to convince them that Clinton had her server hacked. Eventually such a stream boils down to a competition of who can be more aggressive, rude, and provocative. Many such conversations end with Hitler being injected, but of course neither party will ever see the other side. I literally monitored 3 such streams of women wasting 10 days of their lives going after each other at a frequency of one tweet per 45 minutes.

Your idea to break it down and sort of "mediate" between two people who honestly are not interested in even hearing what the other side has to say, but are only engaged to vent and blindly cheer whatever Kool-Aid they were sold long ago (i.e. that Trump is an incredible president, not a con man, or that Clinton is an amazing woman, not a crook), won't work.

Second and finally:

> My guess is that only whitelisting makes sense and with a granularity lower than instances.

Twitter signed its own death sentence when it went public. It's a tectonic shift within your organization when you move from serving users for the good of the whole system to serving stockholders who only spend 15 minutes with you, once a year at the end of the financial period, and will only look at one-page accounting statements of profit and loss.

While Twitter users and even owners may want the most happy, prosperous, and healthy environment, in which everyone thrives and enjoys spending their evening reading and responding to tweets, the truth is the only people Twitter answers to are stockholders who only want to squeeze as many ads into your timeline as possible. I'll say it even more strongly: if Twitter starts fixing its feeds and doing further account purges, it will get sued by stockholders, because obviously fewer users on the platform equals less ad engagement. And it's irrelevant whether you engaged with trollbots or real, genuine people; it only matters that you saw (and hopefully clicked on) ads.

Does this sound about right?


Yeah, not sure. For the first point, I actually don't really care: e.g. the action I would like to take is simply to ignore the whole thread for myself (and thus, in a sensible distributed system where people might make their policy depend on mine because they trust me, it would also lower the rank of the thread and in the end isolate it in some limbo). And after some time, one can hope that the long-term effect of discouraging abuse, shitstorms, and humor/meme buzz will educate users and let "interesting" uses take over (e.g. these people will either leave this particular network or stop participating in flame wars because there are fewer of them and there is no social reward on the network).

And for the second point, I completely agree with you: there is no way that Twitter does the right thing. But I was mainly talking about what a hypothetical well-run, probably federated or even fully distributed social network could do (possibly Mastodon, but not restricted to it).


> And after some time, one can hope that [...] because there are fewer of them and there is no social reward on the network).

That's wishful thinking! One could have said the same about color TV when the first comedy serial aired: that eventually people would stop wasting their time watching silly shows (granted, you can't argue with a TV set) and move on to being more productive, etc.

On the contrary, I don't think we've seen the mountaintop just yet; I still meet people who don't know what Twitter is. Their path to being daily engaged and dragged into trollbots' wars is probably a few years down the road.


Eh, I've met a few people who had only heard of the name and wondered what it was all about. When I showed them how to sign up and look for hashtags, they quickly gave up between the flame wars and the bad UI.

I don't think Twitter is going to reach anywhere near the ubiquity of TV or movies for entertainment. Most TV shows and movies don't have a good chance of making you feel angry, frustrated, or depressed after watching them.


I have trouble seeing how decentralized services like Mastodon really improve on Twitter.

With regards to harassment, Twitter suffers from the moderator-to-user ratio pointed out in the article, but ultimately decentralization must deal with whack-a-mole on bad-faith instances (plus all the other issues described).

I fear that this sort of system also has the potential to drive people even further into social echo chambers where terrible ideas and behaviors can flourish in harmful ways.


'echo chamber' should be nominated for some sort of worst-buzzword of the century award.

There's nothing wrong with people organising and debating issues among like minded peers. The dysfunction of twitter isn't rooted in 'echo chambers', but in the lack of grouping, which results in a Hobbesian war of all against all.

The Facebook marketing idea of a global digital village without borders is a nice vision for a corporate brochure, but no way to actually organise human communities.

Alex Pentland did interesting work on social networks, and ironically enough it was precisely hyper-permissive and hyper-connected networks that produced bad outcomes and echoing, because the opportunity to copy and descend into permanent agitation is largest in systems that have no barriers or idiosyncrasies. A barrier-free system is a group-thinking system by design, because it works too fast.

It's easy to see why this is profitable for the people who run the platforms, but it is of questionable social value.


>There's nothing wrong with people organising and debating issues among like minded peers.

That's half - and the less important half - of what an echo chamber is. That by itself is fine. The problem is when it's joined with the other half: echoing so loudly that any dissenting views can't be expressed and are drowned out. The group only hears what it wants to hear, never having its ideas challenged. No matter how terrible the idea is, they only tolerate other people who support it. It isn't just discussing things with like-minded peers; it's actively silencing anyone who doesn't agree and not having any debate on the subject.

HN is an opt-out echo chamber. Dissenting views are flagged or greyed off the page for nobody to read, except for the people who turn ShowDead "ON" and opt out of the echo chamber. In return for seeing dissenting opinions they see a lot more spam, a lot more vile posts, a lot more noise. In many ways people would say that makes the experience worse, but I wouldn't have it any other way. If HN didn't have a ShowDead option I wouldn't continue to be here.


Yeah, there's a big difference between telling people to stop talking about the Xbox in a Dreamcast forum and banning anyone who doesn't think that Jet Grind Radio is the best Dreamcast game.

One is proper curation of the discussion forum. The other is building an echo chamber.


The issue is that the latter is also a strawman, because that's not exactly common. That's also the problem with the initial response. Hacker News, by this detrimental definition of 'echo chamber', is not one. People debate diverse viewpoints here; the only thing demanded of them is that they do so in a civil manner and in good faith.

That's not an echo-chamber in the bad sense of the term, that's the basis for productive discussion.


It was a ridiculous example to show that you can have a curated discussion forum that doesn't lead to an echo chamber. I took it to an absurd length to make it quite obvious that it was what it was and chose a topic light enough so that there's no need to actually debate the merit of the issue raised by the example. Basically whether or not Jet Grind Radio is the best Dreamcast game is a dumb point to argue, but we can agree that a forum that enforced that opinion would just create an echo chamber of people who really like that game. And that that's bad curation.

Essentially, agreeing with the guy I responded to that an echo chamber requires silencing dissenting opinions without any hope of discussion.

I wasn't passing judgment on whether or not HackerNews is or is not an echo chamber.


Your response being downvoted, and thus less likely to be read - and with enough downvotes incapable of being read by those who haven't opted to read "Dead" comments - is a power given to the community to self-moderate. That power is also used to drown out opinions (i.e. creating an echo chamber). Your response was civil, made in good faith, and had no reason that I can tell to be downvoted. If downvotes couldn't silence people, or prevent their opinions from being read, you would be correct. But because they can, they're a tool people can use to form an echo chamber.

If you browse through heated topics where the HN demographic skews towards one side of the argument, you can see the echo chamber at work simply by turning ShowDead to "On" and reading every comment on the thread. If you browse with ShowDead "Off", you might not be aware of how frequently replies are silenced for no reason other than not agreeing with the echo chamber.


I never used social media, both because of privacy concerns and because I'm something of a veteran, having started with Fidonet and continued later with Usenet.

These problems are nothing new; they already existed back then. They were only less severe because not many people were using those services, and the broad audience knew nothing about them.

Having already experienced such things on a small level, I was too sceptical to use social media, and I was right.

But I started using Mastodon about a year ago, assuming that a distributed, censorship resistant network would do something better.

Boy, was I wrong. After the Wil Wheaton incident, I deleted all of my Mastodon accounts, and this will surely be my last experiment with social media.

I have a Threema account that only my friends and relatives know about, and I have my own blogs; that'll be enough.

I survived without social media before, and I'll continue to do so.


Don’t you consider Hacker News to be social media? All the links are sourced and ranked from the community, plus all the comments.


It sort of is, but it benefits a lot from the (virtually) unlimited number of characters you can use. It's possible to express complex and nuanced points on HN, which is impossible on Twitter by the nature of their decision to cap everyone to tiny messages.

The Twitter length limit is a great growth hack, because you ensure nobody can excel, so everyone feels they can tweet just as well as anyone else. But it is atrocious if your goal is to promote debate and intelligent discussion. It's a medium explicitly designed to prevent any kind of complex conversation - big surprise how it turns out.

HN has its own set of problems of course: like most discussion forum software it confuses "I disagree with your view" with "I disagree with how you expressed your view", so people routinely downvote well reasoned and polite posts that simply cause them cognitive dissonance or that they find inconveniently true. It'd be better to explicitly separate the two, but I never saw any forums that did so.

But it's at least got the basics right: the screen space is devoted to high density text.


Email spam has not been a problem since we figured out Bayesian filtering. I understand why Twitter and Facebook don't offer that as a tool to their users, because we'd all filter out the advertising that they live off of. Why couldn't Mastodon offer it? Wil Wheaton opens a Mastodon account, messages he doesn't want to see start flooding in, he clicks "this is spam" on each one, and after a day or two he no longer sees the messages he doesn't want to see. The trolls, starved of attention, move on. Am I missing something?
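For what it's worth, the core of such a per-user filter is tiny; a toy naive-Bayes sketch (Python; a real filter would add proper tokenization, header features, rate limits, etc.):

    import math
    from collections import Counter

    spam, ham = Counter(), Counter()   # word counts per class
    n_spam = n_ham = 0                 # messages seen per class

    def train(text, is_spam):
        # Called each time the user clicks "this is spam" (or "not spam").
        global n_spam, n_ham
        words = text.lower().split()
        if is_spam:
            spam.update(words); n_spam += 1
        else:
            ham.update(words); n_ham += 1

    def spam_probability(text):
        # Naive Bayes in log space with add-one smoothing.
        vocab = len(set(spam) | set(ham)) or 1
        log_s = math.log(n_spam or 1)
        log_h = math.log(n_ham or 1)
        for w in text.lower().split():
            log_s += math.log((spam[w] + 1) / (sum(spam.values()) + vocab))
            log_h += math.log((ham[w] + 1) / (sum(ham.values()) + vocab))
        return 1 / (1 + math.exp(log_h - log_s))   # P(spam | text)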


> Email spam has not been a problem since we figured out Bayesian filtering.

My understanding is, this is not because Bayesian filtering solved the spam problem forever. It's because Bayesian filtering solved the spam problem for long enough for Gmail to dominate email.

Google had a ton of data and they were already using it to recommend ads, so they were well-positioned to use it to make the best spam filter. Unlike the things that came before, you could rely on Gmail's filtering. You didn't have to sort your spam folder ever. This was a clear advantage bringing even more people to Gmail.

Now the thing discouraging email spammers isn't any given filter, it's the fact that Google owns email and the cost of messing with Google is too high.

(I am not saying that this, in particular, makes Google evil. Another way to look at it is, Google saved email as a usable medium, by centralizing it.)


> Unlike the things that came before, you could rely on Gmail's filtering. You didn't have to sort your spam folder ever.

Was this ever true? It's certainly not true now. Gmail correctly filters almost all spam I receive. And, it also incorrectly filters a healthy amount of legitimate mail, so you still have to check your spam folder. It will even filter mail from Google.


I found that running SpamAssassin is way more reliable than Google's filter; although it missed some spam from time to time, I don't think I ever had false positives. Meanwhile, I can attest that the few times I tried to use Gmail for receiving something "important" (e.g. an invoice for an online service/shop I just signed up for), it went straight to spam. Maybe because there are so many spam invoices it just can't tell the real ones apart anymore.


I check my spam folder out of curiosity once in a while, and there is nothing in there I would miss. Like some of it is "legitimate" but it's still just bulk drivel, nothing that obligates me in particular to read it.

Right now, I see mail from Google in my filter. Because it's automated dumb shit where Google Photos wants me to look at my photos from a year ago, and I already told Gmail that was spam, and it believed me.

If I miss an e-mail because it was caught in my spam filter, that's the sender's problem, as opposed to pre-Gmail, when it was my problem.

It seems to me that I haven't _had_ to re-train my spam filter by rescuing important ham messages from the spam folder in a decade. Perhaps your experience is different.


> Perhaps your experience is different.

Very much so. For example, earlier this year I sent email to a company inquiring about an opening. Their response went straight to spam.


I rely on Thunderbird's filtering and it catches almost all spam mails that I get.


I have my own domain, and can therefore generate as many email addresses as I want.

In fact, I use a separate (non-existent) email address for every individual or business I communicate with, and their mails all end up in my administrator account.

When I receive spam, I can easily recognize who sold my email address, and ban them forever.

That's very effective.


How do you handle people who've had their systems hacked and address books stolen though? Is it just a blanket FU?


One way to see this could be that anyone getting hacked without disclosing it properly (intentionally or due to not having realized it) is arguably worse than those selling off personal information, so good riddance.

In the other case, just create a new address.


Google does not have a very good spam filter; it's pretty average.


In the comment you're replying to, I said that the quality of any particular spam filter used to matter, and doesn't anymore.

In 2006, Google definitely had the best spam filter. Now, instead, it has the harshest consequences for being caught spamming.


Partly I think no one has gotten around to it and partly there are quite a few "free speech at all costs" people in the fediverse.


The linked webpage in the article detailing Wil Wheaton's account of the incident is a thought-provoking read of online mob behavior. http://wilwheaton.net/2018/08/the-world-is-a-terrible-place-...


Mobbing on social media seems to be a real problem. Many articles have been written about it, and we need solutions.

I think it sucks that Wil can’t function on social media because of it.

As for the essay, it is philosophically a huge contradiction.

He’s upset that Twitter allows Alex Jones to post, while being upset that he is being essentially blocked on social media?

But then he calls the sitting president of the United States juvenile and hateful nicknames?

So which is it? The essay writes from the assumption he has the moral high ground and is being victimized.

The mobbing stuff is a legit problem and attack and makes me sympathetic.

The rest completely undermines his point.


Alex Jones's speech is not equivalent to Wil's. Jones is accused of inciting harassment and violence, and defaming private citizens. Objecting to Jones's presence and to harassment is not hypocrisy.

There are two arguments against Twitter; one is they're not equally applying their own standards, ignoring the behaviors of people like Jones and Trump while punishing people who are seen as victims. The other is Twitter's standards and tools for dealing with harassment are insufficient. I don't know if the former is true or not but the latter seems to be proven by people finding it impossible to use the platform, as-is.

I don't really understand the thinking behind leaving a large platform because people you don't like also use it. The vast majority of people using Twitter, Reddit, etc. are not objectionable. To act like the presence of a few bad ones A) will harm your reputation if you also use it and/or B) your use constitutes an endorsement of them or even of the platform, seems silly. Leaving because you're personally harassed or have a hard time finding the content or conversation you're looking for is not silly.


My initial impression, and I by no means have any handle on the specific situation, is that this whole thing had nothing whatsoever to do with Wil, and everything to do with testing a capability.

Wil happened to walk in on a type of weapons test. I think the attack in this case would be considered a “success”. Or was it just for lulz?

Was there any cost to the attackers? Are they throwaway accounts or real identities? Is it at least forever tied to that account, so that there could possibly be some future reckoning?

So many questions about the whole incident and response.


You can see a glimmer of why he has so much trouble with trolls though in that post. He refers to Trump as "Shitler" and his supporters as "cultists" and then a few paragraphs later admonishes the reader to "Please do your best to be kind, and make an effort to make the world less terrible."

"Do what I say but not what I do..." is a terrible way to get people around to one's way of thinking.


The most interesting thing to me is how Wil's antagonists seem to have successfully taken advantage of Wil's mythos (consciously or not) to create cognitive dissonance.

I'm not sure Wil gets that he's never going to have any inner peace while holding on to a mythos that regards him as subhuman because of things he can't control like his race or sex.

Some of the thoughts in that post feel like they're from another century. Almost like reading Jonathan Edwards' diary, where any time he catches himself feeling happy he has to go off and meditate on how he is but a lowly worm deserving of death.

In both cases it's actually pretty self-absorbed, but at least Edwards had the good sense not to complain to other people (or even to himself in his diary) that the metaphorical hair shirt he'd put on was uncomfortable. In that sort of mythos, the only peace the righteous can expect is a life of suffering, since happiness and success are signs of worldliness (or, in Wil's case privilege).


> He refers to Trump as "Shitler" and his supporters as "cultists" and then a few paragraphs later admonishes the reader to "Please do your best to be kind, and make an effort to make the world less terrible."

Definitely, this post is littered with examples like this, like pointing out Gunn getting fired while Gunn himself cheered when the exact same thing happened to Roseanne. There's just so much hypocrisy on every side. If you want the moral high ground, then don't reach for weapons when someone disagrees with you.

People enjoy it when weapons are wielded against those they view as enemies, but decry the injustice when they're used against those they see as friends. But in the end, war only benefits the warmongers.

Dehumanizing people is never a reasonable response, and so weapons should never be needed, especially against fellow citizens.


I do feel that there is a difference in the Gunn and Barr cases.

The things Gunn said have enough distance to them that you could reasonably say that he learned better since then. Firing him for something he said a decade ago seems off.

Barr got fired for something she said that day.

So I wouldn't say it's the exact same thing.

What happened to Barr seems like punishment. What happened to Gunn feels more chilling. It's a world that will never tolerate or forgive a mistake. That "permanent record" your teacher warned you about has come to fruition.


Of course there are differences, but the question is whether those differences matter. The issue at heart is policing speech according to arbitrary mob rules, and employing that as a weapon against those with whom you disagree.

People love inciting the mob when it suits them, but the mob is fickle and cuts both ways. Gunn's poetic punishment for encouraging mob rule was to be cut down by it.

And frankly, I don't see how recency is relevant to such matters anyway. Should a murderer who was never caught still face justice 30 years later? Does it matter that "he may have learned better by now"?

Either forgiveness and compassion are virtues, and neither Gunn nor Barr should have faced harsher punishment than a few sharp rebukes to demonstrate what they did wrong, or they both deserved what they got.


That's a false dichotomy. You are advocating for either letting people get away with anything or punishing people for life for a mistake.

And let's not pretend saying words is the equivalent of murder or that what happened to either Barr or Gunn is the equivalent of the punishment faced by murderers. Murder is a far more serious crime than saying something insensitive.

Even in the legal system, there is the concept of the statute of limitations. And it is different for different crimes.

Forgiveness and compassion are virtues, but that does not lead to the conclusion that both Barr and Gunn should have been immediately forgiven for what they said.

If Gunn had said what he had said the day before, then it would be a fair cop. But he didn't. The recency does matter. Gunn can legitimately say, "Yeah, I'm not proud of that. I know saying that is wrong now" and be convincing. It's been years. He's grown, he's learned. How much growing and learning can you do overnight?


> And let's not pretend saying words is the equivalent of murder or that what happened to either Barr or Gunn is the equivalent of the punishment faced by murderers.

No, the point of the example was to illustrate that justice is not temporally dependent.

While I agree and disagree with some of your points, I'm short on time, so I'll just focus on my main point which you seem to have missed:

If you encourage mob justice, you deserve mob justice. A mob is not open to nuanced moral reasoning. Gunn was fully supportive of mob justice for those with whom he disagreed. Gunn therefore deserves the same treatment, and he got it.


That presumes he wasn't sufficiently punished earlier.

Also, in what manner is ABC a mob? ABC took no time at all firing Barr.


> That presumes he wasn't sufficiently punished earlier.

Again, you're stuck in your own mindset of what constitutes justice. Clearly mob justice is not your justice. Gunn's view on Barr's firing is not the only example of his support for mob justice, just the most high profile. The mob makes its own rules for what constitutes a transgression, that's why it's so dangerous.

> Also, in what manner is ABC a mob? ABC took no time at all firing Barr.

ABC is not the mob here, they wouldn't have done anything if there were no furor surrounding Barr's comment.


But aren't you as well? You're saying that him being punished now is fair since he said Barr should be fired or expressed joy at Barr getting fired.

But if he was already punished, then he's being punished twice. And that wouldn't be fair. Unless you believe people should be punished continuously in perpetuity for their mistakes.

Saying ABC would have done nothing is speculative at best.

You're just looking for a way to say "Ha ha, tu quoque" by expressing this "concern" over "mob justice".

Not to mention, Disney fired Gunn when the overwhelming opinion is that he shouldn't have been fired. So it also isn't even "mob justice".


> But aren't you as well? You're saying that him being punished now is fair since he said Barr should be fired or expressed joy at Barr getting fired.

Gunn's outcome is justified by simple logical consistency: Gunn endorsed mob justice, therefore Gunn's judgment by mob justice is justified by his own logic. My own views have no bearing on this argument; you're just conflating it with the fact that I have also expressed personal satisfaction at seeing poetic justice for a loathsome form of judgment.

> Saying ABC would have done nothing is speculative at best.

Not really. Corporations literally only take actions that affect their present or future bottom line in some way. This is simple economic fact. Barr's show was making them money; they fired her because the long-term cost of the PR nightmare outweighed the revenue. If there were no cost, there'd be no reason to fire her and lose out on that revenue. The weight of evidence exceeds mere "speculation".

> Not to mention, Disney fired Gunn when the overwhelming opinion is that he shouldn't have been fired.

You have some actual data backing up that claim?

> So it also isn't even "mob justice".

"Mob justice" is not "majority justice". Mobs can reflect minority or majority views, there's no relationship here.


You're missing the point; you're ignoring whether or not he's actually been punished before. If he had already been punished by "mob justice" (which you've ill-defined), then this is double-dipping.

It's not logically consistent. You don't like Gunn or his particular brand of activism/politics/discourse and therefore you are enjoying his downfall. If his movies simply failed, you'd likely be just as satisfied.

And if the advertisers would have pulled from the show? Then the show doesn't even make them money. Is that mob justice? I mean, what isn't mob justice according to you? And what if they just don't want her there for her views. And there are plenty of times where there's no cost and things get denied or cancelled. These decisions are never purely financial.

I have more data than you do that Barr's firing was "mob justice". All of the actors in Guardians have expressed support for him. Other peers have decried Disney's decision. Articles were written about it. You can do a simple search for "James Gunn firing backlash".

I mean, at this point, say what you really want to say.


> You're missing the point; you're ignoring whether or not he's actually been punished before. If he had already been punished by "mob justice" (which you've ill-defined), then this is double-dipping.

That's irrelevant. Given Gunn endorses mob justice, then the mob decides what is and isn't right. That's what mob justice means. The number of times you've been punished is immaterial. My personal satisfaction in this outcome is immaterial. How much you've suffered is immaterial. The mob decides what you deserve.

Further, people being judged according to how they judge others is the definition of logical consistency. To do otherwise would be special pleading.

> Is that mob justice? I mean, what isn't mob justice according to you?

https://en.wikipedia.org/wiki/Ochlocracy

Throngs of people driven by emotional motives rather than rational consideration of the facts and the context of any given scenario. No recognition of people's fallibility. No formal system recognizing the rights of both perpetrators and victims. Dehumanization of perpetrators. These are all characteristic of mob justice.

The rise of doxxing, shaming culture, and so on are virtually all forms of mob justice, and it's frankly abhorrent.

> And if the advertisers had pulled from the show? And what if they just don't want her there for her views? And there are plenty of times where there's no cost and things get denied or cancelled. These decisions are never purely financial.

Those are all purely financial motivations. Advertisers pull from the show because the bad PR may affect sales. They don't want her there for the same reasons. It's frankly bizarre that you think there's any other motivation here, particularly for big corporations that are effectively amoral.


So if the motivations are financial, then it isn't mob justice. It's just financial reasons. If they put on 22 minutes of someone taking a shit, people wouldn't watch that either. It would also get pulled. That's not mob justice.

Also, I started with financial reasons, then said that even beyond those, there are reasons things get pulled that aren't financial. And no decision exists in a vacuum. New studio head? Expect cancellations of shows that are otherwise doing well. Expect shows that were about to be in production to get scrapped for the new head's pet project. To say that everything is purely financial not only ignores that people are not rational actors, but undercuts your point about mob justice.

Or makes everything mob justice. Taco Bell removes your favorite item due to poor sales: mob justice.

You are also saying that ABC was both driven by emotional motives and that it was solely a financial move. It can't be both.

And yes, people can be fallible. Sometimes they need a stern reprimand in order to reform. And when several of those stern reprimands don't work and you continue tweeting crazy conspiracy theories and borderline racist things, you get fired.

It's funny that you believe the Barr thing was a single, isolated incident. Just say what you want to say: That you're glad that that leftist cuck got fired because fuck PC culture.


Yes, frightening!

Almost enough to put me off Mastodon.

Edit: Seriously, just why the bloody hell would I participate in some system where a bunch of trolls could get my account deleted? I seriously doubt that could happen here, for example, because moderators base decisions on values, rather than expediency. Another alternative, which I actually prefer, is for absolutely no third-party moderation, but lots of tools for users to efficiently control what they see.


After reading the list of recommended instances to ban I am officially turned off of Mastodon. Some of the sites banned are for good reason but many/most(?) seem to be vague personal grievances from people overly concerned about how people say things. This language politicization will not make many English-second-language people comfortable as one mistake could potentially ruin them. Meanwhile, the "do gooder" moralists are the only ones who can truly say what they think and all they do is produce lists of people they hate. Sorry. Not getting on the Mastodon bandwagon!


I was really surprised when the original article said Mastodon was blacklist-by-default (and that instance whitelisting was only possible by forking the code!)

I’m actually not certain this is true. I used Mastodon briefly, and I remember there was one account out there that auto-followed every user it saw tooting. That way that bot’s instance would build connections to every instance it could see, which gives both (A) metrics - you can get a rough estimate of how many instances are alive at any time - and (B) a way for instances to discover smaller/newer instances if they choose to (i.e., follow this bot, and now the global feed visible to your instance sees just about everything).

If Mastodon is blacklist by default, then it must have some other discovery mechanism? Or maybe the terminology just isn’t being applied in the way I understand it (e.g. perhaps it is whitelist-by-default, in that you explicitly connect to instances of your choice, but this implicitly whitelists every instance that whitelisted instance is connected to, recursively, and the blacklist is built atop this)?
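
To make the terminology concrete, here's how I'd sketch the two policies. (Hypothetical names; I haven't checked Mastodon's actual code.)

    BLOCKED_DOMAINS = {"bofa.lol"}          # admin-maintained blacklist
    ALLOWED_DOMAINS = {"mastodon.social"}   # only consulted in whitelist mode

    def accepts(remote_domain: str, whitelist_mode: bool = False) -> bool:
        """Decide whether to federate with a remote instance."""
        if whitelist_mode:
            # Whitelist-by-default: reject anything not explicitly approved.
            return remote_domain in ALLOWED_DOMAINS
        # Blacklist-by-default: accept anything not explicitly blocked, which
        # is why one user following one remote account is enough to pull that
        # whole instance into your federation graph.
        return remote_domain not in BLOCKED_DOMAINS

Under the second branch, discovery doesn't need a central directory: the follow itself is the discovery mechanism.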


I believe other instances get automatically subscribed (without the admin's preapproval) when any user follows someone on the other instance.


I think there exist some zero-moderation instances:

https://telegra.ph/Instances-to-silencesuspend-on-Mastodon-0... (this is a list of instances I found which some sensitive people recommend to block, so free speech/zero-moderation ones are included there)


But then other instances just blacklist them, right?

Edit: And damn, the quoted post from social.i2p.rocks actually makes some sense. The language is harsh, for sure, but the support for freedom is pretty clear.

Also, how hard is it to manage accounts on multiple instances? So I could see both mainstream and fully unmoderated.


Yes, we do. A number of instances have blacklisted bofa.lol, which is where this whole attack on Wheaton began.

I suspect that eventually the Fediverse will be divided into two major sections: places that are okay with "pissing people off" as a source of fun, and places that require basic adult civility at the price of, say, restricting your freedom to call an absolute stranger something vile.

I dunno how well the former will survive with read-only access to the latter. I would not be sad to see it wither and die, though several decades of life on the Internet suggests that's all too unlikely to happen.


> I dunno how well the former will survive with read-only access to the latter. I would not be sad to see it wither and die, though several decades of life on the Internet suggests that's all too unlikely to happen.

In my internet experience, it will be the fun, influential one, and the "civil" one will wither and die. The problem in this case was the abuse of the reporting system, it seems, not name-calling.


Hacker News benefits strongly (imho) from strong moderation.


Strong, principled, consistent, and effective.


Metafilter is way better than Voat. I concur that the less fun one will wither but no guarantee that's the civil one.


> I suspect that eventually the Fediverse will be divided into two major sections: places that are okay with "pissing people off" as a source of fun, and places that require basic adult civility at the price of, say, restricting your freedom to call an absolute stranger something vile.

Does that mean instances will enforce basic adult civility by preventing one from insulting, say, President Trump? He is by any reasonable standard an absolute stranger to the vast majority of users, who generally will have had absolutely no personal contact with him in any fashion. I don't care for the man's politics, but I also must admit I've never met him.

Have I missed something? Please, help me understand!


Yeah, this list is a blocklist which many "safe" instances use. But I thought it could be useful for finding unmoderated instances too; the author invested a lot of time in it.


I found a free speech instance but you have convinced me to stop using Mastodon. What a joke. This list has some reasonable things in it but a lot of attacking other users/instances on flimsy allegations.


I am actually not sure how widely this blocklist is used, I just randomly stumbled upon it (via https://github.com/tootcafe/blocked-instances).


Maybe I’m naive, but I’ve thought the idea of “groups” on Twitter would improve things. You join groups you like, you set your account to open or group only or private and you control who can interact with you. You could join global groups or very narrow groups, as you saw fit and created your own if none fit you.

Obviously the big drawback would be plunging "engagement" and slower growth. I think it would be healthier. If you're a wallflower, you find your groups; if you want to spout off, you join those groups... if you want Miss Manners, you join hers; if you want sailors, you join theirs.


https://thomask.sdf.org/blog/2018/08/19/from-gnu-social-to-m...

"There was a cool feature in GNU social called groups that has never existed on Twitter or on Mastodon. A user can register a group on their server—for example there was one about gardening called “!grow”. If you subscribe to the group and then put “!grow” in one of your messages, all the other subscribers see it in their home timeline. It’s similar to subscribing to a hashtag except the group’s creator can moderate members.

...

It’s kind of fascinating to me that this feature wound up in GNU social even though they didn’t have solutions for long term management. It enabled some cool interactions. It caused a bunch of consternation when things went wrong. Users bumbled their way through.

Is it a good design? Eh, probably not. I imagine that’s why they’re not supported in Mastodon."
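
For the curious, the fan-out described above is simple to sketch. Toy code with invented names, not GNU social's actual implementation:

    import re

    GROUP_PATTERN = re.compile(r"!(\w+)")

    groups = {"grow": {"members": {"alice", "bob"}, "moderator": "alice"}}
    timelines = {"alice": [], "bob": [], "carol": []}

    def post(author, text):
        """Deliver a post to the home timeline of every member of each
        !group it mentions (the author must be a member)."""
        for name in GROUP_PATTERN.findall(text):
            group = groups.get(name)
            if group and author in group["members"]:
                for member in group["members"]:
                    timelines[member].append((author, text))

    post("bob", "My tomatoes finally sprouted !grow")
    # lands in alice's and bob's home timelines, but not carol's

The moderation hook is that one "moderator" field: unlike a hashtag, someone owns the membership set.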


Something like this could help. What modern "social networks" lack is proper separation and barriers to entry. BBSes used to accomplish this simply by being separate, owned by private individuals, and topic-based. Both Reddit and Twitter broke that.


The interesting observation, if I understood Wil's account correctly, is that he appears to have been mobbed by the very people with whom he would likely share common values.

I think this speaks to a deeper, more disturbing pattern of social interaction in the space of social media. In many ways I see it regressing our ability to have real conversations the way we would in the real world. Animosity is building between people and groups to a degree that I'm not sure would happen otherwise.


Unfortunately I think you have it backwards. Some people just really, really like to fight, and to purge heretics from the group of pure believers. It also doesn't help that the smallest misstep is sufficient to switch one from "pure believer" to "heretic" in the blink of an eye.

I'll go ahead and link the famous "whale cancer" article from SSC: http://slatestarcodex.com/2014/06/14/living-by-the-sword/


It's also incredibly biased towards Wil (no surprise, as he wrote it).

In reality, Wil himself was the harasser and abuser, and the reports against him were completely justified. He got bofa'd and in response started reporting other users for even the slightest provocation. And it wasn't just reporting other users; he was specifically reporting a bunch of trans women, which really puts him in a poor light considering the fiasco with his Twitter blocklist that he put a ton of LGBT people on and pretty much ruined the lives of a bunch of independent trans artists who suddenly lost their source of income.

Now it's certainly possible that a group of people decided to mob him with reports, but personally I think it's pretty likely that those 60 reports overnight were all legitimate reactions to his campaign of abuse against other people.


> pretty much ruined the lives of a bunch of independent trans artists who suddenly lost their source of income.

Could you elaborate on this? I keep pretty clear of the petty squabbles that go on in the social justice space* so I'm not aware of this.

*Not to say that social justice isn't important, however there is a section that is more interested in generating noise than change. I've found most of the very minor celebrities involved are of the "noise" variety rather than "change".


Wil used and popularised Randi Harper's personal blocklist after ggautoblocker stopped being maintained. Harper's list ended up with a bunch of trans attacktivists on it after they took exception to her saying something positive about Jesse Singal, because he'd written something positive about Alice Dreger's book, which analyzed the attacks on a couple of people by the extreme-SJ lot and didn't sufficiently condemn the targets as evil in the process (because that wasn't what she was analyzing, but, yeah, well, the internet).

The result was lots of people whose only crime was having yelled at Randi getting blocked by lots of people unrelated to Randi who were just trying to rid their timelines of nazis etc. - and a bunch of those people were artists who now couldn't easily contact potential employers via Twitter, which apparently (in the sense that I know nothing about this, not that I disbelieve it) put a serious crimp in their careers.

So a bunch of people experienced wildly disproportionate consequences for a single moment of anger online, as collateral damage from Wil trying to do something he considered good. He's apologised for it, but some people still believe he's transphobic because they don't feel the apology was commensurate with the degree of damage caused, even if the damage was accidental.

I don't really have a take away here apart from "we have no idea how to do personal social media environment curation in a way that works yet". Mostly I just feel kinda sorry for everybody screwed over by the situation, including the subset I otherwise dislike.


So, in other words, he personally didn't do anything to "ruin the lives" of anyone.

He trusted someone that shouldn't have been trusted, and in turn people trusted that his judgment was sound when it wasn't, because Wil is more interested in playing the role correctly than in actually effecting change or getting in touch with those who can.

I'm sorry, there are a lot of people doing good work for social justice. Harper has never been one of them. There are those who use the platform of social justice to further their own brand and aren't actually concerned about getting people rights.

It's unfortunate that those people's careers were affected by getting on the wrong side of a vindictive bully, but I wouldn't call it Wil's fault. And I find it funny that Wil is getting the heat for it when it wasn't his list.


I think the argument is that he promoted somebody's personal blocklist as a general blocklist, and that this therefore made him culpable. Which I think was a judgement failure on his part, but not necessarily a trust related one, and I don't find the culpability part of the argument particularly convincing.

Note also that I said "the subset of people involved I dislike" to try and keep my opinions separate; I'm unsure why you're responding as if I'm supporting any specific person involved when I've merely done my best to outline the situation as I understand it.


I will admit to having confused you with the person I originally asked, so there is some minor conflation there. That's my bad. I asked them a direct question and you responded instead.

They said it was his blocklist that he put these people on. And that simply by being on his blocklist, their lives were ruined.

That's not the same story you told me.

And ignores the part where Harper was promoting her blocklist as a superior version of ggautoblocker. She encouraged people to promote it. So to say it's his fault for promoting a "personal" list also feels hollow.

He trusted Randi. That was a mistake. People blindly trusted the list he promoted. That was also a mistake. There's a degree of personal responsibility that everybody who used that list without verifying the content shoulders.


The story I told is the story as I understand it, from reading a smattering of articles from various viewpoints. I suspect many people have largely heard the story from a giant game of telephone on twitter, and as such details and nuance have been lost and distorted.

Note that when I describe what I think the argument somebody's making is, that doesn't mean I endorse the argument, merely that I'm trying to explain what the argument is.

I'm not taking a position on "fault", all things considered.


The bit that's new to me is "popularized". How popular was this block list and how much of the popularity of it could be attributed to Wil?

Also, do people really contact potential employers through Twitter and don't have an alternative like email?


He was apparently patient zero for the list's spread through Hollywood and other creative-industry Twitter.

Beyond that, no idea - I'm a bystander who happened to've read a few articles giving various points of view on this and have pieced together what I hope is an accurate 10,000ft view, but YMMV definitely applies.


AIUI it's even worse than that. According to a tweet thread I read, he actually ran his own personal blocklist that imported Randi Harper's blocklist, but also included pretty much anybody that annoyed him, for such tiny infractions as making a star trek reference at him on Twitter, or just being "rude". He personally blocked a bunch of trans artists whose Patreon and SoundCloud incomes subsequently dropped to zero.


The whole story sounds completely ridiculous.

Probably the best option for people who value their sanity is to just not associate with anyone involved in these nonsense Twitter dramas.


Maybe, but why should we believe this version of the story rather than the other one? Do you have a source?


You'd be better off believing neither, but taking the participants' stories as what they believe happened.

Then drawing your own conclusions.

(Sorry, not picking on you in particular; I'm just quite annoyed to see this thread treating the incident as a spam problem to be solved technically. There were also people who really did not like Wil Wheaton, and then suddenly found him in their online home.)


My source is a whole bunch of comments on Mastodon that I read as the whole thing was going down, plus a bunch of more comments on Twitter talking about what happened after it was all over. I don't have specific links as I didn't save any, but there were a lot of independent sources on this at the time.

Edit: found a new tweet that summarizes this: https://twitter.com/eiwbm_cat/status/1035716245914624003?s=2...


Wil does admit to some of it: "And for what it’s worth, the part of me that wants to apologize to the people who ended up [the blocklist] by mistake is overwhelmed by the part of me who was attacked really viciously by a lot of those people and feels like maybe blocking them wasn’t such a bad idea, after all."

That directly supports the narrative that he publicly blocklisted some innocent people, hasn't apologised and is being mobbed for it.


I love how he says "by mistake" as if he didn't personally add a bunch of people to his blocklist deliberately simply because he was mildly annoyed by them.

Hey Wil, when you personally are responsible for taking away the livelihood of a bunch of marginalized people, you don't get to complain about being "attacked really viciously" by your victims.


But none of the alleged others have written an explanation of their side of the story, so there is nothing to compare.


They have written them, as toots on Mastodon and tweets on Twitter.


As a conservative who typically votes Republican, it frustrates me that an otherwise thoughtful individual like Wil Wheaton in the middle of a plea for empathy and understanding drops language like "President Shitler" and says of Twitter "the most lucid and concise indictment I can give Twitter is: it's the service that Donald Trump uses to communicate with and incite his cultists."

Dude, if that's how you think and talk about people whose political opinions you disagree with, how can you expect anything better in return? I'm appalled by how you were treated. I'm on your side. But I get the distinct impression that if the shoe were on the other foot, and you knew how I voted, you would not be on mine.


I don’t think he disagrees with the political views so much as with the man's moral code, compass, or even his lack of humanity.

From my view (I'm not from the USA), it does seem like a correct statement. None of the masses that voted for him have shown empathy towards others or offered a logical argument. That being said, if he hadn't won and it had been a different candidate with logic-lacking masses, I think a similar statement would be valid as well.


The aggravating thing to me is: the description of twitter does, to my mind, kinda match - and I know quite a few people who're very definitely conservatives who've accused the populism-driven Trump base of being a cult in far, far more eloquent and persuasive terms.

And yet I still don't think Wil would be extending any sympathy to Never Trump Republicans if they got similarly attacked either, so I echo your frustration.


I'm finding it difficult to see how a Trump voter can complain about tone. Trump himself is usually toxic and abrasive, and relies heavily on petty insults with the eloquence usually found on grade school playgrounds. His followers proudly wore "fuck your feelings" on their clothing, and that seems to have been their operational philosophy ("cucks! snowflakes! get out of your safe spaces!") before they realized how much the rest of the country hated them. Now, ironically, they complain about insults and not being accepted.

With all that in mind, is it any surprise people on the Left treat Trump with zero respect? And can you not understand how ridiculous it is to demand that respect when respect has not been given by your side. From the president on down, there has been no grace, no diplomacy, no kindness. This is a classic "remove the log in your own eye before complaining about the mote in another's" situation.

And that's without getting into how this administration is the most corrupt in modern American history. Perhaps the most corrupt ever in our nation. And is intent on doing as much looting & damage as possible before they're finally kicked out. Difficult to treat someone with respect when they're behaving in such a reprehensible fashion.


I think the issue is that log goes both ways. Complaining about someone being petty and insulting rings hollow from someone being petty and insulting. It feels like a "rules for thee, none for me" situation.

Or the even worse "no bad tactics, only bad targets".

We can insult Trump because he is a bad person. Trump can't insult me because I'm a good person.

Well, who decided one was a bad person and the other was a good person? You? That's not exactly unbiased. Why is Trump a bad person? Because he insults good people? But what makes those people good? Because they insult bad people? All you've done is add a level of abstraction to the problem: why do you get to decide who is good and who is bad? And how do you make that determination?

Trump mocks the weak, the disabled, things people can't help, things he's guilty of himself. He's self-serving, he inconsistently applies rules, favoring sycophants despite their actions and criticizing critics for doing things he ignores in his favorites. He's actively trying to avoid investigation into certain aspects of his campaign. These are all reasons he's not a good person. Because good people don't do those things.

So I don't care if you dislike Trump. If your entire discourse about him is to mock him for his appearance or call him derogatory names, then how is what you're doing any different? I wouldn't want you in charge either. And I wouldn't want anyone you vouch for in charge either.

There is the concept of being the better person. There's no point in wrestling with a pig. You'll both get muddy and the pig enjoys it.


>There is the concept of being the better person. There's no point in wrestling with a pig. You'll both get muddy and the pig enjoys it.

You're not entirely wrong, but in this case, the pigs are the ones that dragged us all into the mud, so it's understandable that some people wouldn't want to simply drown politely in the cesspool.


I wonder if the solution involves partitioning the social graph to allow accounts to coexist?

Instead of trying to censor accounts outright, assume the account is generating 'mixed' content; accounts used purely for offensive content are the easy case.

Bans are a primitive form of isolating a part of the graph. Particularly if they extend to commenting/replying to that account’s posts.

False abuse reports similarly should carry an extremely high cost for the submitter. If an abuse report is flagged as false, maybe the account is never trusted to report a post again. Maybe an abuse report should actually have to carry some cost, monetary or computational (like hashcash).
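
A hashcash-style stamp, for example, is easy to sketch: the reporter burns CPU time minting it, so filing 60 bogus reports overnight stops being free, while the moderation queue verifies each stamp instantly. (Toy code; the difficulty and field layout are made up.)

    import hashlib
    import itertools

    DIFFICULTY = 20  # leading zero bits; raise this to make reports costlier

    def valid(stamp):
        """The moderation queue verifies a stamp in microseconds."""
        digest = hashlib.sha256(stamp.encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

    def mint_stamp(report_id):
        """The reporter burns ~2^DIFFICULTY hashes finding a nonce;
        that CPU time is the 'price' of filing the report."""
        for nonce in itertools.count():
            stamp = f"{report_id}:{nonce}"
            if valid(stamp):
                return stamp

    stamp = mint_stamp("report-1234")
    assert valid(stamp)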


The problem is not that multiple communities can't coexist. Unboxing videos and workout videos coexist peacefully on YouTube.

The problem is when a mob of people decides they want to attack someone (or a group of people) and does everything they can to harass them.

Offering the mob their own place isn't going to help at all when all they want to do is destroy someone else's.


It's frustrating to see a viable-enough-it-gets-talked-about Twitter-alternative but apparently no equivalent Facebook-alternative.

I believe this is related, because it seems like the Twitter model by itself breeds a wide range of inherently bad and ugly behavior. A platform for celebrities pretty much would have to. Moreover, an "everyone sees everyone by default" approach seems hard to moderate.

I treat Facebook as a combination blog and forum. It works for me. Effectively, I have the tools to do my own moderation. A federated version of this seems much more manageable.


While I would argue that the best alternative to Facebook is nothing, it is interesting that there is no "Internet community creation kit": blog and forum software integrated so that one section of the forum is dedicated to comments on the blog posts, while other sections allow users, not just site editors, to create threads.


Is that not just a forum? phpBB has been around for a while.


Forums lack integration with the blog's (main site's) comments. Even modern forums like NodeBB only offer hacky solutions for that.


I think Talkyard does what you have in mind: https://www.talkyard.io — it's both community/forum software, and blog comments: https://www.talkyard.io/blog-comments, and the blog comments are placed in a blog-post-discussions category in the forum. The same login & @username_mentions work both in the blog comments and in the forum. (I'm developing Talkyard & it's open source beta software.)

I'm looking to create a PWA mobile app, meaning people will get your community as its own icon on their mobile phone.

(Any thoughts/feedback?)


>I treat Facebook as a combination blog and forum. It works for me.

I believe the problem is that Facebook is more than that; you have:

* User timelines -> Blog (with photo and video support)

* User groups -> Forum

* Messenger -> Chat

* Events -> Calendar

Using federation, this will probably be solved not by a "Facebook alternative" but by multiple alternatives to each small service.


There's a whole slew of open p2p systems extant or emerging. You might want to look at Friendica: https://en.wikipedia.org/wiki/Friendica


> As a moderator working on a volunteer basis, it can also be hard to muster the willpower to respond to a report in a timely manner.

When I see the word "volunteer" in a response to a sensible critique of FLOSS software, I rankly speculate that a deep design flaw has manifested itself as a social problem.

It appears I am right in this case.

Are there known cases where I would be wrong?


Wikipedia?

I agree with you in general though. I think Wikipedia would be the exception.


Wikipedia quickly gets political. Even in technical articles, there is an incentive to impose one's own perspective on one's field as the "authoritative" version. A significant share of contributors, it turns out, are academics working in the field who more often than not are deeply invested in their version.

This is a great system for an encyclopedia that must find the best approximation of the truth, but I wouldn't call it "incentive free". Similar incentives in the social world (pushing out people that don't conform to the moderator's world view) could be disastrous.


Do you have a specific example?

For example, I don't think they used the term "volunteer" in their funding drives. Maybe that's for obvious reasons, but I also don't remember seeing "volunteer" when reading about their approaches to page vandalism, editors abusing power, etc.


Lack of a batch-delete function made me chuckle a bit. This alone, plus a select-all-on-page feature, would seem to take the sting out of the late-night or impromptu "delete a bunch of f-u comments from a cat post" events. Why would that feature not be a priority?


Fun fact: reddit doesn't have batch delete for subreddit moderators. Using the interface reddit provides, you have to individually click "remove" on every single comment.

Reddit also doesn't have a built-in button to ban a user while you're looking at the offending post or comment; you have to go to a separate "ban users" page, and copy/paste the username to ban.

The official mobile app, so far as I can tell, doesn't have ban functionality at all. You can only approve/remove/mark spam and lock threads.

Moderator communities have worked around this, somewhat, with browser extensions that provide mass-remove and inline ban functionality, but they're only a partial solution. The mass-remove in the mod toolbox extension, for example, isn't always able to get everything, because reddit itself only partially loads busy comment threads, and the extension can only remove comments that are actually on the page. So you have to remember to force-load the whole thread and sometimes click through the "we don't care that you wanted all the comments, click to another page to see the rest" links.

The mod toolbox extension (/r/toolbox) also adds some pretty essential things like macros for common messages that need to be sent to users, shared notes all the moderators of a subreddit can see, etc.
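
For what it's worth, the API will do what the UI won't. Something like this with PRAW (credentials and the thread ID are placeholders):

    import praw

    # Batch-remove every comment in a thread via the API.
    reddit = praw.Reddit(
        client_id="...", client_secret="...",
        username="mod_account", password="...",
        user_agent="batch-remove sketch",
    )

    submission = reddit.submission(id="abc123")
    # Resolve every "load more comments" stub first; this is the
    # force-load step mentioned above.
    submission.comments.replace_more(limit=None)

    for comment in submission.comments.list():
        comment.mod.remove()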


Is this intentional to prevent mass censorship?


The simpler explanation, and the one consistent with history, is that reddit does not prioritize the use case of moderators. If the goal was to be an absolutely unmoderated free-for-all, they wouldn't provide half-assed mod functionality, they'd provide zero mod functionality.

Especially given that AutoModerator exists and can be set up to automatically remove posts or comments based on particular keywords or even regexes, and does it so fast that not even the various "view removed reddit" services can show you what was there.


One of the possibly fortuitous aspects of Mastodon is that admins tend to be devs, hence their needs get prioritised.

Automoderator, like RES, is a third-party addition to Reddit.


AutoModerator started as a third-party feature. Now it's built in to reddit (and IIRC the developer was hired, or at least collaborated with, on the integration).


It has just occurred to me that this centralized vs federated/decentralized discussion is similar to the monolith vs micro-services discussion.

Centralized/monolith means everyone follows the same rules regardless of whether they like them or not, and there's a single point of failure.

Federated/decentralized/micro-services means every federation/team is free to have their own rules as long as they keep consistent API contracts with other nodes/services. But it's harder to have a consistent quality-control process, as the effort is distributed.

If you see it from this angle, it might be easier to analyze and understand pros and cons of each approach. Maybe people who like monolith will also like centralized services?


It's a mod failure and a tool failure. If you get 60 bullshit reports about a user, ones that aren't even debatable, you should be able to send out replies as a mod to their mods, and if they aren't heeded, defederate.

There needs to be some sort of federated ticket system or something. Federation is like a treaty, and a treaty is basically nothing but a court.


Sure, and these are good points, but they solve a problem far less interesting than the real problem.

Of the (let's guess) 60 people telling Wil to go do one, up to about half[1] might be the shitposting crew taking advantage to troll a not well-liked celebrity, but the rest were people and their friends who were genuinely incensed at his appearance in their graph.

For the latter, you might argue they should block him, but if he's on the same instance then effectively he's come in and sat down in their home. If he's on a remote instance, then there still exists the desire for retributive justice for past wrongs (yes, including perceived).

In this specific case, there was no-one (except the moderator!) sticking up for Wil. The groups that dislike him are small, but genuinely plural. (e.g. 4chan doesn't like him, a number of trans* artists don't like him, some Star Trek geeks...)

Given that he came to the fediverse to escape harassment and trolling on _mainstream_ platforms, I don't think there is a great solution.

So here are two options, instead:

One: Don't join the platform as "Wil Wheaton". If you want to join a community as another face in the crowd, then use an internet handle. Then you can interact as equals.

Pseudonymity is one of the great gifts of the internet.

If you come as a celebrity to link your blog posts and try and talk to fans, then I don't think the fediverse makes any sense. It's too small and people are territorial of the instances they adopt.

Two: a more social solution. Find some way to calm the people involved. This may involve temporarily suspending instance links, or saying that (as moderator) you need time to discuss this and are working towards an acceptable solution, etc. I don't know what you do next.

Finally - and the point of my rant: Dismiss those you don't understand / greatly simplify social problems at your peril. (Sure is great reading a bunch of trans artists arguing in earnest turned into "abusers" - nice!)

As humans, we make great changes and build general solutions based on one-off undesirable acts, and if you don't even make an effort to completely understand the problem then you WILL build the wrong solution.

[1] Possibly more than 50%? There were people involved who kept saying they had just joined Mastodon and didn't know how to use it, which is weird.


This is relevant in a way: Modern anti-spam and E2E crypto

https://moderncrypto.org/mail-archive/messaging/2014/000780....


Decentralization definitely makes a hard problem, one that Twitter has barely been able to solve on a monolithic system, even harder. I wonder if Mastodon could take techniques from things like BitTorrent, which do a decent job of determining an individual node's contribution in a decentralized way; maybe a way for instances to pass hints to one another. For instance, if someone is frequently banned or moderated on other federated instances, that might be a hint to block their content. Obviously there's an opportunity for someone to control many instances and manipulate the system.

I love the idea of Mastodon and I’m hoping people smarter than me find solutions for these problems.
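
A toy version of that hint-passing idea might weight each hint by how much you trust the reporting instance, so a farm of fresh sock-puppet instances can't pile on cheaply. (Every name, weight, and threshold below is invented.)

    from collections import defaultdict

    # Peer instances report that they've had to moderate an account,
    # and we derive a suspicion score from those hints.
    hints = defaultdict(list)  # account -> list of (instance, trust) pairs

    def record_hint(account, instance, instance_trust):
        """A federated instance tells us it banned/moderated this account."""
        hints[account].append((instance, instance_trust))

    def suspicion(account):
        # Weight each hint by the reporting instance's trust, so brand-new
        # instances contribute almost nothing to the score.
        return sum(trust for _, trust in hints[account])

    record_hint("spammer@bad.example", "mastodon.social", 0.9)
    record_hint("spammer@bad.example", "fosstodon.org", 0.8)
    if suspicion("spammer@bad.example") > 1.5:
        print("hint: consider silencing this account")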


You need to have good, immediate abuse-banning tools.

So if Wil Wheaton says "look at my new movie" and you get 60 abuse reports about it, there should be a one-click tool that says "Everyone who reported this as abuse gets suspended and all the reports are removed from the queue {click}". If Wil Wheaton has 92 people replying "fuck you" to his posts, it should be one click to pull out everyone who wrote "fuck you" to Wil Wheaton and give them a slap.

You can set the level of violence done to whatever you want (maybe a temporary suspension is fine, maybe you just want them to lose abuse-reporting privileges, maybe you want to nuke them from orbit). But it should be one click to deal with mobbing. "Mark all these accounts as mob participants". Etc.

All this stuff is, frankly, EASY, you just have to think through the dynamics and have an understanding of social media. EVERY tool you give users, including the "report abuse" tool, can and will be abused, and therefore you need tools to deal with abuse of every tool.

Tools obviously won't help with BAD moderation. If your moderator hates Wil Wheaton and likes Nazis, they can take that side. But it will at least make moderation (good or bad) fast and efficient, which gives you the chance of hiring enough good moderators to provide overall good moderation. (Hint: if you have multiple moderators, you need tools to review your moderators....)

You should have all sorts of counters to record stats about abuse - how many problems you get from each other instance, ratios of problems/good content, that sort of thing. It should be easy to notice that you've gotten 5 mobbing attacks today from foobar.mastodon so maybe that entire instance needs a little timeout. And so on.
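
And the batch action itself really is that easy. A toy sketch (invented data model) where the action argument is whatever level of violence you chose:

    from dataclasses import dataclass

    @dataclass
    class Report:
        reporter: str
        target_post: str

    def handle_mob(queue, post, action):
        """One click: apply `action` to every account that reported `post`
        and clear those reports from the queue."""
        mob = [r for r in queue if r.target_post == post]
        for r in mob:
            action(r.reporter)  # suspend / revoke reporting rights / etc.
            queue.remove(r)
        return len(mob)

    queue = [Report("troll1", "wil/movie-post"),
             Report("troll2", "wil/movie-post")]
    n = handle_mob(queue, "wil/movie-post",
                   lambda acct: print("suspended", acct))
    print(f"cleared {n} reports")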


How is abuse handled in Mastodon?

The article implies that Mastodon couples "moderator" with "instance admin" and leaves it at that.


Poorly. That's a topic of active discussion right now.


Does anyone regularly use Mastodon as their social network of choice? I've registered and looked around, but it seems so... boring. How did you kick-start your social graph to the point where it became interesting? With Twitter, Instagram and Facebook I had actual in-life acquaintances using the network. With Mastodon, I'd have to go seeking content pretty aggressively.


I had been using Mastodon for over a year until 2 months ago.

When I joined, it felt like a good way to meet people who share interests and learn about new ideas.

You kick-start your social graph by looking at the fediverse feed and following people who are cool and interesting. You also can find good discussions by commenting on their tweets (I'm not calling them toots).

I left because it eventually became the same drama you find on Twitter.

Anger becomes a hobby, and negativity then spreads like cancer.

People ruin other people's reputations and lives within a community (or the real world, if they can), all because of a single second of their life that may or may not have happened.

It is very sad and alarming that people are blinded by an unguided rage they do not know how to direct positively.

In the end, Mastodon will be a good community, but not for everybody, and it is not an open or welcoming community.


Sounds like it's not for me then. I find little in the fediverse feed which appeals to me, and hashtag searches for topics which interest me turn up small clusters of unconnected, non-conversational toots going out into the void.


If you follow enough folks on Twitter who happen to also be on Mastodon then you can kickstart it by linking your Twitter account here: https://bridge.joinmastodon.org/


This sounds like a scare piece of "news" about a possible competitor to Twitter, etc.


Well, it is frightening.


So I'm just going to drop this idea for all you Eager Internet Entrepreneurs here on HN:

influencers.social, a Mastodon instance for Verified Social Media Influencers.

If you have a Twitter/Instagram/Facebook/etc account with 5x10^6 or more followers, and a thousand dollars per month, then you can join our elite site and know that you are in the hands of our team of moderators, who are on the job 24/7. We use sentiment analysis to catch the abuse before you even see it! Why trust your social media presence to the underpaid minions of the other commercial sites, or to a volunteer running their site as a side project, when you can be a Verified Social Influencer?

Juggle the numbers as you see fit. Pick a better name while you're at it. Decide for yourself if you feel like passing any improved moderation tools back to the codebase of whichever ActivityPub-based social site you fork off of.


This sounds like satire, but I don't get the joke.



