This was the first test of federation's claim that bad actors can be dealt with by instance-banning. They failed the test, and without this key innovation performing to spec I no longer have faith that Mastodon is any more than a technically complicated open-source Twitter.
IMO once the account was verified as Wheaton's, the entire federation should have blacklisted every instance harboring a false accuser until those instances demonstrated they could be functioning members of the fediverse.
Of course, there are better ideas in this thread, such as removing the social-proofing mechanisms which fuel tweet/toot abuse in the first place. But if we're just speaking in the nouns and verbs that are part of Mastodon's sales pitch, they had a tool to address this and demonstrated they can't wield it.
> This was the first test of federation's claim that bad actors can be dealt with by instance-banning. They failed the test, and without this key innovation performing to spec I no longer have faith that Mastodon is any more than a technically complicated open-source Twitter.
It should have been clear from the very beginning, to anyone applying basic analysis, that decentralization enables abusers. The parallel to, and example provided by, email is simply too strong.
Decentralized moderation, where a group of essentially unconnected moderators works independently on a problem, has no ability to coordinate or to shift load. Centralized moderation systems have both of those abilities, and often a financial incentive to keep working as well.
The article hits it right on one point: this is a basic structural problem.
What the article misses is that it assumes good solutions exist that remain decentralized. There's a reason the approach email settled on was, effectively, centralization.
How is email 'effectively' centralized? There are tons of providers, and you can even host it yourself. The only things that might be somewhat centralized are the spam-filtering algorithms that are shared, but many aren't.
By having the big providers turn up the spam detector for small actors. It's currently common to have a perfectly set-up mail server, with verification and reputation and longevity… and still have Gmail flag the emails that come from it as spam.
Also, forget about sending email from home. Email sent from residential IPs is instantly deleted by the big providers, and not even sent to the spam folder (Hotmail makes it an explicit policy and bounces the email right back). You have to at least relay the damn mail through a non-residential IP.
I once spent a week on this problem while spinning up a QA server; our application had an email function for sending receipts. Our company used GApps, and the QA team was complaining that they weren't getting the generated email receipts. I pored over every configuration and every key file, and mail still failed from the server, citing spam/verification problems on the GApps side.
A week of troubleshooting and we discovered GApps didn't like emails coming from Linode IP addresses.
A sprint and a half later we had rewritten our email functions to use SendGrid. An extra expense, extra dev time burned, because we had done everything right but upstream it still wasn't good enough.
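For anyone hitting the same wall: the change basically amounts to handing delivery to a relay whose IPs already have reputation. A minimal sketch of the idea in Python, using SendGrid's SMTP relay (the literal username "apikey" is SendGrid's documented convention; the sender address and key below are placeholders, and error handling is omitted):

```python
# Sketch only: relay outbound mail through a transactional provider
# instead of delivering directly from the app server's (Linode) IP.
import smtplib
from email.message import EmailMessage

def send_receipt(to_addr: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "receipts@example.com"   # placeholder sender
    msg["To"] = to_addr
    msg["Subject"] = "Your receipt"
    msg.set_content(body)

    # Deliverability now rides on the relay's IP reputation,
    # not on whichever cloud IP the app happens to run from.
    with smtplib.SMTP("smtp.sendgrid.net", 587) as relay:
        relay.starttls()
        relay.login("apikey", "SG.xxxxxxxx")   # placeholder API key
        relay.send_message(msg)
```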
> Also, forget about sending email from home. Email sent from residential IPs is instantly deleted....
Sorry, but this hasn't been my experience at all. I see this claim all the time, and, obviously, ymmv, but I've been using residential IPs for ~10 years with no issues.
> 553 5.7.1 [BL21] Connections not accepted from IP addresses on Spamhaus PBL
> Dynamic or residential IP addresses as determined by Spamhaus PBL (Error: 553 5.7.1 [BL21])
> The Spamhaus PBL is a DNSBL database of end-user IP address ranges which should not be delivering unauthenticated SMTP email to any Internet mail server except those provided for specifically by an ISP for that customer's use. The PBL helps networks enforce their Acceptable Use Policy for dynamic and non-MTA customer IP ranges.
---
I guess you're lucky enough that your IP address has not been tagged as "residential" by Spamhaus.
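For the curious, the PBL check behind that bounce is just a DNS lookup: the receiving server reverses the connecting IP's octets and queries them under the Spamhaus zone, and any A-record answer means "listed". A rough stdlib-only sketch (the function name is mine; real MTAs use dedicated resolver libraries, often against the zen.spamhaus.org aggregate zone):

```python
# Sketch of a DNSBL lookup as a receiving mail server performs it.
import socket

def is_pbl_listed(ip: str, zone: str = "pbl.spamhaus.org") -> bool:
    reversed_ip = ".".join(reversed(ip.split(".")))   # 1.2.3.4 -> 4.3.2.1
    query = f"{reversed_ip}.{zone}"                   # 4.3.2.1.pbl.spamhaus.org
    try:
        socket.gethostbyname(query)   # any answer (127.0.0.x) means listed
        return True
    except socket.gaierror:
        return False                  # NXDOMAIN: not on the PBL

# A server that gets True here would typically reject the SMTP
# connection outright with a 553, as in the bounce quoted above.
```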
This suggests that the authorization/reputation system needs to be consolidated (vs. centralized) but that the services could still be decentralized.
The world of commerce has a variety of mechanisms to ensure credibility and accountability (signed contracts, security deposits, identification systems, and so on). Extended validation certificates are one manifestation of this in the digital world.
A decentralized service that relied on a consolidated reputation system might be able to provide a framework for managing bad actors.
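To make "consolidated vs. centralized" concrete, a purely hypothetical sketch: every instance remains independently operated, but all of them consult the same reputation feed before federating with a peer. The service URL, response shape, and threshold below are invented for illustration:

```python
# Hypothetical: consolidated answers, decentralized enforcement.
import json
import urllib.request

REPUTATION_SERVICE = "https://reputation.example.org/v1/instance/"  # invented

def should_federate(instance_domain: str, min_score: float = 0.5) -> bool:
    with urllib.request.urlopen(REPUTATION_SERVICE + instance_domain) as resp:
        record = json.load(resp)
    # Every instance sees the same score, but each one decides
    # locally what to do with it (defederate, silence, allow).
    return record.get("score", 0.0) >= min_score
```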
I'm curious how you distinguish a consolidated reputation system from a centralized one. I would expect a consolidated system, in which one can get the same answer to the same reputation query from multiple places and perhaps from multiple parties, to effectively be a centralized, distributed system. It seems to me like a distinction without a difference - what am I missing?
Once you have a strong centralized reputation system, you are pretty close to having re-centralized your decentralized system once again. Especially in a context where a relatively small number of systems send and receive most of the email to begin with.
You're completely correct! There are lots of email providers, and you can run a server yourself. Having previously done so myself, it's possible I might be keenly aware of this fact.
I found the fine article we are discussing to contain a great link explaining emergent centralization, including of email. Perhaps you had a different experience.
Has such a famous person ever come onto Mastodon before? Maybe it just needs to be battle-tested. Realistically, if Twitter had designed its system to handle abuse from start to finish before the first famous person showed up, what are the chances it would have scaled?
It's become clear, as some, myself included, have been saying for well over a year, that federation and decentralisation are no silver bullet and will not by themselves prevent abuse. They do in fact facilitate certain types.
This is not a specific failing of Mastodon, however, and has affected other platforms as well. Many aspects of wilw's experience on Mastodon resemble my own on Imzy two years ago.
There's no substitute for effective administration of instances.
User-level blocking tools are not, of and by themselves, sufficient.
Admin tools are insufficient. This is a fairly frequent occurrence; on virtually every social media platform I've seen, including several I've had staff-level access to, this has been the case.
The early stage of system development is typically about user features. Then performance issues start to hit. Then abuse.
Mastodon has been encountering the performance and abuse problems, and no, it hasn't handled this first encounter particularly well. That doesn't mean the system as a whole has an invalid model. There are numerous people, many with years or decades of online experience, taking part in that conversation.
Throwing in the towel at this point is highly premature.
Right, because Twitter had everything, from load to abuse figured out right away.
There's already a solution under discussion: block incoming reports for a user (with a time limit), or block reports from a user (with a time limit). It was just something that wasn't expected. Next time, there will be better tools to deal with it.
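Nothing about that tool is conceptually hard. A hypothetical sketch of the shape it might take (not Mastodon's actual implementation; all names here are invented):

```python
# Hypothetical: mute reports *about* a target (a harassment wave) or
# *from* a reporter (a false accuser), each with an expiry.
import time

class ReportThrottle:
    def __init__(self):
        self._muted_targets = {}    # user -> unmute timestamp
        self._muted_reporters = {}  # user -> unmute timestamp

    def mute_reports_about(self, user: str, hours: float) -> None:
        self._muted_targets[user] = time.time() + hours * 3600

    def mute_reports_from(self, user: str, hours: float) -> None:
        self._muted_reporters[user] = time.time() + hours * 3600

    def accept(self, reporter: str, target: str) -> bool:
        now = time.time()
        if self._muted_targets.get(target, 0) > now:
            return False   # riding out a brigading wave against `target`
        if self._muted_reporters.get(reporter, 0) > now:
            return False   # this reporter's reports are temporarily ignored
        return True
```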
> They failed the test, and without this key innovation performing to spec I no longer have faith that Mastodon is any more than a technically complicated open-source Twitter.
Maybe the question you should be asking instead is: who has the incentive to undertake such a resource-intensive attack? This seemed more like a response aimed at Wil for promoting his reasons for leaving Twitter, targeting him directly after he laid out why the good people of the world need to Quit Twitter.
Making new users feel unwelcome on Mastodon is, unfortunately, the easiest path for them to fend off the competitor. It would not surprise me in the least to discover this attack was, in fact, coordinated by some entity with strong ties to Twitter. Of course the attackers want it to look like what happened instead was a "famous person verification" problem within the fediverse; bad and negative press about Mastodon is _exactly_ what the attackers wanted here. And, unfortunately, it worked. There is always a financial motive.
Do you have anything to support that speculation? We have terrible people on all social media platforms. The simplest explanation is that there are genuinely terrible people on Mastodon too.
They did state their theory. They just didn't show any supporting facts. If you claim "it's caused by something different from what we see everywhere else," you really need more than "it's true, because money."