I'm not deeply versed in Gab's history... But let's hypothetically say that Gab was not created for that purpose, but for precisely the purpose it claims it was created for:
To be an open platform for free speech, no censorship.
Wouldn't it have ended up in the exact same state it is now? Any service that guarantees no censorship is going to have the majority of its userbase be the runoff from other major websites. When voat was created, I 100% believed that they were not attempting to create extremist havens, but their userbase was all the people expelled from reddit for targeted harassment campaigns.
I hate this dynamic. We need a way to break this cycle, because right now it's actively killing competitors to existing social networks.
I agree with your point - it is hard to tell the difference generally, and it is an important point to remember, but the behaviour of the company itself shows that it is not an issue this time.
Gab bans people openly and earnestly discussing Marxism. I have experimented with and experienced this directly. So it fails my litmus test for "an uncensored platform."
And it's a bit comical, because Gab as a community experience is much smaller (from my perspective) than even weird sites like Minds or funky social blockchain plays. Why they felt the need to ban discussions of Marxism or a general strike is beyond me.
Fair. By my own admission, I don't know much about gab.
I think the first time I ever heard about it was when Firefox banned Dissenter from their addons. Dissenter to me was a genius idea that has an ugly userbase. I'd love to have a version of Dissenter that isn't populated entirely by bigots.
I think the idea of Dissenter really has some value, you walk along the web for all sorts of reasons, and then up in the corner in your toolbar you see "oh, someone from my community has said something about this". Rather than the social network taking you to a site, the site takes you to the network.
That by itself implies that every URL you visit has to be looked up to see if there's a related discussion.
No way I'd trust any add-on/startup/mega corp to do that. I barely trust Mozilla to keep my history on their servers, and that's only because they only keep the last few months and purge older data.
Because you'd still know everyone who went to a specific site, because they'd be sending you a unique hash. Even if you ignore that, you'd know what clusters of people all use the same sites, how often, and when.
That was the intent of the bloom filter. Configured properly, it would eliminate the need to endlessly send the server requests like "Hey, have anything for this hash?"
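As a rough sketch of that idea (the filter parameters and the salt here are hypothetical, and a real deployment would tune the filter size and hash count): the server periodically publishes a bloom filter of all hashes that currently have comments, and the client only sends a lookup request when the filter reports a possible hit.

```python
import hashlib

class BloomFilter:
    """Tiny illustrative bloom filter; real deployments would tune m and k."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k independent bit positions from sha256 of "i:item".
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def might_contain(self, item):
        # False means "definitely not present"; True means "maybe present".
        return all(self.bits[p] for p in self._positions(item))

# The server publishes a filter of all hashes that have comments;
# the client only queries the server when the filter says "maybe".
server_filter = BloomFilter()
server_filter.add(hashlib.sha256(b"https://example.com/article" + b"salt").hexdigest())

def should_query_server(url, salt=b"salt"):
    h = hashlib.sha256(url.encode() + salt).hexdigest()
    return server_filter.might_contain(h)
```

Because bloom filters can report false positives but never false negatives, most browsing sends no requests at all; only pages that (probably) have comments trigger a lookup.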
However I suppose that if the site does have comments, then you do have to make requests to the server to get them...
Still, I believe you're being overly pessimistic here. I think there may be some solutions to this. Maybe not perfect, but better. Let's say our social network "Ascenter" has become corrupt and is looking to blackmail its participants. What about this:
The design currently requires you to send a sha(URL+salt) to the server to look up comments. This prevents Ascenter from directly knowing what site the comments refer to, but the comments themselves will be a big clue. What if, to look at the comments, you have to decrypt them using sha(URL+salt2)? Ascenter will have no means to derive this key; it will only be able to determine how many comments have been placed and how large they are. That improves things a bit...
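A minimal sketch of that two-salt scheme (the names "Ascenter", salt, and salt2 come from the comment above; the XOR keystream below is a toy stand-in for illustration only — a real design would use an authenticated cipher such as AES-GCM):

```python
import hashlib

def lookup_key(url: str, salt: bytes) -> str:
    """What the client sends to the server: sha(URL+salt)."""
    return hashlib.sha256(url.encode() + salt).hexdigest()

def content_key(url: str, salt2: bytes) -> bytes:
    """Never sent to the server: sha(URL+salt2), used to encrypt comments."""
    return hashlib.sha256(url.encode() + salt2).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy sha256-counter keystream, for illustration only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

url = "https://example.com/some-page"
salt, salt2 = b"lookup-salt", b"content-salt"

# The client posts (server_key, ciphertext). The server can count and
# size the comments but cannot read them or recover the URL.
server_key = lookup_key(url, salt)
ciphertext = xor_stream(content_key(url, salt2), b"great article")
```

The key point is that both keys are derived from the URL on the client, so anyone who knows the URL (and the salts) can read the thread, while the server, which only ever sees sha(URL+salt), cannot.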
But Ascenter might be able to crawl the web to discover conversations. Particularly for salacious sites if it's looking for blackmail. So... What if the salts are the answer to that? If you had your own set of salts you could use them to create your own private groups, Ascenter would have no way to access that conversation. Or even figure out that they have occurred.
With the presence of public and private salts, what if the browser plugin itself could be configured with a blacklist of sites not to send requests for? You could still have private channel comments, but not public. I could see the community pre-generating a blacklist...
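That configuration could be as simple as a host check before any request goes out. A hypothetical sketch (the blacklist entries and salt values are made up): blacklisted hosts get no public lookup at all, but a user-configured private salt still works.

```python
from urllib.parse import urlparse

# Community pre-generated list of hosts never to query publicly.
PUBLIC_BLACKLIST = {"sensitive.example", "private-matters.example"}

# User-configured private group salts, keyed by host.
PRIVATE_SALTS = {"sensitive.example": b"my-group-salt"}

def request_plan(url: str):
    """Return the (channel, salt) lookups the plugin is allowed to make."""
    host = urlparse(url).hostname
    plans = []
    if host not in PUBLIC_BLACKLIST:
        plans.append(("public", b"public-salt"))
    if host in PRIVATE_SALTS:
        plans.append(("private", PRIVATE_SALTS[host]))
    return plans
```

So a blacklisted site with no private salt configured generates zero network traffic, which is exactly the property you want against a corrupt Ascenter.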
One last note I'd end on here is that the level of trust we are expecting right now is a far lower bar than the level of trust we give general social networks.
Consider what HackerNews could do if it went rogue like Ascenter. To blackmail you, all they would need to do is go to one of your old comments, rewrite it as something salacious, then blackmail you with it. Comments on HackerNews aren't signed, and aren't encrypted. We're quite vulnerable to them.
edit: sorry for the long post. This was a bit of a stream of consciousness.
TL;DR: The bloom filter limits the risk, and I think there are cryptographic solutions that reduce the level of user exposure to below that of current social networks.
The bloom filter wouldn't stop you from confirming folks gathered around any specific page that has content, and would have a fixed probability of leaking data even if there was no data there.
And that ignores the problem that you're not going to be able to sync the bloom filter in real time, so now you're going to need to have a very merge-friendly design for these annotations.
No, I'm not being pessimistic. This is just a candid analysis of the difficulties of doing this competently. If you'd like to do it incompetently, feel free.
Likely because the intended audience is not people who see it as a viable alternative or something worth even entertaining. The irony of the situation being that someone took the time to make their own platform for like-minded individuals, and people in other spaces who've been known to toe the "you're free to make your own platform" line get extraordinarily bent out of shape when people actually go and do just that.
The hilarious part is that by having the censorship in the first place, instead of just letting folks work it out amongst themselves, you increase the echo chamber factor, which you eventually have to come to terms with, because these people exist in the real world. The very act of technically enforced societal marginalization is in and of itself an "extremism" amplifier and polarization catalyst. What confuses me is why I feel like I'm the only one who regularly brings this up. It's not that hard of a realization to reach from first principles, especially if you've spent any time in your life as a social misfit.
Well, you don't see me make fun of Parler as much because they're honest about their intentions. Gab originally claimed they were about "freedom" so it seems pretty fair to me to call them out on moderation that is clearly political.
"Any service that guarantees no censorship is going to have the majority of its userbase be the runoff from other major websites"
Exactly, which is why you are wrong here:
"When voat was created, I 100% believed that they were not attempting to create extremist havens"
It is not as if the major websites are quick to censor their users. If anything they are too cautious and have only banned terrorists after widespread pressure and threats of advertiser boycotts. If you set up a platform that is open to the few people who were so extreme that even Reddit or Twitter banned them, then you are creating an extremist haven, no matter what language you use to describe your intentions.
One could argue that intentions don't really matter, either. "The Purpose Of A System Is What It Does" [1]. If your system ends up being a forum for a certain type of post, then that's kind of what it is, regardless of what you originally wanted it to be.
This is a useless hypothetical. We know what Gab was and why it was taken down.
If you want to know how a healthy alternative would have turned out, go looking for one. It almost certainly exists, there have been at least a dozen twitter competitors in the past and I know a few people who tried them out. (I can't remember their names, though.)
If they didn't intend it, they were being incredibly naive. If your defining characteristic is that you don't censor things that are banned on Reddit, then your site is only attractive to people who want to do things banned on Reddit.
Yes, any large no-censorship platform for humans will be swamped by Nazis.
You could avoid being large — small, high-trust groups work perfectly fine without censorship.
You could add more moderation (aka censorship), both platform-wide and within communities. Reddit seems to be heading in that direction.
You could avoid being a platform. Some sites are inherently platforms, but does every site need comments?
Or you could genetically engineer humanity into a kinder, better species. This would also be the way to make anarcho-communism work — the economic system with the greatest freedoms, but also the most susceptible to bad actors. The Culture series shows you a glimpse of what this future could be.
I've got to finish going through The Culture series. I thought it was a bold stroke to write a book series about a future where humans are domesticated by their own AI.
I think the "fully automated gay space luxury communism" depicted in the Culture series is the best future we can hope for. I wouldn't mind welcoming new robot overlords if it makes that possible.
>To be an open platform for free speech, no censorship.
Unfortunately this kind of rhetoric is frequently coded language meaning "Hey Nazis, we won't kick you off of our platform for threatening to shoot up a synagogue". I think people should still legally be able to create unmoderated platforms, but the majority of people won't participate in them because they quickly become cesspools of hatred and harassment. Gab is the newer, shinier tech startup version of this, but it's existed before in the forms of 4chan, 8chan, and probably other platforms that I'm not familiar with.
Maybe that's a sign you should pause. Sometimes it is okay to admit you don't know about a subject. It is okay to sit and listen instead of voicing an opinion.
Well... I mean really I was not trying to say Gab was innocent. I really didn't know until others provided some helpful context. I was speaking more to the original topic of this HN submission.
I never know how to handle topic shifts in the conversation trees in Reddit, Hacker News, and others.
Do you speak only to the comment you're responding to? Do you speak as if that comment is in the context of the submission? Do you keep the context as on-topic as possible? Do you indicate which of the three you're doing when you start your comment? I feel like this is an internet rule I have not sussed out on my own. And I can see it's caused some trouble for others. Sorry.
The line of thinking seems to be: "Gab's target demographic is users that have been banned from the mainstream platforms for hate speech and conspiracies. Therefore by targeting those users, it's 'purpose built' for hate speech and conspiracies.". My question is, can't you apply this to basically anything that's "free speech"? The mechanism seems to be that most people are content with the mainstream platforms, and therefore the only people who go for the "free speech" platforms end up being the people who were banned from the mainstream platforms. By that logic, is tor "purpose built" for criminals, since basically only people with stuff to hide use it? How is it different than gab?
If your site has been overrun by ultra-far-right types for basically its entire existence and you do nothing to mitigate this then you're very clearly complicit.
But you're right, most of the "no moderation, anything goes" online communities tend to be overrun by extremists but it's easy to see why: you only need a minority of very dedicated trolls posting outrageous content 24/7 to ruin a community. It takes time and effort to post insightful content and analysis, meanwhile you can throw shit at the walls at a large scale very easily.
That's why it's pretty obvious to me that if you want to actually have interesting discussions online and a plurality of opinions you need moderation, otherwise the low effort bullies take over.
>If your site has been overrun by ultra-far-right types for basically its entire existence and you do nothing to mitigate this then you're very clearly complicit.
How do you mitigate without running counter to the original idea of "free speech"?
It's not at all clear what "the original idea of 'free speech'" even was. In the US, the wording of the First Amendment is quite vague.
I think people have a mental model that social media sites and apps are like a communication medium. They are a neutral carrier that transmits an idea X from person A to B. The site itself is not "tainted" by the content of X or get involved in the choices of A and B.
But a more accurate model is that they are amplifiers and selectors. The algorithms and ML models at the heart of every social media app often determine who B is. A is casting X out into the aether and the site itself uses its own code to select the set of Bs that will receive it—both who they are and how large that set is. From that perspective, I think it is fair that apps take greater responsibility for the content they host.
Here's an analogy that might help:
Consider a typical print shop. You show up with your pamphlet, pay them some money, and they hand you back a stack of copies. Then you go out and distribute them. The print shop doesn't care what your pamphlet says and I think is free from much moral obligation to care.
Now consider a different print shop. You drop off your pamphlet and give them some money. Lots of other people do. Then the print shop itself decides how many copies to make for each pamphlet. Then it also decides itself which street corners to leave which pamphlets on. That sounds an awful lot to me like they have a lot of responsibility over the content of those pamphlets.
The latter is much closer to how most social media apps behave today.
but does gab use "algorithms and ML" to determine what gets shown? Doesn't it use an upvote/downvote model like hn or reddit? Is a site that uses an "order by upvotes" ranking system closer to the first print shop or the second? What about bulletin boards that rank by last post?
> Doesn't it use an upvote/downvote model like hn or reddit?
Is that really any different? If your print shop counts the user-submitted tallies on a chalkboard to decide which pamphlets to print, the print shop is still choosing to use that rule to decide what to print.
Because then you can't make any sort of public facing site with UGC without being burdened with the responsibility of what's being posted. Come to think of it, the distinction is entirely arbitrary. Run a bulletin board that sorts by last reply? You are responsible for the user content. Run a mailing list that forwards every message to the end user, and the end-user implements the same sort by default? You're off the hook, even though the end result is the same.
> Because then you can't make any sort of public facing site with UGC without being burdened with the responsibility of what's being posted.
Now you've got it.
> Run a bulletin board that sorts by last reply?
There is maybe an argument that your level of responsibility somewhat depends on the complexity of the algorithm you use to decide how much amplification to apply to any given piece of content.
I don't think responsibility is black and white.
> Run a mailing list that forwards every message to the end user, and the end-user implements the same sort by default? You're off the hook, even though the end result is the same.
The end result is the same but the agency is not. The end-user chose to apply that sorting, so they have accepted some of the responsibility for what they consume.
If I shoot someone with a gun, I'm totally responsible. If I give you the gun and you shoot them, you are responsible. Maybe I still bear some responsibility for giving you the gun. But you certainly have taken on more responsibility than you would have if I shot them.
Here's maybe another way to think about it. If you're choosing to run a bulletin board, presumably you're doing so to get something out of it for yourself. Is it fair for you to receive that benefit while taking no responsibility for anything that happens on it?
Is that a net benefit for society? The last thing I want is for google (or other tech giants) to be even more trigger-happy about banning people because they view you as a high risk user. "decentralizing the web" isn't a good excuse, as most people don't have to know how to set up their own hosting, and only shifts the liability from the host to the search engine (because you have to find the content somehow).
>The end-user chose to apply that sorting, so they have accepted some of the responsibility for what they consume.
Don't we already have that? On reddit you can sort by "hot", "new", "rising", "controversial", and "top". On gab you can sort by "hot" and "top". I'm not sure how that would change things, other than forcing yet another modal that users have to click through.
>If you're choosing to run a bulletin board, presumably you're doing so to get something out of it for yourself. Is it fair for you to receive that benefit while taking no responsibility for anything that happens on it?
Not every website has to be a for-profit venture. Many (small) forums run essentially on donations, or are low maintenance side projects attached to a bigger project.
By having more than a middle school understanding of what "free speech" is about. There is no "original idea" of free speech, there never has been, it is a concept that is used to refer to a wide variety of legal frameworks across different times and places. In Germany a person's free speech rights do not include holocaust denial. For most of the history of the United States free speech has been more limited than it is today; it was not all that long ago that we had the "equal time" rule that required media outlets to host both liberal and conservative commentary. You generally do not have a right to organize an insurrection against any government and whining about free speech will not convince anyone otherwise.
>it was not all that long ago that we had the "equal time" rule that required media outlets to host both liberal and conservative commentary
That only ever applied to broadcast media (and maybe only to prime-time TV). Publishers of the written word have never been required by the US government to grant equal time.
>For most of the history of the United States free speech has been more limited than it is today
I don't know what you could mean by that unless you are referring to the fact that before the internet became mainstream, you had to own a printing press or something like that to reach a mass audience.
A century ago in the United States the phrase "shouting fire in a crowded theater" was used in a Supreme Court ruling upholding the censorship of anti-draft activists during World War I, and within living memory the United States had various laws censoring pornographic photos and videos. There was even a time when it was illegal to have the Post Office carry written information about contraception:
In case anyone tries to claim that the founders intended for the most expansive possible understanding of freedom of speech, the fact is that one of the earliest laws passed in the United States was a law that censored criticisms of the Federal government (in an attempt to crack down on foreign misinformation campaigns):
>In case anyone tries to claim that the founders intended for the most expansive possible understanding of freedom of speech, the fact is that one of the earliest laws passed in the United States was a law that censored criticisms of the Federal government (in an attempt to crack down on foreign misinformation campaigns):
I'm not sure whether that proves your point. The Wikipedia article says that it was controversial, caused the Federalist party to lose the following election, and ultimately expired after 4 years.
The fact that the law was passed by the same men who ratified the constitution says a lot about their concept of freedom of speech, even if it was controversial and short lived. If the founders really meant for free speech to be as expansive as it is today it is hard to see how such a law could have been passed in the first place.
>The fact that the law was passed by the same men who ratified the constitution says a lot about their concept of freedom of speech
You can also argue that it was defeated by the same men who ratified the constitution, and that the "free speech" side ultimately prevailed, therefore they really did mean free speech to be that expansive.
OK, but the topic is hate speech in particular, and there has never been a time when hate speech modulo calls to violent action (and possibly calls on landlords or employers to discriminate) has been unlawful in the US.
>Hate speech in the United States is not regulated due to the robust right to free speech found in the American Constitution. The U.S. Supreme Court has repeatedly ruled that hate speech is legally protected free speech under the First Amendment. The most recent Supreme Court case on the issue was in 2017, when the justices unanimously reaffirmed that there is effectively no "hate speech" exception to the free speech rights protected by the First Amendment.
Hate speech in the United States is not regulated by any government. Google is considered by the courts to be non-governmental, so the courts will not prevent Google from regulating hate speech on the platforms it owns as Google sees fit.
Just decide for yourself. Literally the first post I get at the moment comes from "QAnon and the Great Awakening" and is in support of that right-wing vigilante shooter in Kenosha, calling the district attorney who's prosecuting him "evil".
The next few posts I see mention either "arresting the dems", some anti-vax stuff and so many crypto-fascist dogwhistles that I'm getting tinnitus.
If you need more proof I'll let you dig into this.
From my perspective we still need other information, do they also ban other apps that are purpose-built for hate speech? Was it just this app because of the controversy over banned Twitter users going to it?
and why does that matter? They later switched to ActivityPub/Mastodon for a bit. They still run a Mastodon fork, but defederated back in May.
Gab was also banned explicitly by URL by many app makers and in many ActivityPub libraries (you can find checks where an app hashes the URL and compares it to known Gab URLs).
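That kind of check can be illustrated like this (the blocked domain and its hash here are hypothetical, not taken from any real app): the app ships hashes rather than plaintext domain names, then hashes each instance's domain and compares.

```python
import hashlib

# Hypothetical blocklist shipped as sha256 hashes so the plaintext
# domain names never appear in the source.
BLOCKED_DOMAIN_HASHES = {
    hashlib.sha256(b"blocked.example").hexdigest(),
}

def is_blocked(domain: str) -> bool:
    # Normalize case before hashing so "Blocked.Example" still matches.
    return hashlib.sha256(domain.lower().encode()).hexdigest() in BLOCKED_DOMAIN_HASHES
```

Shipping hashes rather than names keeps the list out of casual greps, though anyone can still confirm a suspected domain by hashing it themselves.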
> a purpose-built platform for conspiracies and hate speech
We're in a bizarre world right now where you can label any opinion you don't agree with as hate speech, dehumanize police and call every conservative a literal Nazi and that's all okay now for some reason.
At some point we have to remember that historically, a lot of people who thought they were right, about slavery, homosexuality, war, abortion, polygamy, and other controversial topics, eventually came down on the wrong side of history.
The attack on speech and ideas has never been more profound. If you don't like an idea, you don't have to listen, but people are going to continue to go to the fringes whenever their voices are silenced. That will create more extreme platforms and more extremism, not less.
Ah, the delicious irony of attacking certain types of speech and ideas as "attacks on speech and ideas". What they are doing and what you are doing are no different; it's all just politics. There's not even anything new about what's happening today. How do you think people's minds were changed on all those "controversial" topics you cite? A whole lot of social pressure. You don't get to make lofty statements about free speech, and then turn around and grumble that other people's speech somehow isn't "playing fair".
There is Holocaust denial, there are calls to kill Black people, and there are rape fantasies about female politicians and movie stars. There are literal calls for the rise of the white race to eradicate everyone else on the front page of Gab daily. If you want to defend shit like this on the merits of free speech, the probability that you are in fact a white nationalist is not far off.
And history is plastered with censorship; the world was never more free than it is now. You are just repeating non-facts without doing even a hint of research.