It's pretty simple, although it may not be obvious. There are two different views about "what freedom/democracy is" that are presently doing battle in America, at least among the intelligentsia.
One, the worldview embedded in your comment, is that freedom is about the limits of discourse. Under this system, limiting discourse is inherently undemocratic, so if Facebook "censor(s) things they disagree with" this is bad, and we don't need any analysis of what specifically the discourse was to make our determination, which is why your comment abstracts over whatever the content might have been. Full disclosure, I don't fully understand this worldview, because it seems to enforce a particular discourse on Mark Zuckerberg, which seems a bit contradictory to me, although I assume there must be some philosophical way to resolve this objection.
The other worldview is that certain kinds of discourse are inherently against freedom. For example, misinformation that might persuade voters into voting based on a false premise undermines a democratic system. Under this worldview, the question of what the discourse is, is the whole analysis, and we don't need any analysis of what Zuckerberg "disagrees with" to decide whether it's right to limit it. Obviously the people who advocate for removing a discourse are people who don't like it, but that's separate from the analysis of whether the discourse itself is a force against democracy.
These ideas are at cross purposes, and success of the one is often at the expense of the other. For this reason we seem reluctant to just lay out the underlying value systems the way I did here, which is unfortunate, because I think the fundamental disagreement is really important to discuss.
You're doing a poor job of explaining the first worldview. Which is okay; you've acknowledged that you have trouble understanding it.
This worldview has been called the "liberal consensus". What it holds is that the power to determine that a discourse is inherently against freedom is itself corrupting: give an authority that power, and it will inevitably be used as a weapon.
Goalposts will shift. We will, hmm, go from suspending accounts which promote the theory that COVID-19 is caused by 5G towers, to suspending the account of a virologist who issued a preprint suggesting that gain-of-function mutations in SARS2 point to a laboratory origin.
Look, both of those things might be false, but surely we can agree that if so they are false in a different way.
There were many experiments with official truth in the 20th century. The general consensus was that they were unpleasant to live under and did a poor job of actually separating truth from falsehood. Many of us don't care to repeat those experiments; the effect size was large.
If we had an oracle of truth, then censorship of falsehood would be easy and practical. We also wouldn't need democracy at all, we could just ask the oracle of truth what to do, and do it. But we don't have any such creature.
I think there's a non-slippery-slope way to position the worldview you're describing that might be more palatable to those whose values allow limiting discourse based on its content.
I believe the hardcore freedom of speech view has a few important underlying assumptions:
1) More information is better, and shining a light on something is better than trying to selectively hide it, because eventually the real truth comes out through persistent discourse. This is only really possible with the maximum amount of information, and especially all viewpoints laid out on the table with the least amount of obstruction.
2) People are broadly able to parse out untruths, or irrelevant positioning, or anything that is of low quality, and they will not be persuaded by it. This isn't true of everyone, but it is true of enough people; that's an inherent assumption of democracy. We live (or want to live) in a free market of ideas, where ideas can compete, and the market (what people are persuaded by) will be broadly rational and land on the best position in aggregate, even if some people are persuaded by bad or malicious arguments (see the sketch after this list).
3) Limiting the visibility of any information detracts from the overall quality of discourse because it robs people of the ability to improve their thinking. It negates the possibility of refutation, because the untruth is hidden. Giving people all information, including misleading information, in the long-run leads to a population that can have better discourse and evaluation of all the information thrown their way.
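Assumption 2 is essentially the Condorcet jury theorem. A minimal simulation (Python, with hypothetical numbers; none of this is from the original comment) of what "broadly rational in aggregate" buys you, and how fast it flips once the average person dips below 50% accuracy on a question:

    # Condorcet-style sketch: majority vote among n people, each
    # independently "right" with probability p. Numbers are hypothetical.
    import random

    def majority_is_right(n, p, trials=2_000):
        wins = 0
        for _ in range(trials):
            right_votes = sum(random.random() < p for _ in range(n))
            wins += right_votes > n / 2
        return wins / trials

    for p in (0.55, 0.50, 0.45):
        print(p, majority_is_right(n=1001, p=p))

With p = 0.55 the majority is right almost every time; at p = 0.45 it is wrong almost every time. Aggregation amplifies whatever bias individuals have, in either direction, which is exactly where the disagreement below bites.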
---
I think the above puts the ideas in the best possible light. However, I disagree with enough of these assumptions that I can't take this worldview myself. My main counter to these ideas is that, similar to (pure) free market proponents, it takes on a very idealistic view of rationality that doesn't match real behavior. In practice, people have to take shortcuts to understand things—it's inherent in human cognition—and those shortcuts can be exploited. I don't believe this is something we can grow past on a large societal scale, because it's embedded in how we think. To improve the quality of discourse, we have to explicitly protect against these biases. There are a whole host of difficulties there, too, but I think they are more surmountable than all the downsides of allowing deliberate manipulation and misinformation to spread broadly.
It seems to me that you're still ignoring the tail risk of authoritarian control of the concept of truth, namely, gulags.
It's a justified fear, since it's happened repeatedly in living memory, and is happening still: you're welcome to go hand out pamphlets about the June Fourth Incident on Tiananmen Square if you don't believe me.
If you want to discount that risk, ignoring it or glossing it as some sort of slippery slope argument, that's your business. I won't, and we find ourselves on opposite sides of the debate for that reason.
To be clear, I don't think it's a slippery slope, because I don't think it's an accidental or avoidable consequence of allowing authoritarian control of the terms of discourse. I think it's the expected outcome, and that people who think that end state can be avoided are being used by people who crave that power over others.
Well, we’re talking about Facebook, so I’m not worried about gulags. At least from Facebook itself. From the people who get their information on Facebook, though, maybe I should be.
I’m positing that in the long run it will be easier and less detrimental to society to establish some kind of editorial guidelines around misinformation and hold platforms accountable to them than it will be to educate enough people quickly enough to vet misinformation for themselves, especially as misinformation becomes increasingly hard to spot. I say this knowing fully that defining what misinformation is will be extremely hard, and that you’re putting a lot of power in the hands of whatever person or group does that. Every decision is a trade-off where you choose which benefits you think are best and which problems you think you can solve best, and try to balance them. I think we can better solve the problem of effectively limiting power abuse than we can the problem of limiting the broad abuse of inherent human biases.
Edit: I should also note, I agree with a handful of the assumptions I listed above, specifically that democracies are built on the idea that people can broadly reach the right answer together. I still believe that. But I also believe in clearing out the brush and debris in the way so the crowd can actually use that superpower well.
I think I'm generally on your side of the debate, but are we really talking about authoritarian control of public discourse here? I thought we were talking about Facebook, a private corporation, deciding whether they want to allow their platform to be used to spread false and potentially harmful information.
Facebook is powerful, but it's not an authoritarian state. I'm not sure I see much danger in Facebook weighing in on different kinds of falsehood (i.e., your insight that things can be false in different ways). It seems more akin to a journal having standards for what it publishes, or HackerNews hiding or removing egregiously bad comments, than it does to the Chinese government jailing dissidents.
I see so much space between a journal and the CCP that Facebook is more like neither than like either.
The main historical analogy to Facebook which seems relevant is the phone company. Either Facebook is so large and influential that misinformation on Facebook can threaten our political process, or it isn't, and clearly it's the former.
When Ma Bell was the only game in town, they weren't allowed to deny use of the network to the Communist Party, because that was clearly a violation of the concept of free speech.
I don't think the difference between a natural monopoly and a state monopoly has much relevance here; what does matter is that if you ban, say, a political party from Facebook, it absolutely cripples their ability to participate in the democratic process. That's too much power for me to simply shrug and say "their house, their rules".
It's a bad situation, and we should get out of it. And yes, Facebook itself isn't the Ministry of Truth, and doesn't have jackbooted security waiting in the wings.
But I firmly believe that the parties who are pushing for control over social media discourse absolutely want that end state, they are driven by power, and the only way to fight that outcome is to resist it early and often. Denying a victory here will spare expensive battles down the line.
As we are discovering, there is a similar tail risk to allowing people to lie. Facebook goes a step further and targets the people especially susceptible to a specific lie with more of the same type of lie. TBH my problem with Facebook is that it is a bullshit funnel into an echo chamber. If they removed those algos and showed people a varying mix of opinions and positions, the problem would largely go away.
Agreed. I stopped using Facebook when they stopped providing a chronological timeline of all my friends' posts. That was actually a useful service, but I guess it didn't drive as much "engagement".
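For what it's worth, the design difference being lamented here is tiny in code terms. A minimal sketch (hypothetical field names; this is not Facebook's actual ranking system) of the two feed policies:

    # Two orderings over the same posts; field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        timestamp: float             # seconds since epoch
        predicted_engagement: float  # model score in [0, 1]

    def chronological(posts):
        # "All my friends' posts, newest first" -- no editorial input.
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)

    def engagement_ranked(posts):
        # Optimizes for reactions and clicks, which in practice tends to
        # reward outrage and confirmation -- the "bullshit funnel" above.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

The entire "echo chamber" complaint is about which sort key gets used, plus the model behind predicted_engagement.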
Aren't you ignoring the fact that actual authoritarians are using the lack of moderation on FB to spread misinformation, incite populism, and take control? This has occurred in India, Sri Lanka, and Myanmar, and I would argue is currently occurring in the US.
I also used to be a free speech absolutist, but I'm not anymore since I realized that it doesn't actually result in the kind of educated discourse and debate where truth prevails (as libertarians seem to think it does) - it results in 8chan and QAnon. Human beings aren't rational, and are rife with cognitive biases towards bigotry and cruelty that can easily be hijacked by populists.
> I think there's a non-slippery-slope way to position the worldview (...)
I wouldn't discount the "slippery-slope way", because a slippery slope is not a fallacy when the slope is, in fact, slippery (and demonstrably so).
> My main counter to these ideas is that, similar to (pure) free market proponents, it takes on a very idealistic view of rationality that doesn't match real behavior.
That cuts both ways, though. Some vocal proponents of the "there's bad speech" view weaponize the Paradox of Tolerance argument, way past the point it's rationally applicable, and use it to beat people into submission. It's a very big problem in well-known online communities (including Facebook, Reddit and HN). For now, the effects are mostly limited to being called an -ist or -obe if you don't agree with the maximally extremist view on some issues, and every now and then someone loses a job due to a Twitter mob. But I wouldn't want to live in a country ruled by the same principles.
Note that the side effect of extreme policing of wrongthink isn't just that the bad people go underground instead of being "disinfected by light". It's also extremely polarizing, because those on the fence now have to pick a side or get accused of being insufficiently rightthinking - and some of those will adopt the wrongthink, at the very least because the wrongthinkers are nice to them. The historical equivalent of that is running your country by calling everyone not conspicuously patriotic enough a traitor and executing them; at some point you'll find that a chunk of your population actually defects to the enemy just to save their lives.
The way I see it, I'm all for maximizing accuracy and precision of beliefs and opinions, which correlates with rooting out disinformation. As for wrongthink - I believe the Paradox of Tolerance is recursive. That is, if in the process of rooting out the intolerant you start causing collateral damage among the innocent, you become the intolerant that should be rooted out.
> Some vocal proponents of the "there's bad speech" view weaponize the Paradox of Tolerance argument, way past the point it's rationally applicable, and use it to beat people into submission.
Indeed. Here's Popper's original statement of the paradox of tolerance:
Less well known [than other paradoxes Popper discusses] is the paradox of tolerance: Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them.—In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.
If anything, it is rather the censors that are "who are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive", and recent "peaceful protests" are exactly an "answer to arguments by the use of their fists or pistols".
The hardcore freedom of speech view ignores the societal transaction costs that come from dealing with lies by people who don't give a f... about truth, but only about power.
I don’t agree that the “hardcore freedom of speech” view requires or implies the assumptions you spell out.
They could all be false (e.g., more information is not better, people are frequently unable to parse out untruths, and limiting information actually improves discourse quality) and it would still be vastly more harmful to allow private institutions to wield the power of defining acceptable discourse than to permit all discourse.
The key issue is what the endowment of discourse-approval power grants to that institution. The issue is not whether discourse is true, productive or high-quality.
A bunch of nuts spouting off conspiracy theories on Facebook is low quality, likely to mislead people, and does not add useful information.
But allowing private entities to have the power to make policing decisions about it is a dramatically greater evil.
In what different way? Haven't both been scientifically refuted by reputable scientists? Just because one sounded more plausible doesn't mean it's not false.
To me, neither needs to be banned, because I believe and agree with your premise, but I don't agree there's a difference.
A bit off topic but interesting nonetheless, I'll take a crack at it.
I see an important distinction between pseudoscientific crankery and just bad/refuted science. I've read some of the 5G corona stuff; it's batshit insane, like I'm fairly sure the primary sources on it have serious mental illnesses.
Any physicist with a mailing address can tell you all about cranks. Refuting cranks is not part of their job; if they did that, they wouldn't have time for anything else.
I don't think the gain-of-function paper has been comprehensively refuted, yet. It's certainly been dismissed, dissed, and evidence is strongly pointing in the direction of it being refuted.
But it was written by a credentialed virologist, and more importantly, it is in the process of being submitted through channels.
Will it survive peer review and be published? Probably not, and that's what peer review is for. As far as I'm able to determine, it's just a bad paper, but it's real science, not a parody of science.
Banning such people sends the wrong message. We rationally should update our priors in the direction of a conspiracy to suppress the truth.
I'm no virologist, but I've spent enough time in genetics labs to be able to follow along with their conversations. What I can't do is tell the difference between a shadowy ChiCom conspiracy to suppress the truth and overzealous enforcement of a vague mandate to ban disinformation about the novel coronavirus.
I mean, I know some people who work for Twitter; it's the latter. You'll never go broke presuming institutional dysfunction as the cause of any malaise on the birdsite.
And yeah, I don't think the 5G whackjobs should be banned either, but for a different reason: they'll just go find another forum where people like me won't be around to tell them how full of shit they are. The only thing worse than a weird cult of people who believe schizo stuff is a weird isolated cult of people who believe schizo stuff.
Determining how discourse occurs, its rules, and the dividing line between correct and incorrect speech is built into the democratic process, not some holy "rules of the game" handed down by God. Your misguided understanding of "liberal consensus" is actually a fundamentally authoritarian position, a doctrinal decree about what can and cannot be discussed.
The irony of course is that the "experiments with official truth in the 20th century" in the US were often defended on the grounds of protecting "liberal consensus" from opposing viewpoints. See the statement of principles of the Motion Picture Alliance for the Preservation of American Ideals, the trade group responsible for the Hollywood Blacklist:
"We believe in, and like, the American way of life: the liberty and freedom which generations before us have fought to create and preserve; the freedom to speak, to think, to live, to worship, to work, and to govern ourselves as individuals, as free men; the right to succeed or fail as free men, according to the measure of our ability and our strength.
Believing in these things, we find ourselves in sharp revolt against a rising tide of communism, fascism, and kindred beliefs, that seek by subversive means to undermine and change this way of life; groups that have forfeited their right to exist in this country of ours, because they seek to achieve their change by means other than the vested procedure of the ballot and to deny the right of the majority opinion of the people to rule."
> The Devil can indeed cite Scripture for his purposes.
My point exactly. Terms such as "liberal consensus" don't exist in a vacuum, and are often used as an excuse for intervention rather than a protection against intervention.
> So, without knowing anything about me, would you guess I'm pro-blacklist, or anti-blacklist?
Do you think the Trump administration should outlaw certain subjects from being taught, such as Critical Theory? You surely would agree that the stranglehold certain departments have on academic discourse makes free speech impossible, correct?
I freely concede that I'm doing a poor job of defending it, because it isn't my view to defend. The part of it that I don't understand is limited to how it is self-consistent, which is not something I see addressed in your reply, although I could be mistaken.
But I think we are still trading talking points instead of getting to the core. I understand and I share the concern about an important voice being silenced. But I think the mechanism of that silence in the current environment is probably related to loudness of noise rather than quietness of signal. Obviously, it may be different in different situations and times in history. But I think if our goal is to hear quiet voices, we ought to consider both sources of the issue pretty seriously. A philosophy that only considers the problem of transmitting and ignores the problem of receiving through a noise floor seems an incomplete troubleshooting procedure to me.
Whereas you perceive a threat about the slide into censorship, I perceive a threat about the slide into unrest and violence. In reality, it seems likely we will get both: one of them first and the other following as a reaction. So I think our interests would really be best served by hammering out a workable compromise so as to hang together rather than separately.
I agree that we ought to return to the "liberal discourse", but we may perceive its makeup differently. Limitations on discourse have always been part and parcel of the institution. Some limitations have been very harmful. Others, like 'you can't threaten not to leave when you lose an election', have been very helpful. Liberal discourse is presently threatened because we have abandoned that sort of polite limitation, and it is by reintroducing it that we can recover the institution.
> If we had an oracle of truth, then censorship of falsehood would be easy and practical. We also wouldn't need democracy at all, we could just ask the oracle of truth what to do, and do it. But we don't have any such creature.
This is a bit of a strawman. I do empathize with the skepticism of authority in our present climate. However, you rely on some method to determine whether a person is doing censorship in the same way I rely on a method to determine if a person is doing misinformation. I expect it is a similar method, which is to say, imperfectly, based on values present in our historical age, individual biases, and so on. Which is the "same sort of stuff" that democracy otherwise uses to make any of its decisions.
I expect this dispute arises because, in your worldview, limiting the discourse is very exceptional, and doing it properly should require an exceptional method. Whereas from my perspective, laws, elections, jail, and wars are very serious, and we have processes to decide those.
> Whereas you perceive a threat about the slide into censorship, I perceive a threat about the slide into unrest and violence.
I certainly only addressed the former, but I'm also concerned about the latter.
It seems to me that cooler heads are more likely to prevail, if hotheads remain on the same platform as those cooler heads. Not going to go full horseshoe theory here, but part of what makes the far left and far right "far" is their willingness to engage in violent rhetoric and follow it up with action.
Garden-variety guns and beef conservatives are more likely to speak the language of actual white supremacists, but they aren't going to go onto Stormfront to do it.
Extremists are angry people, with a story about oppression: either the shadow globalist cabal is trying to replace them, or the evil capitalists are trying to grind them under their boot. So it's a bad idea to actually go in and oppress them. They will absolutely find a forum to air their grievances and plot revenge, and moderates will no longer be a part of the conversation.
>Goalposts will shift. We will, hmm, go from suspending accounts which promote the theory that COVID-19 is caused by 5G towers, to suspending the account of a virologist who issued a preprint suggesting that gain-of-function mutations in SARS2 point to a laboratory origin.
Why do you believe that only the second viewpoint will suffer from this slippery slope? We have also seen similar examples in the first worldview. A conversation can quickly go from being against illegal immigrants, to being against all immigrants, to being against a specific race of immigrants, to genocide of that race. We have already seen this laissez-faire approach from Facebook help contribute to genocide in Myanmar.
In fairness, Facebook's culpability in the Myanmar incident stems from launching in a language for which they had no moderators or AI capable of understanding it. Further, much of the info was spread via memes; while extracting text from images is pretty good now, it wasn't always this way.
They were repeatedly warned by local groups, international NGOs and the US State Department that the speech was inciting violence. They refused to do anything and it escalated into a pogrom.
Not defending FB here - it looks like they have acknowledged their faults in this issue specifically.
That said, this argument structure sounds a lot like "US leadership was warned about the attacks on Pearl Harbor". It looks like FB under reacted to these warnings, probably because they didn't realize how bad the outcome would be. How can info/escalations be presented so as to break out of the noise? (I'm assuming here that FB also has been warned of a lot of really bad things that never came to pass, which isn't something we can know - but it's an interesting thought experiment.)
That said, this argument structure sounds a lot like "US leadership was warned about the attacks on Pearl Harbor". It looks like FB underreacted to these warnings, probably because they didn't realize how bad the outcome would be. How can info/escalations be presented so as to break out of the noise? (I'm assuming here that FB has also been warned of a lot of really bad things that never came to pass, which isn't something we can know - but it's an interesting thought experiment.) What is the expectation in terms of separating the signal from the noise? How can the critical factors be identified ahead of time? Was it foreseeable that the targeted hate speech would turn into violence? What level of reaction is appropriate, given the uncertainty of hate speech -> violence?
Apologies for the brain dump - not expecting answers to all of them. And not defending FB here. I just think these types of questions are very interesting (plus I just read Superforecasting, which examines similar decision making w/r/t the decision to kill bin Laden).
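One way to make the signal-vs-noise question concrete is base rates. A toy Bayes calculation in Python; every number is hypothetical, since we don't know FB's actual warning volume or hit rate:

    # Toy base-rate arithmetic for "warning -> real violence".
    # All numbers are hypothetical illustrations.
    p_real = 1 / 200           # prior: a given warning reflects a real threat
    p_flag_given_real = 0.90   # review process catches 90% of real threats
    p_flag_given_noise = 0.10  # but also flags 10% of harmless reports

    p_flag = p_flag_given_real * p_real + p_flag_given_noise * (1 - p_real)
    p_real_given_flag = p_flag_given_real * p_real / p_flag
    print(f"{p_real_given_flag:.1%}")  # ~4.3%

Under these made-up numbers, even a decent review process leaves roughly 95% of flagged warnings false, which is one mundane mechanism by which an organization underreacts to the warning that turns out to matter.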
I think you're mischaracterizing the classical liberal consensus. That was best represented by John Stuart Mill, and his main point was not that censorship will be used as a weapon -- it's simply that censorship infringes on liberty, and that the best cure for bad speech is more speech.
And the idea that the "general consensus" is against all forms of censorship is quite false. In America it is, but in Europe and other countries censorship of hate speech (e.g. racism, Nazism, etc.) is quite accepted as part of the general consensus.
And even in the US, "yelling fire in a crowded movie theater" isn't protected either. And arguably, spreading blatant viral lies on social media close to an election is akin to yelling fire in a crowded movie theater, since the national consequences could be so dire.
There are many intelligent arguments to be made that censorship of speech that is either a) primarily hate-directed rather than information-directed, or b) outrageously false but capable of swinging an election, could be outlawed, and neither of these would be incompatible with modern-day political liberalism, which is more commonly called "social democracy" to distinguish it from the classical liberalism that Mill did so much to defend.
And the idea that this would somehow depend on an "oracle of truth" is nonsense. Courts judge things like libel and defamation cases all the time. Sure, there are gray cases that could go either way, but drawing lines in gray areas is what courts have done ever since they existed in the first place. Holding Facebook moderators ultimately responsible to judges, for example, isn't inherently difficult to do if we wanted to.
> And even in the US, "yelling fire in a crowded movie theater" isn't protected either.
Well, if you believe tangential dicta that was grounded in no preexisting law offered in a since-overturned case allowing the repression of core political speech, sure...
If you're feeling pedantic, perhaps you can replace the phrase with "yelling things to incite an imminent lawless action" (to crib from Wikipedia's summarization of Brandenburg) any time anyone ever says it. Does that work for you? Because the underlying point remains: that there are limits to free speech.
> It's a widely understood example that is perfectly fine to use
It is a misquote (leaving out "falsely", a modifier which is key to the meaning of the original) of a statement that was inaccurate as a description of the state of the law before the case it appeared in was decided, of the state of the law once the case was decided (being dicta, it had no binding effect), and of the current state of the law; and it comes from a decision now widely recognized as anathema to the central protection of the First Amendment.
It is, almost literally, the worst example that you could use.
> It is, almost literally, the worst example that you could use.
But it's the one everybody knows. Society has chosen it as the common term for the concept by now. Its original source or accuracy is entirely irrelevant -- in the same way it would be missing the point to complain that "black holes" aren't technically black.
After all, we're not having a discussion between lawyers about the intricacies of US free speech law. It's just a phrase for referring to the general concept of rights not being absolute.
The ACLU was founded after World War I to protect the right of Communists to say what they wanted. The ACLU fought to defend the free speech of KKK members and Nazis. The phrase "I disagree with you but I defend to the death your right to say it" is unique to America, and was actually something fought for until recently, when even the ACLU caved.
>spreading blatant viral lies on social media
You even admit this is arguable. You're assuming people are morons who need big brother to help them out - you don't really believe in democratic principles if this is your stance. The enforcement of punishing such lies is inconsistent as well. One needs only to look at the blatant left wing bias of Twitter and Facebook, where conservatives are banned for expressing opinions whereas lies told by media outlets (with small corrections added days later) are left up and not punished at all.
>For example, misinformation that might persuade voters into voting based on a false premise undermines a democratic system.
So, what's different now vs. when there were paid political campaigns and advertising on TV, radio, magazines, etc.?
Partisan political ads have never been a source of reliable information as long as I've been alive.
I'm curious as to what elections you've participated in within your lifetime that weren't full of misinformation, or hell, even an election where the candidate that won kept their word on everything they said.
My problem with this all is that it feels engineered and like an overreaction.
Misinformation has existed as long as I've been alive, in pretty easily accessible forms. There have always been tabloids next to checkout stands, bullshit news, ads pretending to be factual, and mountains of garbage info heaped onto people, and the same people that believed it then believe it now.
None of this is new and the only thing the internet changed about it all is now we can hear about whatever nonsense Joe Blow believes in.
I have a problem with it all because what people call 'misinformation' isn't always such; often it's just 'disagreeable information'.
Many of the things I've seen labelled as misinformation over the years, not your examples in particular, but in general, aren't even things with an objective correctness to them.
The whole second viewpoint relies on the idea that there is a morally superior group of people out there who know the correct ideas, and that everyone would be better off if we just listened to them, since any other ideas are just misinformation against the 'correct ideas'.
Again, this reminds me very much of the way things were when the church ran things. Just replace God and his commandments with the correct world view and beliefs.
This is very much the idea behind wrongthink and thoughtcrimes in 1984: that holding a nonconforming belief makes one guilty and requires them to be punished by those that believe 'the correct thing'.
In the end, it all comes down to the idea that one group of people has the moral authority to decide what's right for everyone and should be allowed to crush any dissenting opinion.
And this, yes, I have a huge problem with. It's no different than what any other oppressive dictators have done to crush dissent.
Your point of view is intrinsically liberal, and the latter point of view plays the victim and assumes people can't evolve to gain the intelligence needed to not fall for misinformation. You intrinsically think there are people who can't make decisions for themselves. Sounds like control.
The absolute "freedom is simple, and restrictions are all bad" position is nonsense. Ignoring that there are people who would use their freedom to infringe on others is naive and childish.
If everyone has a gun, we aren't all safer, we're just all each at the mercy of whoever decides to pull the trigger first.
>If everyone has a gun, we aren't all safer, we're just all each at the mercy of whoever decides to pull the trigger first.
But isn't that the basis of MAD, and the whole premise behind the continued existence of the world since nuclear weapons became widespread: that we're all safer having nukes than ridding the world of them, because if nobody had nukes, then someone would make one and use it?
I mean, I'm on the 'remove all nuclear weapons from existence' team myself, but...
> If everyone has a gun, we aren't all safer, we're just all each at the mercy of whoever decides to pull the trigger first.
I wonder how the math breaks down on whether you're safer having fun at the gun range, or on the road driving to it.
> The absolute "freedom is simple, and restrictions are all bad" position is nonsense.
I think this statement is needlessly antagonistic. Restricting the freedom of individuals who are acting in good faith and with sufficient personal responsibility seems wrong to me. This harms everyone who ever had to get a license just to serve drinks and wipe tables, as a common case.
> Ignoring that there are people who would use their freedom to infringe on others is naive and childish.
I don't believe anyone here has said that we should ignore bad actors. If someone has bad ideas, people should be free to refute them. If someone engages in behavior that harms others the legal system can intervene.
Censorship is a stupid mechanism for improving society. If you can use it against those you disagree with, what happens if they end up in power and use it against you? Rigging the rules is short-term thinking.
While the "should all discourse be free vs. should it all be regulated" axis exists and people can be in different places on that scale (and almost all people are somewhere in between the extremes), there's also an orthogonal question, namely, who should be the party that makes and/or enforces the regulations.
Personally, I am very much in favour of regulating some forms of speech (regulating doesn't necessarily have to mean prohibiting, btw), but I am very wary of big, quasi-monopolistic private companies being tasked to do so, because their incentive structure will lead them to over-restrict in order to avoid potential legal issues. If it's just a newspaper comment section, that's fine, because there is enough competition; but for things like Twitter/FB/YouTube, I find it problematic, and this is why I find some of the more recent laws in the EU somewhat dangerous: we're basically asking unelected, unsupervised people from Facebook et al. to enforce laws with no transparency.
It seems like making Facebook a government regulated public utility is the only way to square this circle. Because otherwise we either unjustly infringe on Mark Zuckerberg's use of his private property (Facebook); or Mark Zuckerberg is the de-facto arbiter of global public opinion for a broad swath of the population.
> The other worldview is that certain kinds of discourse are inherently against freedom. ... Under this worldview, the question of what the discourse is, is the whole analysis.
You will mostly find this worldview on the left of the political spectrum, and more often than not the discourse they find to be “against freedom” is the one held by their political opponents.
And when they promote their view in the form of a policy, they are effectively trying to outlaw people having opinions differing from theirs.
Rather current examples: identity politics and diversity policies. If you simply disagree with the basis of their argument, that disagreement is considered “hateful” in itself and your speech must be banned, no matter how civil.
How can one have free discourse, when the one righteous party has decided that only they are allowed to speak? There’s no freedom here. Not even close.