I think there's a non-slippery-slope way to position the worldview you're describing that might be more palatable to someone whose worldview allows limiting discourse based on its content.
I believe the hardcore freedom of speech view has a few important underlying assumptions:
1) More information is better, and shining a light on something is better than trying to selectively hide it, because eventually the real truth comes out through persistent discourse. This is only really possible with the maximum amount of information, and especially all viewpoints laid out on the table with the least amount of obstruction.
2) People are broadly able to parse out untruths, or irrelevant positioning, or anything that is of low quality, and they will not be persuaded by it. This isn't true of everyone, but it is true of enough people; that's an inherent assumption of democracy. We live (or want to live) in a free market of ideas, where ideas can compete, and the market (what people are persuaded by) will be broadly rational and land on the best position in aggregate, even if some people are persuaded by bad or malicious arguments.
3) Limiting the visibility of any information detracts from the overall quality of discourse because it robs people of the ability to improve their thinking. It negates the possibility of refutation, because the untruth is hidden. Giving people all information, including misleading information, in the long-run leads to a population that can have better discourse and evaluation of all the information thrown their way.
---
I think the above puts the ideas in the best possible light. However, I disagree with enough of these assumptions that I can't take this worldview myself. My main counter to these ideas is that, similar to (pure) free market proponents, it takes on a very idealistic view of rationality that doesn't match real behavior. In practice, people have to take shortcuts to understand things (it's inherent in human cognition), and those shortcuts can be exploited. I don't believe this is something we can grow past on a large societal scale, because it's embedded in how we think. To improve the quality of discourse, we have to explicitly protect against these biases. There are a whole host of difficulties there, too, but I think they are more surmountable than all the downsides of allowing deliberate manipulation and misinformation to spread broadly.
It seems to me that you're still ignoring the tail risk of authoritarian control of the concept of truth, namely, gulags.
It's a justified fear, since it's happened repeatedly in living memory, and is happening still: you're welcome to go hand out pamphlets about the June Fourth Incident on Tiananmen Square if you don't believe me.
If you want to discount that risk, ignoring it or glossing it as some sort of slippery slope argument, that's your business. I won't, and we find ourselves on opposite sides of the debate for that reason.
To be clear, I don't think it's a slippery slope, because I don't think it's an accidental or avoidable consequence of allowing authoritarian control of the terms of discourse. I think it's the expected outcome, and that people who think that end state can be avoided are being used by people who crave that power over others.
Well, we’re talking about Facebook, so I’m not worried about gulags. At least from Facebook itself. From the people who get their information on Facebook, though, maybe I should be.
I’m positing that in the long run it will be easier and less detrimental to society to establish some kind of editorial guidelines around misinformation and hold platforms accountable to them than it will be to educate enough people quickly enough to vet misinformation for themselves, especially as misinformation becomes increasingly hard to spot. I say this knowing full well that defining what misinformation is will be extremely hard, and that you’re putting a lot of power in the hands of whatever person or group does that. Every decision is a trade-off where you choose which benefits you think matter most and which problems you think you can best solve, and try to balance them. I think we can better solve the problem of effectively limiting power abuse than we can the problem of limiting the broad abuse of inherent human biases.
Edit: I should also note, I agree with a handful of the assumptions I listed above, specifically that democracies are built on the idea that people can broadly reach the right answer together. I still believe that. But I also believe in clearing out the brush and debris in the way so the crowd can actually use that superpower well.
I think I'm generally on your side of the debate, but are we really talking about authoritarian control of public discourse here? I thought we were talking about Facebook, a private corporation, deciding whether they want to allow their platform to be used to spread false and potentially harmful information.
Facebook is powerful, but it's not an authoritarian state. I'm not sure I see much danger in Facebook weighing in on different kinds of falsehood (i.e., your insight that things can be false in different ways). It seems more akin to a journal having standards for what it publishes, or HackerNews hiding or removing egregiously bad comments, than it does to the Chinese government jailing dissidents.
I see so much space between a journal and the CCP that Facebook is more like neither than like either.
The main historical analogy to Facebook which seems relevant is the phone company. Either Facebook is so large and influential that misinformation on Facebook can threaten our political process, or it isn't, and clearly it's the former.
When Ma Bell was the only game in town, they weren't allowed to deny use of the network to the Communist Party, because that was clearly a violation of the concept of free speech.
I don't think the difference between a natural monopoly and a state monopoly has much relevance here. What does matter is that if you ban, say, a political party from Facebook, it absolutely cripples their ability to participate in the democratic process. That's too much power for me to simply shrug and say "their house, their rules".
It's a bad situation, and we should get out of it. And yes, Facebook itself isn't the Ministry of Truth, and doesn't have jackbooted security waiting in the wings.
But I firmly believe that the parties who are pushing for control over social media discourse absolutely want that end state, they are driven by power, and the only way to fight that outcome is to resist it early and often. Denying a victory here will spare expensive battles down the line.
As we are discovering, there is a similar tail risk to allowing people to lie. Facebook goes a step further and targets the people especially susceptible to a specific lie with more of the same type of lie. TBH my problem with Facebook is that it is a bullshit funnel into an echo chamber. If they removed those algos and showed people a varying mix of opinions and positions, the problem would largely go away.
Agreed. I stopped using Facebook when they stopped providing a chronological timeline of all my friends' posts. That was actually a useful service, but I guess it didn't drive as much "engagement".
Aren't you ignoring the fact that actual authoritarians are using the lack of moderation on FB to spread misinformation, incite populism, and take control? This has occurred in India, Sri Lanka, and Myanmar, and I would argue is currently occurring in the US.
I also used to be a free speech absolutist, but I'm not anymore since I realized that it doesn't actually result in the kind of educated discourse and debate where truth prevails (as libertarians seem to think it does) - it results in 8chan and QAnon. Human beings aren't rational, and are rife with cognitive biases towards bigotry and cruelty that can easily be hijacked by populists.
> I think there's a non-slippery-slope way to position the worldview (...)
I wouldn't discount the "slippery-slope way", because a slippery slope is not a fallacy when the slope is, in fact, slippery (and demonstrably so).
> My main counter to these ideas is that, similar to (pure) free market proponents, it takes on a very idealistic view of rationality that doesn't match real behavior.
That cuts both ways, though. Some vocal proponents of the "there's bad speech" view weaponize the Paradox of Tolerance argument, way past the point it's rationally applicable, and use it to beat people into submission. It's a very big problem in well-known online communities (including Facebook, Reddit and HN). For now, the effects are mostly limited to being called an -ist or -obe if you don't agree with a maximally extremist view on some issues, and every now and then someone loses a job due to a Twitter mob. But I wouldn't want to live in a country ruled by the same principles.
Note that the side effect of extreme policing of wrongthink isn't just that the bad people go underground instead of being "disinfected by light". It's also extremely polarizing, because those on the fence now have to pick a side or get accused of being insufficiently rightthinking - and some of those will adopt the wrongthink, at the very least because the wrongthinkers are nice to them. The historical equivalent of that is running your country by calling everyone not conspicuously patriotic enough a traitor and executing them; at some point you'll find that a chunk of your population actually defects to the enemy just to save their lives.
The way I see it, I'm all for maximizing accuracy and precision of beliefs and opinions, which correlates with rooting out disinformation. As for wrongthink - I believe the Paradox of Tolerance is recursive. That is, if in the process of rooting out the intolerant you start causing collateral damage among the innocent, you become the intolerant that should be rooted out.
> Some vocal proponents of the "there's bad speech" view weaponize the Paradox of Tolerance argument, way past the point it's rationally applicable, and use it to beat people into submission.
Indeed. Here's Popper's original statement of the paradox of tolerance:
> Less well known [than other paradoxes Popper discusses] is the paradox of tolerance: Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them.—In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.
If anything, it is rather the censors that are "who are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive", and recent "peaceful protests" are exactly an "answer to arguments by the use of their fists or pistols".
The hardcore freedom of speech view ignores the societal transaction costs that come from dealing with lies by people who don't give a f... about truth, but only about power.
I don’t agree that the “hardcore freedom of speech” view requires or implies the assumptions you spell out.
They could all be false (e.g., more information is not better, people are frequently unable to parse out untruths, and limiting information actually improves discourse quality), and it would still be vastly more harmful to allow private institutions to wield the power of defining acceptable discourse than to permit all discourse.
The key issue is what the endowment of discourse-approval power grants to that institution. The issue is not whether discourse is true, productive or high-quality.
A bunch of nuts spouting off conspiracy theories on Facebook is low quality, likely to mislead people, and does not add useful information.
But allowing private entities to have the power to make policing decisions about it is a dramatically greater evil.