>COWEN: Now, as you know, Wikipedia is open. It’s free. It doesn’t have ads. It’s a dream of the early tech utopians. Why is it the only surviving dream of that kind that has persisted?
>WALES: Well, it’s an interesting thing, and I’m not sure it’s the only, but it’s certainly the most famous and the largest.
Can HN think of any other examples? Funny enough I immediately thought of HN itself, although it does have some ads and I'm not sure it qualifies as big enough for what Cowen had in mind.
Most F/LOSS projects would qualify: quite a lot of unix distributions are old, active, and big; GNU packages too; programming languages. Maybe he meant social platforms specifically and not just software, but imho they are just like Wikipedia, since a free software project usually encompasses mailing lists, a wiki, collaboration..
Stack Overflow, and the other Stack Exchange sites, could be another example. Content is licensed CC BY-SA. Publicly accessible, free as in beer. User moderated. There are some ads, but they're quite minor in my opinion.
Stack is just burning through a giant pile of VC money at the moment (latest round was an $85m Series E back in July). When that runs out they'll start to ramp up the adverts and the paid user features.
I think there could be potential in allowing the community to moderate more of Facebook/Twitter. This is based on my theory that our perspective of who the Others are is completely warped. We already know that the most extreme people are the ones most likely to share extreme opinions, and that only some percentage of lurkers say things publicly, but even so I think we're really ill-equipped to understand just how far it goes. If there's a huge silent population of kind, reasonable people, people who don't spout off but are allowed some control over how content is authored and presented, maybe it could make a big difference.
It's killing me that no one is talking about authentic speech. We have very little.
Social media (Twitter, HN, Instagram, Yelp, etc.) must also support verified identities. Opt-in. Just like metafilter.com. With onerous penalties for impersonation.
The outrage machine apologists (select examples below) are trying to authenticate inauthentic speech after the fact. Algorithms will save us.
This cannot work. Ever. Because the belligerents in the computational propaganda arms race will always overwhelm authenticity.
Tristan Harris - The Social Dilemma
Casey Newton - The Verge
Philip N. Howard - Lie Machines
Frédéric Filloux - News Quality Scoring Project
All the tortured performative whining about censorship and bias and Section 230 is not helping. No one is going to take away anyone's favorite chew toy. The food fight over lies and bias will continue unabated.
--
Said another way:
Journalism is real simple. Show your data, cite your sources, sign your name.
Support authentic journalism, with real infrastructure. Give people an alternative to the outrage machine.
If you wanted to get semantic about it, users are already moderating their feeds when they choose to unfollow a page/person.
> It does nothing to prevent people living in self-created silos full of misinformation.
Every single person in the history of our species has always done this, will always do it, and can never be prevented from doing it. The underlying implication of this complaint is that you'd prefer to control what misinformation people consume, rather than letting people choose it for themselves and make their own minds up about it.
I like how the moderation question has completely become the moderation of political content question, which has largely become the moderation of possible Russian or Chinese content, especially in the form of their chaos agents: people who aren't regime centrists (of whatever country they're discussing.)
Moderation has literally become moderation. We're sleepwalking into Western lèse-majesté laws.
> If there's a huge silent population of kind reasonable people
A silent majority, if you will (of people who agree with me about the reasonable boundaries of conversation.)
Tyranny of the Majority is somewhat brutal--by definition--to minorities :(. The hope that the majority is actually cool with everything seems flawed to me as the majority seems at least mildly puritan and racist, and so you are going to get a bias--even if only subtle--against non-hetero-cis-white-males :(.
Expert to me implies long experience across a variety of challenges. Long-term editors leaving sounds like another challenge, but it doesn't invalidate the expert status.
Both Wikipedia and Stack Overflow have a problem retaining new contributors. I've always had a pet theory about a possible solution to this problem, and I wonder what you think about it. I see it like this.
New contributors come to contribute. Conflicts arise, thus rules are needed. People who enjoy rules tend to win arguments, getting a kick out of it, while the losers feel miserable. So far so good. Normal churn of editors occurs, but over time the dynamic above leads the old crew / skilled rules-players to stay, while more idealistic people who care less about rules tend to drop out of the project faster. Thus over time the old guard becomes a problem: they feel skillful enough to control the process, and perversely get their enjoyment mostly from the moderation / rules-application process, not from actual content contribution.
My solution for this problem: force the old guard to retire. I.e. you can edit Wikipedia for X years. After that you are banned for life.
Would that help?
(obviously, enforcement of this rule is not trivial, but that is another problem).
The principle is good. Maybe a lighter-touch version, where good or neutral karma actions, like lengthening or otherwise generally modifying an article, are exempted, and only potentially demotivating actions, like substantially shortening, deleting, or arguing with people, are restricted (maybe 1 per week).
Congratulations - welcome to your time as a moderator. Your total punitive actions are X. Please feel free to use them as you wish, after which you are welcome to continue using the service forever.
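The quota idea above (a fixed budget of "demotivating" actions, with neutral edits unlimited) can be sketched as a rolling-window rate limiter. This is a minimal illustration, not anything Wikipedia or Stack Overflow actually implements; the class and method names are mine.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

class PunitiveActionQuota:
    """Hypothetical limiter: each editor gets a fixed number of
    'demotivating' actions (deletions, big reverts, etc.) per rolling
    week. Positive or neutral edits would bypass this check entirely."""

    def __init__(self, limit_per_week=1):
        self.limit = limit_per_week
        # editor -> timestamps of their recent punitive actions
        self.history = defaultdict(deque)

    def allow(self, editor, now=None):
        now = now or datetime.utcnow()
        window_start = now - timedelta(days=7)
        q = self.history[editor]
        # Discard actions that have aged out of the rolling window.
        while q and q[0] < window_start:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

quota = PunitiveActionQuota(limit_per_week=1)
print(quota.allow("editor42"))  # True: first punitive action this week
print(quota.allow("editor42"))  # False: weekly budget exhausted
```

The rolling window is gentler than the hard "retire after X years" rule: it throttles the behavior being complained about without banning the editor outright.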
Ah thanks for the downvote, you're very neutral and would make a great Wikipedia contributor. /s
How are the two sites not remotely in the same market? They are literally both knowledge bases. Granted, Golden is focusing more on structured data, similar to Wikidata.org, and Everipedia.org is focusing more on blockchain-related content first, with a backend using smart contracts / token incentivization.
How is this different from what Wikidata is already doing? Choose any Wikipedia article and click "Wikidata item" in the sidebar, and you'll get a list of structured data about the entity, including links to the corresponding articles in online knowledge bases, encyclopedias, and other reference works. FWIW, all of this structured data is freely available under the CC0 license.
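Beyond the sidebar link, each Wikidata item is also served in machine-readable form at a stable Special:EntityData URL, no API key required. A minimal sketch (the helper function name is mine; Q42 is the item for Douglas Adams):

```python
def wikidata_entity_url(qid, fmt="json"):
    """Build the public Special:EntityData URL for a Wikidata item.

    Special:EntityData serves the item's full structured data (labels,
    claims, sitelinks) in the requested format, all under CC0.
    Supported fmt values include 'json', 'ttl', and 'rdf'.
    """
    return f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.{fmt}"

print(wikidata_entity_url("Q42"))
# https://www.wikidata.org/wiki/Special:EntityData/Q42.json
```

Fetching that URL with any HTTP client returns the same structured data you see from the "Wikidata item" sidebar link, which is what makes Wikidata usable as a backend rather than just a reference page.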
> There are cases where Wikipedia is biased, not because of any intention of people to be biased, but just because no one cares about that thing except fans of that thing.
This woefully understates the very intentional bias of editors.
As an example, take a look at the talk page for the Wikipedia article on Links between Trump associates and Russian officials, which includes bombshells such as "Secretary of Commerce Wilbur Ross has shares in Navigator, a publicly traded shipping company that has contracts with Russian gas company Sibur, held in off-shore accounts. Co-owners of Sibur have ties to Vladimir Putin and are under U.S. sanctions".
The deduction required to conclude that this is evidence of 'Links between Trump associates and Russian officials' borders on the absurd.