Imagine a world where the only content you see is from publishers that you trust, and that your friends trust, and their friends, out to maybe 4 or 5 hops, and the feed is weighted by how much each publisher is trusted by your particular social graph.
If you start seeing spammy content, you downvote it, your trust in that part of your social graph drops, and those publishers become less likely to get anything in front of you. If you discover some high-quality content and promote it, then your own trust level improves within your part of the social graph.
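A minimal sketch of that feedback loop, with invented publisher names, starting scores, and update constants (an illustration of the idea, not any existing system's actual scoring):

```python
# Hypothetical sketch of vote feedback on a trust-weighted feed.
# Publisher names, starting scores, and update constants are all invented.

trust = {"alice": 0.9, "bob": 0.6, "spam_farm": 0.5}

def feed_score(endorsers):
    """Score a post by how much you trust the people who endorsed it."""
    return sum(trust.get(who, 0.0) for who in endorsers)

def downvote(publisher, penalty=0.5):
    """Downvoting spam cuts the publisher's trust multiplicatively."""
    trust[publisher] = trust.get(publisher, 0.0) * penalty

def promote(publisher, reward=1.2, cap=1.0):
    """Promoting good content raises the publisher's trust, capped at 1.0."""
    trust[publisher] = min(cap, trust.get(publisher, 0.0) * reward)

downvote("spam_farm")                      # 0.5 -> 0.25
promote("alice")                           # 0.9 -> 1.0 (capped)
print(feed_score({"alice", "spam_farm"}))  # 1.25
```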
I'd say that the actual web3 (the crypto kind) is largely about reclaiming identity from centralized identity providers. Any time you publish anything, you're signing that publication with a key that only you hold. Once all content on the internet is signed, these trust graphs for delivering quality content and filtering out spam become trivial to build.
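For what it's worth, the signing part is already cheap to demonstrate. Here's a sketch using ed25519 via the PyNaCl library; the post body and key handling are placeholder assumptions, not any particular web3 stack:

```python
# Illustrative only: sign a publication with a personal key (ed25519 via PyNaCl).
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

signing_key = SigningKey.generate()  # held only by the author
verify_key = signing_key.verify_key  # published alongside the content

post = b"My latest article..."
signed = signing_key.sign(post)      # signature travels with the post

try:
    verify_key.verify(signed)        # anyone can check authorship
    print("authentic")
except BadSignatureError:
    print("forged or tampered")
```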
In this world, it doesn't matter if content is generated with ChatGPT, or content farms, or spammers. If the content is good, you'll see it, and if it's not, then you won't.
In practice this is how social networks already work - and it turns out that most people treat "like" and "trust" as equivalent. So you get information filter bubbles where people are basically living in separate realities.
In theory there was a time in the past when there was such a thing as a generally "trusted" expert, and it was possible for the rest of us to find and learn from such experts. But the experts are also frequently wrong, and the rise of the early internet was exciting in part because it meant that you could sample a much wider range of "dissenting" opinion and, supposing you put thought and effort in, come away better informed.
These things -- trust, expertise, and dissent -- exist in great tension. That tension is the underpinning of the traditional classical liberal University model. But that model is also gone today, as the hypermedia echo chamber has caused dissent in Universities to be less tolerated than ever.
I can't imagine any practical solution to this problem.
Yes, I think a lot of work needs to be done around content labeling. Getting away from simple up/down, and labeling content that you think is funny, spammy, insightful, trusted, etc. I don't think any centralized platform has gotten the mechanics quite right on this, but I think we're getting closer. Furthermore, in a world where everyone owns their own social graph, and carries it with them on every site they visit, we don't need to rebuild our networks every time new platforms emerge.
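As a rough illustration of what labeled reactions might look like under the hood, here's a hypothetical per-label scoring model; the labels and weights are invented for the example:

```python
# Hypothetical multi-label reactions instead of a single up/down axis.
from collections import Counter

reactions = Counter()  # (content_id, label) -> count

def react(content_id, label):
    reactions[(content_id, label)] += 1

# Each reader chooses how much each label matters to them.
my_label_weights = {"insightful": 2.0, "funny": 0.5, "spammy": -3.0}

def my_score(content_id):
    """Weight each label's count by this reader's personal preferences."""
    return sum(weight * reactions[(content_id, label)]
               for label, weight in my_label_weights.items())

react("post-1", "insightful")
react("post-1", "spammy")
print(my_score("post-1"))  # 2.0 - 3.0 = -1.0
```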
This is another key advantage of web3 social networks vs web2. You own your identity, you own your social graph, and you can use it to automatically curate content on your own without relying on some third party to do it - a third party that might otherwise inject ads into your feed, or "high engagement" content to keep you clicking and swiping.
This sounds nice but I doubt it will ever get enough traction. The vast majority of people in power at tech companies believe that they can make more money creating a walled garden than something interoperable. Heck, it's not even just tech companies, most companies in any industry want to create lock-in wherever possible. A captive audience is easier to squeeze for money, so lots of people want to create such an audience.
This reminds me of all the talk a couple years ago about using blockchains to make video game items work across different game worlds. Sounds great for the players, but game dev companies don't see the point in actually implementing it, so it never goes anywhere.
Not to mention that there are significant technical hurdles. Two different platforms might be different enough that it's difficult or impossible to use the same social graph or game items in both.
Been thinking about all this recently, and it's related to starting up something new. Here are a few thoughts I believe resonate with your comment. (I'm just hoping for some discussion to consider.)
"the moat" = that thing which a business has that others do not = walled gardens and all sorts of anti-competitive behavior.
Expectations related to returns: often 10x is the starting point. Nobody wants to invest unless that 10x, or some form of disruption, is on the table. Forming that "moat" - making some sort of walled garden and/or a pool of locked-in users - almost always appears to be the primary piece able to make 10x-plus claims plausible.
Those returns are never associated with cross-platform, open efforts. Frankly, those efforts can be seen as toxic, actually vaporizing "value" that would otherwise be on the table.
Web 1.0 was great!
Regarding "walled gardens", there is a secondary pattern in play. I didn't really notice until we saw Reddit and that "Sanders for President" sub kick into action. Prior to that time, /all was seen by everyone. It was possible to write something and have most of Reddit see that something. And that was, to some degree, true of other platforms too.
Suddenly, very large numbers of people could get behind an idea and act on it!
That happening is completely unacceptable to the established players. I don't care about the politics, or the players here. I'm just saying that large numbers of people all resonating with the same idea is a dynamic considered toxic by most, if not all, leaders in the world today.
Last time we saw that kind of thing happen in the USA, we also saw the New Deal happen.
This time, we didn't see any kind of legislative effort. What we did see were changes:
Government getting involved with big tech. And top of the list seemed to be changes that ensured people all saw different views. No more /all reaching millions at a time.
I'm trying to make a point here related to "lots of people want to create such an audience" and that point is, "yes they are, but they also need that audience fragmented in various ways too."
Some people have suggested public efforts. I'm totally open to those ideas, but am concerned about whether they would be implemented in a way that encourages competition and accountability.
And they will in one respect: the little guy has to compete hard to make it through a modest life while being held to account (via real names and IDs linked to network activity in ways that are very difficult to shake) for what they say and do online, while the "powers that be" experience neither of those things to any degree of concern.
Right now, there is an authoritarian, puritanical move to "clean" the Internet up. It's everywhere and it looks to me like a move to bring traditional media online as a peer, not disadvantaged as it has always been, until recently. This last decade has been a big push to somehow make sure the likes of FOX and MSNBC have a placement advantage over [ insert indie voices here ].
The thing is, pretty much anyone under 50 couldn't care less about big, corporate media. And quite a few over 50 are right there with them, myself included.
I sure miss Web 1.0 in these respects.
But, getting back to tech and the basics of your comment:
Somehow we need market rules that require competition. No enterprise wants it that way. They all want to flat-out own their niche and keep their costs and risks low while also being free to deliver the least value for the highest dollars possible. If nothing else, that's needed to deliver those huge returns promised at some point in return for the investments needed to get started.
Where there is meaningful competition:
Buyers tend to get the best value for the lowest dollars.
Where there isn't meaningful competition:
Buyers tend to get the least value for the highest dollars.
Market advocates often talk up competition as being the powerful justification for running everything as a market.
But that's for the rubes. It's totally obvious the intent is to limit competition to maximize profit and control and we see that play out all the time, almost everywhere!
One fun one I like to get people to think about is big mergers. They always say the same thing, some variation on "combined resources and blah, blah, blah mean lower prices and greater value for 'consumers.'" When have you seen that happen?
I haven't.
Sadly, I don't have any solutions either, but did want to expand on your comment and see what others might have to say.
This was how Epinions worked for products: you built a graph of product reviewers you trusted, and you inherited a relevance score for a product based on transitive trust amplifying product reviews. It was a brilliant model (it was built by a bunch of folks from Netscape, including Guha and the Nextdoor CEO; it got acquired a few times, Google Shopping killed their model, and it was eventually acquired by eBay for the product catalog taxonomy system, which I helped to build).
I would say the current model of information retrieval against a mountain of spam is already broken, and LLMs will just kick it over into impossible. I feel like we are already back in the world of Lycos, Excite, and AltaVista, where searches give you a semi-relevant cluster of crap and you have to query-craft to find the right document. In some ways I think the LLM chatbot isn't a bad way to get information, if it can validate itself against a semantic verification system and IR systems. I also think the semantic web might have a bigger role by structuring knowledge in a verifiable way rather than in blobs of ASCII.
The problem is this is how social networks work - what you're describing is the classic social media bubble outcome. Everybody and their networks upvotes content from publishers they trust and downvotes content from publishers they don't but half of them trust Fox News and half trust CNN. Then of course the most active engagers/upvoters are the craziest ones, and they're furiously upvoting extreme content.
That'll filter for content that's popular or acceptable to your inner bubble. We already have that, and it's becoming a more massive problem every day. "My friends trust it / like it" is not the same as "this is objectively true". It's a fantasy of a hyper-democratic good-actor utopia that's not borne out by reality: extreme politics, pseudoscience, racism, religious intolerance, or whatever will likely massively outvote any voices trying to determine facts.
Put it another way: today you already have the option to go to sources which are as scientific or objective or factual as possible. Most people choose otherwise.
I think trust is somewhat transitive, but it's not ___domain-independent.
I have friends whose movie recommendations I trust but whose restaurant recommendations I don't, and vice versa. I have friends that I trust to be witty but not wise, and others the opposite.
A system that tried to model trust would probably need to support tagging people with what kinds of things you trust them in.
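Something like this hypothetical structure, where every trust score is scoped to a topic (the names and numbers are invented for illustration):

```python
# Hypothetical ___domain-tagged trust: trust is per-topic, not global.
trust = {
    "dana": {"movies": 0.9, "restaurants": 0.2},
    "eli":  {"movies": 0.1, "restaurants": 0.8},
}

def trust_in(person, ___domain):
    """Trust defaults to 0 outside the domains you've tagged someone with."""
    return trust.get(person, {}).get(___domain, 0.0)

def weigh_recommendation(person, ___domain, strength=1.0):
    """Scale a recommendation by how much you trust its source in this ___domain."""
    return strength * trust_in(person, ___domain)

print(weigh_recommendation("dana", "movies"))       # 0.9
print(weigh_recommendation("dana", "restaurants"))  # 0.2
```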
This. Nobody is 100% trustworthy in every circumstance. When I say I trust someone, what I mean is that I have a good handle on what sorts of things that person can be trusted about, and what sorts of things they can't.
Exactly - you have a reasonable model of a person. So it also includes things like a recommendation giving you the _opposite_ of the purported opinion. Or trusting that the details are technically true while missing the forest for the trees. Or any other contextual interpretation of the data.
On second thought, I'm not even sure what "transitive" means here. It seems like it should mean that if you trust your friend's movie recommendations then you trust your friend's friends' movie recommendations? Or maybe something like:
trustsMovieRecs(A, B) and trustsMovieRecs(B, C) => trustsMovieRecs(A, C).
Their movie recommendations are likely some function that takes their friends' movie recommendations as input (along with watching them), but that's more like an indirect dependency than a transitive closure.
Trust decays exponentially with distance in the social graph, but it does not immediately fall to zero. People who you second-degree trust are more likely to be trustable than a random person, and then via that process of discovery you can choose to trust that person directly.
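A toy model of that idea: effective trust as a per-hop decay factor raised to the shortest-path distance, falling off fast but never quite hitting zero within reach. The graph and decay constant here are invented for illustration:

```python
# Hypothetical hop-decay model: effective trust = r ** hops, with r < 1.
from collections import deque

friends = {  # who directly trusts whom, one hop at a time
    "me":  ["ann", "ben"],
    "ann": ["cat"],
    "ben": ["cat", "dee"],
    "cat": ["dee"],
}

def effective_trust(source, target, r=0.5):
    """BFS for the shortest hop distance, then decay exponentially with it."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == target:
            return r ** hops
        for nxt in friends.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return 0.0  # unreachable: no basis for trust at all

print(effective_trust("me", "cat"))  # 2 hops -> 0.25
print(effective_trust("me", "dee"))  # 2 hops via ben -> 0.25
```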
For the limited purpose of finding interesting people to follow it can be okay, but I don't see it getting automated in a way that would work for web search or finding people with a common interest. For example, Reddit often works better because you're looking for something that can't be found by asking people you know. The people are out there but you're not connected.
Arguably Twitter with non-algorithmic timeline and a bit of judicious blocking worked really well for this, but even that's on the way out now.
> Any time you publish anything, you're signing that publication with a key that only you hold.
People could in theory have done this at any time in the PGP era, but never bothered. I'm not convinced the incentives work, especially once you bring money in.
If you're writing for the joy of writing (intrinsic motivation) and then start getting paid for it (attaching an extrinsic motivation to it) the original "for the joy of X" tends to get lost.
It isn't a "who wouldn't" but rather a "why would you".
Assuming that was a rhetorical question, but since there is a whole "homo economicus" theory of mind out there, I'll answer anyway: an actor with other incentives beyond just monetary ones, like physical, social, or philosophical incentives.
That's what I've been feeling. Web3 is the organic web, where we add back weight to transactions and build a noise threshold that drowns out the spammers and SEOs.
I always envisioned it requiring some sort of micropayments or government-issued web identity certificates.
Everyone complaining about bubbles needs to realize that echo chambers are another issue entirely. Inorganic and organic content both create bubbles. We are talking about real/not-real instead of credible/not-credible.
I feel this underestimates the seriousness of the difficulties we are facing in the area of social cohesion. The conflation of real/non-real and credible/non-credible is very much at the heart of the Trump/Brexit divide.
> Imagine a world where the only content you see is from publishers that you trust, and that your friends trust, and their friends, out to maybe 4 or 5 hops, and the feed is weighted by how much each publisher is trusted by your particular social graph.
Sounds like what Facebook was (or wanted to be) during its best days, until they got afraid of being overtaken by apps that do away with the social graph (TikTok).
Social graphs will enable trust between people, just like governments do right now. Any person not included in the graph who shows up in your newsfeed is an illegal troll. The only difference between automated electronic governments and physical governments is that we can have as many electronic governments as we like, a million of them.
One other feature of LLMs is that they will enable people to create as many dialects of a language as they like: English, Greek, French, whatever. So it is very possible that 100,000 different dialects will pop up in English alone, 10,000 dialects in Greek, and so on. That will supercharge progress by giving anyone as much free speech as they like. Actually, it makes me very sad when I listen to young people speak the very same dialect of a language as their parents.
So we are heading for the internet of one million governments and one million languages. The best time ever to be alive.
Nope, people will be able to communicate in a widely recognized language, but when speaking with their peers or community there will be another language of choice. Just like the natural evolution of language over the centuries and millennia, but easier and quicker: a century of language evolution compressed into 5 to 10 years. Programming languages are following the same pattern already.
What happens if the majority of your group trusts fake news, i.e. people who exclusively listen to sources like NewsMax? Do you just abandon these people as trapped?
I would hope that in some cases, if their friends and loved ones start explicitly signaling their distrust of NewsMax or whatever, then their likelihood of seeing content from shitty sources would decrease, slowly extracting them from the hate bubble. Of course these systems could also cause people to isolate further and get lost in these bubbles of negativity. These systems would help to identify people on the path to getting lost, opening the path for some IRL intervention, and should the person choose to leave the bubble, they should have an easier path toward recovery.
Either way, a lot of those networks depend heavily on inauthentic rage porn, which should have a hard time propagating in a network built on accountability.
At some point you need to stop seeking and start building, and this requires you to set down some axioms to build upon. It requires you to be ok with your “bubble” and run with it. There is nothing inherently wrong with a bubble, it’s just for a different mode of operation.
not much privacy then, eh? somebody will be able to trace that key to you, or at least to other things you've signed
PS I'm not too obsessed with privacy and I'm ok with assuming all my FB things including DMs can be made public/leaked anytime, but there is a bunch of stuff I browse and value that I will never share with anybody.
Generally, you would only care about up-votes from people you trust, and if you vote down stuff that your friends up-voted, then your trust level in those friends would be reduced, rapidly turning down the volume on the other stuff that they promote.