
When the enshittification initially hit the fan, I had little flashbacks of Phil Zimmermann talking about the Web of Trust, and amused myself thinking maybe we need humans proving they're human to other humans, so we know we aren't arguing with LLMs on the internet or letting them scan our websites.

But it just doesn't scale to internet size, so I'm fucked if I know how we should fix it. We all have that cousin, or that dude from our high school class, who would do anything for a bit of money, like introducing his 'friend' Paul, who is in fact a bot whose owner paid for the lie. And not even enough money to make it a moral dilemma: just drinking money, or enough for a new video game. So once you get past about 10,000 people you're pretty much back where we are right now.




Isn't the point of the web of trust that you can do something about those cousins/dudes? Once you discover that they sold out, even once, you sever them from the web. It doesn't matter if it took them 20 years to succumb to temptation; you can cut them off tomorrow. And that cuts off everyone they vouched for, recursively, unless there's a still-trusted vouch chain reaching them some other way.

At least, that's the way I've always imagined it working. Maybe I need to read up.
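Mechanically, that's essentially reachability in a directed vouch graph: severing someone deletes their node, and you recompute who's still reachable. A toy sketch in Python (the `vouches` structure and all names are invented for illustration):

    def trusted(vouches, me, severed=frozenset()):
        # Depth-first walk of the vouch graph, skipping severed nodes.
        seen, stack = set(), [me]
        while stack:
            node = stack.pop()
            if node in seen or node in severed:
                continue
            seen.add(node)
            stack.extend(vouches.get(node, ()))
        return seen

    web = {"me": ["cousin", "ann"], "cousin": ["paul"], "ann": ["bea"]}
    print("paul" in trusted(web, "me"))              # True, via the cousin
    print("paul" in trusted(web, "me", {"cousin"}))  # False: Paul is cut off
    print("bea" in trusted(web, "me", {"cousin"}))   # True: ann's chain survives

So cutting the cousin takes bot-Paul down with him, while anyone with an independent vouch chain keeps their standing.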


I think it should be possible to build something that generalises the idea of Web of Trust so that it's more flexible, and less prone to catastrophic breakdown past some scaling limit.

Binary "X trusts Y" statements, plus transitive closure, can lead to long trust paths that we probably shouldn't actually trust the endpoints of. Could we not instead assign probabilities like "X trusts Y 95%", multiply probabilities along paths starting from our own identity, and take the max at each vertex? We could then decide whether to finally trust some Z if its percentage is more than some threshold T%. (Other ways of combining in-edges may be more suitable than max(); it's just a simple and conservative choice.)

Perhaps a variant of backprop could be used to automatically update either (a) all or (b) just our own weights, given new information ("V has been discovered to be fraudulent").
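For what it's worth, a crude version of option (b) could reuse trust_scores from the sketch above; this is just proportional blame assignment, not real backprop, and the rate of 0.5 is arbitrary:

    def penalise(edges, me, fraud, rate=0.5):
        # Decay only my own out-edges, in proportion to how strongly
        # each neighbour's sub-web still vouches for the fraud.
        # (Ignores cycles back through `me`; this is only a sketch.)
        for i, (nbr, p) in enumerate(edges[me]):
            blame = trust_scores(edges, nbr).get(fraud, 0.0)
            edges[me][i] = (nbr, p * (1.0 - rate * blame))

    penalise(edges, "me", "carol")
    print(edges["me"])  # alice: 0.95 -> ~0.52, bob: 0.80 -> ~0.40

Bob's edge drops more in relative terms because his sub-web vouches for carol more strongly, which is roughly the credit assignment you'd want.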


True. Perhaps a collective vote past 2 degrees of separation, where multiple parties need to vouch for the same person before you believe they aren't a bot. Then the exponentially growing number of people at each hop provides corroboration with diminishing individual weight, instead of an ever-increasing likelihood of malfeasance.
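Something like this, maybe, where trust radiates outward in rounds and the bar rises after two hops; the quorum of 3 and the round-as-hop approximation are arbitrary simplifications:

    from collections import defaultdict

    def trusted_set(vouches, me, near=2, quorum=3):
        # Rounds 1..near: a single voucher suffices (close circle).
        # Later rounds: `quorum` already-trusted vouchers are required,
        # so the growing crowd corroborates instead of compounding risk.
        trusted, hops = {me}, 0
        while True:
            hops += 1
            needed = 1 if hops <= near else quorum
            votes = defaultdict(set)
            for node in trusted:
                for v in vouches.get(node, set()):
                    if v not in trusted:
                        votes[v].add(node)
            new = {v for v, who in votes.items() if len(who) >= needed}
            if not new:
                return trusted
            trusted |= new

    friends = {"me": {"a"}, "a": {"b", "c", "d"},
               "b": {"z"}, "c": {"z", "y"}, "d": {"z"}}
    print(sorted(trusted_set(friends, "me")))
    # ['a', 'b', 'c', 'd', 'me', 'z'] -- y, with one voucher at hop 3, is out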


But do we need an infinite and global web of trust?

How about restricting them to everyone-knows-everyone-sized groups of, say, a couple hundred people?

One can be a member of multiple groups, so you're not actually limited. But the groups will be small enough to self-regulate.


What’s that going to do about all of the top search results and a good percentage of social media traffic being generated by SEO bots? Nothing.

You want to chat with a Dunbar number of people? Get yourself a private Discord or Slack channel.


The Dunbar number of people could vouch for small web sites they come across. Or even for FB accounts if they choose to.


I suspect a lot of people here are the ones in their circle who bring in a lot of the cool info that their friends missed out on. This still sounds like Slack.


We're talking about webs of trust, aren't we? Not about chat rooms.

I'm hypothesising that any such large-scale structure will be perverted by commercial interests, while multiple Dunbar-sized structures will have a chance of staying useful.



