Hacker News | mccorrinall's comments

In all honesty, Mastodon feels like a bigger echo chamber than Twitter. At least for me.

The people on the server are somehow all like-minded. There is rarely any disagreement going on. The missing exposure to diverse perspectives makes Mastodon boring to me, and is the reason why I stopped using it.

Maybe I used it wrong ¯\(°_°)/¯


That’s what I found too[0]. Made me realise that algorithms aren’t to blame for echo chambers and outrage; it’s us humans who are at fault.

Examples of echo chamber behaviour on Mastodon: dissenting replies are labelled “replyguys”, while agreeing replies are welcomed. Defederation is common due to drama between administrators. Debates are discouraged and free speech is commonly vilified.

Examples of outrage behaviour on Mastodon: check out the Fediblock hashtag (don’t), which has a strong “outrage of the week” tone, covering everything from Hive to Raspberry Pi to journalists.

I’ll try to add a bit of value to my comment. The way I’m resolving this is to recognise these patterns and actively filter them out. This is one place where algorithms do indeed make it harder (because the content can “sneak back in”). In particular, I filter out all political posts and any anger posts, especially anger directed at an out-group (“us vs them”).

Is this creating an echo chamber as well? Maybe. But I can try rationalising this by saying that an echo chamber of neutrality is better than an echo chamber of a particular side, political or otherwise.

[0] https://mastodon.social/explore


The penalty for wrongthink on Mastodon is banning. This ensures ideological homogeneity within each instance. For many people, this is a feature, not a bug.


That’s literally the definition of an echo chamber.


The thing about an echo chamber is that it feels amazing to be in one. Everyone agrees with me, slapping my back when I say something that reinforces the groupthink. I feel so smart! Love it!

The negatives of the echo chamber are felt by people not in it, not me and my group.


Unfortunately for everyone else, feeling smart does not translate to being smart.


The current handling is what I’d like to call a Selbstverständlichkeit (a matter of course).


I actually say „Herr/Frau Professor Doktor Agolio“ irl when I want something from that person.


I (an American) joined a business call with some Germans and when they were speaking in German, they referred to each other as "Herr Müller" and "Frau Schmidt" then since I joined they switched to "Jens" and "Nadin." I just found that to be an interesting cultural difference.


That stuff is often strange in Germany. It's not uncommon to hear "Frau Müller, kannst Du mal ..." in professional environments where people know each other well ("Du" is informal and usually used with first names; formally it'd be "Frau Müller, können Sie". Both translate to "Mrs Müller, could you" in English).


JSON5 makers apparently didn’t even know about prototype pollution until this week.
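For context, prototype pollution in this setting usually looks something like the following minimal sketch. The `naiveMerge` helper here is hypothetical, written for illustration only; it is not JSON5’s actual code, just the classic pattern where a deep merge copies a parsed `"__proto__"` key onto a plain object and ends up writing to `Object.prototype`:

```javascript
// Hypothetical naive deep-merge, for illustration only (not JSON5's code).
// Copying a parsed "__proto__" key recursively pollutes Object.prototype.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === "object") {
      // Reading target["__proto__"] yields Object.prototype,
      // so the recursive call merges the payload directly onto it.
      target[key] = naiveMerge(target[key] ?? {}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse itself is safe: "__proto__" stays a plain own property,
// and Object.keys() therefore lists it for the merge above to pick up.
const attackerInput = '{"__proto__": {"polluted": true}}';
naiveMerge({}, JSON.parse(attackerInput));

console.log({}.polluted); // true — every plain object now inherits it
```

The usual mitigations are to skip `__proto__`, `constructor`, and `prototype` keys during merges, or to build parse results on a null-prototype object (`Object.create(null)`).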


I thought about doing this when I was a teen. :)


My Audi has a knob which I can rotate to navigate through the menus without taking my eyes off the road. It also has a touch screen, but I only use it when standing still.


Mazda has something similar. I realized that "do __ without taking your eyes off the road" isn't really true, for the same reason that it's hard to read your phone and listen to someone at the same time. Most of the time you can strike a balance, but every once in a while your mind will completely blank, and it takes a few moments to realize you've stopped processing what you were hearing. Not a risk worth taking while driving.

I try to avoid using the knob while in motion as well.


There's also a difference between taking your eyes off the road momentarily to look at a simple predictable display in a fixed ___location, like a light or a needle or a fixed text display, and taking your eyes off the road indefinitely to look at a screen that displays something unpredictable, complex and arbitrary as part of an interactive session, usually with animations.


Do you refuse to ride in Ubers, or avoid Amazon delivery vehicles? I have to admit that the level of tech-denial in this thread seems to be getting out of hand. Everyone uses in-vehicle maps. Everyone deploys them on touch screens. They're pervasive and everywhere, and none of the arguments change when you bolt them to the dash.

Why is "Tesla" being held to a different standard than UPS/FedEx/Amazon/Uber/Doordash/etc...?


I would say because Tesla is doing something similar to this (well, not the ditching, but the bad part that comes before the eventual ditching), and it's basically what the study found:

    Navy ditches touchscreens for knobs and dials after fatal crash
https://techcrunch.com/2019/08/11/navy-ditches-touchscreens-...


Things did get diverted onto a maps-driven tangent, but that was usually on purpose, to enable the same argument you're making now.

The argument started by asking why windshield wipers, environment controls or basic radio functions need to be buried in a touch screen.

Also, most vehicles limit what's allowed on navigation screens while the vehicle is moving. That's annoying when it prevents a passenger from using it, but sensible when there is a single driver.


Did you comment under the wrong comment? I can't see how it fits here; it seems to be a response to something that was not said.


> internal developments that match (maybe exceed) GPT3

Had a funny Google query today which echoed wrong results from their ML model: https://www.google.com/search?q=single%20neighbor%20country

:D


It’s easier to train on all GitHub repos if you own GitHub. There is no real alternative to Codex.

People are trying to recreate stuff like GPT-3, but even the EleutherAI guys only collected around 800GB of training data, which is much less than what OpenAI has (iirc around 45TB). And apparently OpenAI’s data is very high quality. EleutherAI is pretty much one of the few big open source competitors, with GPT-NeoX etc.

Plus OpenAI has great branding.


Interesting, thanks for sharing!

I wonder how LaMDA compares performance wise to ChatGPT. I definitely understand why training on Github is an advantage, but I'd expect Google to also be great at getting a good dataset, across the range of things they'd be interested in.


Or maybe they write on an iPhone. I had to turn auto-capitalization off because it would capitalize emoticons after a full stop, turning xD into Xd.


How does that prevent you from using caps manually? I too have disabled most auto-features on my smartphone, yet I use caps. It is not a technical problem; it is about decency and respect.


They have, but it’s just not interesting. Similar to how the p2p scene doesn’t give a damn about software DRM.

Regarding WV: there is actually lots of open source software, and even some Discords where WV keys are being shared. FFmpeg is capable of decrypting it, so all you need as FFmpeg inputs is the Netflix stream and a key, and then you dump it to disk.

Regarding PITA DRM: the scene is the only group still capable of supplying clean patches for modern DRM in games and applications (like Denuvo).

Clean in the sense that they don’t just install a hypervisor and hook everything, incurring big performance penalties, but actually patch the binary to remove the checks through static analysis. 3DM is the last p2p group I know of that did that, but that was half a decade ago. I don’t think any p2p groups currently take a look at this, because it’s just not interesting (effort vs reward) for them.


The software pirate groups you are talking about have very little overlap with the people who were doing TV releases.

> They have, but it’s just not interesting. Similar to how the p2p scene doesn’t give a damn about software DRM.

But doing HDTV caps was interesting?

