Your argument is exactly the kind which makes me think people who claim LLMs are intelligent are trolling.
You are equating things which are not related and do not follow from each other. For example:
- A tool being useful (for particular people and particular tasks) does not mean it is reasoning. A static type checker is pretty fucking useful but is neither intelligent nor reasoning.
- The OP did not say he doesn't like R1, he said he disagrees with the opinion it can reason and with how the company advertises the model.
The fake "sorry" is a form of insult and manipulation.
There are probably more issues with your comment but I am unwilling to invest any more time into arguing with someone unwilling to use reasoning to understand text.
Please don't cross into personal attack and please don't post in the flamewar style, regardless of how wrong someone is or you feel they are. We're trying for the opposite here.
The issue with this approach to moderation is that it targets posts based on visibility of "undesired" behavior instead of severity.
For example, many manipulative tactics (e.g. the fake sorry here, responding to something other than what was said, ...) and lying can be considered insults (they literally assume the reader is not smart enough to notice, hence at least as severe as calling someone an idiot), but it's hard for a mod to notice them without putting in a lot of effort to understand the situation.
Yet when people (very mildly) punish this behavior by calling it out, they are often the ones noticed by the mod, because the call-out is more visible.
I hear this argument a lot, but I think it's too complicated. It doesn't explain any more than the simple one does, and has the disadvantage of being self-serving.
The simple argument is that when you write things like this:
> I am unwilling to invest any more time into arguing with someone unwilling to use reasoning
...you're bluntly breaking the rules, regardless of what another commenter is doing, be it subtly or blatantly abusive.
I agree that there are countless varieties of passive-aggressive swipe and they rub me the wrong way too, but the argument that those are "just as bad, merely less visible" is not accurate. Attacking someone else is not justified by a passive-aggressive "sorry", just as it is not ok to ram another vehicle when a driver cuts you off in traffic.
I've thought about this a lot because in the past few years I've noticed a massive uptick in what I call "fake politeness" or "polite insults" - people attacking somebody but taking care to stay below the threshold of when a mod would take action, instead hoping that the other person crosses the threshold. This extends to the real world too - you can easily find videos of people and groups (often protesters and political activists) arguing, insulting each other (covertly and overtly) and hoping the other side crosses a threshold so they can play the victim and get a higher power involved.
The issue is many rules are written as absolute statements which expect some kind of higher power (mods, police, ...) to be the only side to deal punishment. This obviously breaks in many situations - when the higher power is understaffed, when it's corrupt or when there is no higher power (war between nation states).
I would like to see attempts to make rules relative. Treat others how you want to be treated but somebody treating you badly gives you the right to also treat them badly (within reason - proportionally). It would probably lead to conflict being more visible (though not necessarily being more numerous) but it would allow communities to self-police without the learned helplessness of relying on a higher power. Aggressors would gain nothing by provoking others because others would be able to defend themselves.
Doing this is hard, especially at scale. Many people who behave poorly towards others back off when they are treated the same way but there also needs to be a way to deal with those who never back down. When a conflict doesn't resolve itself and mods step in, they should always take into account who started it, and especially if they have a pattern of starting conflict.
There's another related issue - there is a difference between fairness/justice and peace. Those in power often fight for the first on paper but have a much stronger incentive to protect the second.
> people attacking somebody but taking care to stay below the threshold of when a mod would take action, instead hoping that the other person crosses the threshold
I agree, it is a problem—but it is (almost by definition) less of a problem than aggression which does cross the threshold. If every user would give up being overtly abusive for being covertly abusive, that wouldn't be great—but it would be better, not least because we could then raise the bar to make that also unacceptable.
(I'm not sure this analogy is helpful, but to me it's comparable to the difference between physical violence and emotional abuse. Both are bad, but society can't treat them the same way—and that despite the fact that emotional abuse can actually be worse in some situations.)
> somebody treating you badly gives you the right to also treat them badly (within reason - proportionally)
I can tell you why that doesn't work (at least not in a context like HN, which is where my experience lies): because everyone overestimates the provocations and abuses done by the other, and underestimates the ones done by themselves. If you say the distortion is 10x in each case, that's a 100x skew in perception [1].
As a result, no matter how badly people are behaving, they always feel like the other person started it and did worse, and always feel justified.
In other words, to have that as a rule would amount to having no rule. In order to be even weakly effective, the rule needs to be: you can't be abusive in comments regardless of what other commenters are doing or you feel they are doing [2].
> it is (almost by definition) less of a problem than aggression which does cross the threshold
Unless you also take into account scale (how often the person does it, or how many other people do it) and second-order effects (people who fall for the manipulation and spread it further or act on it). For this reason, I very much prefer people who insult me honestly and overtly: at least I know where I stand with them, and other people are less likely to be influenced by them.
> I'm not sure this analogy is helpful
This is actually a very rare occasion when an analogy is helpful. As you point out, the emotional abuse can (often?) be worse. TBH when it "escalates" to being physical, that often is a good thing, because it finally 1) gives the target/victim "permission" to ask for help, 2) makes it visible to casual observers, increasing the likelihood of intervention, and 3) can leave physical evidence and is easily spotted by witnesses.
(I witnessed a whole bunch of bullying and attempts at bullying at school and one thing that remained constant is that people who fought back (retaliated) were left alone (eventually). It is also an age where physical violence is acceptable and serious injuries were rare (actually I don't recall a single one from fighting). This is why I always encourage people to fight back, not only is it effective but it teaches them individual agency instead of waiting for someone in a position of power to save them.)
> I can tell you why that doesn't work
I appreciate this datapoint (and the fact you are open to discussing it, unlike many mods). I agree that it's often hard to distinguish between mistake and malice. For example, I reacted to the individual instance because of similar comments I ran into in the past, but I didn't check whether the same person was making fallacious arguments regularly or whether it was a one-off.
But I also have experiences with good outcomes. One example stands out - a guy used a fallacy when arguing with me, I asked him not to do that, he did it again, so I did it twice to him as well _while explaining why I was doing it_. He got angry at first, trying to call me out for doing something I had told him not to do, but when I asked him to read it again and pointed out that the justification came right after my message with the fallacy (not post-hoc after being "called out"), he understood and stopped doing it himself. It was as if he wasn't really reading my messages at first, but reversing the situation made him pay actual attention.
I think the key is that it was a small enough community that 1) the same people interacted with each other repeatedly and that 2) I explained the justification as part of the retaliation.
Point 1 will never be possible at the scale of HN, though I would like to see algorithmic approaches to truth and trust instead of upvotes/downvotes, which just boil down to agree/disagree. Point 2 can be applied anywhere, and if mods decide to step in, it is IMO something they should take into account.
Anyway, thanks for the links. I don't have time to go through other people's arguments right now, but I will save them for later, as it is good to know this comes up from time to time and I am not completely crazy when I see something wrong with the standard threshold-based approach.
Oh and you didn't say it explicitly, but I feel like you understand the difference between rules and right/wrong, given your phrasing. That is a very nice thing to see if I am correct (though I have no doubt your phrasing was refined by years of trial and error as to what is effective). In general, I believe it should always be made clear that rules exist for practical reasons, not pretend they are some kind of codification of morality.
Just a quick response to that last point: I totally agree—HN's guidelines are not a moral code. They're just heuristics for (hopefully) producing the type of website we want HN to be.
Another way of putting it is that the rules aren't moral or ethical—they're just the rules of the game we're trying to play here. Different games naturally have different rules.