Personally I think people should generally be polite and respectful towards the models. Not because they have feelings, but because cruelty degrades those who practice it.


Computers exist to serve. Anthropomorphising them or their software programming is harmful¹. The tone of voice an officer would use to order a private or other lower rank to do something seems suitable — which obviously comes down to terse, clear, unambiguous queries and commands.

Besides, humans can switch contexts easily. I don't talk to my wife in the same way I do to a colleague, and I don't talk to a colleague like I would to a stranger, and that too depends on context (is it a friendly, neutral, or hostile encounter?).

1: At this point. I mean, we haven't even reached Kryten-level of computer awareness and intelligence yet, let alone Data.


> tone of voice an officer would use to order a private

Most people probably don't have the mental aptitude to be in that sort of position without doing some damage to their own psyche. Generally speaking, power corrupts. Militaries have developed methods of weeding such people out, but it's still a problem. I think even if it's just people barking orders at machines, it has the potential to become a social problem for at least some people.

As for anthropomorphising being bad, it's too late. That ship sailed for sure as soon as we started conversing with machines in human languages. Humans already have an innate tendency to anthropomorphize, even inanimate objects like funny-shaped boulders that kind of look like a person if you squint at them from the right angle. And have you seen how people treat dogs? Dogs don't even talk.

Maybe it's harmful, but there's no stopping it.


> Anthropomorphising them or their software programming is harmful¹.

LLMs are trained on internet data produced by humans.

Humans tend to appreciate politeness and go to greater lengths answering polite questions, hence the LLMs will also mimic that behavior because that's what they're trained on.


I agree! I try to remember to prompt as if I were writing to a colleague because I fear that if I get in the habit of treating them like a servant, it will degrade my tone in communicating with other humans over time.


Agreed. I caught some shit from some friends of mine when I got mildly annoyed that they were saying offensive things to my smart speakers, and yeah on the one hand it's silly, but at the same time... I dunno man, I don't like how quickly you turned into a real creepy bastard to a feminine voice when you felt you had social permission to. That's real weird.


Yes. I tend to be polite to LLMs. I admit that part of the reason is that I'm not 100% sure they're not conscious, or a future version could become so. But the main reason is what you say. Being polite in a conversation just feels like the right thing to me.

It's the same reason why I tend to be good to RPG NPCs, unless I'm purposely role playing an evil character. But then it's not me having the conversation, it's the character. When I'm identifying with the character, I'll always pick the polite option and feel bad if I mistreat an NPC, even if there's obviously no consciousness involved.


I think we can look at examples:

People who are respectful of carved rocks, eg temple statues, tend to be generally respectful and disciplined people.

You become how you act.


That simply means you were raised right. :)


Yes, if you're communicating with a human language it pays off to reinforce, not undermine, good habits of communication.


ding ding ding.

If you're rude to an LLM, those habits will bleed into your conversations with your barista, etc.


I think it depends on the self-awareness of the user. It's easy to slip into the mode of conflating an LLM with a conscious being, but with enough metacognition one can keep them separate. Then, in the same way that walking on concrete doesn't make me more willing to walk on a living creature, neither does my way of speaking to an LLM bleed into human interactions.

That said, I often still enjoy practicing kindness with LLMs, especially when I get frustrated with them.


Possibly. But that’s not the fault of any person except the one who forced a fake social actor into our midst.

It’s wrong to build fake humans and then demand they be treated as real.


It seems like by default, the LLMs I've used tend to come across as eager to ask follow-up questions along the lines of "what do you think, x or y?" or "how else can I help you with this?" I'm going to have to start including instructions not to do that to avoid getting into a ghosting habit that might affect my behavior with real people.


Not necessarily, people will change behaviour based on context. Chat vs email vs HN comments, for example.


I think people in general are not all that great at doing this.

Anecdotal, but I grew up in a small town in rural New England, a few hours from NYC and popular with weekenders and second-home owners from there. I don’t think that people from NYC are inherently rude, but there’s a turbulence to life in NYC to where jockeying for position is somewhat of a necessity. It was, however, transparently obvious in my hometown that people from the city were unable to turn it off when they arrived. Ostensibly they had some interest in the slow-paced, pastoral village life, but they were readily identifiable as the only people being outwardly pushy and aggressive in daily interactions. I’ve lived in NYC for some time now, and I recognize the other side of this, and feel it stemmed less from inherent traits and more from an inability to context switch behavior.


... cos, I mean, what's the difference between ai and a barista? Both are basically inanimate emotion-free zones, right?


I wish in modern society I could assume the /s here.


Saying thank you to a plant for growing you a fruit is strange behavior. Saying thank you to an LLM for growing you foobar is also strange behavior. Not doing either is not degrading behavior.


Disagree wrt practicing gratitude towards resources consumed and tools utilized. Maybe it doesn't degrade you if you don't, but I think it gives you a bit more perspective if you do.


I think we agree on this if you agree that practicing gratitude in life and directly practicing it on non-sentient objects are not the same thing. Going to church to pray, going to therapy, practicing mindfulness, etc. isn't the same thing as seeing each grape growing on a vine as an anthropomorphic object. Don't anthropomorphize your lawnmower.


You also don't use human language to communicate with your lawnmower to get it to work.



Many hunters say thank you to animals they just killed. Strange, or respectful. Depends on your perspective and cultural context.

LLMs are bound to change society in ways that seem strange to people stuck in outdated contexts.


> Saying thank you to an LLM.

Saying thank you to an LLM is indeed useless, but asking politely could appeal to the training data and produce better results, because people who asked politely on the internet got better answers, and that behavior could be baked into the models.


Where do you draw the line though? I know some people that ask Google proper questions like "how do I match open tags except XHTML self-contained tags using RegEx?" whereas I just go "html regex". Some people may even add "please" and "thank you" to that.

I doubt anyone is polite in a terminal, also because it's a syntax error. So the question is also, do you consider it a conversation, or a terminal?


Agreed.

When asked if they observed etiquette, even when alone, Miss Manners replied (from memory):

"We practice good manners in private to be well mannered in public."

Made quite the impression on young me.

A bit like the cliché:

"A person's morals are how they behave when they think no one is watching."


You're nice to AI for your own well being. I'm nice to AI so they spare me when they eventually become our overlords. We are not the same.


> You're nice to AI for your own well being. I'm nice to AI so they spare me when they eventually become our overlords.

Ahh, the full spectrum of human motivation -- niceness for its own sake, fear, and, let me add mine, Machiavellianism: I think being polite in your query produces better results.


A lot of wisdom and virtue in this comment; I appreciate that.


Reminds me of the elderly woman adding "please" to her queries in Google:

https://www.theguardian.com/uk-news/2016/jun/16/grandmother-...


The study is not about cruelty, but rather politeness. Impoliteness is not anything like cruelty.

Meanwhile, there is no such thing as cruelty toward a machine. That’s a meaningless concept. When I throw a rock at a boulder to break it, am I being cruel to that rock? When I throw away an old calculator, is that cruelty? What nonsense.

I do think it is at the very least insulting, and probably cruel and abusive, to build machines that assume an unearned, unauthorized standing in the social order. There is no moral basis for that. It’s essentially theft of a solely human privilege, one that can only legitimately be asserted by a human on his own behalf or on behalf of another human.

You don’t get to insist that I show deference and tenderness toward some collection of symbols that you put in a particular order.


When coding with LLMs, they always make these dumb fucking mistakes, or they don't listen to your instructions, or they start changing things you didn't ask them to... it's very easy to slip and become gradually more rude until the conversation completely derails. I find that forcing myself to be polite helps me keep my sanity and keeps the conversation productive.


It's just a different set of in-group/out-group signals, not some sort of moral failing. You're well within your rights to not like the signals though.


Who is this Publius guy anyway?


You can absolutely sue Wikipedia [1]

[1] https://en.wikipedia.org/wiki/Asian_News_International_vs._W...

edit: My bad, I get the joke now.


Safety and capabilities research are pretty much two sides of the same coin.


Doesn't the autobahn have unrestricted sections with no speed limit?


Well, yes, but not really. There is still a recommended speed of 130 km/h; however, going past it is not illegal.

If you get into an accident while going faster than 130 km/h which could have been prevented by driving at the recommended speed, you may be considered more liable by the courts. I’ve heard that there may be cases where your insurance might not cover you, but I’ve never found proof of that online.

Edit: rephrased “decriminalised” as “not illegal” as per comment.


That wording is not correct. A recommended speed is a mechanism employed in many countries (in the Netherlands you see it on highway exits). The only rule that applies on an autobahn stretch without signs is a recommended speed of 130 km/h, with all the legal consequences this entails (e.g. if you have an accident, you are at least partially at fault because the traffic situation obviously didn't allow you to drive as fast as you did). But speeding there is not "decriminalized" the way THC products are decriminalized in the Netherlands. There just is no speed limit. Period.


It does. And people drive between countries all the time in Europe. I personally drive through Germany a few times a year, and I'd hate it if my car had a speed limiter purely because I bought it outside of Germany. But perhaps that's somewhat of an edge case that would work with a simple GPS geofence.


My car instantly recognises that I’m in Germany and knows what speed I used in the unrestricted sections, and automatically switches back to that.

When I’m in Belgium it knows the speed on the highway is 120 km/h. In France or Denmark 130 km/h. In Germany in an unrestricted section? Whatever I last set it to.


There have been talks of ending this last bit of "madness" to help the climate: aerodynamic drag grows with the square of speed, so going twice as fast takes roughly four times the energy.


A good salesperson on a golf course with a VP can do a lot of damage.


s/golf course/[strip club|ski/beach resort|yacht vacay]/ as suits the particular VP



Full disclosure up top: I have been working on agents for about a year now, building what would eventually become HDR [1][2].

The first issue is that agents have extremely high failure rates. Agents really don't have the capacity to learn from either success or failure, since their internal state is fixed after training. If you ask an agent to repeatedly do some task, it has a chance of failing every single time. We have been able to largely mitigate this by modeling agentic software as a state machine. At every step we have the model choose the inputs to the state machine, and then we record them. We then 'compile' the resulting state-transition table down into a program that we can execute deterministically. This isn't totally foolproof, since the world state can change between program runs, so we have methods that allow the LLM to make slight modifications to the program as needed. The idea here is that agents should never have to solve the same problem twice. The cool thing about this approach is that smarter models make the entire system work better. If you have a particularly complex task, you can call out to gpt-4-turbo or claude-3-opus to map out the correct action sequence and then fall back to less complex models like Mistral 7B.
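To make that concrete, here's a minimal sketch of the record-then-replay idea in TypeScript. This is illustrative only, not the actual hdr-browser API; chooseAction, apply, repair, and the "done" terminal state are hypothetical stand-ins.

    // Sketch only, not the real HDR API: the LLM drives the state
    // machine once, we record the transition table, and later runs
    // replay it deterministically with no model in the loop.
    type Action = { name: string; args: Record<string, string> };
    type Step = { state: string; action: Action; nextState: string };

    // First run: chooseAction() is the expensive model call (e.g. a
    // gpt-4 / opus-class model mapping out the action sequence).
    async function recordRun(
      start: string,
      goal: string,
      chooseAction: (state: string, goal: string) => Promise<Action>,
      apply: (state: string, action: Action) => Promise<string>,
    ): Promise<Step[]> {
      const trace: Step[] = [];
      let state = start;
      while (state !== "done") {
        const action = await chooseAction(state, goal);
        const nextState = await apply(state, action);
        trace.push({ state, action, nextState });
        state = nextState;
      }
      return trace; // the 'compiled' state-transition table
    }

    // Later runs: deterministic replay; fall back to a cheaper model
    // (repair()) only where the world state has drifted.
    async function replay(
      trace: Step[],
      apply: (state: string, action: Action) => Promise<string>,
      repair: (state: string) => Promise<Action>,
    ): Promise<void> {
      let state = trace[0]?.state ?? "done";
      for (const step of trace) {
        const action = state === step.state ? step.action : await repair(state);
        state = await apply(state, action);
      }
    }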

The second issue is that almost all software is designed for people, not LLMs. What is intuitive for human users may not be intuitive for non-human users. We're focused on making agents reliably interact with the internet, so I'll use web pages as an example. Web pages contain tons of visually encoded information in things like the layout hierarchy, images, etc. But most LLMs rely on purely text inputs. You can try exposing the underlying HTML or the DOM to the model, but this doesn't work so well in practice. We get around this by treating LLMs as if they were visually impaired users. We give them a purely text interface by using ARIA trees. This interface is much more compact than either the DOM or HTML, so responses come back faster and cost way less.
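As a rough illustration (this one uses Playwright's accessibility snapshot; the real HDR pipeline may differ), the ARIA tree can be flattened into one short line of text per element:

    // Sketch: serialize the page's accessibility (ARIA) tree into
    // compact text for the LLM instead of shipping raw HTML or DOM.
    import { chromium } from "playwright";

    type AXNode = { role: string; name?: string; children?: AXNode[] };

    function renderTree(node: AXNode, depth = 0): string {
      const line = "  ".repeat(depth) + node.role + ": " + (node.name ?? "");
      const kids = (node.children ?? []).map((c) => renderTree(c, depth + 1));
      return [line, ...kids].join("\n");
    }

    async function main() {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto("https://example.com");
      const tree = (await page.accessibility.snapshot()) as AXNode | null;
      if (tree) console.log(renderTree(tree)); // far smaller than the DOM
      await browser.close();
    }

    main();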

The third issue I see with people building agents is they go after the wrong class of problem. I meet a lot of people who want to use agents for big-ticket items such as planning an entire trip + doing all the booking. The cost of a trip can run into the thousands of dollars and be a nightmare to undo if something goes wrong. You really don't want to throw agents at this kind of problem, at least not yet, because the downside to failure is so high. Users generally want expensive things to be done well, and agents can't do that yet.

However, there are a ton of things I would like someone to do for me that would cost less than five dollars of someone's time, where the stakes for things going wrong are low. My go-to example is making reservations. I really don't want to spend the time sorting through the hundreds of nearby restaurants. I just want to give something the general parameters of what I'm looking for and have reservations show up in my inbox. These are the kinds of tasks that agents are going to accelerate.

[1] https://github.com/hdresearch/hdr-browser [2] https://hdr.is


Bots acting on behalf of users should not be blocked, but we have spent several decades treating bots (except for the Googlebot) as bad.

Like if I want to programmatically unsubscribe from a subscription, why should I have to do it myself?


That's a bad example: "programmatically unsubscribing" means giving spammers information that this address is alive. A much better solution is to report the unwanted email as spam, so the sender's reputation takes a hit.

(and for that 1% of the cases where the address is not a spammer and user knows it, they can just hit "unsubscribe" manually)


I’m talking about subscription services a user signed up for at one time.


That's a bad example: there's already the List-Unsubscribe header.
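(For reference, per RFC 2369 and RFC 8058 one-click unsubscribe, the headers look something like this, with example.com standing in for the real sender:

    List-Unsubscribe: <mailto:unsubscribe@example.com>, <https://example.com/unsubscribe?id=123>
    List-Unsubscribe-Post: List-Unsubscribe=One-Click

Mail clients can act on these headers directly, so a bot, or the mail client itself, can unsubscribe without scraping the email body.)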


Subscription services like Netflix, not emails.

