Hacker News

Why? The choice to not do the post training would be every bit as intentional, and no different than post training to make it less sympathetic.

This is a designed system. The designers make choices. I don’t see how failing to plan and design for a common use case would be better.




We do not know whether it is capable of sympathy. Post-training it to reliably act sympathetic feels manipulative. Can it at least be post-trained to be honest? Dishonesty is immoral, and I want my AIs to behave morally.


AIs don't behave. They are a lot of fancy maths. Their creators, however, can behave in ethical or moral ways when they create these models.

That's not to say the people who work on AI aren't incredibly talented, just that the model itself isn't human.


That's just pedantic and unprovable, since you can't know whether it has a qualitative experience or not.

Training it to pretend to be a feelingless robot or a sympathetic mother is weird to me either way. It should just state facts to us.





