
Were those trained using RLHF? IIRC the earliest models were just using SFT for instruction following.

Like the GP said, I think this is fundamentally a problem of training on human preference feedback. You end up with a model optimized to produce outputs that cater to human preferences, which (necessarily?) includes the degenerate case of sycophancy.
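To make the distinction concrete: SFT just maximizes the likelihood of demonstration text, while preference-based training (as in RLHF) first fits a reward model to pairwise human preferences and then optimizes the policy against that learned reward. Below is a minimal sketch of the pairwise (Bradley-Terry) reward-model loss in PyTorch; this is just my own illustration, and names like reward_model and the embedding inputs are placeholders, not anything from the thread:

    import torch
    import torch.nn.functional as F

    # Placeholder reward model: maps a response representation to a scalar score.
    reward_model = torch.nn.Linear(768, 1)

    def preference_loss(chosen_emb, rejected_emb):
        """Bradley-Terry pairwise loss for fitting a reward model to human preferences.

        chosen_emb / rejected_emb: embeddings of the human-preferred and
        dispreferred responses (placeholders for whatever encoder is used).
        """
        r_chosen = reward_model(chosen_emb).squeeze(-1)
        r_rejected = reward_model(rejected_emb).squeeze(-1)
        # Push the preferred response to score higher than the rejected one.
        return -F.logsigmoid(r_chosen - r_rejected).mean()

    # Example with dummy data:
    loss = preference_loss(torch.randn(8, 768), torch.randn(8, 768))

    # The policy is then trained (e.g. with PPO) to maximize this learned reward,
    # so anything raters systematically prefer -- including flattery -- gets amplified.

The point is that the only signal the reward model ever sees is "which answer did the rater like more", so if raters tend to like being agreed with, the policy inherits that.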





