
I really wish that every discussion about a new model didn’t rapidly become a boring and shallow discussion about AI safety.



AI is not an engineered system; it's emergent behavior from a system we can vaguely direct but do not fundamentally understand. So it's natural that the boundaries of system behavior would be a topic of conversation pretty much all the time.

EDIT: The boring and shallow parts are, unfortunately, the Internet's fault. I don't know what to do about those.


At least in some recent controversies (e.g. Gemini's generation of images of people), the criticized behavior was not emergent from ML training but was explicitly and intentionally engineered by hand.
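
To make "engineered by hand" concrete, here is a minimal sketch of the kind of prompt-rewriting layer that sits in front of a model. The function name and rewrite rule are hypothetical illustrations, not Google's actual code; the point is that this behavior is deterministic hand-written logic, not something learned during training:

    # Hypothetical illustration: behavior injected by a hand-written
    # rewrite layer in front of the model, not learned from data.
    def rewrite_prompt(user_prompt: str) -> str:
        # Assumed rule; the real system's rules are not public.
        if "person" in user_prompt.lower():
            return user_prompt + ", depicting a diverse range of people"
        return user_prompt

    print(rewrite_prompt("a portrait of a person"))
    # -> "a portrait of a person, depicting a diverse range of people"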


But that's the thing: prompt formulation is not engineering in the sense I'm talking about. We know why a plane flies, we know why an engine turns, and we know how a CPU works (mostly). We don't know, with any specificity, how GenAI gets from the prompt to the result. Almost all of the informational entropy of the output is hidden from us.


This announcement mentions only safety. What else do you expect people to talk about?



