
I pretty much agree with this: having some way to indicate model boundaries in an LLM's parameter space, so you could create back pressure on token generation, would help a lot here.
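
Roughly what I mean, as a sketch only: treat next-token entropy as a crude proxy for being "outside" the well-modeled region, and let that throttle or halt decoding. The names (logits_fn, sample_fn) and the threshold are made up for illustration, not any real API:

    import numpy as np

    def softmax(logits):
        z = logits - logits.max()
        p = np.exp(z)
        return p / p.sum()

    def generate_with_backpressure(logits_fn, sample_fn, prompt_ids,
                                   max_tokens=256, entropy_threshold=4.0):
        """Decoding loop that stops when the next-token distribution gets
        too flat, i.e. the model is effectively guessing."""
        ids = list(prompt_ids)
        for _ in range(max_tokens):
            probs = softmax(logits_fn(ids))              # next-token distribution
            entropy = -(probs * np.log(probs + 1e-12)).sum()
            if entropy > entropy_threshold:              # "past the boundary"
                break                                    # back pressure: stop emitting
            ids.append(sample_fn(probs))
        return ids

Entropy is only one possible signal, of course; the point is just that some scalar tied to the model's own uncertainty could gate generation instead of letting it free-run.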

For me, though, the interesting bits are how the lack of understanding surfaces as artifacts in the presentation or interaction. I'm a systems person who can't help but try to fathom the underlying connections and influences driving a system's outputs.
