Hacker News

I want my AI support rep to have access to data and documents that it can vector/text search and forward to the user.

I want my AI support rep to create tasks to engage my team, with all the relevant data linked to it. It should be able to automatically schedule things and ask a human for confirmation.
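To make that concrete, here is a minimal sketch of the confirmation flow: the model can only *propose* a scheduled task, and nothing is committed until a human approves it. All names here (ProposedTask, propose_task, approve) are hypothetical illustrations, not any real library.

```python
# Hypothetical sketch: the LLM's tool call only queues a task proposal;
# execution is gated on explicit human approval.
from dataclasses import dataclass

@dataclass
class ProposedTask:
    title: str
    linked_data: list          # e.g. ticket IDs, doc sections (assumed shape)
    scheduled_for: str         # ISO date proposed by the model
    approved: bool = False

QUEUE = []

def propose_task(title, linked_data, when):
    """The tool exposed to the LLM; the task is queued, never executed."""
    task = ProposedTask(title, linked_data, when)
    QUEUE.append(task)
    return task

def approve(task):
    """The human-in-the-loop confirmation step."""
    task.approved = True
```

The point is that the model's authority ends at the queue boundary; everything past it requires a person.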

It should be able to escalate communication to a human in the loop, using whatever communication channels make sense given staff availability and workload.

At no point is it allowed to answer any questions unless the answers are constrained and, preferably, directly quoted from a cited and linked section in our documentation.

In general, it should never confirm or deny things, never try to close things, and never acknowledge the content of any of the user's communication, other than by calling tools and surfacing public information which might be relevant to their request.

Most of this is a software architecture problem. The LLM is just there to provide an intuitive and extremely powerful natural language interface for search and tool calling. A little bit of glue between different systems, both internal and external.
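A toy sketch of that glue, under the constraint described above: the system may only surface verbatim, cited passages from its own documentation, and it escalates to a human when nothing quotable matches. Everything here (DOCS, retrieve, answer_or_escalate, the keyword match standing in for vector search) is a hypothetical illustration, not a real implementation.

```python
# Assumed toy documentation store: section ID -> verbatim text.
DOCS = {
    "refunds#eligibility": "Refunds are available within 30 days of purchase.",
    "refunds#process": "Submit a refund request via the billing portal.",
}

def retrieve(query):
    """Toy stand-in for vector/text search over the documentation."""
    words = query.lower().split()
    return [(sec, text) for sec, text in DOCS.items()
            if any(w in text.lower() for w in words)]

def answer_or_escalate(query):
    """Return only cited, verbatim doc passages; otherwise escalate.

    The LLM never composes an answer: it either forwards quotes with
    their section links, or it creates a task for a human.
    """
    hits = retrieve(query)
    if not hits:
        return {"action": "escalate",
                "task": "Unanswered question: %r" % query}
    return {"action": "answer",
            "quotes": [{"section": sec, "text": text} for sec, text in hits]}
```

In a real system the retrieval step would be an embedding or full-text index and the escalation branch would create the linked task described earlier, but the architectural shape is the same: the LLM chooses between tools, and the tools enforce the constraints.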




> In general, it should never confirm or deny things, never try to close things, and never acknowledge the content of any of the user's communication, other than by calling tools and surfacing public information which might be relevant to their request.

If you were an end user of such a system, would you be happy?


As long as it addressed my needs by either pointing me to the correct documentation, or elevating me to a human, then yes, of course I'd be happy with that.

It's an incremental step along the way to truly reliable agentic systems that we can trust with important things.



