Hacker News new | past | comments | ask | show | jobs | submit login

There are a few layers to this worth considering.

- In this world, the information delivered to agents should align with the content visibly delivered to humans on the web. This is essentially how the bulk of SEO cloaking is detected today. There needs to be a way to validate this and establish trust, which is entirely solvable, and these techniques penalize such schemes from the outset. (This is probably not the best forum to go too deep into that.)

- We're assuming agents have full buying authority here. I don't believe that will be commonplace for a long time. Even if it were, the same systems for PCI compliance are in play, and the interfaces provided by both payment gateways and shopping carts protect against duplicate purchase attempts. Those attempting to abuse this fall more into the malicious-actor camp.

- Phishing and malicious actors are going to do what they have always done. There are some very important security, access control, and compliance measures we should put in place for the most sensitive actions, as we always have, and most of the existing ones still apply. The agent experience and the ecosystem in general will have to evolve verifiable trust patterns, so that when a human delegates a task to an agent, the human has confidence in the outcome and ways to validate the interactions.
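To make the content-parity idea in the first point concrete, here is a toy sketch (names and threshold are my own; real detectors render pages, diff at the token level, and sample over time, while this just normalizes and compares two text snapshots):

```python
import difflib
import re

def content_parity(human_text: str, agent_text: str, threshold: float = 0.8) -> bool:
    """Crude check that agent-facing content substantially matches
    what a human visitor would see on the same page.

    Normalizes whitespace and case, then compares with a sequence
    similarity ratio. Anything below the threshold is flagged as a
    potential cloaking-style mismatch.
    """
    def norm(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip().lower()

    ratio = difflib.SequenceMatcher(None, norm(human_text), norm(agent_text)).ratio()
    return ratio >= threshold

# Identical offer shown to both audiences passes:
content_parity("Blue widget, $20, in stock", "blue  widget, $20, in stock")
# Keyword-stuffed content served only to agents fails:
content_parity("Blue widget, $20, in stock",
               "CHEAP DEALS discount coupon keywords served to crawlers only")
```

The same comparison run from a trusted third party (or the agent platform itself) is what would let a site build verifiable trust rather than just asserting it.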
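The duplicate-purchase protection mentioned in the second point usually comes down to idempotency keys. A minimal sketch of the pattern (the class and fields are hypothetical, but this is the shape that real payment APIs expose):

```python
import uuid

class PaymentGateway:
    """Toy gateway illustrating idempotency keys: a retried or
    duplicated charge request with the same key returns the original
    result instead of charging the card again."""

    def __init__(self) -> None:
        self._seen: dict[str, dict] = {}  # idempotency_key -> first result

    def charge(self, idempotency_key: str, amount_cents: int) -> dict:
        if idempotency_key in self._seen:
            # Duplicate attempt (agent retry, double-submit, etc.):
            # replay the stored result, no second charge.
            return self._seen[idempotency_key]
        result = {
            "charge_id": str(uuid.uuid4()),
            "amount_cents": amount_cents,
            "status": "succeeded",
        }
        self._seen[idempotency_key] = result
        return result
```

An agent that derives the key from the order (not the request attempt) can retry freely without risking a double purchase.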
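For the delegation point, one plausible shape for a verifiable trust pattern is a signed, scoped, expiring delegation the agent carries and a relying party can check. A sketch using a shared HMAC secret (all names are hypothetical; a real design would likely use asymmetric keys and a standard token format):

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-secret-between-human-and-verifier"  # hypothetical

def issue_delegation(agent_id: str, scope: str, ttl_s: int = 3600) -> dict:
    """Human side: sign a scoped, time-limited delegation."""
    claims = {"agent": agent_id, "scope": scope, "exp": int(time.time()) + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_delegation(token: dict, required_scope: str) -> bool:
    """Relying-party side: check signature, then expiry and scope."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # claims were tampered with
    c = token["claims"]
    return c["exp"] > time.time() and c["scope"] == required_scope
```

The point is that the human's intent ("this agent may do this, until then") is checkable by anyone downstream, rather than taken on faith.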

I'll be the first to admit that I don't have all of the answers here, but with agents becoming the new entry point, or delegation tool, for the next generation of digital users, these are questions we have to answer and solve. It starts by focusing the industry on the ___domain of this problem, that is, AX (agent experience). How to do it effectively, and what needs to evolve to achieve it... that's where the work is.
