
Lots of people are building on the edge of current AI capabilities, where things don't quite work, because in 6 months when the AI labs release a more capable model, you will just be able to plug it in and have it work consistently.



> because in 6 months when the AI labs release a more capable model

How many years do we have to keep hearing this line? ChatGPT is two years old and still can't be relied on.


And where is the product that was developed on the edge of current AI capabilities and is now, with the latest model plugged in, suddenly working consistently? All I'm seeing is models getting better and better at generating videos of spaghetti-eating movie stars.


They're coming. I've seen the observability tools try to do this, but I still have to tweak it; it's just time-consuming. Empromptu.ai is the closest to solving this problem. They're the only ones with a library you install in your app to do system optimization and evals for accuracy in real time.


For me, they have come from the AI labs themselves. I have been impressed with Claude Code and OpenAI's Deep Research.


While I'm bullish on AI capabilities, that's not a very optimistic observation for developers building on top of it.


In 6 months, when FSD is completed and we get robots in every home? I suspect we'll keep adding features, because reliability is hard. I don't know what heuristic you're using to conclude that this problem will eventually be solved by current AI paradigms.


The GP comment describes what has already happened "every 6 months", multiple times.



