
It is a possibility, but my understanding of what OpenAI has said is that GPT-5 is delayed because of the apparent promise of RL-trained models like o1: they've simply decided to train those instead of a bigger base model trained on better data. I think this is plausible.

OpenAI has an incentive to make people believe that the scaling laws are still alive, to justify their enormous capex if nothing else.

I wouldn't give what they say too much credence, and will only believe the results I see.


Yes, I agree it seems unlikely that the spending they're doing can be recouped.

But it can still make sense for a state, even if it doesn't make sense for investors.



