
Except LLM capabilities have already peaked. Scaling has rapidly diminishing returns.



I have yet to see any published evidence of that.


Since you're going that route, do you have published evidence showing they HAVEN'T entered the top of the S-curve?


For one, thinking LLMs have plateaued essentially assumes that video can't teach AI anything. It's like saying a person locked in a room their whole life with only books to read would be as good at reasoning as someone who's been out in the world.


LLMs do not learn the same way that a person does.


No, but both a person and an LLM benefit from learning from rich and varied data across multiple modalities.


What reason do you have to believe we're anywhere close to the middle of the S-curve? The S-curve may be the only sustainable shape in nature in the limit, but that doesn't mean every exponential you see is already past its inflection point.
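To make that concrete, here's a minimal sketch (arbitrary constants, purely illustrative, not fit to any real benchmark) of why early data can't distinguish the two regimes: before its inflection point, a logistic curve tracks an exponential almost exactly.

    import math

    # Before its inflection point, a logistic curve is nearly
    # indistinguishable from an exponential, so early data alone
    # can't tell you which regime you're in. Constants are arbitrary.

    def exponential(t: float) -> float:
        return math.exp(t)

    def logistic(t: float, cap: float = 1000.0) -> float:
        # Logistic with carrying capacity `cap`; starts at 1 when t = 0
        # and inflects at y = cap / 2 (here t = ln(999), about 6.9).
        return cap / (1.0 + (cap - 1.0) * math.exp(-t))

    for t in range(0, 7):
        e, s = exponential(t), logistic(t)
        print(f"t={t}: exp={e:8.1f}  logistic={s:8.1f}  ratio={s/e:.3f}")

    # Through t = 4 the two stay within ~5% of each other; the gap only
    # widens as the logistic nears its inflection point.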


Why are you thinking in binary? It is not at all clear to me that progress is stagnating; in fact, I am still impressed by it. But I can't tell whether a wall is coming or not. There is no clear reason why this progress should follow some standard or historical curve.


This is not a linear process; deep-learning models do not scale that way. Empirically, loss tends to fall as a power law in parameters, data, and compute, which is diminishing returns, but not a peak.
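A rough sketch of that power-law shape, using constants of roughly the magnitude reported in the scaling-law literature (e.g. Kaplan et al. 2020); treat the form and the numbers as assumptions for illustration, not measurements:

    # Assumed power-law form L(N) = (N_c / N) ** alpha, per the neural
    # scaling-law literature; the constants approximate those reported
    # by Kaplan et al. (2020) and are illustrative only.

    def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
        """Hypothetical test loss as a function of parameter count N."""
        return (n_c / n_params) ** alpha

    for n in [1e9, 1e10, 1e11, 1e12]:
        print(f"{n:.0e} params -> loss {loss(n):.3f}")

    # Each 10x jump in parameters buys a smaller absolute drop in loss:
    # returns diminish smoothly, but they never hit zero.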


What kind of evidence could convince you?



