Yeah. Of course some tasks need speed, but I've been kinda surprised that we haven't seen very slow models that perform far better than faster ones. We're treading new territory, and everyone seems to be making models that are merely "fast enough".

I wanna see how far this tech can scale, regardless of speed. I don't care if it takes 24 hours to formulate a response. Are there "easy" variables that drastically improve output?

I suspect not. I imagine people have tried that. Still, I'm curious as to why it hasn't panned out.
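One "easy" variable in that spirit is test-time compute: sample many candidate answers and keep the best one (best-of-N). Here's a minimal sketch of the idea; generate() and score() are hypothetical placeholders for a model call and a verifier, not any real API:

    import random

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for a single model completion (placeholder)."""
        return f"candidate-{random.random():.6f}"

    def score(prompt: str, response: str) -> float:
        """Hypothetical verifier/reward model; higher is better (placeholder)."""
        return random.random()

    def best_of_n(prompt: str, n: int = 64) -> str:
        """Spend n times the compute per query: sample n candidates,
        return the one the verifier scores highest."""
        return max((generate(prompt) for _ in range(n)),
                   key=lambda r: score(prompt, r))

    print(best_of_n("How far does this scale?", n=8))

Best-of-N, longer chains of thought, and search are all knobs that trade latency for quality; the open question above is how far they keep paying off.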




I think the problem is that 24 hours of compute per response would be incredibly expensive. I mean, hell, how would that even be trained?
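To put rough numbers on "incredibly expensive", a back-of-envelope sketch; the GPU count and hourly price below are illustrative assumptions, not sourced figures:

    # Back-of-envelope cost of one 24-hour response.
    gpus = 8                # assumption: GPUs held for a single request
    usd_per_gpu_hour = 2.0  # assumption: cloud rental price
    hours = 24

    cost = gpus * usd_per_gpu_hour * hours
    print(f"${cost:,.2f} per response")  # $384.00

And that's inference only; training a model to use that much deliberation well would repeat a cost like this across every training query.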



