
I think LLMs are gonna be the "jumpstart" for more general AGI prototypes over the next couple of years.

The large corpus of text gives them a general basis of logical patterns, which can then be pruned iteratively in simulated environments.




So you think the logical patterns found in human language might also be similar enough to the logical patterns found in other systems that these LLMs have a jumpstart in figuring those out?

We already have such a poor idea of how these things seem to understand so much… it’ll be a wild day when cancer is cured and we have absolutely no idea why.


It feels like some type of Heisenbergian deal. You can solve what you want, but you cannot know how at the same time.


IMO a sufficiently advanced AI should be able to do a full analysis of its neural architecture and explain it and break down its functionality.


There is no capability in an LLM to do this and we don't know how to build an "AI" that is not an LLM.

(Just deciding that your LLM has magic powers because you've put it in a category called "AI", and you've decided that category has said magic powers, is what that guy Wittgenstein was complaining about in Philosophical Investigations. Besides, intelligence doesn't mean all your thoughts are automatically correct!)


> Besides, intelligence doesn't mean all your thoughts are automatically correct!

This is the true bitter lesson for HN.


That would make it more intelligent than most humans (or even all of them, YMMV).


You can't "cure cancer" because "cancer" is a label for different things with different causes. If your brain cells all have the wrong DNA, there's no way to get different brain cells.


I was using the phrase sort of idiomatically, but fair enough!


My experience with ChatGPT is that it cannot reason from the data it has, i.e., it cannot take abstract concepts it can write about and apply them to something it doesn't already have the solution for. So I'm not sure how the logical patterns can really evolve into something closer to AGI there. Maybe other LLMs? The inability to do math properly is really limiting, I think.


I haven't yet seen an example of an LLM failing at math that can't be easily fixed by having the LLM use a calculator, much as humans do for any math of significance. It needs to reach for the calculator more often, but that's a completely negligible compute cost.
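
To make that concrete, here's a toy version of the loop I mean. The CALC() marker and the canned model reply are made up purely for illustration; real systems use the provider's actual tool-calling API:

    import ast, operator, re

    # Toy sketch of "let the model use a calculator". The CALC(...) marker
    # and the canned reply below are made-up illustrations, not a real API.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv,
           ast.Pow: operator.pow, ast.USub: operator.neg}

    def safe_eval(expr):
        """Evaluate plain arithmetic via the AST, without eval()."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.UnaryOp):
                return OPS[type(node.op)](walk(node.operand))
            raise ValueError("disallowed expression: " + expr)
        return walk(ast.parse(expr, mode="eval"))

    # Pretend the model emitted this instead of doing the arithmetic itself:
    reply = "After 10 years at 7% you'd have CALC(1000 * 1.07 ** 10) dollars."

    # Replace each CALC(...) with the computed value before showing the user.
    final = re.sub(r"CALC\(([^)]*)\)",
                   lambda m: "%.2f" % safe_eval(m.group(1)), reply)
    print(final)  # -> "After 10 years at 7% you'd have 1967.15 dollars."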


I meant more abstract math, e.g., constructing something that takes math to construct. Take a concept like risk-neutral pricing and get it to build a replicating portfolio for something not totally trivial (i.e., without a lot of solved examples on the web). Fails for me.
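
For anyone unfamiliar, here's the solved one-period textbook warm-up for that exercise (a minimal sketch; all the numbers are made up). This is exactly the kind of thing it handles fine; it's the less-cookbook generalizations it fails at:

    # One-period binomial replication of a European call: the textbook
    # warm-up for "construct a replicating portfolio". Numbers made up.
    S0, u, d, r, K = 100.0, 1.2, 0.8, 0.05, 100.0  # spot, up/down moves, rate, strike

    Cu = max(S0 * u - K, 0.0)   # option payoff in the up state   (20.0)
    Cd = max(S0 * d - K, 0.0)   # option payoff in the down state (0.0)

    # Hold `delta` shares plus `B` in the riskless bond so the portfolio
    # pays Cu in the up state and Cd in the down state.
    delta = (Cu - Cd) / (S0 * (u - d))
    B = (u * Cd - d * Cu) / ((u - d) * (1.0 + r))
    price_by_replication = delta * S0 + B

    # Cross-check via the risk-neutral up-probability q.
    q = ((1.0 + r) - d) / (u - d)
    price_risk_neutral = (q * Cu + (1.0 - q) * Cd) / (1.0 + r)

    print(price_by_replication, price_risk_neutral)  # both ~11.90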


That's fair, I've seen that it can really struggle to come up with novel algorithms. I'm curious whether there will be more improvement on that front in future models, because even its current performance at algorithmic manipulation is far, far better than, e.g., GPT-3's.


Give computers the alphabet. It contains all the texts. Imagine the possibilities.



