
Current AI, as exemplified by LLMs like ChatGPT, is not propositional/logic-based, though, so I'm unsure what your objection is. Artificial intelligence can be just as messy and alogical as natural intelligence. The fact that it runs on a computer doesn't mean we fully see or understand what it does.

I suppose there is a discussion to be had about what "true intelligence" is and whether some or most human beings have it in the first place.




It’s still propositional and logic-based. Every instruction a computer can interpret is propositional and logic-based. AI models are chaotic, but deterministic. All computers are deterministic. The distinction I’m making between propositional systems and intelligence is like the distinction between being able to generate a transcendental number like pi (which is propositionally generated: you can calculate it without “understanding” the sequence or being able to reduce it to a smaller pattern) and being able to know whether your procedure for generating pi is correct or incorrect. That latter ability is what I mean by true intelligence, and the transformer approach doesn’t solve for it. I believe this is called the “grounding” problem.
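To make the pi analogy concrete, here's a minimal sketch in Python (the function name and the choice of the Nilakantha series are mine; any convergent series would make the same point). The procedure mechanically emits ever-better approximations of pi, yet nothing in it represents, let alone verifies, its own correctness:

    # Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
    # The loop "generates pi" without any notion of whether it is right;
    # correctness lives entirely outside the procedure.
    def approximate_pi(terms: int) -> float:
        pi, sign = 3.0, 1.0
        for k in range(2, 2 * terms + 1, 2):
            pi += sign * 4.0 / (k * (k + 1) * (k + 2))
            sign = -sign
        return pi

    print(approximate_pi(1000))  # ~3.141592653...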

Gödel proved close to a hundred years ago that it’s impossible for any deterministic system to determine its own correctness from within. Whatever we’re doing when we say an output A is “right” and output B is “wrong” is fundamentally different from what a propositional, deterministic system does, regardless of how large or chaotic or well fitted to expected output it is.
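For reference, the precise result I'm leaning on here (Gödel's second incompleteness theorem) is roughly:

    \text{If } T \text{ is a consistent, effectively axiomatized theory}
    \text{ that can express basic arithmetic, then } T \nvdash \mathrm{Con}(T)

i.e., T cannot prove its own consistency; that judgment has to come from outside T.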

What qualifies as “true intelligence” is the core issue here, yes, and transformers don’t qualify. That doesn’t mean they aren’t valuable or can’t produce correct results much faster and better than humans can, but I think anything we could deliberately design inevitably ends up being deterministic and subject to the introspective limitations of any such system, no matter how big or all-encompassing.

I think we’re going to create better and more comprehensive “checkpoints”/“save points” for the results of collective intelligence, but that’s different from the intelligence itself.


Our understanding of computer instructions is propositional and logic-based. The hardware itself is a material thing: it's a bunch of atoms and electrons. So is the human brain. Whatever distinction we can make between them cannot be purely conceptual; it has to be physical at some level. The fact that we "logically" designed and made computers, but not brains, is not a relevant distinction: it's not part of the physical reality of either thing, as a physical thing.

> Gödel proved close to a hundred years ago that it’s impossible for any deterministic system to determine its own correctness from within.

I don't think non-deterministic systems fare much better. At the core, completeness is a quantitative problem, not a qualitative one. In order to fully understand and analyze a system of a certain size, there is no way around the simple fact that you need a bigger system. There are more than ten things to know about ten things, so if you only have the ten things to play with, you will never be able to use them to represent a fact that can only be represented using eleven things. Gödel's theorem pushes this to infinity, which is something that we are notoriously poor at conceptualizing. I think this just obfuscates how obvious this limitation is when you only consider finite systems.
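To put toy numbers on the counting point (an illustration, not a proof): with n primitive yes/no facts you can distinguish 2^n states, but the possible properties of those states, i.e. the assignments of true/false to each state, number 2^(2^n), far more than the n facts you started with can ever encode.

    # Toy illustration of the counting argument above.
    n = 10
    states = 2 ** n               # 10 binary facts -> 1024 states
    properties = 2 ** states      # true/false maps over those states
    print(states)                 # 1024
    print(len(str(properties)))   # 309 -- a 309-digit number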

Which brings me to what I think is our core disagreement: you appear to believe that humans can do this, whereas I believe they blatantly cannot. You speak of the grounding problem as if the human brain solves it. It doesn't. Our reasoning is not, has never been, and will never be grounded. We are just pretty damn good at lying to ourselves.

I think that our own capacity for introspection is deeply flawed and that the beliefs we have about ourselves are unreliable. The vast array of contradictory beliefs that are routinely and strongly held by people indicates that the brain reasons paraconsistently: incoherence, contradiction and untruth are not major problems and do not impede the brain's functioning significantly. So I think we have a bit of a double standard here: why do we care about Gödel and what deterministic systems can or cannot do, in a world where more than half of the population (let's be honest, it's probably all of us) is rife with unhinged beliefs? Look around us! Where do you see grounded beliefs? Where do you see reliable introspection?

But I'll tell you what. Evolution is mainly concerned with navigating and manipulating the physical world in a reliable and robust way: it is about designing flexible bodies that can heal themselves, immune systems, low-power adaptable neural networks, and so on. And that, strangely enough, is not something AI does well. Why? My theory is that human intelligence and introspection are relatively trivial: they are high impact, for sure, but as far as actual complexity goes, they are orders of magnitude simpler than everything else that we have in common with other animals. Machines will introspect before they can cut carrots, because regardless of what we like to think about ourselves, introspection is a simpler task than cutting carrots.

I think this explains the current trajectory of AI rather well: we are quickly building the top of the pyramid, except that we are laying it directly on the ground. Its failure will not be intelligence, introspection or creativity; it will be mechatronics and the basic understanding of reality that we share with the rest of the animal kingdom, a.k.a. the stuff that is actually complicated.


Yeah, we disagree pretty fundamentally.

Our perception of the physical world is itself an evolved construct. The wider reality that perception is fitted to is something I don’t think we can fit a machine to; I think we can only fit the machine to the construct.

I get where you’re coming from and appreciate the argument, and you may be right, but I’ve come to appreciate the ubiquity of hidden context more and more, and I lean that way.



