
> I personally think AGI is impossible

That’s a very weird statement to make. Do you think you have a soul or something? Otherwise, how is your own intelligence something a computer can’t replicate?




>> Do you think you have a soul or something? Otherwise, how is your own intelligence something a computer can’t replicate?

Strictly speaking, it is for you to say why you think that a digital computer, which is nothing like a human, can be intelligent like a human.

Btw, I see this quip about souls often on HN in response to comments that AGI is impossible, and it is invariably introduced into the conversation by the person arguing that AGI is possible. I have never, ever seen anyone argue that "digital computers can't be intelligent because they don't have a soul". I don't even know where that "soul" bit comes from; probably from something someone said centuries ago. In any case it's irrelevant to most modern conversations, where most people don't believe in the supernatural and have other reasons to think that AGI may be impossible.

And why wouldn't they? For the time being, we don't have anything anyone is prepared to call "AGI". We also don't have a crystal ball to see into the future and know whether it is possible, or not. For all we know, it might be a theoretical possibility, but a practical impossibility, like stable wormholes, or time travel, or the Alcubierre Drive. We will know when we know, not before.

Until then, invoking "souls", or any other canned reply, only seems to serve to shut down curious conversation and should be avoided.

To say it in very plain English: you don't know whether AGI is possible, you only assume it is, so let's hear the other person say what they assume, too. Their opinion is just as interesting as yours, and it is just as much an opinion.


One answer to this would be that humans aren't general intelligences either, but existing in the real world/human society keeps you honest enough that you have to try to act like one.


One of the many ways AGI is often defined is “human-level intelligence,” so that seems like a tautological impossibility.


I do think I have a soul, and I don’t think that AGI is impossible.

So, I really don’t get why people think AGI is impossible.

I mean, I have hope that it won’t ever happen, but not because I suspect it is impossible.


I think it’s impossible because I don’t think you can leapfrog evolution, and there’s almost definitely hidden context essential to our intelligence and perception that you can only get through a very very long history.

However, I do think it’s quite possible and likely that we’ll create mimics that convince most people they’re intelligent. But I think that’s a weird type of reflection: anything we can make is limited to mirroring our observable past and existing collective thought, and it can never be truly introspective, because true introspection requires evolutionary context and hidden motivation you can’t transfer.


Evolution is a process that works without an intelligent steward. In a way it's a brute-force technique. Plus, nothing in evolution is optimizing for intelligence; it is merely a happy accident that humans ended up with the brains that we did. A different environment could have yielded a drastically different evolutionary history.

It doesn't seem very logical to think that because evolution took so long to get us to where we are now, we consequently won't be able to design an intelligent AGI system.


It’d take a while to justify this argument to the extent I think it’s justified, but I think we’re in an inescapable frame set by evolution, and our adaptation to our environment probably goes a lot deeper than what we can see. I don’t think the visible context is the full context, and think true intelligence probably requires an implicit understanding of context which is invisible to us.


I don’t think true intelligence is propositional/logic based. I also think the observable universe is inherently limited by our perceptual framework, and there will always be something outside it.

Imagine an amoeba that can only detect light and dark. Its observable universe exists on a simple gradient. But the cells that are used in its perceptual system cannot be described on a gradient. I think that probably scales up indefinitely.

We confuse what we see for all that exists because of our recent history and scientific advances. The triumph of science comes from focusing solely on what we can see and reason about, and from improving our vision, because that’s what we have control over, but it doesn’t mean we see everything. I think true intelligence originates from something we can’t see. You can call it a soul, or call it a scaled-up amoeba cell, but I personally think the origin of true intelligence and qualia is some weird, very, very old, evolutionarily created thing we can’t see (and I think other animals have similar invisible perceptual machinery somewhere; I don’t think it’s just a human thing).


Current AI, as exemplified by LLMs like ChatGPT, is not propositional/logic-based, though, so I'm unsure what your objection is. Artificial intelligence can be just as messy and alogical as natural intelligence. The fact that it runs on a computer doesn't mean we fully see or understand what it does.

I suppose there is a discussion to be had about what "true intelligence" is and whether some or most human beings have it in the first place.


It’s still propositional and logic-based. Every instruction a computer can interpret is propositional and logic-based. AI models are chaotic, but deterministic. All computers are deterministic. The distinction I’m making between propositional systems and intelligence is like the distinction between being able to generate a transcendental number like pi (which is propositionally generated/you can calculate it without “understanding” the sequence or being able to reduce it to a smaller pattern) and being able to know whether your procedure to generate pi is correct or incorrect. That latter ability is what I mean by true intelligence, and the transformer approach doesn’t solve that. I believe this is called the “grounding” problem.
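
To make the pi analogy concrete, here’s a minimal sketch (Python and the Leibniz series are just my choice of illustration, nothing essential to the argument) of a procedure that grinds out an approximation of pi by blindly following a rule, with no capacity to judge whether the rule itself is a correct way to reach pi:

  # Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
  # The loop applies the rule mechanically; nothing here "knows"
  # whether the procedure is correct.
  def leibniz_pi(terms: int) -> float:
      total = 0.0
      for k in range(terms):
          total += (-1) ** k / (2 * k + 1)
      return 4 * total

  print(leibniz_pi(1_000_000))  # close to 3.14159; the error shrinks roughly like 1/terms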

Gödel proved close to a hundred years ago that no consistent deterministic formal system capable of arithmetic can prove its own consistency. Whatever we’re doing when we say an output A is “right” and output B is “wrong” is fundamentally different from what a propositional deterministic system does, regardless of how large or chaotic or well fitted to expected output it is.

What qualifies as “true intelligence” is the core issue here, yes, and transformers don’t qualify. That doesn’t mean they aren’t valuable or can’t give correct results to things much faster and better than humans, but I think anything we could deliberately design inevitably ends up being deterministic and subject to the introspective limitations of any such system, no matter how big or all encompassing.

I think we’re going to create better and more comprehensive “checkpoints”/“savepoints” for the results of collective intelligence, but that’s different from the intelligence itself.


Our understanding of computer instructions is propositional and logic-based. The hardware itself is a material thing: it's a bunch of atoms and electrons. So is the human brain. Whatever distinction we can make between them cannot be purely conceptual; it has to be physical at some level. The fact that we "logically" designed and made computers, but not brains, is not a relevant distinction. It's not part of the physical reality of either thing, as a physical thing.

> Gödel proved close to a hundred years ago that no consistent deterministic formal system capable of arithmetic can prove its own consistency.

I don't think non-deterministic systems fare much better. At the core, completeness is a quantitative problem, not a qualitative one. In order to fully understand and analyze a system of a certain size, there is no way around the simple fact that you need a bigger system. There are more than ten things to know about ten things, so if you only have the ten things to play with, you will never be able to use them to represent a fact that can only be represented using eleven things. Gödel's theorem pushes this to infinity, which is something that we are notoriously poor at conceptualizing. I think this just obfuscates how obvious this limitation is when you only consider finite systems.
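
To put rough numbers on that (ten things is just an illustrative choice of mine): even the simplest "facts" about ten things outnumber the ten things themselves by a wide margin.

  from math import comb

  n = 10
  print(comb(n, 2))  # 45 distinct pairwise relations among ten things
  print(2 ** n)      # 1024 possible subsets / joint on-off states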

Which brings me to the core disagreement that I think we have, which is that you appear to believe that humans can do this, whereas I believe they blatantly cannot. You speak of the grounding problem as if the human brain solves it. It doesn't. Our reasoning is not, has never been, and will never be grounded. We are just pretty damn good at lying to ourselves.

I think that our own capacity for introspection is deeply flawed and that the beliefs we have about ourselves are unreliable. The vast array of contradictory beliefs that are routinely and strongly held by people indicates that the brain reasons paraconsistently: incoherence, contradiction and untruth are not major problems and do not impede the brain's functioning significantly. And so I think we have a bit of a double standard here: why do we care about Gödel and what deterministic systems can or cannot do, in a world where more than half of the population (let's be honest, it's probably all of us) is rife with unhinged beliefs? Look around us! Where do you see grounded beliefs? Where do you see reliable introspection?

But I'll tell you what. Evolution is mainly concerned with navigating and manipulating the physical world in a reliable and robust way: it is about designing flexible bodies that can heal themselves, immune systems, low-power adaptable neural networks, and so on. And that, strangely enough, is not something AI does well. Why? My theory is that human intelligence and introspection are relatively trivial: they are high impact, for sure, but as far as actual complexity goes, they are orders of magnitude simpler than everything else that we have in common with other animals. Machines will introspect before they can cut carrots, because regardless of what we like to think about ourselves, introspection is a simpler task than cutting carrots.

I think this explains the current trajectory of AI rather well: we are quickly building the top of the pyramid, except that we are laying it directly on the ground. Its failure will not be intelligence, introspection or creativity, it will be mechatronics and the basic understanding of reality that we share with the rest of the animal kingdom. AKA the stuff that is actually complicated.


Yeah, we disagree pretty fundamentally.

Our perception of the physical world is itself an evolved construct. The wider thing that perception is fitting is something I don’t think we can fit a machine to. I think we can only fit it to the construct.

I get where you’re coming from/appreciate the argument, and you may be right, but I’ve come to appreciate the ubiquity of hidden context more and more/lean that way.


A computer could indeed replicate our intelligence, but our intelligence might not be sufficient to write the AGI source code :)



