> To conclude, we can never automate the construction (nor the testing) of computer programs. So says logic and so says the theory of computation.
On the other hand, the strong Church-Turing thesis is that nothing, humans included, is more powerful than a Turing machine when it comes to solving halting problems. This isn't proven, or really provable, but it seems more likely than not (humans certainly don't seem especially good at solving halting problems in general).
If the Church-Turing thesis is true (at least as applied to humans), discovering valid, halting programs by heuristic using a machine isn't barred by the halting problem as far as I can tell.
In other words, humans almost certainly aren't solving halting problems when they program, so why should computers have to? Humans mess up and produce programs that don't halt, or don't produce the output they expect, all the time, but we still produce useful programs at least some of the time, even if we can't know for sure which programs are which.
Indeed. It reminds me of Penrose in The Emperor's New Mind (or one of his other books), where a large chunk of his argument is that human consciousness and understanding can't be fully explained by computational processes, because humans can understand and see the truth of certain statements that formal systems cannot prove.
Mathematicians have spilled a bunch of ink explaining why Penrose didn't understand Godel's proof.
The linked article is ridiculous, but your comment is a misrepresentation of the Penrose debate. The main weak point of Penrose's argument isn't that he "didn't understand Godel's proof". He rightly observed that Godel's incompleteness theorem requires that either human minds are capable of things algorithms are not capable of, or that their perception of mathematical truth is limited in certain ways, an observation that Godel himself made. The main point of disagreement with his critics is not the mechanics of Godel's theorem which, unsurprisingly, this Nobel prize winning mathematician (*) understands perfectly well, but whether human perception of mathematical truth is limited in these ways.
The question of whether human perception of mathematical truth is infallible is open to philosophical debate. Penrose distinguishes between mistakes, which all mathematicians make but which in principle they can later come to recognise as mistakes, and true incorrect perception of mathematical truth, where no amount of checking, re-derivation, etc. could possibly right the incorrect perception. His view is that the latter type of error does not exist, that there is a mathematical ground truth that we can directly perceive and that is infallible, and that the mistakes made by mathematicians are of a different nature to inconsistencies in that perception of mathematical ground truth.
Penrose went too far in presenting this argument as a proof of the impossibility of strong AI based on algorithms. The point about human perception of mathematical truth can very reasonably be disputed, and is part of a much larger debate about the fundamental nature of maths that is far from settled. Personally, though, my intuition is that Penrose is right about the human perception of mathematical truth, and I therefore find the Godel-based argument persuasive. It's not a proof because it rests on an assumption that has not been proven, but I find it convincing to the extent that I tend to think that the assumption is probably true.
So the debate is a philosophical one more than it is a mathematical one, and while Penrose may be guilty of some rather bad PR, accusing him of "not understanding Godel's proof" does him a disservice.
* Yes, there's no Nobel prize in mathematics, but he is a mathematician (as well as a physicist) and he has won a Nobel prize.
Not OP, feel free to correct me, but Godel proved that any formal system of logic can be shown to be logically inconsistent at at least one point.
"Formal system of logic" maps well onto Turing machines, and Turing machines map onto "any computer system" if you are very abstract about it.
Now, people are wrong about mathematical facts, but seemingly not forever. We puzzle it over, come back to it, someone makes a breakthrough. It doesn't seem like there is a fixed blind spot where our logic breaks down.
So what are people doing that is not captured by a formal system of logic?
Options:
1. The mapping of Turing machines to computers isn't airtight - they are doing something "more" than a TM with infinite time + space is capable of
2. People are not "more than" an infinite TM either, we just romantically believe we are, and ignore our own flaws
3. People are doing something special that is not captured by our theory of computation
> Godel proved that any formal system of logic can be shown to be logically inconsistent at at least one point.
He proved they can be inconsistent, or incomplete. The ones that mathematicians work with are incomplete and assumed to be consistent[0]: there are statements which are true, but not provable, within the system. Consistent-but-incomplete systems don't have any contradictions or logical holes; they just can't determine the truth of every possible statement.
From an incomplete system you can build a "more powerful" system: one with another axiom that makes more things provable without contradicting anything in the original one (sketched below the footnote).
In inconsistent systems, everything is provable, even statements' own negations, so they aren't very useful.
[0] one statement that you can't prove within a (consistent, sufficiently powerful) system is "this system is consistent."
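To make that "add another axiom" step concrete (this is just the standard picture from Godel's second incompleteness theorem, under the assumption that T is a sound, recursively axiomatized theory containing enough arithmetic; an illustration, not something from the parent comment):

    % Godel II: T cannot prove its own consistency
    T \nvdash \mathrm{Con}(T)
    % adding that statement as a new axiom yields a strictly stronger, still-sound theory,
    % which in turn cannot prove *its* consistency, and so on forever
    T' = T + \mathrm{Con}(T), \qquad T' \vdash \mathrm{Con}(T), \qquad T' \nvdash \mathrm{Con}(T')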
>> but any talk of fully automating human intuition (Turing’s Oracle!) in coming up with novel solutions to complex problems is fantasy and the computer science police department (CSPD) should issue a citation for any such talk.
You and I both intuit that there are conjectures that likely have no solution. The author is proposing (and maybe rightly) that an AI, as a Turing machine with this intuition, would inherently have to be a Turing oracle...
It's an interesting take that has some implications...
Err, only if humans have to be Turing oracles to have that intuition, right? Plus, the halting problem only says that you can't have a procedure which will be 100% right on 100% of programs about whether or not they halt. Relax either of those 100%s, and it becomes possible again, just like how human intuition can be right about things 90% of the time, even if those things can't possibly be predicted 100% of the time.
> Plus, the halting problem only says that you can't have a procedure which will be 100% right on 100% of programs about whether or not they halt.
You're missing it still...
"Given a computer program and an input, will the program terminate or will it run forever?"
We both know that a program searching for a Collatz counterexample likely won't ever halt. We understand that. WE are reasonable oracles. Now write me an oracle that can detect this. Don't feed it a list of problems to avoid; write a program that detects these types of problems and skips them... Hint: you likely can't. You could simulate one, but that would be cheating...
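For concreteness, here's a minimal sketch (my own illustration, not anything from the article) of the kind of program in question: it halts for a given n exactly when that n's Collatz orbit reaches 1, so knowing whether it halts for every input is precisely the open conjecture.

    def collatz_reaches_one(n: int) -> bool:
        # Iterate the Collatz map; return True once the orbit hits 1.
        # If some orbit never reaches 1, this loop simply never terminates,
        # and nobody currently knows how to decide, ahead of time, whether
        # that ever happens.
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
        return True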
A real AI needs this, or needs to be able to do it, or it risks working on the Collatz conjecture till the heat death of the universe, or till it consumes the universe in a paperclip-style problem...
The implication is that real AI might not come out of a Turing machine, ever...
The implication of the article isn't that you can't produce a Turing oracle, it's that it's mathematically impossible to ever replace programmers with computers at all.
The latter claim doesn't follow: programmers aren't Turing oracles either. A programming machine that can emulate whatever humans are doing when they program would be able to program too.
I got a flashback to Zed Shaw's infamous Turing completeness quip about Python 3, except that at least his was really just a stupid joke. Here, the author is using a motte-and-bailey fallacy in which he attacks a more complex and open-ended statement ("AI will Soon Replace Programmers") by instead retreating to a more straightforward and falsifiable statement (the ability to write any computer program).
Isn't the point closer to: humans simply go "hey, that seems to be taking a little long" when a program doesn't halt, so why couldn't a machine? Basically, a fairly obvious constraint on the solution space is "completes in less than N wall-clock time".
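A minimal sketch of that constraint, using the standard library's subprocess timeout (the "candidate.py" path is just a hypothetical placeholder):

    import subprocess

    def runs_within(cmd: list[str], seconds: float) -> bool:
        # Run a candidate program with a wall-clock budget and treat
        # "didn't finish in time" the way a human would: give up and move on.
        try:
            subprocess.run(cmd, timeout=seconds, check=True, capture_output=True)
            return True
        except subprocess.TimeoutExpired:
            return False   # looked like it wasn't going to halt
        except subprocess.CalledProcessError:
            return False   # halted, but with an error

    # e.g. runs_within(["python", "candidate.py"], seconds=5.0)

This only ever gives false negatives on slow-but-halting programs, which is exactly the trade-off being described.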
You can definitely detect a portion of halting machines this way, but it's probably a relatively small portion because the Busy Beaver numbers grow inconceivably quickly: the longest-running machines that halt can go practically forever, you'd need more time than the universe has negentropy left to detect them.
Came here to write this, was glad to see it's already here.
All the limits that exist for computing machines almost certainly exist for us too; we don't need to solve the halting problem; we don't solve large NP-hard problems; etc.
Of course, if one believes there's something mystical about brains -- whether a soul or access to quantum computing -- perhaps one would disagree.
This is a particular philosophical conjecture, not a proven scientific fact. We don't understand enough about the human brain to prove whether it is fundamentally different from a very complex computer.
I think this is quite possible, but it doesn't imply that the strong Church-Turing thesis isn't true: brains could still be Turing-equivalent to computers, in that neither would be able to solve the halting problem for the other (at least insofar as humans have a "halting problem", which maybe we don't, but you could formulate analogous questions, like predicting behavior, which for ordinary computation is just as undecidable as halting).
Since humans don't seem to be particularly good at solving the halting problem even for very small TMs, it doesn't seem that likely to me that we are categorically stronger at it than computers, whether our cognition is truly "computation" or not.
The section "Forget the Theory of Computation, AI will Soon Replace Programmers" feels as if it's missing a practical slant that explains this trend. I think approaching it from a purely theoretical perspective misses the relationship between real software and real problems.
First: most practical problems are computable, and most practical programs do not run forever. Even if an agent had to run a program to termination, it could still use the feedback gathered from running it to make modifications.
Second: an agent with a sufficiently good intermediate representation _which is computable_ doesn't need to actually execute a program to model its behavior. Humans do this all the time--we can read code and understand its approximate function as a means to edit it. I don't want to claim that LLMs have a concept of "understanding," but they definitely build an intermediate representation which, when combined with external input (e.g. having to kill a program, because it exceeded a timeout), can be used to modify the program.
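To sketch that as a loop (propose_patch is a hypothetical stand-in for whatever the model does, not a real API; this is an illustration, not a claim about how any particular agent works):

    import subprocess

    def propose_patch(source: str, feedback: str) -> str:
        # Hypothetical: ask a model for a revised program given the last feedback.
        raise NotImplementedError

    def refine(source: str, test_cmd: list[str], budget: float, rounds: int) -> str:
        # Run the candidate under a timeout; passing, failing, or being killed
        # all become feedback for the next revision; no halting oracle needed.
        feedback = ""
        for _ in range(rounds):
            try:
                result = subprocess.run(test_cmd, timeout=budget,
                                        capture_output=True, text=True)
                if result.returncode == 0:
                    return source
                feedback = result.stderr
            except subprocess.TimeoutExpired:
                feedback = f"killed after {budget}s; it may not terminate"
            source = propose_patch(source, feedback)
        return source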
Now with all of that said: I don't feel confident about whether or not AI is actually a serious risk to programmers, I just don't feel as if this argument is sufficiently compelling.
People in the comments have started to attack the author's argument about the halting problem, which I agree isn't the best one, but what about the other arguments? They are no less damning for the current technology.
The claim that "there can be no explainability of such models" is also completely unsupported. We know that some simple networks can be explained, and explanation is a human notion, not a formal one, so we can't know how much explainability is possible. There's active research in explainability of neural networks and progress is being made. I don't know how far it'll go, but it hasn't hit any sort of theoretical boundary yet. I think the author is making a similar sort of mistake to the one they make in invoking the halting problem, confusing a "not forall" for a "forall not". There are certainly neural network models that can't be explained ("not every model can be explained" is true), but that doesn't mean that there aren't ones that can ("every model can not be explained" is not true).
As part of the argument on explainability, the author says "we have known for some time these models cannot represent or model symbolic structures". There might be some particular weird way of defining things where this is true, but it's certainly not true in the most basic reading. In fact, we have very strong theoretical results that a sufficiently wide neural network can compute any function, including those that are "symbolic". Whether they can be trained using gradient optimization to compute every function is an open problem, but the idea that there are functions they can't represent is provably false.
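As a toy illustration of the representability point (my own sketch, not from the article): a tiny ReLU network with hand-picked weights computes XOR, a discrete, "symbolic" Boolean function, exactly.

    import numpy as np

    # A 2-2-1 ReLU network whose weights are chosen by hand so that it
    # computes XOR exactly on {0,1}^2.
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([0.0, -1.0])
    W2 = np.array([1.0, -2.0])

    def xor_net(x):
        h = np.maximum(0.0, W1 @ x + b1)  # hidden ReLU layer
        return W2 @ h                     # linear output

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, xor_net(np.array(x, dtype=float)))  # prints 0.0, 1.0, 1.0, 0.0

Whether gradient descent finds such weights is a separate question, but representability itself isn't the obstacle.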
The third point is that LLMs aren't a step towards AGI. They again make the claim that there are functions a DNN can't compute (or approximate arbitrarily well); see https://en.wikipedia.org/wiki/Universal_approximation_theore... for the theorem showing this isn't true.
The rest of this point, and the fourth point, are basically just about how current LLMs are actually pretty dumb in a lot of situations. This is the only argument that's actually compelling to me; there's a lot of hype around LLMs and their abilities, but a lot of that might have to do with our brains being happy to paper over flaws in order to anthropomorphize things, and with the fact that we haven't yet learned what to look for in artificial output, like we spent decades doing for other machine outputs before LLMs. Recall that when dumb pattern-matching conversation bots were invented in the 60s (https://en.wikipedia.org/wiki/ELIZA), people thought they couldn't possibly be artificial and must really be human, even though they are obviously artificial by today's standards.
So, I agree that we don't know if LLMs are the first step towards AGI, and they probably aren't, in any sense beyond the fact that inventions tend to build on each other. But we don't have enough information to say definitively that they aren't that first step.
How does AI programming relate to computability and halting? I can't determine whether a program will halt either. But I can kill it if it doesn't halt and just try another program, because I can operate a stopwatch. Where does this recursive argument form? We can already see LLMs solving simple programmer tasks: creating software from an informal description.
> Automatically discovering formal (i.e., unambiguous) solutions to novel problems can never be automated.
Things like AlphaFold would suggest to me that this doesn’t always hold true.
Perhaps a matter of definition, but to me there does seem to be “some” problem-solving ability. And once you’ve got even a little of that, then it’s a scaling question and not really compatible with “can never be automated.”
I’m not sure the Halting Problem is a compelling theoretical reason why AI can’t program.
I do see a compelling argument on the limits of LLMs being probabilistic (rather than logic/symbolic), as I’ve seen GPT/Pilot come up with code that looks syntactically valid but calls completely made-up APIs.
> Let us suppose some AI came up with an algorithm A to solve a specific problem P. To ensure that A solves P, the AI has to execute A to test its functionality.
Not necessarily. It could (in theory) prove through formal methods that A solves P.
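A toy sketch of what that could look like (my own example, in Lean 4, assuming a toolchain where the omega tactic is available; the "problem" is deliberately trivial): rather than executing the algorithm on inputs, you prove once that it meets its spec for all inputs.

    -- Hypothetical problem P: return the larger of two natural numbers.
    def myMax (a b : Nat) : Nat :=
      if a ≤ b then b else a

    -- Spec: the result is an upper bound of both inputs, proved for every
    -- input without ever running myMax.
    theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
      unfold myMax; split <;> omega

    theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
      unfold myMax; split <;> omega

Of course, for most real problems, writing the spec and the proof is the hard part, which is rather the point of the surrounding discussion.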
I don’t see how the author’s arguments about impossibility results pertaining to “distributed sub-symbolic architectures” apply any more strongly to LLMs or DNNs than they do to human brains. Human programmers aren’t magically capable of solving the halting problem either, but we muddle through somehow.
Yea. Most of what we do isn't that rigorous and it's fine. When we do need rigor, we use external tools, like classical computer programs or writing math down on paper. LLMs can use external tools too. We're also hopeless at explainability - people usually have no idea why they make most of the decisions they do. If we have to, we try to rationalize but it's not really correct because it doesn't reproduce all the intuition that went into it. Yet somehow we can still write software!