That's true, but we also don't know that the multiple levels of structural specialization are necessary to produce "approximately human" intelligence.

Let's say two alien beings land on Earth today and want you to settle a bet. They both look weird in different ways, but they seem to talk alike. One of them says "I'm intelligent, that other frood is fake. His brain doesn't have hypersynaptic gibblators!" The other says "No, I'm the intelligent one, the other frood's brain doesn't have floozium subnarblots!"

Who cares? Intelligence is that which acts intelligently. That's the point of the Turing test, and why I think it's still relevant.




I think we are arguing on different tracks, probably due to a difference in understanding of ‘model’.

There are arguments to be made, the Turing test among them, that LLMs have some sort of intelligence and are potentially equivalent to humans in that respect. I am probably more skeptical than most here that current technology is approaching human intelligence, and I believe the Turing test is in many ways a weak test. But for me that is a different, more complex discussion, one I would not be so dismissive of.

I was originally responding to the claim "isn't a neural network a simplified model of the working of the human brain", a claim I interpreted to mean that NNs are system models of the brain. The emphasis is on "model of the working of", as opposed to "model of the output of".



