In fact there's ample reason to think that humans are at the lowest threshold of intelligence that makes a technological civilization possible, because if we'd reached that threshold earlier in our evolution, we'd have created civilization then instead of now. There's probably plenty of room above us.

While I don't disagree that humans are essentially at the lowest level of intelligence that makes civilization possible (else why would it have taken hundreds of thousands of years to get started?), that observation has no bearing on the claim that the upper limit of intelligence is immediately above human genius level. You seem to be assuming that there is necessarily a wide gap between the lowest civilization-producing level and the highest practical level, and that's not at all clear. Some (weak) evidence that we're already near the top can be found in the higher incidence of mental health issues among very intelligent humans: perhaps this is a result of a limit on complexity rather than merely a feature of human brains.




While I don't disagree that humans are essentially at the lowest level of intelligence that makes civilization possible (else why would it have taken hundreds of thousands of years to get started?), that observation has no bearing on the claim that the upper limit of intelligence is immediately above human genius level.

But there's no particular reason to believe that the upper limit of intelligence is pretty much right where we are, either.

In fact, I'd argue that based on what we know about other evolved systems, our priors for "the biologically evolved system that does X has plenty of room for improvement" should be much, much, much higher than "the biologically evolved system for X is almost optimal for achieving X".

Like, a thousand to one or more: there are very few tasks for which evolution has found optimal solutions (which is not surprising; its "goal" is gene survival, and everything else is an accident), and most of those are extremely low-level physical things, chemical processes and the like. I'd need to hear some really strong argument that there's something special about human intelligence that should make us think it's near the limit; otherwise the odds ratio is just too hard to overcome...


Imagine a human genius. Now imagine that same genius with a brain that thinks the same thoughts, but a hundred times faster. Now imagine a million instances of that person.

At no point in this thought experiment have we changed the thoughts this person can think. If any of these sped-up million geniuses took an IQ test or something, the score would be the same. And yet, they would seem superhumanly intelligent by any reasonable definition of the phrase.
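
To put rough numbers on the thought experiment: raw speed and copy count multiply differently. Here's a quick Amdahl-style sketch (the 100x and the million are the numbers from above; the parallelizable fraction of "genius work" is entirely a made-up parameter):

    def effective_speedup(speed, copies, parallel_fraction):
        # Serial thinking still benefits from the 100x speed; only the
        # parallelizable fraction can be split among the million copies.
        return speed / ((1 - parallel_fraction) + parallel_fraction / copies)

    print(effective_speedup(100, 1_000_000, 0.0))   # 100.0 -- speed alone
    print(effective_speedup(100, 1_000_000, 0.99))  # ~9,999x -- copies pay off

Either way, the gain is pure throughput, not new kinds of thoughts -- which is the point.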

I doubt that evolution -- with a crazy biological substrate, no less -- somehow managed to find an upper limit on intelligence.


Everything we know about genetic algorithms would lead us to believe that evolution has found an upper limit -- but only in a very specific context with respect to the environment, available resources, and evolutionary baggage.
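
A toy illustration of that point (all assumptions mine: a made-up 1-D fitness landscape and a bare-bones mutate-and-select loop, nothing that models real biology):

    import random

    def fitness(x):
        # Two peaks: a local one near x = -1 and a taller global one near x = 3.
        return 1.0 / (1 + (x + 1) ** 2) + 2.0 / (1 + (x - 3) ** 2)

    random.seed(0)
    pop = [random.gauss(-1, 0.1) for _ in range(50)]  # start near the local peak
    for _ in range(200):
        # Mutate everyone a little, then keep the fitter half (truncation selection).
        pop += [x + random.gauss(0, 0.05) for x in pop]
        pop = sorted(pop, key=fitness, reverse=True)[:50]

    best = max(pop, key=fitness)
    print(best, fitness(best))  # settles near x = -1; the global peak at x = 3 is never found

Selection punishes the fitness valley between the peaks, so the population converges on whatever hill it started on: an "upper limit" only in the local, context-bound sense.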

Everything I know about the development of technology leads me to believe that any given trans-human AI will also somehow be a local maximum.


We have not converged.


If you consider a million canine geniuses at a hundredfold speedup, I think the problem becomes clear: they wouldn't even match one human. What you've described is, effectively, a civilization, and while a civilization can accomplish much that a single member cannot, it's still not more intelligent than its members.


I agree with you, but only while limiting the scope to the best that can be done by a biological agent that must pass through a birth canal and evolved under strong resource limits.

Can intelligence be ascribed to more than one person? Can a group of 10 people be considered intelligent as a whole? What about 100? Can you distinguish between the intelligence of different equally sized groups? I say yes to all of these.

Let us say we put groups of 1, 10, 100, ..., 10^n people in a box and gave each box an intelligence test. Which would be most intelligent? Eventually communication costs will outweigh size advantages, but only at some n >> 0. So there is a trivial example of an entity whose intelligence surpasses a single human's. Now why would you expect that this level of computation and knowledge management is not replicable on a smaller, more efficient device that isn't distracted by the need to compute proprioception?
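
A toy model of the box experiment (the numbers are mine and arbitrary: each member adds a unit of capability, and coordination costs scale with the number of pairwise channels):

    def group_capability(n, per_member=1.0, channel_cost=0.0001):
        # n members contribute linearly; n*(n-1)/2 pairwise channels add overhead.
        return per_member * n - channel_cost * n * (n - 1) / 2

    for n in [1, 10, 100, 1_000, 10_000, 100_000]:
        print(n, group_capability(n))
    # Capability peaks around n ~ per_member / channel_cost (here ~10,000),
    # after which communication overhead dominates -- the n >> 0 crossover above.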

Next consider effective intelligence. Do computers, when used in the right way, amplify intelligence? Is a human/computer interface possible? Forget self-awareness: is it possible that a computer could do all of what we call intelligent work better than humans? The answers are more subjective, especially for the last, but I say yes to all three. The human brain is one architecture for intelligence. It is riddled with weaknesses that only make sense in light of the resource-limited nature of its development. It was created under a specific set of constraints: as a vehicle to manage the more effective transmission of genes of an energy-limited biological entity that must pass through a birth canal. I believe that a human mind set on designing an intelligence can do better.

Will a superhuman mind do better at designing a mind, in a proportionally similar way to how humans will surpass evolution? Or will it hit too many NP-complete problems? I cannot give an answer to this, but if forced to bet, I would say yes, yes.



