The trouble with this kind of artificial intelligence is that I don't think it's possible to think like a human without actually having the experience of being human.
Sure, I think we could aim to build something like a robot toddler with a sensory/nervous/endocrine system wired up analogously to ours. It would essentially be a baby, and would have to go through all the developmental stages that we go through.
But I suspect we'll have a hard time modeling that stuff well enough to create anything more than "adolescent with a severe learning disability". People underestimate just how carefully tuned we are after millions of years of evolution. The notion that we could replicate that without having another million years of in situ testing and iteration seems naive.
And even then, why would we expect the AI to be smarter than a human? There is already quite a lot of variation in humans. Many people at the ends of the bell curve have extraordinary processing power in ways typical humans don't. But it turns out that while those abilities are useful in some ways, they limit those people in other ways. So it's not that evolution hasn't tried out these designs; it's that, on balance, they don't seem to function fundamentally better.
One cool thing about the robot is that you could have many bodies having many experiences, all feeding into one brain. But I'm not convinced that would actually lead to "smarter." I mean, look at old people. Yes, we get smarter as we age. But age also calcifies the mind. All of that data slowly locks you into a prison of past expectations. In the end, it's the blend of naive and experienced people in a society that maximizes intelligence. And again, it's not like societies haven't been experimenting with that blend. Cultures have evolved to hit the sweet spot. It's not clear that adding 1000-year-old intelligences would help.
And anyway, we already have 1000-year-old intelligences: books!
You could say that there is benefit to having all of that in one "head," but then you have to organize it! Which experience drives the decisions: the one from 2014, or the one from 3014?
Again, culture evolved explicitly to solve this problem. People write books and the ones that work stick around.
I guess what I'm saying is the evolution of the human being is already here: it's the human race, fed history via culture, connected by the internet, in symbiosis with computers.
The idea that removing the humans from that system would make it smarter makes no sense to me. Nor does the idea of writing programs to do the jobs that humans do well. It's like building a basketball team of five Shaquille O'Neals. I don't think they'd actually beat a good, diverse team with one or two Shaqs.
Or think of it this way: if numerical/logical aptitude is such a huge advantage in advancing capital-U Understanding, why do smart people bother learning to paint? Why do we bother listening to children? Why do we bother having dogs?
I would argue it's because intelligence is as multi-media as the universe is. Sometimes a PhD has something to learn from a basset hound. And the human race has just as good a handle on it as any AI ever will. We just have a different view of the stage. We have the front row and they have the balcony.