In terms of DNA difference, humans are very close to mammals that don't have much language capability but demonstrate some degree of intelligence. This suggests that language is not as fundamental as some researchers claim.
I once told Rod Brooks, back when he was proposing "Cog" (look it up), that he'd done a really good insect robot, and the next step should be a good robot mouse. He said, "I don't want to go down in history as the guy who built the world's best robot mouse." "Cog" was a dud, and Brooks went back to insect-level AI in the form of robot vacuum cleaners.
We need more machines which successfully operate autonomously in the real world. Then they may need to talk to humans and each other. That might work.
The big problem in AI isn't language, anyway. It's consequences. We don't have common sense for robots. There's little or no understanding of the consequences of planned actions. We need to get this figured out before we can let robots do much.