
I think about the AI language problem a lot while raising my kids. The article notes the word "forever" and how an AI must distinguish the literal from the figurative meaning of the word in context. My five-year-old still doesn't grasp the literal meaning of this word as "never-ending." To him, "forever" is simply a very, very long time. He has the same problem with "infinity": to his mind the word means "the biggest number," yet it still has an upper, quantifiable bound. His young mind has not yet recognized the paradox: if "infinity" is the biggest number, what does it mean when I say "infinity plus one"?

Neural networks are going to make huge inroads into the AI language problem simply by exposing the AI to example after example of words in varying contexts. But I wonder if the real problem is getting those neural networks to let go of unnecessary data. Humans rely on excited neurons to recognize patterns, but our neurons let a lot of sensory input pass by to keep us from getting bogged down in the details. Are the image-recognition AIs described in the article capable of selective attention? Or will they get bogged down in a morass of information, trying to pattern-match every word to every image and context they learn?




>Are the image-recognition AIs described in the article capable of selective attention?

Yes, they are: https://indico.io/blog/sequence-modeling-neural-networks-par... https://github.com/harvardnlp/seq2seq-attn
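
For a rough sense of what those attention mechanisms do mechanically, here's a minimal sketch of soft (dot-product) attention in numpy. The names and shapes are my own toy choices, not code from the linked posts: a query scores every input, and a softmax turns the scores into weights, so the output is dominated by the few inputs most relevant to the query instead of weighting everything equally.

    import numpy as np

    def soft_attention(query, keys, values):
        # Score each key against the query, normalize the
        # scores with a softmax, and return the weighted sum
        # of the values. High-scoring inputs dominate; the
        # rest are effectively ignored.
        scores = keys @ query                    # (n,)
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        return weights @ values                  # weighted average of values

    # Toy usage: the query resembles key 2, so the output
    # lands close to values[2].
    keys = np.random.randn(5, 8)
    values = np.random.randn(5, 8)
    out = soft_attention(3 * keys[2], keys, values)

The "letting go of unnecessary data" behavior falls out of the softmax: weights on irrelevant inputs go to nearly zero.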


Kids are an awesome way to think about these problems. I did a teaching-abroad stint once, and it was incredible to watch kindergartners wrestle with the new language I was introducing, adopting it while still learning their native one. It got even cooler when there'd be a half-native kid who already spoke English as well as Chinese.

I wonder if there's AI-focused research that analyses how children learn language, especially kids learning a "new" language.


Maybe this is good. After all, mathematical abstractions may cause more philosophical problems than they solve. What if there is nothing infinite in this world? Having a firm grasp of reality before venturing into hypotheticals can be healthy.


But what is language if not a tool for manipulating hypotheticals? If your language can only describe what you know to be possible, it can only describe things you've already seen, and the dataset you'd need in memory to hold a conversation or provide useful information becomes far too large. Abstraction is the very problem of language that AIs are trying to solve.


> His young mind has not yet recognized the paradox: if "infinity" is the biggest number, what does it mean when I say "infinity plus one"?

Paradox? In the extended reals, where infinity is the biggest number, infinity plus one gets you infinity, just as you'd expect. In, say, the study of ordinal numbers, where there are many infinite quantities, it doesn't make any sense to talk about "the biggest number".
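
To make the contrast concrete (standard definitions, nothing specific to the article), in LaTeX notation:

    \infty + 1 = \infty \quad \text{(extended reals)}

    1 + \omega = \omega, \qquad \omega + 1 > \omega \quad \text{(ordinals)}

Ordinal addition isn't even commutative, so "plus one" genuinely depends on which side you add it on.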


A paradox is "a statement or proposition that seems self-contradictory or absurd but in reality expresses a possible truth". If you encounter a paradox, it tells you more about how your framework of thinking is flawed than it does about reality.

"Infinity plus one" is a paradox to a child who believes that infinity is a finite number. When they realize what infinity actually means, the paradox will be resolved.


That reminded me of this blog article by Scott Aaronson: http://www.scottaaronson.com/writings/bignumbers.html



