Computers are nothing more than persisted electrical 0/1 signals and a series of logic gates with intricate timing. There are no 'facts' in that world.
For example, ChatGPT does not understand questions; it does pattern matching to find responses that would probabilistically follow. Likewise, an image model does not understand that it is drawing "a hand"; it is just a probability algorithm indicating that these pixels are likely to be "good" following those other pixels.
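To make "probabilistically follow" concrete, here is a deliberately tiny toy sketch: a lookup table of which tokens tend to follow which. The table and probabilities are entirely made up; real models learn such statistics over enormous vocabularies, but the point stands that it is lookup-and-sample, not understanding.

```python
import random

# Made-up bigram table: for each token, the probability of the next token.
# The model "knows" only these co-occurrence statistics, not meanings.
bigram_probs = {
    "the": {"cat": 0.6, "hand": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def next_token(prev, probs=bigram_probs, rng=random.Random(0)):
    """Sample a likely follower of `prev` -- pure table lookup, no understanding."""
    candidates = probs.get(prev)
    if not candidates:
        return None
    tokens, weights = zip(*candidates.items())
    return rng.choices(tokens, weights=weights)[0]

print(next_token("the"))  # picks "cat" or "hand" by stored probability alone
```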
If someone can build a 'reasoning' step into AI, that would likely be a game changer: an AI that generates an answer, compares that answer against an actual fact database, and then modifies the answer in light of the known facts. Even better would be one that can also flag when a stored fact is likely to be wrong. To do all that, the AI would need to understand abstract concepts. As a baseline for where we are on that: my cat can do it, and AI currently has zero capability to.
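The generate-then-verify loop described above can be sketched in a few lines. Every name here is hypothetical (`FACT_DB`, `generate_answer`); this is a shape of the idea under the stated assumptions, not an implementation of any real system.

```python
# Hypothetical fact store; real systems would need something far richer.
FACT_DB = {"boiling_point_of_water_c": 100}

def generate_answer(question):
    # Stand-in for a probabilistic generator that may simply be wrong.
    return {"key": "boiling_point_of_water_c", "value": 90}

def verify_and_revise(question):
    """Generate an answer, check it against stored facts, revise on conflict."""
    answer = generate_answer(question)
    known = FACT_DB.get(answer["key"])
    if known is None:
        return answer, "no fact to check against"
    if answer["value"] != known:
        # Revise the generated answer in light of the stored fact.
        return {"key": answer["key"], "value": known}, "revised to match fact DB"
    return answer, "consistent with fact DB"

answer, note = verify_and_revise("At what temperature does water boil (deg C)?")
print(answer, note)
```

The hard part, of course, is the step this sketch dodges: knowing which fact in the database an answer should be checked against, which is exactly the abstract-concept understanding at issue.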
Though, is the AI baseline even worse than that?