Look, guys, sure, in some sense computing is part of
the best promise for AI. Fine. I'll even agree that
at least for now computing is necessary.
But, note, nearly everything we've done in computing,
especially in Silicon Valley for the past 15 years, has
been to apply routine software development to
work we already understood well how to do
manually. A small fraction of the effort has gone
into excursions beyond that, but those have been
relatively few and rarely with very impressive
results. Net, what Silicon Valley does know how to
do is build, say, Snapchat (right, it keeps the NSA
spooks busy looking at the Snapchat intercepts
from Sweden!).
But for anything that deserves to be called AI,
there is another challenge that is very much
necessary -- knowing how to do the work at all. Or,
if you will, how to write the software design
documents from the top down to the level of the
individual programming statements. Problem is,
very likely and apparently, no one knows how the
heck to do that.
Given a candidate design, people will want to
review it, and about the only way to convince
them, short of the running software passing
the Turing test or some such, is to write out the
design in terms of mathematics. Basically the only
solid approach is via mathematics; essentially
everything else is heuristics to be validated only
in practice, that is, an implementation and not
a design.
Thing is, I very much doubt that anyone knows how
to write a design with such mathematics. If someone
did, then long ago such a design should have
appeared in an AI journal or in DARPA-funded work.
Basically, bluntly, no one knows how to write
software for anything real in AI. Sorry 'bout that.
Why? We know hardly anything about
how the brain works. We don't know more about how
the human brain works than my kitty cat
knows about how my computer works. Sorry 'bout that. And
AI software will have a heck of a time catching up
with my kitty cat.
By analogy, we don't know more about how to
program AI than Leonardo da Vinci knew about
how to build a Boeing 777. Heck, the Russians
didn't even know how to build an SR-71. Da Vinci
could draw a picture of a flying machine, but
he had no clue about how to build one. Heck,
Langley fell into the Potomac River! Instead, the
Wright brothers built a useful wind tunnel (even
though they didn't understand the Reynolds number),
actually were able to calculate lift, drag, thrust,
and engine horsepower, and had found a solution
to three-axis control -- Langley failed at those
challenges, and da Vinci was lost much farther
back in the woods.
We now know how our daughters can avoid
cervical cancer. Before the solution, "we dance
'round and 'round and suppose, and the secret
sits in the middle, and knows," and we didn't
know. Well, the cause was HPV, and now there
is a vaccine. Progress. Possible? Yes. Easy?
No. AI? We're not close enough to be in the
same solar system. F'get about AI.
Well, we do actually have a purely mathematical approach to AI worked out. Granted, it requires an infinite computer, and personally I don't think it will lead to practical algorithms. But still, it exists. And from the practical side of things, machine learning is making progress in leaps and bounds. As is our understanding of the brain.
Remember that da Vinci didn't build airplanes because he didn't have engines to power them. It wasn't that long after engines were invented that we got airplanes. The equivalent for AI, computing power, is already here, or at least getting pretty close.
> Well we do actually have a purely mathematical approach to AI worked out.
Supposedly, with enough computer power and
enough data, a one-stroke solution to
everything is stochastic optimal control,
but that solution applies, say, just brute
force to planetary motion instead of
Newton's second law of motion and law of
gravity. Else we need to insert such laws
into the software, but then we would insert
only laws humans already knew, or have the
AI software discover such laws, which is not
so promising. This stochastic optimal control
approach is not practical or even very
insightful. But it is mathematical.
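For concreteness, the 'one stroke' recipe is
essentially dynamic programming over a stochastic
model. Roughly, in my own notation (not anything
from that reference), the value of a state s
satisfies

    V(s) = \max_a \, \mathbb{E}\big[\, r(s,a) + \gamma \, V(s') \mid s, a \,\big]

where s' is the next state under the (assumed
known) transition probabilities, r is the reward,
and gamma is a discount factor. Fine as math;
as a way to get planetary motion without Newton,
it is pure brute force.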
> machine learning is making progress in leaps and bounds.
I looked at Prof Ng's machine learning course, and
all I saw was some old intermediate statistics,
in particular, maximum likelihood estimation (MLE),
done badly. I doubt that we have any
solid foundation to build on for any significantly
new and powerful techniques for machine learning.
I see nothing in machine learning that promises
to be anything like human intelligence. Sure,
we can write a really good chess program, but
no way do we believe that its internals are
anything like human intelligence.
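For a sense of the level I mean by 'old
intermediate statistics', here is a minimal
sketch of MLE for a Gaussian -- my own toy
example with synthetic data, not anything
from the course:

    import numpy as np

    # synthetic "observed" data, just for illustration
    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=1000)

    # for a Gaussian, the maximum likelihood estimates have closed forms
    mu_hat = data.mean()                          # MLE of the mean
    sigma2_hat = ((data - mu_hat) ** 2).mean()    # MLE of the variance (divides by n, so biased)

    print(mu_hat, sigma2_hat)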
> As is our understanding of the brain.
Right, there are lots of neurons. And if
someone gets a really big injury
just above their left ear, then we have a good
guess at what the more obvious results will be.
But that's not much understanding of
how the brain actually works.
It's a little like we have a car,
have no idea what's under the
hood, and are asked to build a car. Maybe
we are good with metal working, but
we don't even know what a connecting rod is.
> It wasn't that long after engines were invented that we got airplanes.
The rest needed was relatively simple, the wind tunnel,
some spruce wood, glue, linen, paint, wire, and
good carpentry. For the equivalent parts of AI,
I doubt that we have even a weak little hollow
hint of a tiny clue.
In some of the old work in AI, it was said that
a core challenge was the 'representation problem'.
If all that was meant was just what programming
language data structures to use, then that was
not significant progress.
Or, sure, we have a shot at understanding the
'sensors' and 'transducers' that are connected
to the brain: Sensors: Pain, sound, sight,
taste, etc. Transducers: Muscles, speech,
eye focus, etc. We know something about how the
middle and inner ear handle sound and about
the gross parts of the eye. And if we show
a guy a picture of a pretty girl, then we can
see what parts of his brain become more
active. And we know that there are neurons
firing. But so far it seems that that's about
it. So, that's like my computer: For sensors
and transducers it has a keyboard, mouse, speakers,
printer, Ethernet connection, etc. And if we
look deep inside then we see a lot of circuits and transistors.
But my kitty cat has no idea at all about the
internals of the software that runs in my computer,
and by analogy I see no understanding of the
analogous details inside a human brain.
Or, we have computers, and we can write software for
them using If-Then-Else, Do-While, Call-Return, etc.,
but for writing software comparable with a human
brain we don't know the first character to type
into an empty file for the software. In simple
terms, we don't have a software design. Or,
it's like we are still in the sixth grade,
have learned, say, Python, and are asked to
write software to solve the ordinary differential
equations of space flight to the outer planets --
we don't know where to start. Or, closer in,
we're asked to write software to solve the
Navier-Stokes equations -- once we get much
past toy problems, our grid software goes
unstable and gives wacko results.
Net, we just don't yet know how to program
anything like real, human intelligence.
I was referring to AIXI as the perfect mathematical AI.
The main recent advancement in machine learning is deep learning. It's advanced the state of the art in machine vision and speech recognition quite a bit. Machine learning is on a spectrum from "statistics" with simple models and low dimensional data, to "AI" with complicated models and high dimensional data.
>if someone gets a really big injury just above their left ear, then we have a good guess at what the more obvious results will be. But that's not much understanding of how the brain actually works.
Neuroscience is a bit beyond that. I believe there are also some large projects like Blue Brain working on the problem.
I swear I saw a video somewhere of a simulation of a neocortex that could do IQ test type questions and respond just like a human. But the point is we do have more than nothing.
I looked it up: his 'decision theory' is essentially
just stochastic optimal control. I've seen
claims elsewhere that stochastic optimal control
is a universal solution to the best possible AI.
Of course, one needs some probability distributions;
in some cases in practice, we have those.
That reference also has
> Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution.
Hmm? Then the text says that this solution is
not computable -- sounds bad!
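For what it's worth, that claim rests on
Solomonoff's universal prior which, in the
standard notation (my transcription, so check
it), is roughly

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

that is, a mixture over all programs p for a
universal prefix machine U whose output starts
with the string x, each weighted by two to the
minus its length \ell(p); prediction is then
M(x_{t+1} \mid x_{1:t}) = M(x_{1:t} x_{t+1}) / M(x_{1:t}).
The sum over all programs is exactly why the
thing is not computable.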
Such grand, maybe impossible, things are not
nearly the only way to exploit mathematics
to know more about what the heck we are doing
in AI, etc.
Approximations to AIXI are possible and have actually played Pac-Man pretty well. However, I still think Solomonoff induction is too inefficient in the real world. But AIXI does bring up a lot of real problems with building any AI, like preference solipsism, the anvil problem, and designing utility functions for it.
> I was referring to AIXI as the perfect mathematical AI.
I will have to Google AIXI. A big point about
being mathematical is that that is about the only
solid way we can evaluate candidate work before running
software and, say, something like a Turing test.
Some math is most of why we know, well before any
software is written, that (1) heap sort will run
in time proportional to n ln(n), (2) AVL trees
find leaves in time proportional to ln(n), and
(3) our calculations for navigating a spacecraft
to the outer planets will work. More
generally, the math is 'deductive' in a
severe and powerful sense and, thus, about the
only tool we have for knowing things well in
advance of, say, writing a lot of software.
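To be explicit, these are the kinds of 'known
in advance' statements I mean (standard results,
quoted from memory):

    T_{\text{heapsort}}(n) = \Theta(n \log n) \quad \text{(worst case)},
    \qquad h_{\text{AVL}}(n) < 1.4405 \, \log_2(n + 2)

and both hold before the first line of code is
typed.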
But math does not provide 'truth' on its own;
it needs hypotheses. So, for some design for
some AI software, we need hypotheses, and
enough of them are going to be a bit tough
to find. And math gives only some mathematical
conclusions, and we will need to know that these
are sufficient for AI; for that we will want,
likely need, a sufficiently clear definition
of AI, that is, something better than just an
empirical test such as a Turing test or
doing well on an IQ test. Tough challenge.
Instead of such usage of math, about all we have
in AI for a 'methodology' is, (1) here I have
some intuitive ideas I like, (2) with a lot
of money I can write the code and, maybe,
get it to run, and (3) trust me, that program
can read a plane geometry book with all the
proofs cut out and, then, fill in all the
proofs or some such. So, steps (1) and (2)
are, in the opinion of anyone else, say, DARPA,
'long shots', and (3)
will be heavily in the eye of the beholder.
The challenges of (1), (2), and (3) already
make AI an unpromising direction.
> The main recent advancement in machine learning is deep learning. It's advanced the state of the art in machine vision and speech recognition quite a bit.
AI has been talking about 'deep knowledge' for a
long time. That was, say, in a program that
could diagnose car problems, 'knowledge' that
the engine connected to the transmission
connected to the drive shaft connected to the
differential connected to the rear wheels or
some such and, then, be able to use this
'knowledge' in 'reasoning' to diagnose
problems. E.g., a vibration could be
caused by worn U-joints.
When I worked in the AI field,
there were plenty of people who
saw the importance of such 'deep knowledge'
but had next to nothing on how to actually
make it real.
For 'deep learning', the last I heard, that was
tweaking the parameters 'deep' in some big
'neural network', basically a case of nonlinear
curve fitting. Somehow I just don't accept
that such a 'neural network' is nearly all that
makes a human brain work; that is, I'd expect
to see some promising 'organization'
at a higher level than just the little
elements for the nonlinear curve fitting.
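To make the 'nonlinear curve fitting' point
concrete, here is a minimal sketch of what such
a network does -- adjust parameters to fit a
curve. Pure numpy, my own toy example; the
sizes and learning rate are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)   # inputs
    y = np.sin(x)                                    # the curve to fit

    H = 20                                           # hidden units
    W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)
    lr = 0.01

    for step in range(5000):
        h = np.tanh(x @ W1 + b1)       # nonlinear features
        pred = h @ W2 + b2             # linear readout
        err = pred - y                 # residual of the fit
        # gradients of the squared error (constant factors folded into lr)
        gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    print("final mean squared error:", float((err ** 2).mean()))

'Deep learning' stacks more layers of the same
kind of thing; whether that is nearly all the
brain is doing is exactly my question.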
E.g., for speech recognition, I believe an important
part of how humans do it is to take what they
heard, which is often quite noisy and by itself
just not nearly enough, and
compare it with what they know about the
subject under discussion and, then, based
on that 'background knowledge', correct the
noisy parts of what they heard. E.g., if
the subject is a cake recipe for a party
for six people, then it's not "a cup of salt"
but maybe a cup or two or three of flour.
If the subject is the history of US
presidents and war, then "I'll be j..."
may be LBJ and "...sson" may be "Nixon". Here
the speech recognition draws heavily
on a base of 'subject understanding'.
An issue will be, how the heck does the
human brain sometimes make such
'corrections' so darned fast.
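A minimal sketch of that 'correct what you
heard with what you know' idea -- all the
numbers here are made up, just to show the
combination:

    def best_word(acoustic, context_prior):
        # pick the word maximizing P(sound | word) * P(word | subject),
        # i.e., a crude noisy-channel guess
        return max(acoustic, key=lambda w: acoustic[w] * context_prior.get(w, 1e-6))

    # "a cup of s..." heard during a cake-recipe conversation
    acoustic = {"salt": 0.6, "flour": 0.3, "sugar": 0.1}           # what it sounded like
    recipe_prior = {"salt": 0.02, "flour": 0.55, "sugar": 0.43}    # what the subject makes likely

    print(best_word(acoustic, recipe_prior))   # -> flour, though "salt" sounded closer

How the brain does the analogous lookup, over a
lifetime of 'background knowledge' and in real
time, is the hard part.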
For image recognition, the situation has
to be in part similar but more so: I doubt that
we have even a shot at image recognition
without a 'prior library' of 'object
possibilities': That is, if we are
looking at an image, say, from a
satellite, of some woods and looking
for a Russian tank hidden there,
then we need to know what a Russian
tank looks like so that we can guess
what a hidden Russian tank would look
like on the image so that we can, then,
look for that on the image. Here we
have to understand lighting, shadows,
what a Russian tank looks like from
various directions, etc. So, we are
using some real 'human knowledge'
of the real thing, the tank, we
are looking for.
E.g., my kitty cat has a food
tray. He knows well the difference
between that tray and everything
else that might be around it --
jug of detergent, toaster, bottle
of soda pop, decorative vase,
a kitchen timer. Then I can move
his food tray, and he doesn't get
confused at all. Net, what he is
doing with image recognition is
not just simplistic and, instead,
has within it a 'concept' of his
food tray, a concept that he
created. He's not stupid, you know!
So, I begin to conclude that for
speech and image recognition, e.g.,
handwriting recognition, we need
a large 'base' of 'prior human
knowledge' about the 'subject area',
e.g., with 'concepts', etc.,
before we start. That is, we need
close to 'full, real AI' just to,
say, do well reading any handwriting.
From such considerations, I believe
we have a very long way to go.
Broadly one of my first cut guesses about
how to proceed would be to roll back
to something simpler in two respects.
First, start with brains smaller, hopefully
simpler, than those of humans.
Maybe start with a worm and work up to
a frog, bird, ..., in a few centuries, a
kitty cat! Second, start with the baby
animal and see how it learns: what it
starts to learn as an egg, once it's born,
what it gets from its mother, etc.
So, eventually work up to software that
could start learning with just "Ma ma"
and proceed from there. But we can't just
start with humans and "Ma ma" because
a human just born likely already has
somehow built in a lot that is crucial
that we just don't have a clue about.
So, start with worms, frogs, birds,
etc.
Another idea for how to proceed is to
try for just simple 'cognition'
with just text and image input
and just text output. E.g., start
with something that can diagram
English sentences and move from there
to some 'understanding', e.g., enough
progress with 'meaning' to know
when two sentences with quite different
words and grammar really mean essentially
the same thing and, when they don't,
to report why not and be
able to revise one of the sentences so that
the two do mean the same thing.
So, here we are essentially assuming
that AI has to stand on some capabilities
with language -- do kitty cats have
an 'internal language'? Hmm ...! If
kitty cats don't have such an 'internal
language', then I am totally stuck!
Then with some text and
image input, the thing should be able
to cook up a good proof of the
Pythagorean theorem.
I can believe that
some software can diagram
English sentences or come close
to it, but that is just a tiny
start on what I am suggesting.
The real challenge, as I am guessing,
is to have the software keep track of
and manipulate 'meaning', whatever
the heck that is.
And I would anticipate
a 'bootstrap' approach: Postulate and
program something for doing such things
with meaning, 'teach' it, and then
look at the 'connections'
it has built internally, say, between
words and meaning, and also observe
that the thing appears to work well.
So, it's a 'bootstrap' because it
works without our having any
very good prior idea just why;
that is, we could not prove in
advance that it could work.
So, for kitty cat knowledge,
have it understand its environment
in terms of 'concepts' (part of 'meaning')
such as hard, soft, strong, weak,
hot, and cold and, then, know when
it can use its claws to hold on to
a soft, strong, not too hot or too
cold surface, push a hard, weak
obstacle out of the way, etc.
Maybe some such research direction
could be made to work.
But I'm not holding my breath waiting.
Keep in mind that evolution managed to make strong AI, us, through pretty much blind, random mutation and inefficient selection.
The thing about deep learning is that it's not just nonlinear curve fitting. It learns increasingly high-level features and representations of the input. Recurrent neural networks have the power of a Turing machine. And techniques like dropout are really effective at generalization. My favorite example is word2vec, which creates a vector representation for every English word. Subtracting "man" from "king" and adding "woman" gives approximately the representation for "queen".
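As a concrete sketch with gensim (the file name is a placeholder for whatever pretrained word2vec-format vectors are on hand):

    from gensim.models import KeyedVectors

    # load pretrained word2vec-format vectors (placeholder path)
    kv = KeyedVectors.load_word2vec_format("pretrained-word2vec.bin", binary=True)

    # king - man + woman ~= queen
    print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))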
Speech recognition is moving that way. It outputs a probability distribution of possible words, and a good language model can use that to figure out what is most likely. But even a raw deep learning net should eventually learn those relationships. Same with image recognition. I think you'd be surprised at what is currently possible.
> It learns increasingly high level features and representations of the input.
In the words of Darth Vader, impressive. In
my words, astounding. Perhaps beyond belief.
I'm thrilled if what you say is true, but I'm
tempted to offer you a great, once-in-a-lifetime
deal on a bridge over the East River.
> The future looks bright.
From 'The Music Man', "I'm reticent. Yes, I'm
reticent." Might want to make sure
no one added some funny stuff to the Kool Aid!
On AI, my 'deep learning' had a good 'training
set', the world of 'expert systems'. My first
cut view was that it was 99 44/100% hype and
half of the rest polluted water. What was left
was some somewhat clever software, say, the Forgy
RETE algorithm. My view
after that first cut was that the first
cut was quite generous, that expert systems
filled a much-needed gap in the literature and would
be illuminating if ignited.
So, from
my 'training set' my Bayesian 'prior probability'
is that nearly anything about AI is at least
99 44/100% hype.
That a bunch of neural network nodes can somehow,
in effect, develop internally, just via adjustments
in the 'weights' or whatever 'parameters' it has,
just from analysis of a 'training set' of images of
a Russian tank (no doubt complete with
skill at 'spatial relations', where it is claimed
that boys are better than girls), instead of somehow
just 'storing' the data on the tank separately,
looks like rewiring the Intel processor
when downloading a new PDF file instead of just
putting the PDF file in storage. But, maybe
putting the 'recognition means' somehow
'with' the 'storage means' is how it is actually done.
The old Darwinian guess I made was that
early on it was darned important to understand
three dimensions and paths through three dimensions.
So, suppose a predator is going after a little
animal, and it goes behind a rock. There's a lot
of advantage in understanding the rock as a
concept and in knowing it can go the other way
around the rock and get the animal. But it seems
that the concept of a rock stays even outside
the context of chasing prey. So, somehow
intelligence works with concepts such as rocks
and also uses that concept for chasing prey,
turning the rock over and looking under it,
knowing that a rock is hard and dense,
etc.
Net, my view is that AI is darned hard,
so hard that MLE, neural nets, decision
theory, etc. are hardly up to the level
of even baby talk. Just my not very well
informed, intuitive, largely out of date
opinion. But, I have a good track record:
I was correct early on that expert systems are
a junk approach to AI.
There is a degree of hype. They are really good at pattern recognition, maybe even superhuman on some problems given enough training and data. But certainly they can't "think" in a normal sense, nor are they a magical solution to the AI problem. And, like everything in AI, once you understand how it actually works, it may not seem as impressive as it did at first.
> instead of somehow just 'storing' the data on the tank separately looks like rewiring the Intel processor when downloading a new PDF file instead of just putting the PDF file in storage.
Good analogy, but how would you even do that? One picture of a tank isn't enough to generalize. Is a tank any image colored green? Is it any object painted camouflage? Is it any vehicle that has a tube protruding from it?
In order to learn, you need a lot of examples, and you need to test a lot of different hypotheses about what a tank is. That's a really difficult problem.