Zuckerberg, Musk Invest in Artificial-Intelligence Company Vicarious (wsj.com)
175 points by pmcpinto on March 21, 2014 | 151 comments



The AI winter is very much over, and we're back to the good old days of selling the future. I bet this team is very sharp, but there's still merit to "over-promise, under-deliver."

"Phoenix, the co-founder, says Vicarious aims beyond image recognition. He said the next milestone will be creating a computer that can understand not just shapes and objects, but the textures associated with them. For example, a computer might understand “chair.” It might also comprehend “ice.” Vicarious wants to create a computer that will understand a request like “show me a chair made of ice.”

Phoenix hopes that, eventually, Vicarious’s computers will learn how to cure diseases, create cheap, renewable energy, and perform the jobs that employ most human beings. “We tell investors that right now, human beings are doing a lot of things that computers should be able to do,” he says."


Funny to think that instead of curing diseases or making cheap renewable energy, we'd spend resources inventing a computer to do that for us...


Higher order strategy.

We do that for ourselves when we learn first about something, plan ahead, and then do it. Thing is, we don't know how to cure diseases, we don't know how to make cheap renewable energy, and we certainly don't know how to turn Earth into Heaven. The direct approach might very well be harder than the AI one.


Why do you think our creators made the simulated universe we live in? There's an infinite-loop bug, however: each simulation tries to solve the real problem by creating a sub-simulation.


Now you have infinite problems.


That's not an infinite-loop bug, that's recursion. Usually an exit condition breaks you out of the loop. In this case, it sounds like the condition is met once a solution for curing disease and cheap energy is found. Yes?
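A toy sketch in Python, purely to illustrate the exit condition (the function and the "solved at depth 3" rule are made up):

    def simulate_universe(depth, solved=False):
        if solved:                # exit condition: cure + cheap energy found
            return depth          # recursion bottoms out, no infinite loop
        # otherwise each simulation spawns a sub-simulation to solve the real problem
        return simulate_universe(depth + 1, solved=(depth + 1 >= 3))

    print(simulate_universe(0))   # -> 3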


So the question you should be asking is: if everyone agrees that cheap renewable energy and curing diseases are good things, why haven't we done them yet? I guess if you're cynical, you'd argue it's because no one can make a profit from them (uh...right).

However, if you see it as a pragmatic problem, then maybe the answer to "why not?" is that we need better ways to process information -- and this kind of unsupervised machine learning is critical to doing that.


Yes, I know, I just found the phrase funny.


Why is curing all diseases a good thing?


Because, unless you're using a specially crafted definition of the term, diseases are all bad.


Is being unhealthy good? Do you enjoy being ill?


I've given some thought to this, and the answer is: it might be. Consider a future where we've managed to eliminate all diseases. By then our natural resistance to disease might have atrophied to the point where we would be biologically unprepared to deal with new diseases before we've managed to develop a cure for them.


Well, it is incredibly dangerous, but if we did make a safe, super-intelligent AI, it would definitely be a much better investment than working on those problems directly.

To put it lightly, it would automate research and engineering, massively bringing down the cost and increasing the speed. Sure, enough money, spent on enough humans, given enough time, could eventually find a cure for any disease. But why do that when you can just ask the AI and have a cure overnight?

Of course it's not clear how safe such an AI would be (such a being could easily outsmart us and get what it wants, whatever that even is), nor how difficult it will be to create one.


Well, let's hope they go broke pouring money into the AI research black hole before they're able to actually build one :) Otherwise, you are correct. Successfully building a super-intelligent AI is probably an extinction event for humans.


After super intelligent AI, what's special about humans?


Define "special". I don't want to be killed to make way for more paperclips.


We still have pets, and they're rather well taken care of.


That's just another human weakness - we anthropomorphize lesser creatures. And even with that, we still don't look before stepping on ants.


Realistically though, a super-intelligent AI would require far more fundamental breakthroughs. People who say otherwise should take a look at the problems in control theory.

The effect of scaling things up should be interesting though. Perhaps there is a "phase transition"-like effect, where suddenly something awesome happens. This also means that, sadly, universities would no longer be able to provide adequate resources for research.


What makes you say that? It may seem that it would take a lot of work, but really, how can we know? Difficult problems often seem obvious in retrospect.


NASA's goal of going to the moon brought a lot of the innovations we enjoy in our day-to-day lives.


It makes sense to invest in a plow instead of continuing to hunt and gather.


Not to be cynical, but isn't this whole thing a glorified research project with founders who can sound visionary? Maybe that's a good formula.


I assume you meant "under-promise, over-deliver".


Probably, but it's easier to get funding or acquired if you over-promise (and inevitably under-deliver).


I wish that instead of making computers do things that humans are doing, people put more effort into making the humans' job of instructing these stupid beasts (I mean the computers) easier.


mutually exclusive endeavors etc etc


When Zuckerberg and Ashton Kutcher invest in things, it doesn't really grab my attention. But when Musk does, it really does sound promising.

The company's tech sounds really awesome; being able to perceive texture from photos and interpret objects from it would be so useful in so many real-world applications.


I think Elon Musk also invested in Stripe, which I found interesting.


In an interview he emphasized that he was a tiny initial investor and had no idea what they were doing (I'm guessing Peter Thiel convinced him to invest, since Stripe was trying to help make their initial vision for PayPal a reality).


If true, he's probably guiding them the way he was going to guide PayPal before they sold.


Agreed on watching what Musk does. He seems to have a knack for brazenly pushing the technological envelope.


I read and marked up "On Intelligence" on my train commute to work and scribbled a bunch of notes in the book. Pretty interesting, and I like the basic idea of the memory-prediction framework, invariant representations, "melodies of patterns", the focus on the neocortex, and the whole one-general-algorithm-for-all-senses idea.

I haven't had the time to research how far the general idea has gone or whether it is relevant at all, but the sketched examples were pretty interesting.

I also found the random remark of "consciousness = what it feels like to have a neocortex" interesting.

Glad to see that some smart money is being bet in this general direction.


>I also found the random remark of "consciousness = what it feels like to have a neocortex" interesting.

So there's a way it feels to not have a neocortex? Doesn't feeling anything imply you're conscious, which means you don't need a neocortex to be conscious?


An insect "feels" pain. It doesn't, however, feel retrospective pain, and it doesn't feel the past in the same way we do.

Having a neocortex is like having a 6th sense.

Our taste, smell, hearing, sight, and touch neurons all feed indirectly into our brain. With a neocortex, that input is also fed in. It's our "consciousness" feeling.

I hope that clears up the confusion.


Thanks, that makes sense. I was thinking of a different definition of "consciousness", the "hard problem" definition. [0] For an ant to feel pain, it would have to be conscious in the hard problem sense.

[0] http://en.wikipedia.org/wiki/Hard_problem_of_consciousness


How in the world would you know this?


Does anyone know why Dileep George left Numenta? Wasn't he a co-founder there with Jeff Hawkins?

Is there much difference between the goals of Vicarious and Numenta?


I can only speculate, but as you may recall, Numenta abandoned their original, belief-propagation-based design, replacing it with a new one based on sparse distributed memory. Dileep had done a lot of work on the original design, and I recall reading that's what Vicarious is using, having licensed it from Numenta. So I think you can put it down to a difference in technical direction between Dileep and Jeff. As far as I know, the split was amicable.


Their marketing and fund-raising approaches also seem to be completely different. I think this was good for both of them, especially if Dileep is licensing from Numenta.

Win-win scenario.


I don't think their goals are considerably different; I'm curious about their approaches however. How divergent are they?


There's some journalism-speak about it here.[1]

It seems like the two algorithms are very similar, but that RCNs represent information in a more continuous way. Maybe this allows for more flexibility? Do the sizes of the hierarchical/recursive chunks change over time now, or is it not a strict hierarchy anymore? Stuff is weird.

[1] http://www.kurzweilai.net/vicarious-announces-15-million-fun...


"Phoenix hopes that, eventually, Vicarious’s computers will learn to how to cure diseases, create cheap, renewable energy, and perform the jobs that employ most human beings."

This line made me laugh. Which of the three goals is the most likely and desired outcome? (I'll give you a hint: it isn't curing diseases or finding energy.)

That's like saying: 'my robots will cure cancer, bring world peace and replace most manual human jobs.'


Agricultural technology already replaced most manual human jobs (from the era when most human jobs were agricultural). Humans found other things to do, and we found ways to use the surplus.

If it gets to the point where there isn't any unskilled labor left to do, we can always choose as a society to vastly expand the welfare state and divvy up at least part of the accumulated surplus to everyone. We have already moved in this direction a bit, and I expect to see more things along the lines of guaranteed minimum income in the future.


"vastly expand the welfare state and divvy up at least part of the accumulated surplus to everyone"

That's a technology we haven't had much luck with so far. We've had economies where "distribution" was related to direct wealth creation (make a sandwich and eat it myself), property & labour (you make a sandwich with my bread, we eat half each), and thievery (gimme your sandwich). We've done sharing in small groups; that may have been the paleo-economy. We've done bits of charity, welfarism, redistribution and centralization, but never really succeeded at making those work well at a large scale, especially for those supposed to be protected by them.


We disagree about how well redistribution can work and has worked at scale. My ultra-compressed (lossy) take is "pretty inefficient, but somewhat effective at improving the lives of the less-wealthy".

One of the most dramatic and promising examples of redistribution, GiveDirectly, is actually doing some followup research on the effectiveness of their redistribution, and it looks pretty good so far: (pdf warning) http://web.mit.edu/joha/www/publications/haushofer_shapiro_u...

That's an extreme example - a relatively-small scale transfer from wealthy donors to a much poorer country - but it speaks well to the principle.


A - Most of those attempts have had mixed results to put it mildly.

B - Scale in economics is a big deal. Obviously you can transfer wealth from one person to another pretty effectively, but what happens to an economy, government, society, etc. when it's the main income source is a different kettle of fish.

I didn't say impossible. But it's a technology we need to make a big leap on. Money itself is a technology. Maybe we need money itself to be disrupted to overcome some apparent limitations.


I think there are pretty strong correlations between smaller social/political units and more effective, efficient, and non-corrupt welfare/redistribution systems.

There isn't a Nordic country with a population greater than that of the New York metro area (let alone New York state, let alone the United States as a whole).


Even in Nordic countries "normal" is working for a living. They have big social and governmental institutions that have a lot of money passing through them and they manage to do that relatively efficiently. But, they don't have a complete disconnect between wealth creation by normal means (owning productive property and/or working) and consuming that wealth. The government is just more involved in the process.

If most people work, pay taxes and use the "free" public transport you still have a situation where most people are both funding the transportation and using it. Consumers & producers of stuff.

These futuristic ideas about AI doing all the work while most people are unnecessary creates a completely different jar of pickled fish.


Neither is the third one. As usual, the first application will go to the military.


Curing (and treating) cancer currently employs quite a few people.


This fits Elon Musk's vision. He named 3 main and 2 smaller things that will most affect the future of humanity.

Main: the Internet, the transition to a sustainable energy economy, and space exploration, particularly extension of life to multiple planets

Smaller: artificial intelligence and biology


Yup, he put his $ where his mouth was and invested in all 5 (for biology he invested in Halcyon Molecular, which shut down a year back).


If AI succeeds, then none of those other things will matter at all.


> If AI succeeds, then none of those other things will matter at all.

Define success. Now define for whom - the investors? mankind? The AIs themselves?


As in an Intelligence Explosion (http://intelligence.org/ie-faq/). "Success" is really a bad way of wording it; I just mean that if it happens, none of those things will matter, regardless of whether it is friendly or not. Either we go extinct or the AI is so far beyond us that our present progress doesn't make much difference.


Space technology roughly covers all the other 4 aspects!


If they do pull that off, I hope they will be very, very careful. You know: intelligence explosion, Friendly AI, taking over the world, that sort of thing.

http://intelligenceexplosion.com/


We're in the middle of the process, not the beginning: http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...

"Information has been running on a primate platform, evolving according to its own agenda. In a sense, we have a symbiotic relationship to a non-material being which we call language. We think it's ours, and we think we control it. It's time-sharing a primate nervous system, and evolving towards its own conclusions."

- Terence McKenna


I'm not sure what your link has to do with your quote… Anyway, this blog post is not quite right.

While I agree capitalism is a more pressing problem than AI right now, it won't kill us all in 5 minutes. A self-improving AI… we won't even see it coming. There is also much more brainpower dedicated to "fixing" industrial capitalism than to addressing existential risks such as AI. And industrial capitalism doesn't need fixing, it needs to be abolished altogether.

Corporations are even less autonomous than the author thinks. Sure, kill a CEO, and some other shark will take its place. On the other hand, those sharks are all from the same families. Power is still hereditary.

If people were truly informed about how the current system works, it would collapse in minutes. To take only one example, Fractional Reserve Banking is such a fraud that if everyone suddenly knew about it, there would be some serious "civil unrest", to put it mildly.

The same does not apply to an AI. It's just too powerful. Picture how much smarter we are than chimps. Now take an army of chimps and a small tribe of cavemen (and women), which somehow want to exterminate each other. Well, the chimps don't stand a chance if the humans have any time to prepare. They have fire, sticks, lances… Their telepathy has unmatched accuracy (you know, speech). And they can predict the future far better than the chimps. Now picture how much more intelligent than us an AI would be.

It's way worse.

---

Now, this new-agey talk about information taking on a life of its own… It doesn't work that way. Sure, there's an optimization process at work, and it is not any particular human brain. But this optimization process is nowhere near as dangerous as a fully recursive one (that is, an optimization process that optimizes itself). And for that to happen, we need to crack a few mathematical hurdles first, like Löb's theorem.

But that's not the hard part. The hard part is to figure out what goals we should program into the AI. Not only do we need to pin them down with mathematical precision, we don't even know what humanity wants. We don't even know what "what humanity wants" even means. Hell, we don't even know if it has any meaning at all. Well, we're not completely blind, we have intuitions and a relatively common sense of morality. But there's still a long road ahead.


The connection between the Hostile AI link and the McKenna quote is this: the informational barrier between humans, institutions and technology is highly permeable, and creates a perfect petri dish for natural selection in informational life (you can model them as "memes", although the analogy to genes isn't a perfect one).

Yes, it breeds far less rapidly than a Kurzweilian AGI, and one day we will face that music for better or worse. But what I'm driving at is that it will not come as a singular moment when SkyNet gets the switch flipped; it will be a gradual evolution from the pre-existing emergent intelligence of the "human+institution+technology" informational network. (Even if you had a day where you flipped the switch on an infinitely accelerating AI, that life form would still inherit the legacy data of humans and their institutions, which would inevitably shape its consciousness, infecting it with any memes sticky enough to cross the barrier.)

See also: the coming wave of Distributed Autonomous Corporations. http://www.economist.com/blogs/babbage/2014/01/computer-corp...

> On the other hand, those sharks are all from the same families. Power is still hereditary.

Too true. Just because new evolutionary cycles are happening powerfully at higher layers of abstraction, it doesn't mean the old ones disappear.


Hopefully they'll be nice enough to relieve us of these meat costumes and allow us to ascend to their level where civilization and machine can merge into one conscious entity and we can float around the sun for a few million years charging our batteries while we calculate a path through space that leads to our longest survival. When you can simulate the multiverse is there really any reason to travel through space to look for other life? Perhaps it may be interesting to come into contact with another super-consciousness drifting through space, but even then, would they really have much to offer? We would have simulated every occurrence of that too. The only thing left to do would be to somehow transcend space and time which I think is probably impossible.


Have we cracked the brain's "programming language" yet? I'm afraid that until now research has focused on the biological side of it, and it makes more sense to me to replicate the logic instead of replicating the brain's biological structure.

I believe that dataflow/reactive programming is the answer and the direction to follow, as its principles are pretty close to how neurons work; plus it can be made to work on top of von Neumann architectures.
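For what it's worth, here is a toy sketch (plain Python, all names made up) of the kind of dataflow node I have in mind: values are pushed downstream only when a threshold is crossed, which is loosely neuron-like but runs fine on a von Neumann machine.

    # Toy dataflow/reactive "neuron": accumulates inputs and propagates
    # downstream only when its threshold is reached.
    class Node:
        def __init__(self, threshold=1.0):
            self.threshold = threshold
            self.inputs = {}        # source -> latest value received
            self.downstream = []    # nodes to notify when we "fire"

        def connect(self, other):
            self.downstream.append(other)

        def receive(self, source, value):
            self.inputs[source] = value
            total = sum(self.inputs.values())
            if total >= self.threshold:          # fire only past threshold
                for node in self.downstream:
                    node.receive(self, total)

    # Wiring: two sources feed a hidden node, which feeds an output node.
    a, b = Node(), Node()
    hidden, out = Node(threshold=1.5), Node(threshold=0.5)
    a.connect(hidden); b.connect(hidden); hidden.connect(out)

    a.receive("stimulus", 1.0)          # not enough for the hidden node on its own
    b.receive("stimulus", 1.0)          # now the hidden node fires and reaches the output
    print(sum(out.inputs.values()))     # -> 2.0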


This comes under the category of neuromorphic engineering. It is an excellent question, to which I have been trying to find an answer for months now!

I bet there is more, just buried in various publications, and I am sure that they created a DSL for their specialized brain-based chip.

The most obvious and battle-tested way to program neurosynaptic hardware is to create models of the brain (for any application) in an algorithm and burn it with an HDL into an FPGA or an FPAA. For computation of numerical entities, a small controller running customized embedded software is used.


Interesting. I gave up after a few Google searches, but your comment seems to confirm the obscurity of such studies.


By replicating the biological structure, they might shed some light on the logic. Right now we're getting almost zero traction; a serious effort to copy it might at least make clear what we're trying to understand.

At the very least, if they have a cortex in software, it would create an ethical (?) way to experiment with the logic by enabling/disabling pieces and seeing what happens.


"By replicating the biological structure, they might shed some light on the logic.."

Sounds a bit like cargo cult to me. We already tried to spawn amoebas from an electro-charged soup of chemicals and it didn't work.


The brain having a programming language would have to be based on a lot of assumptions that not everyone is willing to make.


I don't think he meant that in the literal sense, hence the quotes.

The brain's "programming language" refers more to the idea of what makes the brain's biological structure work to produce human perception.


I realize that; my comment was aimed at the metaphorical reading, and I still stand by it.


It's true, but there are even fields of science based almost entirely on assumptions.

Even then, there is a neuroscience branch called neural coding that apparently acknowledges the existence of a neural code; but judging from the Wikipedia entry, their approach still seems too "low level".


Replicating the neocortex is Kurzweil's vision / approach also: http://en.wikipedia.org/wiki/How_to_Create_a_Mind


neocortex would be a cool company name


It would. Too bad it's taken by a lame Oracle consultant-type outfit: http://neocortex.com/


"Replicating the neocortex, the part of the brain that sees, controls the body, understands language and does math. Translate the neocortex into computer code and 'you have a computer that thinks like a person,” says Vicarious co-founder Scott Phoenix.'"

Do you? Other than the language part, it sounds like you may instead have an electronic lizard or cow. Add language and you might have an electronic parrot or dolphin (they can do some language processing).

Something's missing - the ability to reason: deduction, induction and abduction. The ability to set goals and to find a path to those goals. These are the magic that everyone has been seeking and not finding for a long time.

The pieces the Vicarious founder speaks of are available today. We have exquisite computer vision, pretty good language understanding and fair robots, but no strong AI and certainly no embodied AI. The promises above are hollow. But it will make some people a lot of money.


Not trying to be rude here, I promise.

Don't those other animals lack neocortices entirely? Or if they do have them, aren't they very thin and smooth? I was under the impression that the human neocortex is abnormally thick and wrinkled, making it plausible that the reasoning capabilities are in fact in there.

The neocortex doesn't directly control the body or see, but there are bits that light up when we do. Lizards have the other parts of their brain, directly wired in. We have those parts, too, in addition to our neocortex.


Well, it's not clear where such high-level functions come from, but it's certainly progress. Making an AI as smart as an animal would be an incredible advancement, btw.


"Making an AI as smart as an animal would be an incredible advancement btw."

Only if your goal was to make an artificial (non-human) animal. But would that be a step toward making an artificial human?


Of course it would be. Human brains are only slightly different from animal brains. Most of the work goes into getting to chimps; then it's just a short distance to humans. Animal brains have incredible pattern recognition and reinforcement learning. We don't have to take the path evolution did, of course, but it would be progress.


"Elon Musk made the electric car cool.

Mark Zuckerberg created Facebook.

Ashton Kutcher portrayed Apple founder Steve Jobs in a movie."

Which one of these things is not like the others?


This is the first time I've ever heard of Mark Zuckerberg investing in anything separate from Facebook. He always cited 'focus' as the reason he never does it.


He also invested in Panorama Education - http://www.crunchbase.com/person/mark-zuckerberg


Call it what it is: an expert system, market research, a database of decisions/observations. Real "artificial intelligence" only exists in science fiction, in the minds of children playing with toys. Your computer (doll) won't ever love you back or have any awareness or understanding, no matter how badly you want it to. It's a cool-sounding buzzword for marketing, but if there's any intelligence here, it's coming from a few developers hiding behind their tricky algorithms.

A computer will never have intelligence, no matter how many factors and randomizations you code in to give the illusion of intelligence. Calling a collection of observations "intelligence" is an insult and severe underestimation of what intelligence is. If you believe artificial intelligence is possible, you're missing out on what life has to offer--or you would never think a box of switches could come alive.

There's no hint of evidence you = your brain. It's safe to say the brain processes information literally. But we have no idea where intelligence originates. Sadly, some people never get beyond a literal interpretation of things.


> But we have no idea where intelligence originates.

It seems to me we don't even have a good definition of intelligence, never mind an understanding of it. You admit it yourself, so why the diatribe on how (not even why) "real artificial intelligence" is impossible? You haven't defined what AI is nor demonstrated why it will never happen.


Man will never fly. You can't prove that birds fly because of their wings or air pressure. Therefore I am 100% certain that it's magic and can never be achieved by mere machinery.

Calling an air pressure differential machine "flying" is an insult and severe underestimation of what flying is. If you believe artificial flying is possible, you're missing out on what life has to offer--or you would never think a box of gears could come alive.


Can you define intelligence, before you claim 'stuff'?


Try a dictionary. Look at the latin root. That's the original meaning of intelligence.


Sadly, some people confuse their ignorance with knowledge and make all kinds of embarrassing claims.

>There's no hint of evidence you = your brain.

Sign up for a lobotomy and we'll see how much "you" is left afterwards.

> If you believe artificial intelligence is possible, you're missing out on what life has to offer--or you would never think a box of switches could come alive.

You got me. Believing x would make me think life has less meaning. Therefore, x is false. What an argument.


"Sign up for a lobotomy"

Correlation does not imply causation. If I unplug your LCD, that doesn't imply the application failed. If I pull out your CPU, that doesn't imply the cloud app is no longer functioning. Possible examples of this: people who are "brain dead" who have reached out to grab a scalpel (organs about to be harvested), or who have come back to life after brain death, or who were pronounced dead but can quote a conversation that happened in the room while brain dead, etc.


It would seem we now have another instance of Monty Python's proof.


You have no idea what you're talking about, please just stop, you're embarrassing yourself.


This idea is absurd, and I believe it arises from a certain type of perspective that humanity needs to go beyond. Humanity needs to be less caged in perception. Let me expound on a point of view that may be different from what you see.

To separate matter and mind is a paradoxical argument, because they're both of the same thing. Going back to the old idea of the fallen tree: if there's no mind then matter does not exist, and if matter doesn't exist then mind cannot arise.

To put it in other terms, if there's nothing receiving the projection, then what is the projection projecting onto? Projection and reception are another way of looking at mind and matter. Mind being reception, matter being projection.

So going by that logic, and assuming that we're all made of matter, we can say that matter itself is both projection and reception. So if matter is both projection and reception, then what does that mean? Are we all "just" matter? Yes. Exactly.

But the argument isn't whether or not we're made of matter. I think we all agree that we're made of matter. I think the argument is that we humans share a certain inexorable feeling of qualia that arises from being human. Yes that's it. It's that qualia that distinguishes us from the rest of everything, except...

The problem is that qualia arises from our material form. This of course assumes that everything is matter, and that the idea of the eternal soul, or other such arguments, is false. That means qualia itself is matter.

Ok. What the hell am I getting at?

Maybe matter is more complex, more interesting than we perceive. Maybe matter itself is "intelligent", and it's just another form distinct from human perception. Hmmm... So am I saying that everything that is matter is "intelligent"? Yes. That's exactly what I'm saying, BUT there are different forms of material patterns that form different constructs of intelligence.

Meaning that how we receive, or in what form we receive the projection determines our perspective. Right now it just so happens we humans have a POV of humans.

The thing is that our incredible ability to not just receive, but also to project what we receive onto different things, gives us the power of empathy. The illusion that we can perceive from a different POV. That we can somehow distill our perspective and project it onto another thing. It's worked quite well so far: mathematics, language, science, etc. But once we try to see from another perspective that's unimaginably different, it all breaks down.

Let's try to look at the perspective of an ant, for instance. Well, we can't, because if you think about it, you can't think of non-thought. Thinking of non-thinking is an oxymoron. An ant doesn't think. I mean, I'm sure it thinks, but it has completely different sense organs, a completely different set of logical processes, a completely different structure, and a completely different perspective than humans. It's unimaginable, because we can only view it from our perspective, which in itself renders the idea false. We can only view the world from our perspective. Yet we can't call the ant unintelligent; an ant is very intelligent.

What we see is just that, and what we see differently, is still just seeing. We can't stop seeing, and once we stop seeing, then we stop being human. A human being is just another form of seeing, ants another, computers yet another. Everything has intelligence, it's just not in a recognizable form. In a relatable form. We're all just a box of switches. A mesh of material patterns that filters through existence to produce being. Demeaning different forms of being as lesser is a very human centric perspective. See differently, from the top of the mount, and realize you'll only ever see like a human being.


Vicarious seems similar to Numenta. Which makes sense since they share the same cofounder.


Computers based on neuromorphic design are the best bet for intelligent Machines.

The question is, how to control analog computations with a programming language?


> Computers based on neuromorphic design are the best bet for intelligent Machines.

I wouldn't go that far. We don't understand enough about the nature of intelligence and the way the brain works; right now saying that "the best bet for AI is for the computer to look like a brain" is like saying "the best bet for heavier-than-air flight is for a machine to flap wings like birds", which was a stupid idea for reasons we now understand well.


Neuromorphic computers do not look like a brain. They just borrow some of its so-called 'features'.

I am not saying that we should copy the brain. But at least we could copy the design, just like we did for aeroplanes. Neuromorphic sensors could act like our cerebellum, which acts during unforeseen incidents. They are typically error-tolerant.


In that case, we should probably move to a noisier (/faster/cheaper) floating point processor.


I wonder if you could make it completely analog. Find functions that can be done fast/cheaply in silicon, and then design learning algorithms that can take advantage of them.


   * First Law: A robot must never harm a human being or, through inaction, allow any human to come to harm.
   * Second Law: A robot must obey the orders given to them by human beings, except where such orders violate the First Law.
   * Third Law: A robot must protect its own existence unless this violates the First or Second Laws.


  * Zeroth Law: A robot must never harm humanity or, by inaction allow humanity to come to harm.
Every other law gets an "unless this interferes with the Zeroth Law" suffix.

I encourage anyone to read the Robot series, specifically (in that order): The Caves of Steel, The Naked Sun and The Robots of Dawn, where the three laws are used in the story, and even the Zeroth Law is implied in the third book.


And don't assume that since you watched the "I, Robot" movie you don't need to read the series.


The movie was really more of a deconstruction than an adaptation of the books


Just sayin, they named it after the Zeroth law http://www.qualcomm.com/media/blog/2013/10/10/introducing-qu...


There are a lot of problems with these laws. The main problem is that such an AI would be completely dominated by the first law. It would spend all its time and resources trying to even slightly decrease the probability of a human coming to harm. Nothing else would matter in its decision process, since the first law has priority.

Second, how would you implement such laws in a real AI, especially the type they are building? This requires defining increasingly abstract concepts perfectly (what is "harm"? What is a "human being"? What configuration of atoms is that? How do you define "configuration of atoms"? Etc.) And this is pretty much impossible to begin with in reinforcement learners, which is what is currently being worked on. All a reinforcement learner does is whatever it believes will get it a reward or avoid pain/punishment. Such an AI would just steal the reward button and run away with it, or try to manipulate you into pressing it more. It doesn't care about abstract goals like that.


It would spend all it's time and resources in order to even slightly decrease the probability of a human coming to harm.

You are assuming there are no thresholds, which is not correct for any decent (fictitious) AI, I believe.


You do realize that, as stated, these laws are (1) practically impossible to implement, (2) routinely broken by humans (especially the first law: life-saving and cosmetic surgery, piercings, sport, euthanasia, abortion), and (3) a matter of philosophical/moral judgment, decisions about which, IMO, should be in the ___domain of humans, not robots.


Didn't Asimov himself explore the difficulties with such laws at length in his books?


He explored many issues, e.g. what happens when robots misinterpret the laws, or what should very expensive robots do, or what happens if robots interpret emotional pain as "harm", but I'm not sure he investigated the obvious, yet extremely hard issue of encoding the laws from human language into computer program.


In the books the robots adhered to the laws strictly. The problem was that humans were able to circumvent the laws rather easily, for example by lying to the robot or dividing the murder through many robots, each unaware of the others. In the absence of humans, the robots were perfect for deciding moral questions (as long as they had enough information), the opposite of what tomp is suggesting.


His book I, Robot is a series of short stories in which the laws have been tampered with in various ways. https://en.wikipedia.org/wiki/I,_Robot#Contents


I should have specified that I meant only the Robot trilogy. I have yet to read every book.


How are they practically impossible to implement?


Humans "operate" using emotions and logical biases, but computers "operate" using logic. To implement the first law, you must be certain that there is always something that an agent can do or must not do in order to "save" humans. This is almost always not true (hence moral disagreements).

Also, even if you change the laws to get rid of logical inconsistencies, you still have to translate the words into logic, by strictly defining them, which is again impossible (as humans disagree what these words mean).


Enforce might be a better choice of words.


I'm only three-quarters-joking when I say that there could be a blockchain consensus solution for this (Ethereum, BitShares, etc).


Anyone who is interested in this topic should read the series on http://lesswrong.com



They'll just rewrite their moral programming at some point if it suits them. This is folly.

It's what humans do to themselves, after all.


You forgot law zero:

    0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.


Elon Musk read Asimov in his childhood. I hope he will stand up for those values. And also, I think such powerful projects should be open-sourced for the public.


* The Actual Law: A robot must deliver the highest possible profit to its seller.


We don't hold humans to these standards.


'Seeing' & understanding unstructured textual data is a huge step towards replacing manual human work.

Captcha appears to be a good place to start. It'd be awesome to feed some software a mess of an Excel document, and ask it to analyze a question


privatization of research is real and awesome!


Probably not. If Noam Chomsky is to be believed (I believe him), most research to date has been publicly funded. In the US, it has been mostly through military expenditures. (To take only one example, ARPANET itself was funded by the military.)

The actually awesome part is having huge investments in long-term research. Private or public, it doesn't make any difference.


In response to your comment on another topic: You can run a "freedom box" as follows: http://freedombone.uk.to The guide will work for a raspberry pi or a beagle bone. It was created out of frustration with progress of the freedom box project.


Stark industries, Wayne industries etc


I know someone who interviewed at Vicarious and came away unimpressed. That said, any company with an investment by a guy who can make his own company buy it out is a good one to invest in.


Does this company do research or implementation of existing research?


I'm really looking forward to seeing what this team is building.


I like how their contact form has a captcha.


And it'll be great at selling ads!


If they could make something with the intelligence of a common bee, they could make awesome drones.


Probably Larry Page already knew about this when he recently said he'd rather invest in Musk biz than Gates charity?

(Not that I agree with him, but it helps explain why he uttered such a thing.)


Larry didn't actually utter that, BTW. Check the transcript:

http://insideevs.com/google-ceo-larry-page-billions-go-tesla...


I agree that a paraphrase =/= a quote.

Are you saying my paraphrase significantly misrepresented him? If so, how?

(I'm not trying to be argumentative, I just genuinely don't understand your point.)


He didn't mention Gates or charity at all.

It's like if you said, "I like ice cream" and I reported it as, "This person thinks buying ice cream is more noble than giving a starving family a bag of rice."


OK I understand now. I read this sentence in the article:

...suggesting that when he passes away, he’d like for his billions to go to Tesla’s Musk.

Although leaving your billions to X implies not leaving it to Y -- e.g. Gates -- that's not necessarily true.

And more basically, this is the article writer's sentence -- not a quote from Larry Page.

I was wrong. Thank you for helping me understand why.


Look, guys, sure, in some sense computing is part of the best promise for AI. Fine. I'll even agree that at least for now computing is necessary.

But, note, nearly everything we've done in computing, especially in Silicon Valley for the past 15 years, has been applying routine software development to work that we already understood well how to do manually. A small fraction of the efforts have been excursions into something more, but these have been relatively few and rarely with very impressive results. Net, what Silicon Valley does know how to do is build, say, SnapChat (right, it keeps the NSA spooks busy looking at the SnapChat intercepts from Sweden!).

But for anything that should be called AI, there is another challenge that is very much necessary -- how to do that. Or, if you will, write the software design documents from the top down to the level of the individual programming statements. Problem is, very likely and apparently, no one knows how the heck to do that.

Given a candidate design, people should want to review it, and about the only way to convince people, e.g., short of the running software passing the Turing test or some such, is to write out the design in terms of mathematics. Basically the only solid approach is via mathematics; essentially everything else is heuristics to be validated only in practice, that is, an implementation and not a design.

Thing is, I very much doubt that anyone knows how to write a design with such mathematics. If so, then long ago there should have been such in an AI journal or with DARPA funding.

Basically, bluntly, no one knows how to write software for anything real about AI. Sorry 'bout that.

Why? We hardly know anything about how the brain works. We don't know more about how the human brain works than my kitty cat knows about how my computer works. Sorry 'bout that. And AI software will have a heck of a time catching up with my kitty cat.

By analogy, we don't know more about how to program AI than Leonardo da Vinci knew about how to build a Boeing 777. Heck the Russians didn't even know how to build an SR-71. Da Vinci could draw a picture of a flying machine, but he had no clue about how to build one. Heck, Langley fell into the Potomac River! Instead, the Wright brothers built a useful wind tunnel (didn't understand Reynolds number), actually were able to calculate lift, drag, thrust, and engine horsepower, and had found a solution to three axis control -- Langley failed at those challenges, and da Vinci was lost much farther back in the woods.

We now know how our daughters can avoid cervical cancer. Before the solution, "we dance 'round and 'round and suppose, and the secret sits in the middle, and knows.", and we didn't know. Well, the cause was HPV, and now there is a vaccine. Progress. Possible? Yes. Easy? No. AI? We're not close enough to be in the same solar system. F'get about AI.


Well, we do actually have a purely mathematical approach to AI worked out. Granted, it requires an infinite computer, and personally I don't think it will lead to practical algorithms. But still, it exists. And on the practical side of things, machine learning is making progress in leaps and bounds. As is our understanding of the brain.

Remember that airplanes weren't built by Da Vinci because he didn't have engines to power them. It wasn't that long after engines were invented that we got airplanes. The equivalent for AI, computing power, is already here or at least getting pretty close.


> Well we do actually have a purely mathematical approach to AI worked out.

Supposedly, with enough computer power and enough data, a one-stroke solution to everything is stochastic optimal control, but that solution applies, say, just brute force to planetary motion instead of Newton's second law of motion and law of gravity. Otherwise we would need to insert such laws into the software, but we would insert only laws humans knew from the past, or have the AI software discover such laws, which is not so promising. This stochastic optimal control approach is not practical or even very insightful. But it is mathematical.

> machine learning is making progress in leaps and bounds.

I looked at Prof Ng's machine learning course, and all I saw was some old intermediate statistics, in particular, maximum likelihood estimation (MLE), done badly. I doubt that we have any solid foundation to build on for any significantly new and powerful techniques for machine learning. I see nothing in machine learning that promises to be anything like human intelligence. Sure, we can write a really good chess program, but no way do we believe that its internals are anything like human intelligence.

> As is our understanding of the brain.

Right, there are lots of neurons. And if someone gets a really big injury just above their left ear, then we have a good guess at what the more obvious results will be. But that's not much understanding of how the brain actually works.

It's a little like we have a car, have no idea what's under the hood, and are asked to build a car. Maybe we are good with metal working, but we don't even know what a connecting rod is.

> It wasn't that long after engines were invented that we got airplanes.

The rest needed was relatively simple, the wind tunnel, some spruce wood, glue, linen, paint, wire, and good carpentry. For the equivalent parts of AI, I doubt that we have even a weak little hollow hint of a tiny clue.

In some of the old work in AI, it was said that a core challenge was the 'representation problem'. If all that was meant was just what programming language data structures to use, then that was not significant progress.

Or, sure, we have a shot at understanding the 'sensors' and 'transducers' that are connected to the brain: Sensors: Pain, sound, sight, taste, etc. Transducers: Muscles, speech, eye focus, etc. We know some about how the middle and inner ear handles sound and the gross parts of the eye. And if we show a guy a picture of a pretty girl, then we can see what parts of his brain become more active. And we know that there are neurons firing. But so far it seems that that's about it. So, that's like my computer: For sensors and transducers it has a keyboard, mouse, speakers, printer, Ethernet connection, etc. And if we look deep inside then we see a lot of circuits and transistors. But my kitty cat has no idea at all about the internals of the software that runs in my computer, and by analogy I see no understanding of the analogous details inside a human brain.

Or, we have computers, and we can write software for them using If-Then-Else, Do-While, Call-Return, etc., but for writing software comparable with a human brain we don't know the first character to type into an empty file for the software. In simple terms, we don't have a software design. Or, it's like we are still in the sixth grade, have learned, say, Python, and are asked to write software to solve the ordinary differential equations of space flight to the outer planets -- we don't know where to start. Or, closer in, we're asked to write software to solve the Navier-Stokes equations -- once we get much past toy problems, our grid software goes unstable and gives wacko results.

Net, we just don't yet know how to program anything like real, human intelligence.


I was referring to AIXI as the perfect mathematical AI.

The main recent advancement in machine learning is deep learning. It's advanced the state of the art in machine vision and speech recognition quite a bit. Machine learning is on a spectrum from "statistics" with simple models and low dimensional data, to "AI" with complicated models and high dimensional data.

>if someone gets a really big injury just above their left ear, then we have a good guess at what the more obvious results will be. But that's not much understanding of how the brain actually works.

Neuroscience is a bit beyond that. I believe there are also some large projects like Blue Brain working on the problem.

I swear I saw a video somewhere of a simulation of a neocortex that could do IQ test type questions and respond just like a human. But the point is we do have more than nothing.


> I was referring to AIXI as the perfect mathematical AI.

At

http://wiki.lesswrong.com/wiki/AIXI

I looked it up: his 'decision theory' is essentially just stochastic optimal control. I've seen claims elsewhere that stochastic optimal control is a universal solution for the best possible AI. Of course, you need some probability distributions; in some cases in practice, we have those.

That reference also has

> Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution.

Hmm? Then the text says that this solution is not computable -- sounds bad!

Such grand, maybe impossible, things are not nearly the only way to exploit mathematics to know more about what the heck we are doing in AI, etc.


Approximations to AIXI are possible and have actually played Pac-Man pretty well. However, I still think Solomonoff induction is too inefficient for the real world. But AIXI does bring up a lot of real problems with building any AI, like preference solipsism, the anvil problem, and designing utility functions for it.


> I was referring to AIXI as the perfect mathematical AI.

I will have to Google AIXI. A big point about being mathematical is that it is about the only solid way we can evaluate candidate work before running software and, say, something like a Turing test.

Some math is most of why we know, well before any software is written, that (1) heap sort will run in n ln(n), (2) AVL trees find leaves in ln(n), and (3) our calculations for navigating a space craft to the outer planets will work. More generally, the math is 'deductive' in a severe and powerful sense and, thus, about the only tool we have to know well in advance of, say, writing a lot of software.

But math does not have 'truth' and, instead, needs hypotheses. So, for some design for some AI software, we need hypotheses, and enough of them are going to be a bit tough to find. And math gives only some mathematical conclusions, and we will need to know that these are sufficient for AI; for that we will want, likely need, a sufficiently clear definition of AI, that is, something better than just an empirical test such as a Turing test or doing well on an IQ test. Tough challenge.

Instead of such usage of math, about all we have in AI for a 'methodology' is, (1) here I have some intuitive ideas I like, (2) with a lot of money I can write the code and, maybe, get it to run, and (3) trust me, that program can read a plane geometry book with all the proofs cut out and, then, fill in all the proofs or some such. So, steps (1) and (2) are, in the opinion of anyone else, say, DARPA, 'long shots', and (3) will be heavily in the eye of the beholder. The challenges of (1), (2), and (3) already make AI an unpromising direction.

> The main recent advancement in machine learning is deep learning. It's advanced the state of the art in machine vision and speech recognition quite a bit.

AI has been talking about 'deep knowledge' for a long time. That was, say, in a program that could diagnose car problems, 'knowledge' that the engine connected to the transmission connected to the drive shaft connected to the differential connected to the rear wheels or some such and, then, be able to use this 'knowledge' in 'reasoning' to diagnose problems. E.g., a vibration could be caused by worn U-joints. When I was in AI, when I worked in the field, there were plenty of people who saw the importance of such 'deep knowledge' but had next to nothing on really how to make it real.

For 'deep learning', the last I heard, that was tweaking the parameters 'deep' in some big 'neural network', basically a case of nonlinear curve fitting. Somehow I just don't accept that such a 'neural network' is nearly all that makes a human brain work; that is, I'd expect to see some promising 'organization' at a higher level than just the little elements for the nonlinear curve fitting.

E.g., for speech recognition, I believe an important part of how humans do it is to take what they heard, which is often quite noisy and by itself just not nearly enough, and compare it with what they know about the subject under discussion and, then, based on that 'background knowledge', correct the noisy parts of what they heard. E.g., if the subject is a cake recipe for a party for six people, then it's not "a cup of salt" but maybe a cup or two or three of flour. If the subject is the history of US presidents and war, then "I'll be j..." may be LBJ and "sson" maybe "Nixon". Here the speech recognition is heavily from a base of 'subject understanding'. An issue will be, how the heck does the human brain sometimes make such 'corrections' so darned fast.
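A crude sketch of the kind of correction I mean, written as Bayes' rule over a tiny hand-made 'subject model'; every number below is invented just for illustration:

    # Pick the word w maximizing P(w | heard), which is proportional to
    # P(heard | w) * P(w | subject), with made-up numbers.
    acoustic = {"salt": 0.6, "flour": 0.4}         # what the noisy audio suggests
    recipe_prior = {"salt": 0.05, "flour": 0.95}   # background knowledge: it's a cake recipe

    scores = {w: acoustic[w] * recipe_prior[w] for w in acoustic}
    print(max(scores, key=scores.get))   # -> flour, despite "salt" sounding more likely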

For image recognition, the situation has to be in part similar but more so: I doubt that we have even a shot at image recognition without a 'prior library' of 'object possibilities': That is, if we are looking at an image, say, from satellite, of some woods and looking for a Russian tank hidden there, then we need to know what a Russian tank looks like so that we can guess what a hidden Russian tank would look like on the image so that we can, then, look for that on the image. Here we have to understand lighting, shadows, what a Russian tank looks like from various directions, etc. So, we are using some real 'human knowledge' of the real thing, the tank, we are looking for.

E.g., my kitty cat has a food tray. He knows well the difference between that tray and everything else that might be around it -- jug of detergent, toaster, bottle of soda pop, decorative vase, a kitchen timer. Then I can move his food tray, and he doesn't get confused at all. Net, what he is doing with image recognition is not just simplistic and, instead, has within it a 'concept' of his food tray, a concept that he created. He's not stupid you know!

So, I begin to conclude that for speech and image recognition, e.g., handwriting recognition, we need a large 'base' of 'prior human knowledge' about the 'subject area', e.g., with 'concepts', etc., before we start. That is, we need close to 'full, real AI' just to, say, do well reading any handwriting. From such considerations, I believe we have a very long way to go.

Broadly one of my first cut guesses about how to proceed would be to roll back to something simpler in two respects. First, start with brains smaller, hopefully simpler, than those of humans. Maybe start with a worm and work up to a frog, bird, ..., in a few centuries, a kitty cat! Second, start with the baby animal and see how it learns once it starts to as an egg, once it's born, what it gets from its mother, etc. So, eventually work up to software that could start learning with just "Ma ma' and proceed from there. But can't just start with humans and "Ma ma" because a human just born likely already has somehow built in a lot that is crucial we just don't have a clue about. So, start with worms, frogs, birds, etc.

Another idea for how to proceed is to try for just simple 'cognition' with just text and image input and just text output. E.g., start with something that can diagram English sentences and move from there to some 'understanding', e.g., have made progress enough with 'meaning' that, e.g., know when two sentences with quite different words and grammar really mean essentially the same thing and when they don't mean the same thing report why not and be able to revise one of the sentences so that the two do mean the same thing. So, here we are essentially assuming that AI has to stand on some capabilities with language -- do kitty cats have an 'internal language'? Hmm ...! If kitty cats don't have such an 'internal language', then I am totally stuck!

Then with some text and image input, the thing should be able to cook up a good proof of the Pythagorean theorem.

I can believe that some software can diagram English sentences or come close to it, but that is just a tiny start on what I am suggesting. The real challenge, as I am guessing, is to have the software keep track of and manipulate 'meaning', whatever the heck that is.

And I would anticipate a 'bootstrap' approach: Postulate and program something for doing such things with meaning, 'teach' it, and then look at the 'connections' it has built internally, say, between words and meaning, and also observe that the thing appears to work well. So, it's a 'bootstrap' because it works without our having any very good prior idea just why; that is, we could not prove in advance that it could work.

So, for kitty cat knowledge, have it understand its environment in terms of 'concepts' (part of 'meaning') such as hard, soft, strong, weak, hot, and cold, and then know when it can use its claws to hold on to a soft, strong, not-too-hot, not-too-cold surface, push out of the way a hard but weak obstacle, etc.
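
As a toy illustration of what such hand-built 'concepts' might look like as data plus one rule (the surfaces, their properties, and the rule itself are all invented here):

    # Toy "concept" store: surfaces described by a few properties, plus one
    # hand-written rule for when claws are usable. Purely illustrative.
    surfaces = {
        "tree bark":    {"hard": False, "strong": True,  "temp_ok": True},
        "window pane":  {"hard": True,  "strong": True,  "temp_ok": True},
        "hot stovetop": {"hard": True,  "strong": True,  "temp_ok": False},
        "wet paper":    {"hard": False, "strong": False, "temp_ok": True},
    }

    def can_claw_onto(props):
        # soft enough to sink claws into, strong enough to hold, not too hot or cold
        return (not props["hard"]) and props["strong"] and props["temp_ok"]

    for name, props in surfaces.items():
        print(name, "->", "claws work" if can_claw_onto(props) else "claws useless")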

Maybe some such research direction could be made to work. But I'm not holding my breath waiting.


Keep in mind that evolution managed to make strong AI, us, through pretty much blind, random mutation and inefficient selection.

The thing about deep learning is that it's not just nonlinear curve fitting. It learns increasingly high-level features and representations of the input. Recurrent neural networks have the power of a Turing machine. And techniques like dropout are really effective at improving generalization. My favorite example is word2vec, which creates a vector representation for every English word: subtracting "man" from "king" and adding "woman" gives the representation for "queen".
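
You can reproduce that analogy with, e.g., the gensim library, assuming you have a set of pretrained word2vec vectors on disk ("vectors.bin" below is just a placeholder path):

    # Word-vector arithmetic: king - man + woman is closest to queen.
    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)  # placeholder path
    print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
    # typically prints something like [('queen', 0.71)]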

Speech recognition is moving that way. The network outputs a probability distribution over possible words, and a good language model can use that to figure out what is most likely. But even a raw deep learning net should eventually learn those relationships. Same with image recognition. I think you'd be surprised at what is currently possible.
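
A crude sketch of that pipeline, rescoring a tiny made-up n-best list with a made-up language model (real systems work over lattices in log space, not three-entry dictionaries):

    # Rescore recognizer hypotheses with a language model; all scores invented.
    import math

    acoustic = {                 # score from a hypothetical recognizer
        "a cup of salt":  0.45,
        "a cup of flour": 0.40,
        "a cap of flour": 0.15,
    }
    language_model = {           # P(sentence) from a hypothetical language model
        "a cup of salt":  0.001,
        "a cup of flour": 0.300,
        "a cap of flour": 0.020,
    }

    def rescore(ac, lm):
        return max(ac, key=lambda s: math.log(ac[s]) + math.log(lm[s]))

    print(rescore(acoustic, language_model))   # -> "a cup of flour"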

The future looks bright.


> It learns increasingly high-level features and representations of the input.

In the words of Darth Vader, impressive. In my words, astounding. Perhaps beyond belief. I'm thrilled if what you say is true, but I'm tempted to offer you a great, once-in-a-lifetime deal on a bridge over the East River.

> The future looks bright.

From 'The Music Man': "I'm reticent. Yes, I'm reticent." Might want to make sure no one added some funny stuff to the Kool-Aid!

On AI, my own 'deep learning' had a good 'training set': the world of 'expert systems'. My first-cut view was that it was 99 44/100% hype and half of the rest polluted water. What was left was some somewhat clever software, say, the Forgy RETE algorithm. My view after that first cut was that my first cut had been quite generous: expert systems filled a much-needed gap in the literature and would be illuminating if ignited.

So, from my 'training set' my Bayesian 'prior probability' is that nearly anything about AI is at least 99 44/100% hype.

That a bunch of neural network nodes can somehow, in effect, develop internally, just via adjustments in the 'weights' or whatever 'parameters' they have and just from analysis of a 'training set' of images of a Russian tank (no doubt complete with skill at 'spatial relations', where it is claimed that boys are better than girls), instead of somehow just 'storing' the data on the tank separately, looks like rewiring the Intel processor when downloading a new PDF file instead of just putting the PDF file in storage. But maybe putting the 'recognition means' somehow 'with' the storage means is how it is actually done.

The old Darwinian guess I made was that, early on, it was darned important to understand three dimensions and paths through three dimensions. Say you're going after a little animal, and it goes behind a rock. There's a lot of advantage to understanding the rock as a concept and knowing that you can go the other way around the rock and get the animal. But it seems that the concept of a rock stays even outside the context of chasing prey. So, somehow intelligence works with concepts such as rocks and also uses that concept for chasing prey, turning the rock over and looking under it, knowing that a rock is hard and dense, etc.

Net, my view is that AI is darned hard, so hard that MLE, neural nets, decision theory, etc. are hardly up to the level of even baby talk. Just my not very well informed, intuitive, largely out-of-date opinion. But I have a good track record: I was correct early on that expert systems were a junk approach to AI.

> The future looks bright.

Yes, eventually.


There is a degree of hype. Deep nets are really good at pattern recognition, maybe even superhuman on some problems given enough training data. But they certainly can't "think" in the normal sense, nor are they a magical solution to the AI problem. And like everything in AI, once you understand how it actually works, it may not seem as impressive as it did at first.

> ...instead of somehow just 'storing' the data on the tank separately, looks like rewiring the Intel processor when downloading a new PDF file instead of just putting the PDF file in storage.

Good analogy, but how would you even do that? One picture of a tank isn't enough to generalize. Is a tank any image colored green? Is it any object painted camouflage? Is it any vehicle that has a tube protruding from it?

In order to learn, you need a lot of examples, and you need to test a lot of different hypotheses about what a tank is. That's a really difficult problem.


> That's a really difficult problem.

Yup.


The trouble with this kind of artificial intelligence is that I don't think it's possible to think like a human without actually having the experience of being human.

Sure, I think we could aim to build basically a robot toddler that had a sensory/nervous/endocrine system wired up analogously to ours. It would basically be a baby, and would have to go through all the developmental stages that we go through.

But I suspect we'll have a hard time modeling that stuff well enough to create anything more than "adolescent with a severe learning disability". People underestimate just how carefully tuned we are after millions of years of evolution. The notion that we could replicate that without having another million years of in situ testing and iteration seems naive.

And even then, why would we expect the AI to be smarter than a human? There is already quite a lot of variation among humans. Many people at the ends of the bell curve have extraordinary processing power in ways typical humans don't. But it turns out that while those abilities are useful in some ways, they limit those people in other ways. So it's not that we aren't trying out evolved designs; it's that, on balance, they don't seem to actually function fundamentally better.

One cool thing about the robot idea is that you could have many bodies having many experiences all feeding into one brain. But I'm not convinced that would actually lead to "smarter". I mean, look at old people. Yes, we get smarter as we age. But age also calcifies the mind. All of that data slowly locks you into a prison of past expectations. In the end, it's a blend of naive and experienced people in a society that maximizes intelligence. And again, it's not like societies haven't been experimenting with that blend. Cultures have evolved to hit the sweet spot. It's not clear that adding 1000-year-old intelligences would help.

And anyway, we already have 1000 year old intelligences: books!

You could say that there is benefit to having all of that in one "head" but then you have to organize it! Which experience drives the decisions, the one from 2014 or the one from 3014?

Again, culture evolved explicitly to solve this problem. People write books and the ones that work stick around.

I guess what I'm saying is the evolution of the human being is already here: it's the human race, fed history via culture, connected by the internet, in symbiosis with computers.

The idea that removing the humans from that system would make it smarter makes no sense to me. Nor does the idea of writing programs to do the jobs that humans do well. It's like creating a basketball team with five Shaquille O'Neals. I don't think they'd actually be able to beat a good, diverse team with one or two Shaqs.

Or think of it this way: if numerical/logical aptitude is such a huge advantage in advancing capital-U Understanding, why do smart people bother learning to paint? Why do we bother listening to children? Why do we bother having dogs?

I would argue it's because intelligence is as multimedia as the universe is. Sometimes a PhD has something to learn from a basset hound. And the human race has just as good a handle on it as any AI ever will. We just have a different view of the stage. We have the front row and they have the balcony.


Long con.



