The Myth of AI – Jaron Lanier (edge.org)
139 points by discreteevent on Jan 6, 2016 | 169 comments



I think Jaron knows what many engineers who have worked on AI know: that the external promise of AI is far more inflated than the capability of the actual technology. It's simply that when the current technology gets just complicated enough that the above-average person can no longer understand it, everyone starts assigning magical properties and expectations to it. This results in short-term over-valuations, which inevitably lead to disappointment [1].

Jaron has observed this several times and rightly seems to be tired of repeating the same naive cycle.

As I'm just now entering my second cycle and watching tech repeat itself, I'm beginning to understand his weariness.

[1] https://hbr.org/2015/12/the-overvaluation-trap


I understand how the technology works, and I still believe it's amazing and rapidly progressing. Certainly it isn't human level now, and it might take a decade or two to get there. But it's still an incredible advancement.

AI has gotten way better than anyone expected in the last 5 years or so. Progress in many of these areas like image recognition was super slow and challenging. And all of a sudden computers are approaching human level performance with a relatively general algorithm.

For decades robotics has been limited by AI tech. We didn't have the ability to do object recognition, control was really hard, teaching the robot to do things was nearly impossible, etc. Now those limitations are gone, and in the next 10 years it's likely there will be a huge explosion in robotics and automation.

Regardless of the success of current developments, in the long term AI is still inevitable. I have no doubt we will eventually figure it out.

I have no idea why people confuse long-term predictions about something with short-term ones. Someone saying "we will invent AI by the end of the century" is not saying that it's definitely going to happen in 5 years and you should invest heavily now. Every time Bostrom and AI risk stuff comes up, there's a bunch of confused comments about how AI isn't that good currently, which misses the point entirely.


One of the many good points that Lanier made was that there is not currently an inevitable, Moore's-law-like progression that will take modern AI methods towards the "superintelligence" that is meant when AI is discussed as an existential threat.

Coupling that with the fact that any AI will be limited by its human-defined interface leaves me pretty unconcerned about AI, even if we ever figure out what truly makes a conscious, intelligent being and determine how to replicate it fully in some other medium.

That said, I appreciate Bostrom's work as a philosophical rather than practical matter.


Taking a best-fit line like Moore's law as a fundamental law of the universe is a classic mistake.

Of course, there's no such thing as an inevitable progression in AI research. Until we have AI software capable of contributing to AI research, that is.


"Certainly it isn't human level now, and it might take a decade or two to get there."

Any chance you could define "human level" in a way that makes this a meaningful sentence? Twenty-five years after I was introduced to this problem, I still have never heard a better definition of "intelligence" than the Turing test, and it's a purely operational, "I'll know it when I see it" thing.


I.J. Good: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

A merely human-level AI is one that can match, within some window of performance, a random human in the intellectual outputs of that human, which are vast and general -- playing games, writing poetry, critiquing film, building small rockets, having conversations, programming, carving wood, teaching a class, diagnosing and fixing a leak, constructing mathematical models... For things that require a body, it suffices to instruct a human what to do, but this thing might as well help design a robot body that's at least as versatile as a human body, since we humans do that too, even if Atlas is still pretty far off. We're unlikely to have an AI that is perfectly human-equivalent, in the sense that it would probably by default already be superior in some ways by virtue of its ability to not need sleep, do calculations efficiently and correctly, have access to perfect memory, and so on; and unless it's ultraintelligent, it's probably not better than a random human at everything (say, directing a movie), just like you aren't.


> it would probably by default already be superior in some ways by virtue of its ability to not need sleep, do calculations efficiently and correctly, have access to perfect memory, and so on

Or maybe we'll learn that in order to be intelligent the machine needs to sleep, make mistakes, forget some things, etc.


Of course, hence "probably". That would be really weird and unexpected, though.


This drives a long cycle of AI booms and AI winters. Right now we seem to be in another boom. When the limits of deep learning and other new techniques become apparent, we'll be back in another AI winter.


My prediction: One or both of two problems will be the straw that breaks the "wait, this doesn't work as well as you've hyped it to" camel's back.

1. Explanation of why a particular decision was made. (I was incredibly happy to see the Toyota article this morning that mentioned this very subject.)

2. Flexibility in the face of change. (Or is there a better response to a model with declining effectiveness than "start over".)


The real question for me is:

What is motivating otherwise very intelligent people to promote the idea that AI will take all our jobs and/or enslave us?

Possible answers:

1) To prop up their investments in AI to get higher valuation.

2) To reach a political goal, such as basic income, which is often justified as necessary in a world where computers and robots take over the work force.

3) ?


A lot of intelligent people are promoting the idea that AI will take a lot of our jobs, for the simple reason that a lot of our jobs do not require that much I. Whether it's driving a truck, plowing a field, or making food at a McEatery, a frighteningly large proportion of our economy consists of tasks which can be automated by technology coming online in the next decade or two.


Here's a big task that is a) not directly related to computers, and b) something the coming wave of automation can't take away - maintenance. All those buildings, tracks, machines, etc. can't clean and fix themselves. We can't get around everything with the standard market approach of making cheap disposable crap and replacing it wholesale when some component breaks. There's infrastructure. Like water mains, electrical conduits and cabling, etc.

We need one more breakthrough before we can free people from jobs - self-maintaining machines. Which will probably involve self-healing materials. Imagine if you could simplify maintaining underground cables to just making sure there's enough nutrient-rich fluid around them. Then we could automate the production and delivery of such fluid, and we'd be done.

The future without jobs is the future where our infrastructure heals itself, just like our bodies do.


I don't have an analogous example in mind, but I think that oftentimes, what happens when a revolution runs into a hurdle like automation vs. maintenance is that the hurdle gets worked around.

In this case, I think that, to a large extent, maintenance can be side-stepped by recycling and rebuilding. A bit like how the current world works: we can fix anything, but for most things it's simpler to throw it away and buy a new one, because the labour is more costly than the product itself. One example is asking a computer repair shop to fix your laptop screen (particularly so if said screen is touch-sensitive).

In a world where energy is free (fully solar for example), this is awesome: oh, you just got a month-old car but a better design just came out? No problem, you'll get the newest one when your turn comes.

I doubt I'll see this kind of world (if it ever comes) in my lifetime, but it doesn't seem so far-fetched, I think.


Maintenance turns out to be a subdiscipline of logistics.

And if your instrumentation is good enough, it really becomes a subdiscipline of logistics, plus maybe the machines can begin to defend themselves from ordinary abuse.


Whilst the "enslave us" thing is a bit alarmist, regarding it taking our jobs there's a certain analogy here with mechanisation.

You pointed out (below) that historically people have often been terrified that machines would take all of our jobs, and that terror has turned out to be unfounded. But they weren't wrong, they were just wrong in thinking it would be a bad thing.

Over the last 150 years, the proportion of Americans employed in agriculture has dropped from ~70% to ~2% [1]. They've literally been replaced by machines.

A large proportion of those people are now doing menial intellectual jobs that likely will be replaced by "AI". A complete shift in the nature of the work we do isn't unprecedented, and it shouldn't be considered impossible, but it shouldn't be considered disastrous either.

edit: [1] https://en.wikipedia.org/wiki/Agriculture_in_the_United_Stat...


The industrial revolution created a ton of jobs. Lots of people were needed in the factories. The industrial revolution wasn't about automating jobs that had existed before, it was about increasing productivity. A single skilled craftsman could only make 1 chair a day, but 100 children working in a factory could make 1,000 chairs a day.

The only people who lost jobs were skilled craftsmen, who made up a tiny percent of the population anyway. And they weren't really lost, just replaced with lots of unskilled work.

Agriculture automation came years later, and the farmers weren't entirely screwed because they mostly still owned the land. If you own the robot that replaces you, you aren't necessarily worse off.

The coming technological revolution is entirely about automation. And not just automating skilled jobs, but most unskilled ones. A large percent of the population will be affected.


"Over the last 150 years, the proportion of Americans employed in agriculture has dropped from ~70% to ~2% [1]. They've literally been replaced by machines."

In most of the fields I've seen they've been replaced by Mexicans.


Then fine, it's probably 5% if you count them in. It doesn't change the fact that all the machines we now have, combined with the fertilizer revolution of the early 20th century that started with the Haber process[0], have increased the food production output per worker by orders of magnitude.

[0] - https://en.wikipedia.org/wiki/Haber_process


Most of the industrial revolution has been about replacing mindless physical labor with machines. The unemployed humans (or their children, typically) were given new jobs as "knowledge workers." In other words, jobs which required thought as the principal skill. Now thinking is being done by machines. So where will the new jobs come from?


The difference is that this time we're running out of work we could give to those who will get displaced by machines. Yes, we can keep inventing bullshit jobs, but we've already reached the point where people are starting to ask whether working for the sake of working makes any sense.


Wat. I keep hearing the argument that there "won't be useful work to be done soon." That a median house still costs 4x yearly median salary means that not only do we live in a resource-scarce world but we will continue to for a long time.

Just because you don't "see" the work humans currently do doesn't allow you to be ignorant of it happening. Just because you personally don't want to work (like most people) doesn't mean there isn't plenty out there for people to do.


>That a median house still costs 4x yearly median salary means that not only do we live in a resource-scarce world but we will continue to for a long time.

Are housing prices, at least in the U.S., actually attached to the value of work, or are they mostly affected by the banks' willingness to hand out money?

Let's go another step and take work out of it completely. If houses were free, there would still be houses that are worth far more than others, simply because of proximity to other things of human interest.


Why, there still will be plenty of work to do - maintenance, for instance. But probably not enough for everyone.

That "a median house still costs 4x yearly median salary" means exactly nothing. The size of median salary is market driven, and the price of housing reflects the games banks and housing developers are playing, and thus can be arbitrarily high.


House prices are set by what buyers are willing to pay for them. This takes into account that, at the median, 4 years of a buyer's work output will be required to purchase a house.


In which country is that? Last time I checked, the common way of financing a house purchase in the entire Western world is through a mortgage loan, which makes the house price tied to the size of the loan a bank is willing to give to an average person.


That's not how prices are set. Just because I can borrow $X thousand on my credit doesn't mean I go ahead and purchase all the big screen TVs and computers I want. Just because I can borrow $XX thousand to purchase a car doesn't mean I'm driving around in a new Mercedes. Just because I can borrow $X million to purchase a house doesn't mean I will purchase real estate which will turn my net accrued savings to 0.

When people only think short-term (what can I afford monthly) vs. long-term (what will I pay out over the life of the loan vs. what I'm getting now), they make mistakes. These mistakes are self-inflicted, but these people tend to then blame any and all others for why they can't get ahead, why the system doesn't work, why the American dream is dead :).
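
To make the monthly vs. lifetime comparison concrete, here's a rough sketch using the standard amortization formula; the loan figures below are made up purely for illustration:

    # Toy numbers, not a quote: a 30-year loan at 4% interest.
    principal = 300000
    annual_rate = 0.04
    years = 30

    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of payments
    monthly = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    print(round(monthly))         # ~1432 a month -- the "what can I afford monthly" view
    print(round(monthly * n))     # ~515,600 over the life of the loan -- the long-term view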


That's beside the point. Most young people today can't afford a house outright. So they take a mortgage loan to buy one, and, surprise surprise, houses tend to be priced just at the level of the loan an average person can get.

The situation is different from the TVs and computers and cars because those are not considered as important as your own house. Most people eventually have to move out, so they have to participate in the game. I believe we call it "inelastic demand".


At that point perhaps our society will have to recognize that we've reached the point where we simply don't need the entire population working. There's nothing inherently wrong with a fraction of the population working to support the rest of it, assuming everyone's on board. It might not be an easy change, but I imagine something like that will happen eventually.


>assuming everyone's on board

I'm on board with surfing all day while you maintain the robots to make my food. Is that cool with you?


Not GP, but yeah, it's cool with me. I could use a break from coding my own projects, though not too long of one. :).


Sure. And I'll have a bigger house, better food and more luxuries in compensation for volunteering my time to help support humanity.


Hence, IanDrake's point 2) in the comment upthread.


>Yes, we can keep inventing bullshit jobs

What is an example of a non-government bullshit job?


Most of the advertising industry, and resources they draw from other industries, e.g. print (think printing leaflets).


Let's elect dictator TeMPOraL to tell society what he/she deems worthy of human output. Whatcouldpossiblygowrong.


Why thank you, but sadly, I must refuse. I am yet too stupid to hold such a position responsibly. :).


You should still accept and then offload your decisions onto someone else you think is responsible enough.


Greeter?


3) Because it's a real possibility. What makes you think the otherwise very intelligent people are wrong? Maybe we should look to the stupid and to philosophers who don't understand computers for answers?


>What makes you think the otherwise very intelligent people are wrong?

Mostly history. Specifically the history of such predictions made by smart people of their own time that never came to pass.

I assure you, this has all been predicted before. AI taking over the earth is always a decade away.

There are people who wear tin-foil hats and there are those that sell tin-foil hats.

The former think they're saving the world, while the latter are just making money off of the former. It's the same business model as religion and global warming.


There are predictions and there are predictions. There will always be those predicting that strong AI is a decade away, right until the point when the last group is actually right about it.

But there's a different kind of "prediction", which I think is the type those smart people subscribe to - that we are on the path to an AI, that there's nothing that would make it impossible to achieve as the technology progresses (a reminder: we have a working example that intelligence can be built - our brains). Of course there are a lot of obstacles on the way. Personally, I think the most probable one is that our civilization ends before we reach the necessary level - because of war, or all the economic and political shenanigans we see every day.


So their predictions are wrong, but yours are correct? You can't argue against someone else by saying all predictions are unreliable, and then insert your own predictions instead.

And nice, you managed to cram global warming denial in there too.


>So their predictions are wrong, but yours are correct?

Wrong/correct? We're talking about predictions, so let's talk about probability.

In the course of 20 years is it more probable that life will look more like it does today or that AI will make us all jobless?

That is why I'm "more likely" right and Elon is "more likely" wrong.

I'll put money down on it. If, in 10 years, it only takes 1 employee per shift to run an entire McDonalds, I'll give you $1,000.

And yes, I don't believe your single factor over-fit models prove anything about global warming. I'll put money down on that too.


> AI taking over the earth is always a decade away.

I can't say I've seen that said. Any sources? Most predictions I've seen for AI surpassing human intelligence are for the mid-21st century. The reasoning isn't rocket science. I wrote an essay about it for some school exam 34 years ago. If you discount the religious stuff, then brains are biological computers of roughly fixed ability, while regular computers are less able but get better each year. If so, they'll overtake, and you can kind of graph it and make a rough estimate of when.
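
The back-of-the-envelope version of that graph looks something like the sketch below. Every number in it is an assumption (brain capacity estimates alone span several orders of magnitude), so treat the output as an illustration of the reasoning, not a forecast:

    import math

    # All figures are assumptions for illustration only.
    brain_ops_per_sec = 1e16       # one commonly assumed ballpark for a human brain
    machine_ops_per_sec = 1e13     # assumed figure for high-end hardware today
    doubling_period_years = 2.0    # assumed Moore's-law-style doubling

    doublings = math.log2(brain_ops_per_sec / machine_ops_per_sec)
    crossover = 2016 + doublings * doubling_period_years
    print(round(crossover))        # ~2036 under these assumptions; shift any input by
                                   # 10x and the answer moves by years, not decades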


I was with you until you plunged straight into global-warming denial.


This is one of those cases where you can't really extrapolate from history. At least linearly extrapolate.


If it's not a snark question, and you are truly interested in the dangers intelligent people see in AI, then I recommend the book "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. I had the same question you have here, and that book showed me well-reasoned arguments from the AI worry camp.


3) Press/Media

It's low-hanging fruit, and the public eats it up. Also, people can be very intelligent but not have done the research in the field they are commenting on. Elon Musk and Stephen Hawking aren't AI PhDs.


I'll grant you Elon and Hawking, especially because the former is a "new guy", but otherwise intelligent people were talking about those issues for decades, and until very, very recently most people have either never heard of the issue, or thought those talking about it are loonies. Mass media is late to the party.


This all reminds me of the Y2K debacle. The world would end. Everything would stop. A common example given in the media (by "IT-professionals") was that all elevators would stop, I was always like: "WTF would an elevator check what date it is?"


Yeah. I remember Y2K, and since I was already taking baby steps in programming back then, I was also perplexed as to why it would matter.

16 years later now, I see that about the only thing that could do some real damage was software in factories and stock exchanges breaking and requiring downtime to fix. It would be that downtime that could do actual harm. Because I've now seen some code that runs in factories and... man, it is totally and utterly fucked up. I'm happy nothing bad happened back in 2000. I'm also surprised that some manufacturing plants are running at all.
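
For anyone who didn't live through it, the core bug class was mundane: years stored as two digits. A hypothetical maintenance check sketches how that turns into exactly the kind of downtime mentioned above:

    # Hypothetical factory maintenance check using two-digit years (the Y2K bug class).
    last_service_year = 99            # stored as two digits, meant as 1999
    current_year = 0                  # the rollover: 2000 stored as 00

    years_since_service = current_year - last_service_year
    print(years_since_service)        # -99 instead of 1

    if years_since_service < 0:
        # A defensive check like this halts the line until someone patches the code.
        raise SystemExit("invalid maintenance record - stopping the line")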


Yes, after seeing my fair share of code in different production systems, vehicles, med tech, etc., I'm amazed the world doesn't stop every day.


My explanation is that it's a power fantasy of people who believe they possess superior intelligence and like to dream up a scenario where super-intelligence becomes a super-power.


It's just another apocalyptic narrative. They are apparently fun; you get yards of cognitive dissonance out of it.


Imagination.


I find the failures to be sufficiently spectacular that they're entertaining.

But on a more serious note, as some of the higher-order techniques of applied mathematics become more and more popular and find expression in tools ordinary people can use, that's liable to change a lot of things. If we get better at characterizing systems and making model-controllers for them, this might just matter a lot.


Your link is about banks pretending the financial markets were OK before the 2008 crash, which is really a different thing. Probably some people assign magical properties to AI, but a lot of average people use things like Siri and Google Translate and have seen self-driving cars on video, and have a realistic idea of what they can and can't do.


People always overestimate technological progress in the short term, and underestimate it in the long term. (Somebody else said that, don't remember who.)

On what time-frame are you making those claims?


Applies to human behaviour as well - we overestimate how much we will achieve or change in the short run, and underestimate the same in the long run.


Relevant: Clarke's Third Law

"Any sufficiently advanced technology is indistinguishable from magic."


Yeah, we might as well quit calling it AI altogether. It's a terrible name. It really encourages pendulum swinging.


Actually, it's not as inflated as you might imagine. For example, extend DeepMind's work on video games to a fully simulated 3D environment of a factory. Now, have DeepMind learn to 'play the factory' to produce goods (starting off with the current process as the baseline). Suddenly, the potential gets pretty exciting.

Assuming, of course, the AI doesn't get control of the factory by accident and do something terrible in the name of improving productivity.


Your comment is already forecasting, overvaluing, and then justifying itself using guesses based on a technology you likely don't fully understand.


Reinforcement learning isn't magic pixie dust that suddenly solves AI.

One thing we often do is assume, for some compact subproblem, that AI will offer massive optimizations relative to our current human baseline. However, it often turns out that engineers have spent enough time to do a good enough job that the relative improvement is either nonexistent or small when focused on optimizing subcomponents of a system. With your video game reference, there are some games where Q-learning does better than humans and others where it does worse. For the factory floor, I know data scientists and industrial engineers run optimizations to increase productivity and reduce costs. With their constraints it becomes hard or impossible for any algorithm to find a much more optimal solution, especially when the problem is convex and the human-generated solution is provably globally optimal.
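
To make the convex case concrete, here's a toy production-mix example (all numbers invented) of the kind of linear program industrial engineers already solve. Because the problem is convex, the answer is a global optimum, so there's no headroom left for a learning algorithm to "discover" within this model:

    from scipy.optimize import linprog

    # Toy factory model: two products, limits on machine hours and material.
    c = [-30, -45]      # profit per unit, negated because linprog minimizes
    A_ub = [[2, 4],     # machine hours needed per unit of each product
            [3, 2]]     # material needed per unit of each product
    b_ub = [100, 90]    # available machine hours, available material
    bounds = [(0, None), (0, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x)        # optimal production quantities
    print(-res.fun)     # maximum profit; provably global for a linear program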


This article seems very confused. Most importantly, it doesn't distinguish between what AI can do right now, and what the theoretical limit of AI is, in 50 or 100 or 500 years. Everyone agrees that AI today, and in the near future, doesn't even vaguely resemble a "person". That doesn't imply that much more powerful, "person-like" AI will forever be impossible. Airplanes weren't a practical means of travel in 1910, but by 1960 it was a different story, and some people in 1910 had already realized that plane travel was coming.

Secondly, it does a lot of handwaving about "religion". I am an atheist, and think most religious beliefs are irrational. However, that doesn't mean that every belief that "looks like" religion (in some vague, poorly-defined way) is irrational. The Aztec religion was false, but Hernan Cortes and his army of men with guns was very real. It would have been stupid for Aztec atheists to ignore Cortes because "that sounded like religion". The right question to ask is "is this claim supported by the evidence?", not "how much like religion does this claim sound?".


> Airplanes weren't a practical means of travel in 1910, but by 1960 it was a different story, and some people in 1910 had already realized that plane travel was coming.

Indeed, arguing about the potential threats of air-travel in 1910 (let alone in 1810) would have been silly. The point isn't whether or not AI is possible (or could pose a serious threat), but whether or not discussing it as a threat given our current, near-zero, understanding of it is productive. Jaron Lanier argues that not only is it not productive, it distracts from more pressing challenges related to machine learning.

> I am an atheist, and think most religious beliefs are irrational. However, that doesn't mean that every belief that "looks like" religion (in some vague, poorly-defined way) is irrational.

You think now that religion is irrational, but when most religions were established there was little reason to believe they were. As to the "vague, poorly-defined way" all I can say is that religion has many definitions[1].

The famous anthropologist, Clifford Geertz defined it as a "system of symbols which acts to establish powerful, pervasive, and long-lasting moods and motivations in men by formulating conceptions of a general order of existence and clothing these conceptions with such an aura of factuality that the moods and motivations seem uniquely realistic."

Another famous anthropologist said (again, quoting from Wikipedia) that narrowing the definition to mean the belief in a supreme deity or judgment after death or idolatry and so on, would exclude many peoples from the category of religious, and thus "has the fault of identifying religion rather with particular developments than with the deeper motive which underlies them".

It is therefore common practice among social researchers to define religion based more on its motivation rather than specific content. If you believe in a super-human being and an afterlife and not for scientific reasons (and currently AI is not science, let alone dangerous AI), that may certainly be a good candidate for a religious or quasi-religious belief.

[1]: https://en.wikipedia.org/wiki/Religion


>The point isn't whether or not AI is possible (or could pose a serious threat), but whether or not discussing it as a threat given our current, near-zero, understanding of it is productive.

It definitely is productive. We can either slow research on AI, or we can research AI safety now. Or both. There's no reason we have to just accept our fate and do nothing, or just hope everything works out when the time comes.


That is not what we're doing. We have no idea what AI is, we have no idea about the relationship our current research has to real AI (because machine learning is not even within sight of true AI), and so we're not even sure that anything we're doing can be classified as "AI research" that we could slow down. How can we research the safety of something we know nothing about?

Currently, much of the discussion on the subject is done in various fringe forums, where they imagine AI to be a god and then discuss the safety of creating a god. You can even find reasoning that goes like this: "we don't know how capable AI can be, but it could be a god with a non-zero probability, and the danger has a negative-infinity utility, so you have a negative-infinity expected value, which means it must be stopped now". Now, this sounds like a joke to us (we know that every argument with the words "non-zero probability" and infinite utility can conclude just about anything), but the truth is that such foolishness is not far from the best we can do given how little we know of the subject.


>because machine learning is not even within sight of true AI

Well I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

>How can we research the safety of something we know nothing about?

Even if AI uses totally unknown algorithms, that doesn't mean we can't do anything about it. The question of how to control AI is relatively agnostic to how the AI actually works. We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.


> I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

Which experts say that? Besides, it's not a question of time. Even if strong AI is achieved next year, we are still at a point when we know absolutely nothing about it, or at least nothing that is relevant for an informed conversation about the nature of its threat or how to best avoid it. So we're still not within sight today even if we invent it next month (I am not saying this holds generally, only that as of January 2016 we have no tools to have an informed discussion about AI's threats that will have any value in preventing them).

> We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.

Oh, absolutely! I completely agree that we must be talking about the dangers of reinforcement learning and utility functions. We are already seeing negative examples of self-reinforcing bias when it comes to women and minorities (and positive bias towards hegemonic groups), which is indeed a terrible danger. Yet this is already happening and people still don't talk much about it. I don't see why we should instead talk about a time-travelling strong AI convincing a person to let it out of its box and then using brain waves to obliterate humanity.

We don't, however, know what role -- if any -- reinforcement learning and utility functions play in a strong-AI. I worked with neural networks almost twenty years ago, and they haven't changed much since; they still don't work at all like our brain, and we still know next to nothing about how a worm's brain works, let alone ours.


> There's no reason we have to just accept our fate and do nothing, or just hope everything works out when the time comes.

And there's the religion.


And there's the useless comment. Please explain how that sentence has anything to do with religion. Out of context, it doesn't even look like it's about AI. The same sentence could appear in a discussion about climate change, or nuclear proliferation. But certainly not a religious discussion.

And after you explain that, explain how having something vaguely in common with religion automatically means it's wrong.


> And after you explain that, explain how having something vaguely in common with religion automatically means it's wrong.

No one is saying it's wrong, only that the discussion isn't scientific.

It is not only unscientific and quasi-religious; there are strong psychological forces at play that muddy the waters further. There are so many potentially catastrophic threats that the addition of "intelligence" to any of them seems totally superfluous. Numbers are so much more dangerous than intelligence: the Nazis were more dangerous than Einstein; a billion zombies obliterate humanity; a trillion superbugs are not much more dangerous if they are intelligent or even super-intelligent; we intelligent humans are very successful for a mammal, but we're far from being the most successful (by any measure) species on Earth.

This fixation on intelligence seems very much like a power fantasy of intelligent people who really want to believe that super-intelligence implies super-power. Maybe it does, but there are things more powerful -- and more dangerous -- than intelligence. This power fantasy also helps cast a strong sense of irrational bias over the discussion. This power fantasy is palpable and easily observed when you read internet forums discussing the dangers of AI. This strong psychological bias tends to distract us from less-intelligent, though possibly more dangerous, threats. It is perhaps ironic, yet very predictable, that the people currently discussing the subject with the greatest fervor are the least qualified to do so objectively. It is not much different from poor Christians discussing how the meek are the ones who shall inherit the earth. It is no coincidence that people believe that in the future, power will be in the hands of forces resembling them; those of us who have studied the history of religions can therefore easily identify the same phenomenon in the AI-scare.


> explain how having something vaguely in common with religion automatically means it's wrong.

It doesn't. Something having religious undertones and something being incorrect are completely orthogonal, a priori.

But the fact remains that discussion of supreme AI has distinctly religious undertones. Such discussions progress in, what I believe an interested observer from another culture would deduce to be, directions distinctly influenced by the West's history of monotheism, and particularly Christianity.


Religion itself does a lot of hand waving, and people tend to talk about the "existential threat" of AI with a lot of hand waving. It's an easy comparison to make. Also, a claim, by itself, can't be a religion. It would need people to attach to it with a certain fervor.

>it doesn't distinguish between what AI can do right now, and what the theoretical limit of AI is, in 50 or 100 or 500 years.

AI, by itself, cannot interact with the physical world, past bit-flipping registers on a cpu (a negligible action). For AI to be a threat, in my opinion, it would need to be coupled with some form of physical manifestation that could, in some fashion, not be under the control of a human. Until that happens, we can just unplug the machine. And we are already seeing the intense difficulty of robotics, making good robots, and interfacing with the real world. Historically, everything with AI is much, much harder than originally thought.


There's a pretty cool story about underestimating bit-flipping :).

http://lesswrong.com/lw/qk/that_alien_message/

Besides, just look at us, humans. Who would ever build an AI and then keep it away from any means of communication? That wouldn't be a very useful AI. If one wants that, one can pick a rock and imagine it has a mind. Or talk to a cat.


That was a fun read.


> we can just unplug the machine

A look at the history of managing downsides of industrial technology tells us that we can't or won't "unplug" it if it's still profitable or an integral part of a vital system. Carbon dioxide changing the climate and acidifying the oceans? Unplug the coal-fired power stations! Overuse of antibiotics, especially in animals, leading to deaths in humans? How about we don't do that!

The first "hostile" AI will only be selectively hostile. Perhaps something like automated "redlining" racial discrimination. Some sort of system where the benefits accrue to investors while the disadvantages are spread around. AI "pollution".


>AI, by itself, cannot interact with the physical world, past bit-flipping registers on a cpu (a negligible action). For AI to be a threat, in my opinion, it would need to be coupled with some form of physical manifestation that could, in some fashion, not be under the control of a human.

Now that's just silly.

An AI does not need a body for any reason. What it needs is a voice; software is pretty good at simulating that, and we can simulate images and video pretty well too. If an AI can become a master in the art of persuasion (say, coupled with a great hacking ability), it could easily convince people to act in its name (much like old books do now) and pay off others to do its bidding.


Indeed. On the Internet, nobody knows that the person ordering and paying for your services is in fact a superintelligent AI :).


I wouldn't underestimate the effects that just bit-flipping registers on a cpu can have, even today. In the future, with increased automation, the effects of controlling those bits will be even more severe.


On the other hand, look where airplanes are now. It turns out that even though it's theoretically possible to have supersonic human/cargo transport, we gave up on it because it turned out not to be economical.

Hell, we were supposed to have suborbital launches, anywhere in the world in 20 minutes.


Well, at a logical level, it's one of the biggest existential threats, if 1. it can be done and 2. humans are intelligent enough to pull it off. There is a non-zero chance of AI (and the related existential threat), as there is a non-zero chance of a meteorite coming down and wiping all of us off the Earth. But my guess is that most of us would say "YES" to a meteorite deflection project, while "Meh" to any attempt to better understand the AI problem.


Compared to nuclear war, superbugs, volcanoes, meteorites or catastrophic climate change, naughty-AI is a very distant threat. It's like worrying the geneticists are going to breed Ridley Scott's alien sometime in the relevant future.

The world is actually pretty messed up with problems right now; some of which we could actually do something meaningful about. Continuing the naughty-AI bunfight isn't one of them.


I think it's far more of a threat than any of those things. Nuclear war isn't inevitable, and wouldn't destroy the entire world, just the countries at war. Even a ridiculous world wide nuclear war would leave tons of survivors. Volcanoes and meteorites are so unlikely they aren't even worth mentioning. Superbugs would have to be intentionally engineered by a malicious group with a lot of resources, and so are very unlikely.

AI is inevitable. It's very likely to happen in our lifetime. And it's very likely to kill us all. And explaining every reason why I believe those things could fill an entire book, so I seriously recommend reading Superintelligence by Nick Bostrom.


Nick sounds like a crazy guy, but then you watch some of his presentations and you see he is not that crazy. What he says makes sense, and I think everybody agrees the threat is non-zero. The next step is understanding that if you look over the past 100,000 years, nothing posed an existential threat, and thus we expect an incredibly low chance of that happening in the next 100 years. The whole point relies on believing that AI is a bigger threat in the next 100 years than any natural one. It's not that far-fetched.


Adding to this, imagine you were part of an ancient alien species checking in on Earth every so often. Around 4.5 billion years ago, Earth was formed. 500 million years later, the first self-replicators appeared. Another half billion years (now 3.5 billion years ago) for things to settle down and create the most recent ancestor from which all life descends. Over the next 3 billion years nothing much happens, until the Phanerozoic Eon is reached around 500 million years ago, when more "interesting", larger life starts to develop. But again for another half a billion years nothing that much happens other than evolution continuing to play its game of optimizing for gene replication in more and more interesting ways. Around one hundred thousand years ago, an interesting branch of a species has been developing in ways that indicate the rise of general intelligence (their few pounds of squishy matter in their heads are pretty neat: http://www.yudkowsky.net/singularity/power/). But they don't seem to be doing all that much until about 10-20 thousand years ago. And even then, less than 10 thousand years ago, they built some interesting structures like the pyramids, but their population is still tiny and their global influence is limited. And it wasn't until about 500 years ago that things really started picking up in population growth and world influence (some mark that as the discovery of a formal scientific process to build and record more and more truths about the world that are actually true). Just around 100 years ago, this species developed air travel. About 60 years after that, they went beyond their planet to their moon. This species at present could wipe out almost all life from the planet (if it wanted, which it doesn't) in multiple ways; to really get every last bit of life exterminated would take some doing, but in a large enough time span it doesn't seem impossible.

An alien species would have to be observing this planet at very frequent intervals to be able to catch any of these recent developments -- because if millions of years ago they determined nothing interesting was happening, and decided to only look in every five hundred thousand years or so, it's very likely the last time they looked our species wasn't even around.

Now you take the development of an AI that is roughly human level intelligent with a given amount of resources. If you give it the same amount of resources again, it can "breed", copy itself perfectly onto them, and now we have a "population" of two AIs whose combined intelligence should roughly match that of two twin humans. Maybe it doesn't even copy perfectly, maybe it switches some things about to see if it can create something smarter without modifying its own code directly. In any case its breeding is only restricted by its ability to breed and desire to breed in the first place and the compute and power resources it needs to run a new copy, and these can always get lower than initially. What does the long-term look like? Even if the initial resources for the first one are astronomical (e.g. all computing power in use on the planet as of 2016), I would still bet you that left to breed without any other restriction it would take the AI family far less time than 100 thousand years to reach 7 billion instances. So you're looking at many, many more AIs than non-AIs, plus each with human level intelligence that isn't distributed normally (assuming soft eugenics doesn't become widespread in non-AI populations) but is mainly all the same with perhaps increases here and there (assuming a soft takeoff), each coordinating with the same goals in mind. This situation without further specification can be either extremely good for the non-AIs or extremely bad for the non-AIs. The question of whether this will happen (and whether it happens in that exact form, personally I think hard takeoff from a single AI is likely) in the next 100 years or the next hundred thousand years (perhaps a near-extinction-level event occurs forcing our species to basically start over but with an even harder battle to survive since many low-hanging-fruit resources have been depleted) might help you determine whether to worry now about doing actions that make the extremely-good more likely than the extremely-bad, but if one thinks human-level AI can't happen at all in any amount of time, that there's something special about our squishy brains or something inherently limiting about our so-far-general intelligence such that we can't solve the engineering problem of creating another intelligence directly and must always go through breeding, the argument needs to take place at a different level.
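
For the "far less time than 100 thousand years" claim, the arithmetic is short. Assume (arbitrarily) a fixed copy-doubling rate starting from a single instance:

    import math

    # Doublings needed to go from 1 instance to ~7 billion.
    doublings = math.log2(7e9)
    print(doublings)              # ~32.7

    # At one (assumed) doubling per year that's roughly 33 years; even at one
    # doubling per decade it's a few centuries, nowhere near 100,000 years.
    print(doublings * 10)         # ~327 years at a doubling per decade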

The paperclip maximizer argument (not sure if Bostrom repeats it in his book) is yet another level of argument, but it's meant for those who already buy the premise of human-level AI and self-improving AI but who also think increased intelligence will naturally converge to a human-loving benevolent AI that sees how self-evidently obvious it is to love all life or whatever, and that we don't have to worry about all the messy details around Friendliness because they'll just fall out of the entity naturally. No one is actually worried about Clippy tiling the solar system with himself; its probability is epsilon. But a lot of people just see this one argument out of context, infer the arguer believes it to be a relevant possibility to worry about, and write off the whole group as crazy...


> This species at present could wipe out almost all life from the planet (if it wanted, which it doesn't) in multiple ways

That's interesting. How could it? Please present a plausible scenario (more plausible than "The US president decides to cover the globe with nuclear explosions, and everyone with any degree of influence over the process just lets him do it").


My parenthetical is important. The "it doesn't [want to]" is precisely why many ideas won't work, because many adversaries within the species (the rest of us) will work against anyone trying, and getting enough people with the know-how and desire to try will be hard. In fact the hardest part of the fragment you pulled out is developing a strategy that will indeed end almost all life without first killing all humans. A lot of strategies are likely relatively long-term (and implausible to actively work on because of adversaries; however, some believe human activity is already an ongoing inadvertent mass-extinction event for other life and maybe ourselves eventually). However the thing in my mind when I wrote that was intentional asteroid impact. We've already demonstrated we can land on the things, and NASA's been busy finalizing the details of a redirection mission, so redirecting a sufficient number of smaller ones or just one really big one (but why stop at one?) ought to do the trick for most plant and animal life.


Cobalt salted hydrogen bombs are one fairly plausible route - how many and how large they'd need to be is another question.

https://en.wikipedia.org/wiki/Cobalt_bomb

[Of course, they don't exist at the moment - but they wouldn't be that difficult to make as one advantage of a Doomsday bomb is that you don't need to deliver it so it can be made as large as you want through staging].


Interesting. So the idea is that a small group of people could unilaterally build and detonate one of these without anyone else being able to stop them?


In David Brin's novel Earth, World War 3 is basically the world vs Switzerland - with the Swiss threatening to detonate large cobalt bombs buried under the Alps.


>And it's very likely to kill us all.

This is absolutely absurd. Could you elaborate on that instead of just pointing to a book?


The tl;dr is that we will eventually have AIs which are smarter than humans. Much smarter than humans, to the point that we are to them what chimpanzees are to us. They would have godlike powers and abilities. And this could happen very rapidly, as the first smart AIs design even better AIs and better computers, which then design even better AIs, and so on.

The second point is that they would have no reason to keep us around. They would do whatever we programmed them to do, to the point of absurdity. An AI programmed to collect as many paperclips as possible, would convert the mass of the solar system into paperclips. An AI programmed to solve the Riemann hypothesis would build the biggest computer possible to do more calculations. An AI programmed to value self preservation would try to destroy anything that was even a tiny percent chance of being a threat to it. It would try to preserve as much energy and mass as possible to last through the heat death of the universe. Etc.

The only AIs which actually do things we want them to do would have to be explicitly programmed to want to do that. And we have no idea how to do that. We can't just hope that arbitrary AIs will happen to value humans instead of something else. We need to figure out how to control them, which requires solving something called "the control problem".
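
A toy way to see the "does what we programmed, to the point of absurdity" point: write down a utility function that only mentions paperclips (everything below is a made-up caricature, of course) and look at what the argmax over plans turns out to be:

    # Hypothetical plans and outcomes; the numbers are pure caricature.
    plans = {
        "run a small paperclip factory":    {"paperclips": 1e6,  "humans": 7e9},
        "convert all scrap steel":          {"paperclips": 1e12, "humans": 7e9},
        "convert the planet to paperclips": {"paperclips": 1e20, "humans": 0.0},
    }

    def utility(outcome):
        # The programmer wrote exactly this, and nothing about humans.
        return outcome["paperclips"]

    best = max(plans, key=lambda p: utility(plans[p]))
    print(best)   # "convert the planet to paperclips" -- nothing in the objective
                  # penalizes the third plan, so the optimizer prefers it

The control problem is, roughly, how to write that objective (and keep it stable under self-improvement) so the argmax never looks like the third plan.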


>The tl;dr is that we will eventually have AIs which are smarter than humans. Much smarter than humans, to the point that we are to them what chimpanzees are to us.
>The second point is that they would have no reason to keep us around.

Questions:

(1) What reason do we have to keep chimpanzees around? Why don't we just kill them all? They can't even do calculus.

(2) Whatever the answer for question 1, wouldn't an AI consider that same scenario regarding humans? Assuming an AI is truly smarter than a human?

(3) If AI gets to the point where they are much smarter than humans, wouldn't it follow that there would be AI philosophers? Or activists? Or pacifists? Would there be such a thing as an AI civilization? Or is your scenario just a few AI overlords that somehow slipped through the cracks of regulation by humans?


1. Because we have empathy and because we have no compelling economic reason to (economics always overruling silly things like empathy).

2. Empathy, which is a part of our emotions, is an incredibly complex thing we can't even begin to formalize yet - it's extremely unlikely that a random AI we'd build would happen to have the exact same emotional makeup as we do.

3. "Smart" has a particular meaning here when used in discussing AI. It implies raw intelligence, not morals. A smart AI may be incredibly good at evaluating information, inferring new facts and doing complex tasks. None of it means it will necessarily have to ask itself philosophical questions that we do - those actually mostly come from our emotions, from our desire to understand our place in the universe.

The core of the AI-threat issue is that for an AI to not be dangerous it would have to value us and the same things we value. We can't agree about our exact values even among ourselves, and we're nowhere near even trying to formalize it on a level of detail that would be useful for programming.


That's a good point. How is AI worse than a smart, amoral human?


It may have very long time frames to work in, and may be faster or able to evade humans' attempts to detect nefarious intent by working in ways very different from how we work.

Read a book recently that contained an interesting AI attack. Basically, it set a lot of small objects in motion, with complex orbits all computed to result in a large impact at some point in the future.

We would not notice something done this way, by analogy.


> The second point is that they would have no reason to keep us around. They would do whatever we programmed them to do, to the point of absurdity.

This is where this line of reasoning completely falls apart for me. Any complex system is going to do unexpected things that weren't deliberately designed for. My laptop computer doesn't do "whatever it was programmed to, to the point of absurdity." Why would you expect a program complex enough to simulate the human mind to act in such a predictable way, when none of the much simpler programs we already have act that way?


Nah, that's the key. Your laptop is not doing what it was designed to do; it is doing things exactly the way it was built. That's the 101 of programming - programs follow what you wrote, not what you meant.

> Why would you expect a program complex enough to simulate the human mind to act in such a predictable way, when none of the much simpler programs we already have act that way?

Exactly. We suck at making complex programs do what we want them to. This is the source of worry.


Your laptop computer does do what it was programmed to do, to the point of absurdity. It will keep executing a computer bug over and over. Even though that isn't what you intended, it still executes the code anyway. An AI will continue to execute, even when it stops doing what we intended it to do. Computers don't care about our original intentions, they just execute code.

A program "complex enough to simulate the human mind" would probably have even more bugs that result in unintentional behavior. Not fewer.


I hate to be the logical positivist here (I really, really do), but I'm not sure what your first two sentences mean. What would "smarter than humans" mean and how exactly is it going to happen?


My take on how it's going to happen is that it already has.

Most technologies we build and use surpass humans in some capacity. Thinking is no exception, and we already have machines far better than humans at some thinking tasks, such as arithmetic.

We are now working on systems surpassing humans in various recognition and prediction tasks.

I suspect the catalog of tasks far better done by machines will continue to expand to finally encompass all tasks required to form a general intelligence. Only, the trick of assembling them into a complete system could very well prove difficult.

The result, though, is that there will never exist such a thing as human-level artificial intelligence because, if and when a system of general intelligence is created, all its components will far surpass human-equivalent capabilities.

So "smarter" simply means far superior to any human in any task requiring thinking, in the same sense that computers are far superior to any human at arithmetic.

Even some odd genius savant of today cannot compete with computers at number crunching. More to the point, it's not even interesting to compare; the difference in potential performance is just at a different scale altogether.


I think a shorter tl;dr is simply that the ways a human-level or beyond AI can be programmed incorrectly (it eventually kills us all or worse), even trying our best to do it right, outnumber the ways it can be programmed correctly. If you don't believe this, I'm not sure how to convince you, other than to poke holes in your ideas about how a benevolent AI could be programmed correctly so you see that such an idea is actually a terrible one or at least insufficient. There are many level-of-detail attacks I could use, and other attacks around the lack of a proof that the AI's goals and preferences will remain exactly the same, let alone within a window of benevolence, over time.


How about positive feedback loops? Could we not employ AI technology as an aid in designing, and implementing, benevolent AI?


That strikes me as sort of circular in reasoning. To create benevolent AI using AI, the latter would already have to know what 'benevolent' means. In which case we might as well just create benevolent AI in the first place.

In fact, by relying on a secondary AI it's quite possible we're even more likely to accidentally end up with AI that isn't benevolent.


Ok, bad choice of words. How about augmented reasoning capacity enabled by the technologies involved in creating AI?


We could, maybe, for various degrees of help and with various dangers depending on how much help (look up Oracle AI), and I would easily wager using relevant tools (not just "AI tech") would make the positive outcome more likely than not using them, but there are still many ways it can go wrong, and while past and present weak-AI technology might be useful in creating a sentient machine I'm doubtful much would be that helpful in solving the Friendliness (or benevolence) problem directly.

For instance the other comment brings up the uncertainty around giving an Oracle AI the task of finding what benevolence means to humans, to then plug into the final AI hoping it will be Friendly. You don't know how to precisely define 'benevolent' so let the Oracle AI do it (or help you do it), though you think you can at least program the Oracle AI to do a good job in finding answers to vague, complex, fragile problems. (Why? Past success in AI tech?) Is what the Oracle AI outputs actually good? How do you know? What did the Oracle AI have to do to reach its answer, or gain a last bit of certainty in it? (e.g. Did it reason it needed more data and so built or hastened the arrival of brain-scanning technology, started scanning brains from volunteers, the very recently deceased, cryonics patients, or just without its creators knowledge through black market deals, and at some point did it simulate trillions of human minds under all sorts of gruesome scenarios to test responses? Do you even morally care about sentience on silicon? Would you care more if you were an upload?) You've got a lot of problems just building an Oracle AI (or anything powerful enough that can help you solve the hardest problems). Even supposing the output it gives is good and the costs (practical and ethical) were acceptable, will a seed AI allowed to recursively self-improve be guaranteed to preserve this Friendliness property in all subsequent improvements?

With the idea of using "AI tech", you've basically drawn a line in the outcome space: one side says something like "let's just program the thing" and the other says "hold on, let's just program the thing but also use all these other relevant computer tools we've developed over the decades to help program the thing". It's the start of an approach, but does it really prune all that much? What do these relevant tools actually buy you in safety, if a safety-guarantee tool is not among them?

Another line, which may be what you were heading towards, would have one side saying "we might not be able to solve all the necessary problems with our current intelligence, so let's also spend time looking for ways to augment human intelligence -- brain-computer interfaces, brain emulations, tweaks to our genetic code, that sort of thing" and the other side saying "no, we're capable, let's try our best right now". Using ems based on friendly-ish human minds that become better than human could very well be a better approach to solving the superintelligence problems correctly and efficiently than doing so at raw human levels; on the other hand it could turn very bad if one of those ems instead just solves the problem of recursively improving themselves, and their goal and value system is insufficiently friendly. Or many other failure modes. (Though an interesting "most likely scenario" given certain assumptions, and only looking at a span of 2 years, which by itself doesn't look too terrible, is Robin Hanson's Age of Em idea.) Just like approaches involving molecular nanotech to help us, there are reasons why it could be seen to help the best outcome along, but it also opens the door to a lot of other risks that aren't necessarily there if you took the other side. So again, how much is really pruned?

What I take as your general idea, "maybe we need and should employ additional help and knowledge that we've yet to gain to even really start", isn't bad, though. It's much better than "Benevolence is easy: a King and his subjects all know he is a great ruler when everyone is smiling frequently", it's just lacking the detail to constitute a plan. Throwing in the positive feedback loop idea (which might even manifest as impressive accelerating returns, who knows) seems to me only relevant to figuring out time scales; I don't think it says anything about the fraction of good vs. bad outcomes, apart from whether short or long time scales matter.


Since I most likely will have no impact on any plans, I'm only hoping for some optimistic predictions (a plausible bright future) and what mechanics would be involved. So no, I'm not expecting an airtight plan, just a hint at how it might look.

My general take is that, AI or not, the world is continuing to progress towards a future where extinction level capabilities are not only within reach, but also within reach for more and more people. It is, to me, not a question of if, but when, such capabilities will be in the hands of an individual, or organization, that either by malice, or incompetence, would be a serious problem.

Last year a pilot intentionally crashed a plane in an act of suicide, taking 150 passengers with him. Consider a like-minded person capable of engineering a virus.

So my optimistic take on the future is that while such capabilities are developed, one would hope that capabilities to mitigate those threats are developed in parallel, so that when the problems arise, the combined capabilities of all benign actors are sufficient to evade extinction.

In the case of AI it just seems implausible that any debate could manage to stop the technological race. Just look at the climate change debate: it has been going on for decades and is only now gaining political traction. AI isn't even on the radar yet...

So the plausible futures to me are either that AI will wipe us out, or not. In the event that it does not, I think one plausible future is that a potentially malign AI will operate in an environment where other AIs, or near-AI systems, will also perceive it as a threat, hopefully aggregating into the capacity to suppress the dangers in a benign way.

Also, I believe any path from here to AI will consist of a complex system of AI-like technology interwoven with human systems, resulting in aggregates with considerable capabilities of their own, not to be discounted.

In some sense I believe we can extrapolate how things will play out just by looking at some systems that might resemble AI today. Consider corporations: these are actors created from a complex system of legal, economic, social and other constructs. While ostensibly owned and operated by humans, the reality is that there is little any individual human can do to influence them in view of all the other forces directing their actions, making them, in a sense, already-existing AIs.

Creating actual AI, as a useful thing, will probably involve similar complexity and influences, so in some sense similar patterns will probably play out. Just as corporations can act to influence the world in ways malign to humans, so would the AI systems. And just as corporations do, the AI would find itself acting in an environment doing its best to contain the malice.


Ridley Scott's alien is inevitable. Human history and scientific progress have been driven by conflict. Whenever a better weapon is within reach, we reach out for it. Modern genetics is advancing very fast. We'll definitely have Ridley Scott's alien in our lifetime. It's very likely to kill us all.


You're so clever. Alien, el oh el.

I said that it would take a book to explain. You are parodying an extremely shortened summary of a complicated subject.

The difference between AI and "alien" is that no one is trying to make the alien. There aren't research labs and billions of dollars in funding. There is no incentive at all. And the problem is far more difficult. AI is just algorithms.

Second, aliens aren't actually that powerful. Superintelligent AI would be godlike in power to us. I'd much rather have aliens.


Similarly my own views are the result of a wealth of material not summarisable in a HN comment. I intend to read the book, although I'm sceptical that AI will be established as an issue of political urgency.

The purpose of my facetious approach is to make clear that statements such as, "Superintelligent AI would be godlike in power to us," are more likely to have originated by analogy to science fiction than from a grounded understanding of what is possible or realistic in the relevant future.

Being "just algorithms" doesn't really mean very much. It's like saying breeding the Alien is "just genetics". It's something we imagine might be possible, but is far beyond our current capabilities.


All speculation about the future will sound like science fiction. Someone in 1800 making accurate predictions about 1900 would have sounded crazy. Someone predicting the atomic bomb in 1900 would have sounded crazy. Or airplanes, or television, or the internet, etc. Just because something sounds absurd doesn't mean it's not possible.

We live in a world where the only intelligences are humans. The idea of superintelligent minds seems absurd to us, because it isn't something we've ever encountered in reality before. But there is absolutely no law of nature that says it's impossible, or even difficult. If you look at history, it'd actually be really surprising if it was. Human minds were just the very first intelligence coughed up by evolution. We are far from the limits of what is possible.

And there is a difference between genetic engineering and AI research. Biology is incredibly complicated. Like millions of complicated, interconnected, moving parts. It really is impossible to make Alien, or really anything beyond modifications to existing biology.

Algorithms are a whole nother ball game. Our current AI algorithms are actually reasonably intelligent at many tasks, even smarter than humans at some things. And they are extremely simple. Could be described in a line or two of equations, or a few paragraphs of text. Not because they are primitive, but because simple algorithms work. And they have been improving every single year for a long time, and will likely improve well into the future.
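
To make the "simple algorithms" point concrete, here's roughly what the core of a learner looks like -- a generic logistic-regression sketch with a made-up toy dataset, not any specific system mentioned here. The whole learning rule is the single line that updates w:

    import math, random

    def train(data, dim, lr=0.1, epochs=100):
        # w <- w - lr * (p - y) * x  is the entire "learning" step
        w = [0.0] * dim
        for _ in range(epochs):
            for x, y in data:                                # y is 0 or 1
                z = sum(wi * xi for wi, xi in zip(w, x))
                p = 1.0 / (1.0 + math.exp(-z))               # sigmoid
                w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
        return w

    # Toy usage: learn to separate points by the sign of their first coordinate.
    random.seed(0)
    points = [[random.uniform(-1, 1), 1.0] for _ in range(50)]
    data = [(x, 1 if x[0] > 0 else 0) for x in points]
    print(train(data, dim=2))   # the first weight comes out clearly positive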

But you don't have to take my word for it. If you survey AI researchers and experts, they almost unanimously agree that AI will be invented in our lifetime. If you survey biologists, I highly doubt they would make similar claims about Alien.


Well, we can predict more or less what will happen when a large meteorite hits earth, but we have no clue (other than sci-fi scenarios) about the nature of the threat AI poses, if only because we are so far away from understanding what AI is. We don't know if intelligence can be separated from agency and agenda, we don't know if intelligence can be separated from emotions, and we don't even know if intelligence can exceed our own (the argument for "human intelligence at a higher speed" fails on several grounds). Hell, we don't even know what intelligence is (and the definition of "a general ability to solve problems" doesn't work, as there are problems that what we generally regard as intelligence clearly cannot solve while other qualities can).

What's the point of arguing about the potential threats of something we have so little knowledge of? Maybe it will come too fast, but maybe that unkillable virus will, too. At this point in time, we simply have too little information to make an intelligent assessment of the risks posed by AI. We can and should, however, discuss the more immediate related dangers of our real-world machine learning, such as self-reinforcing bias.


> We can and should, however, discuss the more immediate related dangers of our real-world machine learning, such as self-reinforcing bias.

Quoting to emphasize.

It is one thing to acknowledge that there is a non-zero probability that an uncontrollable AI might be created, while holding that we have other things to worry about, so why hand-wring over it.

The strong counterpoint to this hand-wringing is that what we should be doing is better understanding the behavior of the "weak" AI we have now. We need to grasp where it does and doesn't work, a difficult problem to "sell" since it lacks that alien-like doomsday aspect or predatory agency.

These kinds of problems are the problems of our own creation and thus we must take ownership.


> the argument for "human intelligence at a higher speed" fails on several grounds

I'd be interested if you said more about that.


Sure, but you need to understand that I'm not trying to predict what would necessarily happen, only to show how such a thing could possibly fail.

Think what could happen if your brain worked much faster. The world would mostly seem to slow down. A very conceivable outcome is one of boredom and possibly madness. You could perhaps multi-task and think of many different things, but we are generally terrible at multitasking, so we have no idea whether an intelligence even could multitask well.

Another problem is that we don't know anything about the information capacity of a mind. A faster mind of the same size could potentially have trouble dealing with all that extra information (again -- madness, maybe?)

One thing that hints at those capacity limits is the need for sleep. We don't know why, but some hypotheses say that it's necessary to do some "information cleanup" operations in the brain. It is possible that an AI would need to sleep, too. The AI may need to sleep a lot more to handle much more information.

It is therefore conceivable at least that a mind has to work at a speed that is commensurate with the speed of the world around it, and one that is appropriate for its capacity.


Doesn't any attempt to better understand the AI problem just bring the (potential) AI problem closer?


"It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into."

That's the danger. I think Lanier is a lot closer to Musk on that point than he imagines, though.


Am I missing something or does the first reply (by George Church) completely ignore everything the original article says?


I don't think you are; he seems to respond with the predictable hyperbolic future-boosting expected of futurologists, "singularitarians" etc. Everything is all "exponential growth" and inevitable transformation right around the corner.

Except it isn't.

And this ignored every point that Jaron was trying to make and falls into all the traps he was trying to point out. It is an oblivious comment.

I must say, I don't find most of the contributors to Edge particularly insightful.


I agree. It seems he doesn't really get it.


Not only the first reply. Going through the replies, most of them do not seem to have read it.

Pretty disappointed with the Edge here. It's clear that the participants are just exploiting it as a platform to put forward their own ideas.


Great article covering a lot of good topics, especially the overpromise->winter effect and the fact that a lot of what's in AIs is nonsense. My favorite part was this though:

"The truth is that the part that causes the problem is the actuator. It's the interface to physicality. "

BOOM! That was my exact argument in counterpoints to Superintelligence risks. I thought it was ridiculous to worry about what it thought when you could easily control what it did at the interface. I also pointed out that high-assurance security already has decades of work dealing with this exact problem, and pretty effectively. So anyone worried about that sort of thing should focus on securing the interface that would be used in various domains to catch issues.

Now, that's not to say a superintelligence can't break an evil scheme down into a series of safe actions that result in catastrophe. There's possibilities there. Just that all methods for handling them can and should be at the interface. And can be implemented by verifiable, dumb algorithms.
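
For what it's worth, a guard like that really can be dumb. A minimal sketch of the idea, with the command names and limits invented purely for illustration:

    # Whitelist-style actuator guard: small enough to audit exhaustively,
    # and it knows nothing about the planner that produced the command.
    ALLOWED = {
        "set_valve": lambda pct: 0 <= pct <= 100,
        "move_arm":  lambda mm:  -50 <= mm <= 50,
    }

    def guard(command, argument):
        check = ALLOWED.get(command)
        if check is None or not check(argument):
            raise PermissionError(f"rejected: {command}({argument})")
        return command, argument   # only now forwarded to the real actuator

    guard("set_valve", 42)         # passes
    # guard("open_airlock", 1)     # would raise: not on the whitelist

Of course this only bounds single actions; the "series of safe actions adding up to a catastrophe" problem is exactly what it doesn't solve.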


Once AI is invented, what's to keep anyone in the world from building one? And not keeping it in a box? And that's assuming the box even works, and the AI can't hack out of it, or trick you into letting it out.


My AI that works in concert with the surveillance state to hunt their AIs. Plus, anything or anyone too smart will have to be registered and monitored. Basically, the reaction to mutant powers in X-Men.


> There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program.

Huh? A program, by definition, has no autonomy. Given its inputs, it performs the appropriate sequence of actions it has been programmed to perform. At no point is it governing its own behavior.

Can anybody name a single realizable program that exhibits autonomy?

Even machine learning algorithms, which adapt to their input, are deterministic and incapable of self-governance. Given identical initial conditions and inputs, the machine learning program will generate identical deterministic output.
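
A toy illustration of that (a made-up clustering routine, not any particular library; real frameworks need their seeds and data order pinned, and parallel floating point can still add wrinkles):

    import random

    def fit(points, seed):
        rng = random.Random(seed)
        center = rng.choice(points)          # "learned" state depends only on seed + data
        for _ in range(10):
            near = [p for p in points if abs(p - center) < 5]
            center = sum(near) / max(1, len(near))
        return center

    data = [1.0, 2.0, 3.0, 40.0, 41.0]
    print(fit(data, seed=7) == fit(data, seed=7))   # True, every run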


We live in a deterministic universe. You'll need a different definition of autonomy if you want anything to have it.


"There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us."

On a different discussion site I recently pointed out that there is a similar religious belief or religious dogma in the idea of the self-driving car, and that went over like a lead balloon. People acting in a religious manner and expressing religious belief don't like to have that pointed out to them. Probably vestigial monotheism: if they act religiously at their traditional church on Sundays, they don't like it pointed out that they worship at a different altar online, or the atheists get really wound up when they are called out for joining the atheist movement but acting as deacons of the church of the self-driving car of the future.

Anyway, just saying it's a line of reasoning that, while true, can only lead to unpleasantness. Like discussion of scientific racial/gender differences or discussions about IQ, all you're gonna get is social signalling screaming "see no evil".


> Anyway, just saying it's a line of reasoning that, while true, can only lead to unpleasantness. Like discussion of scientific racial/gender differences or discussions about IQ, all you're gonna get is social signalling screaming "see no evil".

I'm waiting for it to fall apart. The amount of information that you have to willingly ignore these days to participate in the "progress solves all ills" narrative is huge.

On vestigial monotheism read Straw Dogs by John Gray if you haven't.


I fully agree with the religious aspect of it (religion doesn't have to be theistic at all, but the particular beliefs in an after-life/resurrection/omniscient-omnipotent being do resemble the theistic, and indeed, monotheistic religions), but I think there's another aspect to the "AI threat" belief that is more than just "vestigial monotheism".

I think this is (in part) a power fantasy. The believers in the AI threat normally perceive themselves to be different from other people due to their (self-perceived or real) higher intelligence. They therefore describe that property (that they believe makes them special) as being particularly dangerous, and therefore powerful, because they want to believe that intelligence imbues its possessor with power and a lot of it. Sadly, superbugs, meteorites, H bombs, climate change, human-rights violations, inequality and other threats don't require or possess much intelligence, but an unstoppable AI is the purest manifestation of intelligence-as-power.

In the human world, intelligence is not very correlated with power, except very roughly. The most powerful people are generally more intelligent than average, but the correlation doesn't extend far beyond that. If anything, charm, confidence and courage seem to imbue their bearers with much more power than raw intelligence. Which is why I fear (only partly jokingly) a super-charming machine more than a super-intelligent one. A super-charming machine is far more likely to bend humanity to its will -- and therefore pose a greater danger -- even if what it wants may be truly idiotic; a super-intelligent machine would be far more limited in its effects...


"A super-charming machine"

A television, or a tablet with facebook and youtube apps?

It's interesting to think of mass propaganda a la Bernays as an example of a "super charming machine". In the old days the cogs and wheels in the marketing business are/were humans, just like in the old days "math calculator" was a job description but it got automated and accelerated into computers.

Populist political machine might be another example.


> unstoppable AI is the purest manifestation of intelligence-as-power

Brilliant observation.


vestigial monotheism

That's a fantastic metaphor.

I'm curious though how someone could act in a religious manner about a self-driving car.

To the original point though, I am very much in the camp of "historical determinism that we're inevitably making computers that will be smarter and better than us" and here is why: I believe that intelligence (and "consciousness" if you want to go down that rabbit hole) is completely material and as such it is possible that we will eventually understand the mechanics of them.

If we can understand those, then history would indicate (historical fallacy, I know) that we can replace or replicate those mechanics with more durable systems or materials than the fairly fragile ones we currently run on.


That's a religious position. We have absolutely no idea what consciousness is or how it works.

More than that, we don't even have good ad hoc models for motivation and personality, never mind useful formal and explicit models.

I think Lanier is absolutely right, and I think there's a strong quasi-theological element running through all of AI and programming.

Programming is a lot like making spells and incantations. If you get them just so you get the outcome you want. But you have to speak the language of the system to make that possible. And you have to be very careful about unintended consequences.

There's something very medieval about this - both in the sense of the pure scholasticism of academic CS, but also in the practical sense of knowing how to formulate the correct "prayers" to make useful things happen.

Lanier seems to be pointing out that CS is still haunted and influenced by these religious metaphors, and that AI is the most visible example of that.

I think he's right - and more, I think that real AI, in the sense of autonomous personalities, won't be possible until that's no longer true.


How is it a religious position? A religious position would be thinking that some unknowable process would deliver some kind of techno-rapture.

So because we don't know how something works now, we will never know? C'mon.

Programming is a lot like making spells and incantations.

No, it's not. Maybe for laymen it looks like that, but for us developers it is definitely not the case. Granted, sometimes something works and you don't realize why, but that just means you have a big debug problem on your hands - not that it's unknowable. Usually only happens when you port something or copy code over from one system to the next.

Not sure how you come to those conclusions.


Because it's based in faith that something will happen, not a rational determination based on facts on hand.


What? We are actually building this stuff...

It's about as much faith as Doug Engelbart had about the PC.


No, we're building ELIZA with larger databases. You can't take any research we have on hand today and say "a natural extension of this leads to consciousness". Part of that is because we don't even understand what consciousness is. Part of that is because we don't have any realistic work beyond pattern matching.

That's why it's not rational. There is no fact you can exploit to come to this conclusion. That's called "faith", regardless of the particular subject matter.

A little faith is important. We didn't know if the first astronauts would survive orbit, and we didn't really have any data to prove it unequivocally. We just had faith that our limited understanding wasn't missing anything and the only way we could find out would be to put people in orbit. We found out soon enough.

But it's really, really a bad idea to call something rational when it's really faith, just because you don't want to be associated with religion.


"a natural extension of this leads to consciousness"

Did I say that? In fact I didn't.

I don't care what you call anything, the fact remains that AGI doesn't exist today, but we can work on the best guesses at what we think will get us there. That is the exact OPPOSITE of faith if there ever was one.

We are working on computer vision with segmentation. That's a tiny subsection of supervised learning. By the way, you say "pattern matching" as though it's some trivial thing. In fact that's what a hefty portion of our brain does to understand anything, so it's actually very important, and our visual system (eyeball to cortex to representation) is arguably the most important part of that.

I couldn't care less about whether a computer has consciousness anyway. I think the whole consciousness argument is a waste of time (which is why I called it a rabbit hole previously).

I think you're hung up on this religious thing and not able to discriminate between actually building things that may be able to lead to AGI versus writing novels and daydreaming like so many "futurists."


> We have absolutely no idea what consciousness is or how it works.

We know what it is, we just don't know how it works.

Self-awareness comes in stages. We generally move through those stages throughout our daily lives. Animals display wide variations in qualities of self-awareness as well. So we can define consciousness as the sum of all of these qualities.

What I think is going to happen is that we'll start separating out lots of aspects of consciousness and explore them in software, adding more and more of them in as the state of the art of hardware gets better. The consciousness algorithms will slowly, over time, shed complexity.

Eventually, well before we get to even human-level self-awareness, we'll run into hard physical limits and realize that biology is way more effective at making the sort of compact, yet incredibly complex evolved system than any design process could be.

Biology has an advantage we don't have: it does not have to understand what it is doing, and it works ceaselessly. It can simply try over and over again over millions of years.

I predict that getting machines to become truly self-aware will be more trouble than it's worth, and that then you'll be choosing levels of comparatively-lower self-awareness for each individual component of a software system as part of architecting it. In fact, we have that tradeoff now. Do I really need to pull out ML to write a shell script?


It is surprising how, even after 50+ years of Moore's law, technology still cannot keep up with biology. A single cell carries about 700MB of genetic information in its nucleus.

https://medium.com/precision-medicine/how-big-is-the-human-g...

It is orders of magnitude more dense than current technology.
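
The arithmetic behind the ~700MB figure, for the curious (assuming roughly 3.1 billion base pairs at 2 bits each, and ignoring the second diploid copy, epigenetic state, and compressibility):

    base_pairs = 3.1e9            # approximate haploid human genome length
    bits = base_pairs * 2         # A/C/G/T -> 2 bits per base
    print(bits / 8 / 1e6, "MB")   # ~775 MB, the same ballpark as the 700MB claim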


> That's a religious position. We have absolutely no idea what consciousness is or how it works.

Well, we know that it exists, or at the very least seems to exist. And there is nothing in the physical world that we have understood so far that cannot be ultimately expressed computationally, even if it's usually not the most useful formalism. It is of course somewhat a leap of faith to then say _everything_ is computation (or can be expressed as computation), but I would argue that science already made this leap in its infancy: The Book of Nature is written in the language of mathematics… Admittedly, modern philosophy of science is a bit more nuanced, but the old statement from Galileo basically stands. Now I agree that it's not entirely rational, but it certainly isn't "religious".

So yeah, I think there are very good arguments to be made that consciousness can at least in principle emerge from some kind of computation. That computation could be a complete simulation of every atom in a brain, or maybe some shortcut is possible; it doesn't matter for my point. That does not mean, however, that I think it can be made or will be made, since, as you pointed out, we have no idea what we are talking about. I completely agree with Lanier on this: the tech industry should stop focusing on this nonsense.


It is of course somewhat a leap of faith to then say _everything_ is computation (or can be expressed as computation)

You don't need any kind of leap of faith to actually start working on it. That's the wonderful thing about AI and AGI more broadly. You can actually go work on it, today. Are you going to solve it immediately? Of course not, but at least you can chip away at the problems.


> And there is nothing in the physical world that we have understood so far that cannot be ultimately expressed computationally

Except for those physical phenomena that can only be modelled with a recursive form and whose starting coefficients we can't specify with infinite precision - which unfortunately includes a lot of useful things like weather prediction, and even the classical mechanics of n-body systems.

How about telling me what tomorrow's Dow close is going to be? Or the oil price four years from now? Or which parts of the universe that electron over there passed through on its way to your monitor?
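
To make the precision point concrete, the textbook toy example is the logistic map (not a weather model, but the same flavor of problem): two starting values that agree to eleven decimal places end up nowhere near each other after a few dozen iterations, so any finite-precision simulation loses predictive power.

    def iterate(x, steps, r=4.0):    # logistic map, the classic chaotic recursion
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(iterate(0.300000000000, 50))
    print(iterate(0.300000000001, 50))   # wildly different despite a 1e-12 nudge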

I think a lot of the responses here just make the point for Lanier:

"There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists."

Anyone who believes that consciousness is computable by definition, because it just has to be, has a dilettante-level view of the problem. Philosophers have been arguing about this for centuries, and on the whole they're not so certain about it.

So I'll repeat - we don't have adequate models for human or even animal personality, or for emotional responsiveness, or even for the self-aware perception of qualia, which is apparently one of the key things that defines consciousness.

Big data isn't a solution to this, any more than throwing the works of Shakespeare into a database and looking for clustering statistics about word proximity and sentence structure will get you a new Julius Caesar.

The reality is there are levels of intelligent behaviour - especially self-aware creative behaviour - that are completely opaque to any modelling technique available in current CS. I believe that anyone who thinks the problem is simple and just needs moar code and a faster processor thrown at it is expressing a faith-based position of hopeful belief, not a reality-based fact.

Shakespeare is Shakespeare because the writing isn't word salad. Shakespeare is interesting because of the unusual density and richness of the experiences that are referred to - not just in the plot, but in the details of the metaphors in each sentence.

You need to model experience before you can recreate that, and you can't do that unless you have a deep understanding of what experience is. Big data gives you slightly more focussed monkeys and more typewriters, but it's in no way a solution to the basic modelling problem.


I'll take your analogy and run with it, to note the extreme focus on symbolism in programming.

It's all about copy-by-reference vs copy-by-value, type systems, etc.

Now when the medieval mystics wanted to talk about enlightenment values, politics, psychology, political science, but the local powers that be were not down with the cool new stuff, they got obscurely alchemical, astrological, or divinational to avoid having the local leaders figure out what they were saying and thereby separate head from body via guillotine. Long after both the topics, and the schemes used to obscure them, stopped being contemporary and relevant, we laugh at the ancient alchemists.

Come to think of it, don't the cool-kid programmers laugh at BASIC, assembly, C++, perl? Yet another similarity. Ha ha those ancient mystics trying to write quicksort in BASIC, LOL.


Self-driving cars will come and save humanity from driving related deaths and ensure a new era of prosperity as humans get to relax while self-driving cars do everything for them.


I did a poor job of providing examples of religious behavior, but as history shows people can believe and behave in a religious manner about most anything. Including self-driving cars.

I took a comparative world religion course a long time ago; I can't easily find the handy behavior list we had to memorize. We were taught something vaguely Mandaville-ian; I don't know if that outlook is still current and cool, but at least you can find Mandaville's opinion on Wikipedia. The "aspects" section of the Wikipedia religion article would be a good start.

Shared faith-based belief in the inevitability of some rather objectively unrealistic future outcome, in this case "we're all gonna use self-driving cars", based critically on faith alone.

A mythology, using the wiki definition of "a story that is important for the group whether or not it is objectively or provably true": progress has always happened in the past and therefore will always happen in the future, when in reality progress only comes from obtaining and using energy, and our petroleum is running out while our population expands, so you do the math on the realistic future trajectory of progress. Furthermore, there's no question within the religion about the narrative that progress necessarily, automatically equals self-driving cars, instead of "buses and trains and subways", or not commuting at all, or whatever.

There's a fair amount of superstition involved, aka everything the general public or journalists say about anything technical, legal, or economic with respect to the topic of self-driving cars; they're just making unreasoned noises at each other.

It's always accompanied by extensive and exhaustive ethical baggage, in this case about preferred lifestyles, the value of work, urban planning, as if the self-driving car is actually important relative to VPN deployment or subsidized public transit or Star Trek style transporters; yet despite its irrelevance there's a great pile of associated ethical baggage and related extreme judgment.

And there's a strong self-policing society, where certain aspects of the religion are strongly amplified via endless ceremonial repetition, and the not-devout-enough are tossed out or punished via ridicule or otherwise.

Note that you can discuss self driving cars in non-religious ways, much as you can scientifically study aspects of a religion. That doesn't mean the adherents are not thinking/believing, communicating, and organizing in a religious style or manner, that just means you looked at them from a non-religious perspective.

Some historical religions being a little hard to believe in the modern scientific world doesn't mean all religions are false; quite possibly we will have self-driving cars in the future. That future success won't be a disproof of past observational behavior, under the "walks like a duck, quacks like a duck, it IS a duck" doctrine of what is a religion. Praying to each other for deliverance in the form of a self-driving car won't bring it here any faster, but if it does somehow arrive anyway, the physical manifestation of that idol/miracle doesn't prove the praying was the cause of the effect/idol/miracle.


the physical manifestation of that idol/miracle doesn't prove the praying was the cause of the effect/idol/miracle

Interestingly enough, if enough people "pray" in the form of demanding these products through the marketplace, then yes actually it can cause an actual product to be developed. See: Oculus. It might be expensive but damnit there it is. So I think this actually weakens your argument.

I can't in my mind compare techno-optimism with religious behavior; it's way too different.

The self driving car as a technology actually physically works. Sebastian Thrun proved that back in 2001 and google has been proving it over the last 5 years.

There is no possible way to prove - or disprove for that matter - that your Prayer had anything to do with your uncle/grandmother/wife's cancer going into remission.

Those are infinitely different things.

I think you are turning some people's hyperbolic exuberance into something more than it is. Just like people say "everyone loves X" or "everyone is going to want to X."


With regard to "vestigial monotheism": I think what you're describing is just a property of discourses in general, in that they're all built around bodies of knowledge which have fixed points [assumptions/truths/whatever], and that participants act in such a way as to keep those things stable.

Definitely seen things like this before, very poignant. Not to knock on Kurzweil and friends too hard [lots of interesting stuff in both spaces obviously], but I've had this vibe from talking with self-described Singularitarians as well (there's some overlap between those circles).


It is a kind of religion - sort of classical American Yankee machine-worship. Even the Puritans/Pilgrims thought that organization - essentially a technology - could save them from the swamp gases of British life.

And I can't say it didn't work, but I'm probably a fanatic at this religion anyway.

I think Mark Twain more or less says this outright in "Connecticut Yankee." Speaking of Twain, he grievously injured himself financially chasing the dream of a typesetting machine far ahead of its time.

If it's religion, it's religion that (mostly) works. <insert Panglossian diatribe here>


Religion and ideological herd behaviors are a core part of human nature. You can't expect these to magically vanish just because people have abandoned belief in supernatural deities and magic. Those are just one form of it. Religion is a social phenomenon and cognitive style that is independent from the subject matter.

That being said: the techno-critics also display these behaviors as do the new agers, "naturalists," and others with which they tend to travel. Head over to one of those circles or boards and start arguing the futurist line and see what it gets you.

Both sides have valid points.


While I don't agree with Lanier, being an AI skeptic isn't quite the same as saying that self driving cars can't happen: we don't yet have a good understanding of either consciousness or tractable general optimization algorithms, so it's still possible that there's something that will make it impossible. Unlikely, perhaps, but possible.

Also, your sarcasm about the self driving car might have been too subtle. ;)


What you see with respect to self-driving cars IMO is a lot of people who SO want them to exist that they assign timelines that are so optimistic they'd be rolling on the floor laughing if applied to projects they had direct familiarity with. Of course, it's also fueled by a lot of hype and hand-wavy statements about exponential growth and the like.

Like many other aspects of AI, I'm not sure just about anyone thinks we need to have strong AI to make self-driving cars work under the vast majority of conditions. We "just" need to create automation that can handle a reasonable number of corner cases without flipping out or grinding to a halt. But that's a debate about engineering timelines not something more philosophical.


I'm a little confused. We have self-driving cars. Of course there are still corner cases and bugs and regulatory issues, but cars have driven for fair distances on existing roads without a human driving. I had assumed, therefore, that the point @VLM was making was that, like self-driving cars, we already have AI by many of the standards of earlier AI researchers (the "when it works, it's not called AI any longer" argument).


I was reading "self-driving cars" as "Level 4 autonomy" under general conditions. Which IMO is many decades away.

But we're in violent agreement about the "when it works, it's not called AI any longer" argument.



The Edge has lost it.

Jaron's essay was a fantastic read, highly recommend it, but the respondents are not engaging with it at all.

They are just using the Edge as a platform to promote their own ideas. Most of them clearly did not read what he wrote. Even though I respect most of the participants, the platform is not eliciting their best.


I don't think this is as much about AI as it may seem. I mean, yes, it's about AI. But what it's really about is a human tendency which is to create things and meaning when we don't necessarily have the evidence of them.

What this is really about is religion. More specifically, we have people who are the most non-religious group out there (like scientists) and they dismiss formal religion only to recreate another version of it:

"A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions!"

And I see this more often than just in computer science. It happens all the time in economics.


There is a really important point in the article which the author kind of glosses over. A lot of the bots (Siri, Cortana, Slack bot, etc.) are refined through free data provided by the masses. These systems get smarter at what they do, essentially for free.

And his second point: once an AI understands its users' preferences, its ability to change slows down because, unlike when it started, it ignores virgin data and only follows what it has been taught. That is a huge potential issue. Seems like a topic ripe for researchers in machine learning to jump on.
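
A toy sketch of that lock-in dynamic (the two-item world and all the numbers are invented, nothing from the article): a recommender that stops exploring keeps serving whatever happened to look best early on, so the "virgin" option never gets a fair sample even when it is actually better.

    import random

    true_click_rate = {"A": 0.5, "B": 0.7}          # B is actually better
    estimates = {"A": 0.0, "B": 0.0}
    counts = {"A": 0, "B": 0}

    def serve(explore):
        pick = (random.choice(["A", "B"]) if random.random() < explore
                else max(estimates, key=estimates.get))
        clicked = random.random() < true_click_rate[pick]
        counts[pick] += 1
        estimates[pick] += (clicked - estimates[pick]) / counts[pick]

    random.seed(0)
    for _ in range(5):   serve(explore=1.0)   # brief exploration phase
    for _ in range(995): serve(explore=0.0)   # then pure exploitation
    print(counts)   # one item ends up hogging nearly all of the traffic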


He might have seemed to gloss over it because he's written an entire book about it, called "Who Owns the Future?".


Another thing I'll add: many people forget that the only known superintelligence, the savant human, still takes years of training data and introspection to get its intelligence to function in society. It takes even more to function better, especially accounting for human nature.

So, I call bullshit on the idea there's a startup anywhere that's just going to turn on an AI we can't handle. We'll likely see it coming a mile away or the isolation it operated in will do it in when it faces human strategists.


I think there are two valid existential concerns regarding AI:

1) Widespread use of advanced AI in robotics/production has the ability to bankrupt the working populace, at which point something closer to pure socialism may take over; not so much an annihilation scenario.

2) AI is weaponizable -- at a certain point, given other technological advances in addition to AI, a relatively small amount of like-minded people will be able to produce unique weapons of mass destruction.


More likely, pure socialism will not take over. While not an annihilation scenario, I can very much see a decimation scenario unfolding as a war between the elite and those no longer useful...

Edit: In support of this scenario, consider the refugee crisis and the response from the various countries. Even Sweden, ostensibly one of the most welcoming (and socialistic) countries, started to close its borders as soon as the situation started to get uncomfortable.


1) Or we enter a post-scarcity era where there will be an abundance of food produced by autonomous machines. No one "needs" to work in order to feed themselves.

2) There are a lot of things that are weaponizable - today.


1) yeah! socialism!! 2) exactly! like a quadcopter with a handgun attached :)


God, what a lucid, piercing look at reality and the human condition he has. The whole part on translators, just brilliant.


It is, in some sense, a very simple problem. When and if we advance to the point that AI algorithms are themselves capable of writing AI algorithms, what happens then? Quantify (albeit vaguely) the rate at which AI algorithms can improve AI algorithms. If that number turns out to be roughly as slow as human beings, there's no significant problem. If it turns out that number is greatly larger than human beings, that's a problem.
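
One way to make that number slightly less vague is a toy recurrence (purely illustrative, with invented parameters): let capability feed back into the rate of improvement, and everything hinges on the assumed exponent, which is exactly the quantity nobody knows.

    # Toy model only: C_{t+1} = C_t + k * C_t**a
    # a < 1 -> diminishing returns (human-ish plodding); a > 1 -> runaway growth.
    def trajectory(a, k=0.1, steps=30, c=1.0):
        for _ in range(steps):
            c = c + k * c ** a
        return c

    print(trajectory(a=0.5))   # grows, but tamely
    print(trajectory(a=1.5))   # explodes within a few dozen steps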

It is not particularly clear that the number is large; hard problem is hard. No sarcasm; arguably "general intelligence" is the hardest possible engineering problem. Adding a bit more intelligence to the problem when it will already be getting worked on by a lot of humans may not move the needle much to speak of at all. But it is not particularly clear that the rate is small either; humans are smart, yet at the same time really dumb in a lot of ways. We have terrible working memory (7 +/- 2 items is absurdly small). We have lots of irrational biases. We have lots of things to do in our lives that are not "thinking" (eating, sleeping, etc.), and the vast majority of our brain is dedicated to those problems, not rational thought. We are terrible at manipulating vast symbol systems without taking immense shortcuts, which inevitably color our manipulations. What happens when something that lacks those restrictions is turned loose on the problem of writing AI algorithms?

Heck if I know. I'm a human too.

I honestly think that those people who are utterly convinced the rate must necessarily be slow are just as wrong as those who are utterly convinced it must necessarily be fast. We really don't know, but it is hardly an invalid concern.

So far, AI algorithms have not written very many AI algorithms. I've seen some toys where one AI algorithm is hooked up to tune another one, but I'm not sure if any of them have ever been practically useful. (All I know is all the ones I've personally seen have amounted to toys.) And the system as a whole is generally constrained by what the final AI algorithm can output anyhow; if the ___domain and range of the AI function are immutably fixed, there's not much the AI-meta-system can do to "escape" and do anything terribly nasty.

I do think we're still a ways away from this being an issue. AI is currently still very much constrained to problems a great deal simpler than "writing AI algorithms", which is right up there with the hardest possible things that human intelligences are currently capable of, and that only to a rare few highly trained and highly talented individuals. We're easily decades away from this problem. But when the day comes, well, I wouldn't bet on the self-improving AI experiencing multiple orders-of-magnitude improvement in mere minutes, but I'd sure hate to bet the future of the species that it's not possible. It is not unreasonable to be concerned that the knee could be very steep. We'll know more as we get closer.



