
> If he's a crank, it should be easy to explain specifically why he's wrong.

Sure. The premise that a superintelligent AI can create runaway intelligence on its own is completely insane. How can it iterate? How does it test? Humans run on consensus: we make predictions, test them against physical reality, then have others test them. Information has to be gathered and verified; that's the only rational way to build understanding.




> How can it iterate? How does it test?

Honest/dumb question - does it need to test? In nature mutations don't test - the 'useful' mutations win.

Couldn't a 'super intelligent AI' do the same?


> the 'useful' mutations win.

That's testing. Natural selection is a test against physical reality; it's just slow and wasteful.
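
A toy mutate-and-select loop makes the point (purely illustrative; the target string, mutation rule, and fitness measure are arbitrary choices of mine, not a claim about how any real AI would work):

  # Blind mutation plus selection: "useful mutations win" only because every
  # candidate is evaluated -- i.e. tested -- against something. Delete the
  # fitness comparison and the search goes nowhere.
  import random

  TARGET = "information has to be verified"
  ALPHABET = "abcdefghijklmnopqrstuvwxyz "

  def fitness(s):  # the test: how close is the candidate to the target?
      return sum(a == b for a, b in zip(s, TARGET))

  def mutate(s):   # the blind part: an undirected random change
      i = random.randrange(len(s))
      return s[:i] + random.choice(ALPHABET) + s[i + 1:]

  current = "".join(random.choice(ALPHABET) for _ in TARGET)
  for step in range(100_000):
      candidate = mutate(current)
      if fitness(candidate) >= fitness(current):  # selection = testing
          current = candidate
      if current == TARGET:
          print(f"hit the target after {step} mutations")
          break

Evolution "tests" constantly; the test is just survival rather than a deliberate experiment, and it burns through an astronomical number of failed candidates to do it.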


It sounds like you disagree with Eliezer about how AI technology is likely to develop. That's fine, but that doesn't show that he's a crank. I was hoping for something like a really basic factual error.

People throughout history have made bold predictions. Sometimes they come true, sometimes they don't. Usually we forget how bold the prediction was at the time -- due to hindsight bias it doesn't seem so bold anymore.

Making bold predictions does not automatically make someone a crank.


There used to be a subreddit called SneerClub where people would make fun of Eliezer and some of his buddies. Here's a discussion of a basic factual error he made about how AI training works, even though the topic is supposedly his life's work:

https://www.reddit.com/r/SneerClub/comments/131rfg0/ey_gets_...

I enjoyed the comment that his understanding of how AI training works is like "thinking that you need to be extremely careful when solving the equations for designing a nuclear bomb, because if you solve them too quickly then they'll literally explode."


Read the mesa-optimization paper I linked elsewhere in this thread (https://arxiv.org/pdf/1906.01820.pdf). Eliezer's point is that if AI researchers aren't looking for anomalous behavior that could indicate a potential danger, they won't find it.


The issue isn't whether "his point," as you put it, is correct. If I said people should safety test the space shuttle to make sure the phlogiston isn't going to overheat, I may be correct in my belief that people should "safety test" the space shuttle, but I'm still a crank, because phlogiston isn't a real thing.


See my comment here: https://news.ycombinator.com/item?id=38336374

AI alignment is challenging because we're trying to make accurate predictions about unusual scenarios for which we have essentially zero data. No one can credibly claim expertise on what would constitute evidence of a worrisome anomaly. Jeremy Howard can't credibly say that a sudden drop in the loss function is certainly nothing to worry about, because the entire point is to think about exotic situations that don't arise in the course of ordinary machine learning work. And the "loss" vs "loss function" thing is just silly gatekeeping; I worked in ML for years, and serious people generally don't care about minor terminology quibbles like that.
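
To make that concrete, here's a rough sketch of what "watch the training loss for an anomalous drop" might look like. The window size and threshold are arbitrary assumptions of mine; nobody in that exchange proposed this, and the hard part is precisely that nobody knows what threshold, if any, would actually matter:

  # Toy monitor: flag any step whose loss improvement is far larger than the
  # recent per-step trend. window and drop_factor are arbitrary illustrative values.
  from collections import deque

  def make_loss_monitor(window=100, drop_factor=5.0):
      recent = deque(maxlen=window)

      def check(step, loss):
          if len(recent) == recent.maxlen:
              avg_delta = (recent[0] - recent[-1]) / len(recent)  # typical per-step improvement
              latest_delta = recent[-1] - loss                    # improvement at this step
              if avg_delta > 0 and latest_delta > drop_factor * avg_delta:
                  print(f"step {step}: loss fell {latest_delta:.4f}, "
                        f"~{latest_delta / avg_delta:.0f}x the recent trend -- worth a closer look")
          recent.append(loss)

      return check

  # usage inside an ordinary training loop (assuming loss is a plain float):
  #   monitor = make_loss_monitor()
  #   ...
  #   monitor(step, loss)

Whether a flag like that is noise or a warning sign is exactly the judgment call no one currently has the data to make.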


That's not what the conversation was about; you're just doing the thing Howard described, where you squint and imagine he was saying something other than what he did.


He is engaging in magical thinking. I showed a factual error: an AI has neither information-gathering and verification capabilities nor a network of peers to substantiate its hypotheses, and you refuse to engage with it.


Opinions about what's necessary for AGI are a dime a dozen. You shared your opinion as though it were fact, and you claim that it's incompatible with Eliezer's. I don't find your opinion particularly clear or compelling. But even if your forecast about what's needed for AGI is essentially accurate, I don't think it has much to do with Eliezer's claims. It can simultaneously be true that AGI will make use of information gathering, verification, and something like a "network of peers", AND that Eliezer's core claims are correct. Even if we take your opinion as fact, I don't see how it represents a disagreement with Eliezer, except maybe in an incredibly vague "intelligence is hard, bro" sort of way.



