
I'm surprised you'd consider Hinton as not being "someone who is actually doing it".

Are you basically saying that you only trust warnings about AI from people who have pushed the most recent update to the latest headline-grabbing AI system at the latest AI darling unicorn? If so, aren't those people strongly self-selected to be optimistic about AI's impacts, else they might not be so keen on actively building it? And that's even setting aside that they would also be financially incentivized against publicly expressing whatever doubts they do hold.

Isn't this kind of like asking for authoritative opinions on carbon emissions from the people who are actually pumping the oil?




No, that’s the opposite of what I’m saying. Asking Hinton for his opinions on the societal impact of new AI tech is like asking the people who used to pump oil 20 years ago. It’s both out of date and not really relevant to their skill set even if it’s adjacent.


Let me clarify: who does qualify to offer an authoritative opinion, in your view? If, say, only Ilya Sutskever qualifies, then isn't that like asking someone actively pumping oil today about the danger of carbon emissions? If only Sam Altman, then isn't that like asking an oil executive?

If not Geoff Hinton, then, who?

Ultimately the harm is either real or not. If it is real, then the people with the most accurate beliefs and principles will be the ones who never joined the industry in the first place because they anticipated where it would lead, and didn't want to contribute. If it is not real, then the people with the most accurate beliefs will be the ones leading the charge to accelerate the industry. But neither group's opinions carry much credibility as opinions, because it's obvious in advance what opinions each group would self-select to have. (So they can only hope to persuade by offering logical arguments and data, not by the weight of their authoritative opinions.)

In my view, someone who makes landmark contributions to the oil industry for 20 years and then quits in order to speak frankly about their concerns with the societal impacts of their industry... is probably the most credible voice you could ever expect to find expressing a concern, if your measure of credibility involves experience pumping oil.


If you want an authoritative opinion on the societal impact of something I would want the opinion of someone who studies the societal impact of things.


So that seems to me like someone like Stuart Russell or Nick Bostrom? But what Geoff Hinton is saying seems to be broadly in agreement with what those people are saying.


I’m not arguing Hinton is wrong. I’m arguing that Hinton doesn’t really matter here. Being the “godfather of AI” doesn’t make him particularly prescient.


His opinion obviously does matter because he is a founder of the field. No one believes that he is prescient. You are exaggerating and creating a strawman argument, infantilizing the readers here. We don't worship him or outsource our thinking.


You seem to be taking my usage of the word prescient as meaning he can either see the future perfectly or he cannot. That’s… not what it conventionally means. I simply mean his track record of predicting the future trajectory of AI is not great.


Well he bet on neural networks in the early days when it was unpopular, and that turned out to be the right trajectory.

He received a Turing Award for his work that was foundational to the current state of the art.


Your argument sounds like (and correct me if I'm wrong) something along the lines of "he chose to do X, and afterwards X was the correct choice, so he must be good at choosing correctly."

Isn't that post hoc ergo propter hoc?

That argument would also support the statement "he went all in with 2-7 preflop, and won the hand, so he must be good at poker" -- I assume you and I would both agree that statement is not true. So why does it apply in Geoffrey's case?


It was a straightforward response to "I simply mean his track record of predicting the future trajectory of AI is not great."


I still don't follow. In your example, how would you differentiate between that choice of his being lucky vs. prescient? Or was the intent to just provide a single datapoint of him appearing to make a correct choice?


LOL. Hinton won the f**ing Turing Award for his research in deep learning / neural networks, and you're telling us his knowledge is irrelevant.



