There is part of me that thinks this A.I. fear-mongering is some kind of tactic by Google to get everybody to pause training their A.I.s so they can secretly catch up in the background. If I were to run some quick game theory in my head, this is how it would play out.
Imagine being Google: you lead the way in A.I. for years, create the frameworks (TensorFlow), build custom hardware for A.I. (TPUs), fund a ton of research, and have access to all the data in the world; you even hype up your LLM as being sentient (it was in the news a lot last year thanks to Blake Lemoine). Then out of nowhere OpenAI releases ChatGPT and everyone loses their minds over it. As Google, you think you are ready for this moment; all those years of research and preparation were leading to this point, and it is your time to shine like never before.
You release Bard and it is an embarrassing disaster, a critical fail leading to an almost 50% reduction in Google's stock price, and for the first time, to the surprise of literally everybody, people are talking about Bing in a positive light while Google is starting to look a lot like AltaVista. Suddenly the news starts telling us that OpenAI needs to stop training for six months for the safety of the human race (and, more importantly, so Google can catch up!).
I have been playing with and using ChatGPT to build tools, and I don't feel like it will take over the world or pose any real danger. It has no agency, no long-term memory, no will, no motivations, no goals. It needs to have its hands held by a human every step of the way. Yes, I have seen AutoGPT, but that still needs a ton of hand-holding.
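To make that hand-holding concrete, here is a minimal sketch of the kind of tool-building loop I mean, assuming the pre-1.0 openai Python package and an OPENAI_API_KEY in the environment (the ask() helper and the prompts are hypothetical). The point is that the API is stateless: the script on the human side has to carry the entire conversation on every single call, because the model remembers nothing on its own.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The script, not the model, owns the conversation state.
history = [{"role": "system", "content": "You are a coding assistant."}]

def ask(prompt: str) -> str:
    """Append the user's prompt, re-send the whole history, store the reply."""
    history.append({"role": "user", "content": prompt})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,  # full history every call: no hidden memory
    )
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Write a function that parses a CSV line."))
print(ask("Now add quoting support."))  # only works because we re-sent everything
```

Drop the history re-send and the second call falls apart, which is exactly the hand-holding I mean.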
I find the current LLMs very impressive, but like any tool they are only as dangerous as the human in the driver's seat, and I find the current fear-mongering a bit inorganic and insincere.
I think a comment on the Reddit thread about this is somewhat appropriate, though I don't mean to imply the same harshness:
> Godfather of AI - I have concerns.
> Reddit - This old guy doesn't know shit. Here's my opinion that will be upvoted by nitwits.
Point being, if you're saying that the guy who literally wrote the paper on backpropagation, and who is now questioning the value of his life's work, is "fear-mongering", then I suggest you take a step back and re-examine why you think he may have these concerns in the first place.
I think there are two distinct points here that need to be clearly separated.
When Hinton gives an estimate of how fast things are going to move and how far they can go, that is the part where his background gives his estimates much higher credibility than those of any random person on the Internet.
But how dangerous that level is to humanity as a whole is a separate question, and one that he is not an expert on.
This is just flat-out wrong. You make it sound like Hinton hasn't done much since his famous backpropagation paper, or that he hasn't been intimately involved in productizing some of his research.
Hinton's startup, DNNresearch Inc., which made breakthroughs in machine vision (particularly around identifying objects in images and image classification), was acquired by Google in 2013, specifically to help with image search (and also, obviously, for the talent of the team). Hinton's cofounders in that startup were Alex Krizhevsky (of AlexNet fame) and Ilya Sutskever, current Chief Scientist at OpenAI.
I meant to make it sound like Hinton isn't at the cutting edge of LLM research: not that he is somehow incapable of it, but rather that anyone who isn't at OpenAI at the moment is probably in the dark. The most recent thing I have seen of him on my feed, for example, was a paper on the fundamentals of learning (the forward-forward paper).
I didn't say he invented it, and for some reason I see lots of comments wanting to nitpick over the details of his contributions. I'll just copy the relevant sentence from his Wikipedia article, which I think is a very fair assessment:
> With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularised the backpropagation algorithm for training multi-layer neural networks, although they were not the first to propose the approach.
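Since the quote is about what the 1986 paper actually popularised, it may help to see how small the core idea is. Here is a toy sketch, with the XOR dataset, layer sizes, and learning rate chosen arbitrarily for illustration: errors from the output are pushed backwards through the layers via the chain rule, and each weight is nudged against its gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # layer 1: 2 -> 8
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # layer 2: 8 -> 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule, output layer first.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent step (learning rate 0.5, picked arbitrarily).
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```

Depending on the random seed it may need more iterations to converge, but that loop is the whole trick the quote is talking about.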
This is actually interesting. If you got your finance news from Twitter and Reddit, you would assume that the claim/lie about a "50% reduction of Google's stock price" is true, that FAANG is about to collapse along with the rest of the S&P 500, and that the petrodollar has gone to 0.
LOL. Google's stock price is higher than it was before ChatGPT's release. Search engine market share hasn't changed by even 1%, and neither has profit from search. Every day HN's hyperbole increases.
Everyone can extrapolate. One of the most irritating tendencies of public intellectuals is the assumption that only they understand the word "exponential", followed by the insistence that every trend they lay their eyes on must be an exponential trend (or, if it clearly isn't one, that it will be soon).
Progress comes in fits and spurts. Sometimes there's fast progress, and then the field matures and it slows down. It was ever thus. Measured in tech demos, AI progress has been impressive. Measured in social impact it has badly underperformed, with the applications until November of last year being mostly optimizations to existing products that you wouldn't even notice unless you were paying close attention. That's what 10+ years of billion-dollar investments in neural nets got us: better Gmail autocomplete and alt tags on Facebook images.
Now we have a new toy to play with at last, and AI finally feels like it's delivering on the hype. But if we extrapolate from past AI experience, it's mostly going to be a long series of cool tech demos that yield some optimizations to existing workflows and otherwise don't change much. Let's hope not!
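For what it's worth, the point about careless extrapolation can be made concrete. The early portion of a saturating (logistic) curve fits an exponential almost perfectly, which is exactly when an exponential forecast goes most wrong; the curve and the cutoff below are hypothetical numbers chosen purely for illustration.

```python
import numpy as np

t = np.arange(20)
logistic = 100 / (1 + np.exp(-(t - 10) / 2))  # S-curve saturating at 100

# Fit an exponential using only the first 8 points, as an eager
# extrapolator would: a straight-line fit in log space.
slope, intercept = np.polyfit(t[:8], np.log(logistic[:8]), 1)
exp_fit = np.exp(intercept) * np.exp(slope * t)

for step in (5, 10, 19):
    print(f"t={step:2d}  logistic={logistic[step]:7.1f}  exp fit={exp_fit[step]:9.1f}")
# The two curves agree early on; by t=19 the exponential forecast
# overshoots the saturated reality by more than an order of magnitude.
```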
There are plenty of us with Twitter taglines such as "changing the world one line of code at a time," but I've been around long enough to know that when tech has changed the world, it hasn't always been for the better. It's not always to make the masses more powerful. Not all of us are working on sending rovers to Mars or curing Parkinson's.
Like everything else, AI will be used to control us, to advertise to us, to reduce the variance between us. To pay us less. To make plutocrats richer and everybody else poorer.
But at least you now have a personal assistant, smart recommendation engines, and AI-generated porn to keep you busy.
This isn't really true. There isn't consensus among people who have the history and background, but "it's going to change everything" and especially "we're all screwed" make for better copy, so they are getting a lot of media play right now.
You are partially right: OpenAI is way ahead of everybody else. Even though the OpenAI team is thinking about and doing everything for the safe deployment of (baby) AGI, the public and experts don't think this should be an effort led by a single company. So Google naturally wants to be the counterweight. (Ironic that OpenAI was supposed to be the counterweight, not vice versa.) However, when you want to catch up to somebody, you cheat, and cheating with AI safety is inherently dangerous. A moratorium on research and deployment just doesn't make sense from any standpoint, IMO.
Regarding the hand-holding: as Hinton noted, simple extrapolation of current progress yields models that are super-human in every ___domain. Even if these models could not access the Internet, in the wrong hands they could create a disaster. Or even in good hands that just don't anticipate some bad outcome: a tool that is too powerful and that nobody has tried before.
Is there anything in particular that you disagree with, or was this only a general negative comment? What do you see as direct evidence of them not deploying (baby) AGI in a safe manner? Is this about the "100x capped-profit" strategy?