An AI winter happens when outside hype and investment money overshoot what can be delivered in the timescale needed for good ROI. Disappointments are followed by periods of underinvestment.
Neural networks are a decades-old invention. Underlying all of this new boom is the same basic architecture: alternating layers of affine transformations and nonlinearities, trained with backpropagation and gradient descent.
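To make "affine + nonlinearity + backprop + gradient descent" concrete, here is a minimal numpy sketch (my own toy illustration, not anyone's production code): a two-layer network on made-up data, with the chain rule written out by hand.

    # Toy sketch: alternating affine/nonlinear layers trained with
    # backpropagation and plain gradient descent on a squared loss.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 3))               # toy inputs
    y = np.sin(X.sum(axis=1, keepdims=True))   # toy targets

    W1, b1 = rng.normal(scale=0.1, size=(3, 16)), np.zeros(16)
    W2, b2 = rng.normal(scale=0.1, size=(16, 1)), np.zeros(1)
    lr = 0.1

    for step in range(500):
        # forward pass: affine -> nonlinearity -> affine
        h_pre = X @ W1 + b1
        h = np.tanh(h_pre)
        pred = h @ W2 + b2
        loss = np.mean((pred - y) ** 2)

        # backpropagation: chain rule back through each layer
        d_pred = 2 * (pred - y) / len(X)
        dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
        d_h = d_pred @ W2.T
        d_pre = d_h * (1 - np.tanh(h_pre) ** 2)
        dW1, db1 = X.T @ d_pre, d_pre.sum(axis=0)

        # gradient descent update
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

Everything since the 80s, at this level of abstraction, is variations on that loop.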
What made the field suddenly explode was GPGPUs plus a bunch of tweaks that helped solve the vanishing/exploding gradient problem (starting with tricks for RNN training, and now skip connections and a handful of other techniques that make deep networks trainable), combined with better regularization and so on. I'm not trying to downplay the innovation, but from a larger point of view they are technical tweaks.
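As one example of such a tweak, here is a rough sketch of a skip (residual) connection, again just my own illustration: because the block computes x + f(x), the identity path carries gradients straight through, so even if f's Jacobian shrinks the signal doesn't vanish as you stack many layers.

    # Toy sketch of a skip/residual connection.
    import numpy as np

    def residual_block(x, W, b):
        # f(x) is an affine map plus a nonlinearity; the block adds x back in
        return x + np.tanh(x @ W + b)

    rng = np.random.default_rng(1)
    x = rng.normal(size=(4, 8))
    for _ in range(50):   # stack many blocks without the signal dying out
        W = rng.normal(scale=0.1, size=(8, 8))
        x = residual_block(x, W, np.zeros(8))

Useful, but still a tweak to the same underlying recipe rather than a new paradigm.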
There is a limit to how far tweaking/scaling the current techniques can go. There need to be leaps.
Geoff Hinton made the case better in his talk "What is wrong with convolutional neural nets?" (https://www.youtube.com/watch?v=Jv1VDdI4vy4); see also https://arxiv.org/abs/1711.11561 by Jason Jo and Yoshua Bengio. The same "what's wrong with" applies to the field in general.
Hinton's capsule networks are an attempt to take another leap forward. Just like with his earlier work from the '80s that culminated around 2006, it may take years and years of slow work to get there. Hinton suggests there needs to be an unsupervised learning revolution that comes up with something other than backpropagation. https://www.axios.com/artificial-intelligence-pioneer-says-w...
"AI winter" can only happen if something better than "AI" appears. It has nothing to do with hype or investment.
If someone comes up with ways to do speech recognition, image classification, and NLP using traditional (rule-based) programming, better than the current state-of-the-art deep learning algorithms, then the DL algorithms are in trouble.
However, this would mean a "DL winter", not an "AI winter". People are not going to stop using speech recognition and NLP on their phones, or object detection in their cars, just because someone overhyped something or overinvested in some companies. And as long as people use these things, the development of better ways to do them will continue.
Why would you think an AI Winter is upon us? From my subjective viewpoint, I don't feel like I've seen any evidence of that. If anything, AI (in a general sense) seems to be hotter than ever.
What evidence is there for the notion of a developing (or existing) AI Winter at the current moment?
Yeah, that one article is all I've seen. And IMO, it doesn't even come close to making any kind of case for an actual "AI Winter", especially in comparison to everything else we see around us. AI is still very much in the news, AI companies are being founded and acquired, etc. AI (in the form of Deep Learning in particular) is being used to create value today, even if it has been oversold to a point (and yes, it probably has).
Of course, the interesting thing about predicting an upcoming AI Winter (or "bubble collapse", or recession, etc.) is that you will eventually be right if you keep predicting it long enough. I certainly can't say an AI Winter isn't coming, but I am unconvinced that one has started or is imminent.
I suppose it's really hard to know if there is an AI winter because, if a company/research body has made big advances, it's possible they are still in testing mode before rolling it out and don't want anyone to know about it until it's ready to launch.