If, like me, you're using LLMs on a daily basis and getting real personal value out of them, it's hard not to conclude that they're going to have a big impact.
I don't need a crystal ball for this. The impact is already evident for us early adopters; it's just not evenly distributed yet.
That's not to say they're not OVER hyped - changing the entire company roadmap around them doesn't seem like a sensible path for most companies.
Early Adopters are always True Believers. That's why they are early adopters. Every single one of them is going to say "The impact is clear! Look around you! I use XYZ every day!" You really don't know what the adoption curve will look like until you get into the Late Majority.
I’m not that much of a believer, but what is clear is that “AI” still has a plug incompatible with your regular wall socket, if you get the analogy. It’s too early to put a number on the adopter count.
We’ll talk counts when my grandma can hey-Siri / okay-Google something like a local hospital appointment, or search for radish prices near her. It’s already possible, just not integrated enough.
Coincidentally, I’m working on a tool at my job (unrelated to AI) that enables computer device automation at a much higher level than playwright/etc. These two things combined will do miracles, for models good enough to use them.
We're already entering the Late Majority stage. The Early Majority is a good chunk of the western population, which should already tell you something: the "technology adoption model" might not make much sense when the total addressable market is literally everyone on the planet, and the tech spreads organically with zero marketing effort.
And/or, it's neither hard nor shameful to be a True Believer, if what you believe in is plain fact.
Early adopters were using gpt-2 and telling us it was amazing.
I used it and it was completely shit and put me off openai for a good four years.
gpt-3 was nearly not shit, and 3.5 was the same, just a bit faster.
It wasn't until gpt-4 came out that I felt this AI thing actually deserved to be called AI, because it was doing things I didn't think I'd see for decades.
I don't know what your operating definition of "grifter" is, but for me, a grifter is not a company that delivers a product which gains huge adoption and mindshare (ChatGPT) and essentially sets the world on fire. (Not an opinion on Altman specifically, but on OpenAI at large.)
My definition is someone outright lying that gpt-2 was AGI that needed to be regulated, just so they could raise more money for the next round of training.