Hacker News

If, like me, you're using LLMs on a daily basis and getting real personal value out of them, it's hard not to conclude that they're going to have a big impact.

I don't need a crystal ball for this. The impact is already evident for us early adopters; it's just not evenly distributed yet.

That's not to say they're not OVER hyped - changing the entire company roadmap doesn't seem like a sensible path for most companies.




Early Adopters are always True Believers. That's why they are early adopters. Every single one of them is going to say "The impact is clear! Look around you! I use XYZ every day!" You really don't know what the adoption curve will look like until you get into the Late Majority.


I’m not that much of a believer, but what is clear is that “AI” still has a plug incompatible with your regular wall socket, if you get the analogy. It’s too early to draw conclusions from adopter counts.

We’ll talk counts when my grandma can “hey Siri” / “okay Google” something like a local hospital appointment, or search for radish prices around her. It’s already possible, just not integrated enough.

Coincidentally, I’m working on a tool at my job (unrelated to AI) that enables computer device automation at a much higher level than Playwright and the like. These two things combined will do miracles, for models good enough to use them.


We're already entering the Late Majority stage. The Early Majority is already a good chunk of the western population, which should tell you something - the "technology adoption model" might not make much sense when the total addressable market is literally everyone on the planet, and the tech spreads itself organically with zero marketing effort.

And/or, it's neither hard nor shameful to be a True Believer, if what you believe in is plain fact.


>Early Adopters are always True Believers.

Early adopters were using gpt-2 and telling us it was amazing.

I used it and it was completely shit, and it put me off OpenAI for a good four years.

GPT-3 was nearly not shit, and 3.5 was the same, just a bit faster.

It wasn't until GPT-4 came out that I decided this AI thing should now be called AI, because it was doing things I didn't think I'd see for decades.


I tried GPT-2 and thought it was interesting but not very useful yet.

I started using GPT-3 via the playground UI for things like writing regular expressions. That's when this stuff began to get useful.

I've been using GPT-4 on an almost daily basis since it came out.


The hype around GPT-2 was ridiculous. It made me firmly put OpenAI into "grifters, idiots, or probably both" territory.

Turns out they were just grifters, as the hilarious mess around Sam Altman's coup/counter-coup/coup showed us.


I don't know what your operating definition of "grifter" is, but for me, a grifter is not a company that delivers a product which gains large adoption and mindshare (ChatGPT) and essentially sets the world on fire. (This isn't an opinion on Altman specifically, but on OpenAI at large.)


My definition is someone outright lying that GPT-2 was AGI that needed to be regulated, just so they could raise more money for the next round of training.


Who said GPT-2 was AGI?


OTOH, early adopters and true believers were often different people in cryptocurrency.





