I encounter "spicy autocomplete" style comments far more often than techbro AI-everything comments, and it's frankly getting boring.
I've been doing AI things for about 20+ years and LLMs are wild. We've gone from specialized systems being pretty bad at their jobs to general-purpose systems that are better at those jobs and at everything else. The idea that you could make an API call with "is this sarcasm?" and get a better-than-chance guess is incredible.
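To make that concrete, here is a minimal sketch of that kind of call, assuming the OpenAI Python SDK; the model name and prompt wording are my own illustrative choices, not something from the comment:

    # Sarcasm detection as a single API call: a task that used to
    # require a purpose-built classifier and labeled training data.
    # Assumes the OpenAI Python SDK; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def is_sarcasm(text: str) -> bool:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer with exactly 'yes' or 'no'."},
                {"role": "user",
                 "content": f"Is this sarcasm? {text}"},
            ],
        )
        return resp.choices[0].message.content.strip().lower().startswith("yes")

    print(is_sarcasm("Oh great, another meeting. Just what I needed."))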
Eh, I see far more "AI is the second coming of Jesus" type comments than healthy skepticism. A lot of anxiety from people afraid that their source of income will dry up, and a lot of excitement from people with an axe to grind that "those entitled, expensive peasants will get what they deserve".
I think I count myself among the skeptics nowadays for that reason. And I say this as someone who thinks LLMs are an interesting piece of technology, but with somewhat limited use and unclear economics.
If the hype was about "look at this thing that can parse natural language surprisingly well and generate coherent responses", I would be excited too. As someone who had to do natural language processing in the past, that is a damn hard problem to solve, and LLMs excel at it.
But that is not the hype, is it? We have people beating the drums about how this is just shy of taking the world by storm, AGI is just around the corner, and it will revolutionize the whole economy and society and nothing will ever be the same.
So, yeah, it gets tiresome. I wish the hype would die down a little so this could be appreciated for what it is.
> We have people beating the drums about how this is just shy of taking the world by storm, AGI is just around the corner, and it will revolutionize the whole economy and society and nothing will ever be the same.
Where are you seeing this? I pretty much only read HN and football blogs so maybe I’m out of the loop.
I'm saying the intelligence factor doesn't matter. Only the utility factor. Today LLMs are incredibly useful, and every few months there appear to be bigger and bigger leaps.
Analyzing whether or not LLMs have intelligence is missing the forest for the trees. This technology is emerging in a capitalist society that is hyper-optimized to adopt useful things at the expense of almost everything else. If the utility/price point gets hit for a problem, the technology will replace whatever currently solves it, regardless of whether it is intelligent.
I agree, and as a non-software engineer, all that matters to me right now is how much these models can replace software engineering.
If a language model can't solve problems in a programming language, then we are just fooling ourselves about less well-defined domains of "thought".
Software engineering is where the rubber meets the road in terms of intelligence and economics when viewing our society as a complex system. Software engineering salaries are above average exactly because most average people are not going to be software engineers.
From that point of view, the progress is not impressive at all. The current models are really not that much better than GPT-4 was in April 2023.
AI art is a better example, though. There is zero progress being made now; it is only impressive at the most surface level, to someone not involved in art who can't see how incredibly limited the AI art models are. We have already moved on to video, where we are building the same half-baked, useless models that are only good for making marketing videos for press releases about progress and one-off social media posts about how much progress is being made.
But if you want to predict the future utility of these models, you want to look at their current intelligence, compare it to humans, and try to figure out roughly what skills they lack and which of those are likely to get fixed.
For example, a team of humans is extremely reliable, much more reliable than one human, but a team of AIs isn't more reliable than one AI, since an AI is already an ensemble model. That means even if an AI could replace a person, it probably can't replace a team for a long time, meaning you still need the other team members there, meaning the AI didn't really replace a human; it just became a tool for humans to use.
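Here is the back-of-the-envelope version of that reliability argument, as a quick sketch; the probabilities and the fully-correlated assumption are made up for illustration:

    # Toy reliability math behind the team-vs-ensemble point.
    # Numbers are illustrative assumptions, not measurements.
    p_human = 0.7   # chance one human catches a given mistake
    team = 3

    # Independent humans: the team misses only if everyone misses.
    p_team = 1 - (1 - p_human) ** team
    print(f"one human: {p_human:.2f}, team of {team}: {p_team:.3f}")  # ~0.973

    # Samples from one model share blind spots, so they are correlated;
    # in the fully correlated limit, more samples add nothing:
    p_model = 0.7
    p_model_x3 = p_model  # a correlated "team" is no better than one run
    print(f"one model: {p_model:.2f}, three correlated runs: {p_model_x3:.2f}")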
I personally wouldn't be surprised if we start to see benchmarks around this type of cooperation and ability to orchestrate complex systems in the next few years or so.
Most benchmarks really focus on one problem, not on multiple real-time problems while orchestrating 3rd party actors who might or might not be able to succeed at certain tasks.
But I don't think anything prevents these models from being able to do that.
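For what it's worth, a benchmark like that could be as simple as scoring a coordinator against flaky workers. A hypothetical sketch, with all names, numbers, and scoring invented for illustration:

    # Hypothetical harness for the kind of benchmark described above:
    # orchestrate flaky third-party workers, reassigning failed tasks.
    import random

    def flaky_worker(task: str, reliability: float) -> bool:
        return random.random() < reliability  # may or may not succeed

    def orchestrate(tasks, workers, max_rounds=3):
        done = set()
        for _ in range(max_rounds):
            for task in tasks:
                if task in done:
                    continue
                # route each task to the currently most-trusted worker
                name, reliability = max(workers.items(), key=lambda w: w[1])
                if flaky_worker(task, reliability):
                    done.add(task)
                else:
                    workers[name] = reliability * 0.5  # distrust after failure
        return len(done) / len(tasks)  # fraction completed = the score

    score = orchestrate(["parse", "plan", "deploy"],
                        {"agent_a": 0.9, "agent_b": 0.6})
    print(f"orchestration score: {score:.2f}")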