It’s interesting that the LLM revolution is often compared to the calculator, because a calculator that made random calculation mistakes would never have been adopted so widely in critical systems. That’s the whole point of a calculator: we never double-check the result. But one day we will stop checking the results of LLMs too, despite the statistical margin of error inherent in them.
Right: When I avoid memorizing a country's capital city, that's because I can easily tell when I will want it later and reliably access it from an online source.
When I avoid multiplying large numbers in my head, that's because I can easily characterize the problem and reliably use a calculator.
Neither is the same as people trying to use LLMs to unreliably replace critical thinking.
The critical difference is that (natural) language itself is in the ___domain of statistical probabilities. The nature of the ___domain is that multiple outputs can all be correct, with some more correct than others, and variations producing novelty and creative outputs.
This differs from closed-form calculations where a calculator is normally constrained to operate--there is one correct answer. In other words "a random calculation mistake" would be undesirable in a ___domain of functions (same input yields same output), but would be acceptable and even desirable in a ___domain of uncertainty.
We are surprised and delighted that LLMs can produce code, but they are more akin to natural language outputs than code outputs--and we're disappointed when they create syntax errors, or worse, intention errors.
> But one day we will stop checking the results of LLMs too, despite the statistical margin of error inherent in them.
I don't follow this statement: if anything, we absolutely must check the result of an LLM for the reason you mention. For coding, there are tools that attempt to check the generated code for each answer, to at least guarantee that the code runs (whether it's relevant, optimal, or bug-free is another issue, and one that is not so easy to check without context, which can be significant at times).
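As a minimal sketch of the kind of check I mean, assuming the generated code is Python (the helper name is made up, and a real tool would sandbox the execution):

    import ast
    import os
    import subprocess
    import sys
    import tempfile

    def runs_at_all(generated_code: str, timeout_s: float = 10.0) -> bool:
        """Check that LLM-generated Python parses and then executes cleanly.
        Passing says nothing about relevance, optimality, or hidden bugs."""
        try:
            ast.parse(generated_code)  # reject plain syntax errors without running anything
        except SyntaxError:
            return False
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(generated_code)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=timeout_s)
            return result.returncode == 0
        except subprocess.TimeoutExpired:
            return False
        finally:
            os.unlink(path)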
I mean, I do check absolutely everything an LLM outputs. But following the calculator analogy, if it goes that way, no one will check the result of an LLM in the future, just like no one ever checks the result of a complex calculation. People get used to the fact that it's correct a large percentage of the time. That might allow big companies to manipulate people: a calculator isn't plugged into the cloud, so it can't falsify results depending on who you are and make your projects fail.
I see a whole new future of cyber warfare being created. It'll be like the reverse of a prompt engineer: an injection engineer. Someone who can tamper with the model just enough to sway a specific output that causes <X>.
100% agree. Loyalty goes both ways, and we rarely see loyal employers (massive layoffs, including high-profile employees who dedicated their lives to the company).
Loyalty to a corporation is misplaced because it can only be as loyal as its agents are, and those are numerous and constantly shifting.
I think it's fine to be loyal to individuals who have earned it, but don't make the mistake of thinking your boss can guarantee your employment in all circumstances; that's not how the corporate world works.
Going this route, what's the point of learning anything if everything is instantly accessible from an AI, with working solutions? So no learning means no teaching, or teaching that feels useless. That's a weird and dangerous road. Everyone should own this technology for the situation to be balanced, not a private company or a single country. Because we make ourselves kind of useless in the process, we lose leverage and value, and we end up at the mercy of the powerful.
> If people stop the occasional deadlock of grinding teeth, looking at a problem, crying, going for a walk, praying and screaming until suddenly it makes sense (and you learn something!), I’d call it severe regression, not progress.
People for whom development is not their job will absolutely want to get rid of it as much as possible, because it costs money. I really agree with the author: it does feel like a regression, and it's so easy to overlook what makes up most of the job when it looks like it can be fully automated. Once you no longer have people who are used to doing what's quoted above, and there are 500 million lines of code and bugs, good luck asking a human to take a look. Maybe AI will be powerful enough to help with debugging, but it's a dangerous endeavor to build a critical business around that. If for any reason (political or otherwise) AI got more expensive, it could kill businesses (Twitter API?).
I downloaded the turbo Whisper model optimized for Mac and created a Python script that gets the mic input and pastes the result. The Python script is LLM-generated, and it works by pushing a key.
That's 80% of the functionality, for free, and done locally.
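Roughly what such a script can look like (a minimal sketch, assuming the openai-whisper, sounddevice, numpy, and pyautogui packages; the fixed 5-second recording and Enter-to-record trigger are simplifications of the push-key setup described above):

    import numpy as np
    import sounddevice as sd
    import whisper   # the openai-whisper package
    import pyautogui

    SAMPLE_RATE = 16000  # Whisper expects 16 kHz mono audio
    model = whisper.load_model("turbo")  # the turbo model mentioned above

    def record_seconds(seconds: float) -> np.ndarray:
        """Record from the default microphone as float32 mono samples."""
        audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                       channels=1, dtype="float32")
        sd.wait()  # block until the recording is done
        return audio.flatten()

    def transcribe_and_type(seconds: float = 5.0) -> None:
        """Transcribe a short recording and type it into the focused window.
        (A real script might put the text on the clipboard and paste instead.)"""
        audio = record_seconds(seconds)
        result = model.transcribe(audio, fp16=False)
        pyautogui.typewrite(result["text"])

    if __name__ == "__main__":
        input("Press Enter, then speak for 5 seconds...")
        transcribe_and_type()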
Well, the scope is much broader for an LLM than for a calculator. Why should I hire you if an agent can do it? With LLMs, every job is the calculator and can be replaced. Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent. So for all the students who let the LLM do their assignments and learned basically nothing, what's their value to a company hiring? The company will just use the agent as well, and already is…
An agent can't do it. It can help you like a calculator can help you, but it can't do it alone. So that means you've become the programmer. If you want to be the programmer, you always could have been. If that is what you want to be, why would you consider hiring anyone else to do it in the first place?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
It was Shopify, but that's just a roundabout way to say that there is a hiring freeze due to low sales (no doubt because of tariff nonsense seizing up the market). An agent, like a calculator, can only increase the productivity of a programmer. As always, you still need more programmers to perform more work than a single programmer can handle. So all they are saying is that "we can't afford to do more".
> The company will just use the agent as well, and already is…
In which case wouldn't they want to hire those who are experts in using agents? If they, like Shopify, have become too poor to hire people – well, you're screwed either way, aren't you? So that is moot.
So, arguably, before calculators people did calculations by hand, and there were rooms full of people doing them. That's gone now, thanks to calculators. But the analogy goes an order of magnitude further: now fewer people can "do" the job of many, so maybe less hiring, and not just for "doing calculations by hand" but in almost every field where software is used.
Where will all those new students find a job if:
- they did not learn much because the LLM did the work for them
- there are no new jobs because we are more productive?
Never in the history of humans have we been content with stagnation. The people who used to do manual calculations soon joined the ranks of people using calculators and we lapped up everything they could create.
This time around is no exception. We still have an infinite number of goals we can envision a desire for. If you could afford an infinite number of people you would still hire them. But Shopify especially is not in the greatest place right now. They've just come off the COVID wind-down and now tariffs are beating down their market further. They have to be very careful with their resources for the time being.
> - they did not learn much because the LLM did the work for them
If companies are using LLMs as suggested earlier, they will find jobs operating LLMs. They're well poised for it, being the utmost experts in using them.
> - there are no new jobs because we are more productive?
More productivity means more jobs are required. But we are entering an age where productivity is bound to be on the decline. A recession was likely inevitable anyway and the political sphere is making it all but a certainty. That is going to make finding a job hard. But for what scant few jobs remain, won't they be using LLMs?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
Spotify CEO is channeling The Two Bobs from Office Space: "What are you actually doing here?" Just in a nastier way, with a kind of prisoner's dilemma on top. If you can get by with an agent, fine, you won't bother him. If you can't, why can't you? Should we replace you with someone who can, or thinks they can?
You as the employer are liable. A human has real reasoning abilities and real fears about messing up; the likelihood of them doing something absurd, like telling a customer that a product is 70% off, and not losing their job is effectively nil. What are you going to do with the LLM, fire it?
Data scientists, and people deeply familiar with LLMs to the point that they could fine-tune a model to your use case, cost significantly more than a low-skilled employee, and depending on the liability, just running the LLM may be cheaper.
Take an accounting firm (one example from above): as far as I know, in most jurisdictions the accountant doing the work is personally liable. Who would be liable in the case of the LLM?
There is absolutely a market for LLM-augmented workforces, but I don't see any viable future, even with SOTA models right now, for flat-out replacing a workforce with them.
I fully agree with you about liability. I was advocating for the other point of view.
Some people argue that it doesn't matter if there are mistakes (it depends which, actually) and that with time it will cost nothing.
I argue that if we give up learning and let the LLM do the assignments, then what is the extent of my knowledge, and my value to be hired in the first place?
We hired a developer and he did everything with ChatGPT: all the code and documentation he wrote. At first it was all bad, because out of the infinity of possible answers, ChatGPT doesn't pinpoint the best one in every case. But does he have enough knowledge to understand that what he did was bad? We need people with experience, who have confronted hard problems themselves and found their way out. How else can we challenge and critique an LLM's answer?
I feel students' value is diluted, leaving them at the mercy of the companies providing the LLMs, and we might lose some critical knowledge and critical thinking from the students in the process.
I agree entirely with your take regarding education. I feel like there is a place where LLMs are useful without impacting learning, but it's definitely not in the "discovery" phase of learning.
However, I really don't need to implement some weird algorithm myself every time (ideally I am using a well-tested library), but the point is that you learn so that you are able to, and also able to modify or compose the algorithm in ways the LLM couldn't easily do.
As for your GPS analogy: at worst, you can still follow directions road sign by road sign.
For a job, without the core knowledge, what's the point of hiring one qualified person over an unqualified one who just writes prompts, or worse, hiring no one and letting agents do the prompting?
> "Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success"
Are you sure about that? Do we have proof of that? It has happened all the time throughout the history of science that many scientists were convinced of something, and of a model of reality, right up until someone discovered new evidence or proposed a new, coherent model. That's literally the history of science: disproving what we thought was an established model.
Indeed, a good point. My comment assumes that our current model of the human brain is (sufficiently) complete.
Your comment reveals an interesting corollary: those who believe in something beyond our understanding, like the Christian soul, may never be convinced that an AI is truly sapient.