I don't know why anyone is surprised that a statistical model isn't getting 100% accuracy. The fact that statistical models of text are good enough to do anything should be shocking.



I think the surprising aspect is rather how people are praising 80-90% accuracy as the next leap in technological advancement. Quality is already in decline, despite LLMs, and programming has always been a discipline where correctness and predictability mattered. It's an advancement for efficiency, sure, but at a yet-unknown cost to stability. I'm thinking about all the simulations built on applied mathematical concepts and all the accumulated hours spent fixing bugs - there's now this certain aftertaste, sweet for those living their lives efficiently, but very bitter for the ones relying on stability.


You're completely correct, of course. The issue is that most people are not looking for quality, only efficiency. In particular, business owners don't care about sacrificing some correctness if it means they can fire slews of people. Worse, gullible "engineers" who should be the ones prioritizing correctness are so business-brainwashed themselves that they likewise lap up this nonsense, sacrificing their own concern for the only principles that made the software business even remotely worthy of the title "engineering".


That "good enough" is the problem. It requires context. And AI companies are selling us that "good enough" with questionable proof. And they are selling grandiose visions to investors, but move the goal post again and again.

A lot of companies have made Copilot available to their workforce. I doubt that the majority of users understand what a statistical model is. The casual, technically inexperienced user just assumes that a computer's answer is always right.



