
> Despite feeling like a "let me draw it for you" answer is a tad condescending, I want to address something here.

I didn't mean it to be condescending - though I can see how it can come across that way. FWIW, I opted for a diagram after I typed half a page of "normal" text and realized I still wasn't able to elucidate my point - so I deleted it and drew something matching my message more closely.

> This would be great if LLMs did not tend to output nonsense. Truly it would be grand. But they do. So it isn't.

I find this critique tiring at this point - it's just as wrong as assuming LLMs work perfectly and all is fine. Both views are too definite, too binary. In reality, LLMs are simply non-deterministic - that is, they have an error rate. How big it is, and how small it can get in practice for a given task - those are the important questions.

Pretty much every aspect of computing is only probabilistically correct - either because the algorithm is explicitly so (UUIDs and primality testing, for starters), or just because it runs on real hardware, and physics happens. Most people get away with pretending our systems are either correct or not, but that's only possible because the error rate is low enough. And it's never that low by accident - it got pushed there by careful design at every level, hardware and software. LLMs are just another probabilistically correct system that, over time, we'll learn to use in ways that get the error rate low enough to stop worrying about it.

How can we get there - now, that is an interesting challenge.




Natural language has a high entropy floor. It's a very noisy channel. This isn't anything like bit flipping or component failure - it's a whole different league. And we've been pouring outrageous amounts of resources into diminishing returns. OpenAI keeps touting AGI and burning cash. It's being pushed everywhere as a silver bullet, helping spin layoffs as a good thing.

LLMs are cool technology, sure. There are a lot of cool things in the ML space. I love it.

But don't pretend like the context of this conversation isn't the current hype and that it isn't reaching absurd levels.

So yeah we're all tired. Tired of the hype, of pushing LLMs, agents, whatever, as some sort of silver bullet. Tired of the corporate smoke screen around it. NLP is still a hard problem, we're nowhere near solving it, and bolting it on everything is not a better idea now than it was before transformers and scaling laws.

On the other hand, my security research business is booming, and hey, the rational thing for me to say is: by all means, keep putting NLP everywhere.



