
There's someone with this comment in every thread. Meanwhile, no one answers this because they are getting value. Please take the time to learn, it will give you value.



I’m a consultant. Having looked at several enterprises, there’s a lot of work being done to make a lot of things that don’t really work.

The bigger the ambition, the harder they're failing. Some well-designed, isolated use cases are OK, mostly things like listening to and summarizing text to aid humans.

I have yet to see a successful application that is generating good content. IMO, replacing the first draft of content creation and having experts review and fix it is, like, the stupidest strategy you can adopt. The people you replace are the people at the bottom of the pyramid who are supposed to do this work to upskill and become ___domain experts so they can later review stuff. If they're no longer needed, you're going to one day lose your reviewers, and with them, the ability to assess your generated drafts. It's a footgun.


> Having looked at several enterprises, there’s a lot of work being done to make a lot of things that don’t really work.

Is this a new phenomenon that started post-LLM?


I mean, no, not generally. But the success rate of other tools is much higher.

A lot of companies are trying to build these general-purpose bots that just magically know everything about the company and have these big knowledge bases, but they just don't work.


It gives me value but I am not even sure it is $20 a month of value at this point.

It was in 2023 but I picked all the low hanging fruit.

More importantly though, where is all the great output from the people who are getting so much value out of the models?

Is it all privately held? How can that be, with millions of people using these models?


I'm someone who generally was a "doubter", but I've dramatically softened my stance on this topic.

Two things: I was casually watching Andreas Kling's streams on Ladybird development (where he was developing a JIT compiler for JS) and was blown away by the accuracy of the completions (and the frequency of those completions).

Prior to this, I'd only ever copypasta'd code from ChatGPT output on occasion.

I started adopting the IDE/Editor extensions and prototyping small projects.

There are now small tools and utilities I've written that I'd not have written otherwise, or that would have taken twice the time had I not used these tools.

With that said, they'd be of no use without oversight, but as a productivity enhancement, the benefits are enormous.


For my mental health, I've stopped replying to comments where it's clear the author has no intention of having a discussion and instead wants to share their opinion and have it reinforced by others.

No, we don’t have AGI or anything close to it. Yes, AI has come a long way in the past decade and many people find it useful in their day-to-day lives.

It’s difficult to know where AI will be in 10 years, but the current rate of improvement is staggering.


Something can generate value and still have negative unit economics.


> Meanwhile, no one answers this because they are getting value.

You're literally doing the same thing you're accusing others of. Every HN thread is full of AI boosters claiming AI to be the future with no backing evidence.

Riddle me this. If all these people are "getting value", why are all these companies losing horrendous amounts of money? Why has nobody figured out how to be profitable?

> Please take the time to learn, it will give you value.

Yeah, yeah, just prompt engineer harder. That'll make the stochastic parrot useful. Anyone who has criticism just does so because they're dumb and you're smart. Same as it always was. Everyone opposed to the metaverse just didn't get it bro. You didn't get NFTs bro. You didn't get blockchain bro.

None of these previous bubbles had money in them (beyond scamming idiots). If AI wants to prove it's not another empty tech bubble, pay up. Show me the money. It should be easy, if it's automating so many expensive man-hours of labour. People would be lining up to pay OpenAI.


There’s clearly some value. People are paying for something.

> AI start-ups generate money faster than past hyped tech companies

https://www.ft.com/content/a9a192e3-bfbc-461e-a4f3-112e63d0b...


> Riddle me this. If all these people are "getting value", why are all these companies losing horrendous amounts of money? Why has nobody figured out how to be profitable?

While I agree that LLMs are not currently working great for most envisioned use cases, the premise here is not a good argument. Large LLM providers are not trying to be profitable at the moment. They're trying to grow, and that's pretty sensible.

Uber was the poster child of this, and for all the mockery it received, Uber is now a solidly profitable company.


I'm not sure I would call it sensible to incinerate $11B a year, to the point where you need to do one of the biggest raises ever and it doesn't even buy you a year of runway.


Based on their forecasts, it's still pretty sensible. I don't personally believe the forecasts are sensible, but that's beside the point.


Think of all the search engines (alltheweb, yahoo, astalavista, ...) where so much money got poured in, and in the end there was just one winner taking it all. That's the race OpenAI is trying to win now. The competition is fierce, and we can just play with all kinds of models for free, yet we do nothing but complain.


> Why has nobody figured out how to be profitable?

From what I've seen claimed about OpenAI finances, this is easy: It's a Red Queen's race — "it takes all the running you can do, to keep in the same place".

If their financial position were as simple as "we run this API, we charge X, the running cost is Y", then they'd already be at X > Y.

But if that was all OpenAI were actually doing, they'd have stopped developing new versions or making the existing models more efficient some time back, while the rest of the industry kept improving their models and lowering their prices, and they'd be irrelevant.

> People would be lining up to pay OpenAI.

They are.

Not that this is either sufficient or necessary to actually guarantee anything about real value. For lack of sufficiency: people collectively paid a lot for cryptocurrencies and NFTs, too (and, before then and outside tech, homeopathic tinctures and sub-prime mortgages). For lack of necessity: there are plenty of free-to-download models.

I get a huge benefit even just from the free chat models. I could afford to pay for better models, but why bother when free is so good? Every time a new model comes out, the old paid option becomes the new free option.



