Hacker News




> Imagine not understanding that their main way of doing money is through their API for other companies, and not through a product.

Or, more to the point: Their primary product is B2B, not B2C.


There's no shortage of valuable B2B products to be built through an AI API.


In principle, yes.

In practice, it's a little trickier. AI APIs give roughly human-level results in many cases, but they don't work well 100% of the time, which undermines trust in an automated B2B product meant to replace human labor.

There are still applications that aren't meant to replace human labor, just to generate natural language very quickly. I've even seen this done in academia. And the API models are expected to become more reliable over time.


Reliability is much more achievable than it was 1-2 years ago.

There was an article early on, which I'm still trying to find, that argued the direction of attention was counter-intuitive this time around.

Some of this is in the weeds, but those who have spent the last 12-18 months actively trying to use AI often say and see different things, because they know which capabilities have arrived, while the rest of us hang on to an understanding formed the last time we looked.

Where I think I see things differently, both in what I've been able to show and deliver, is that applying this technology needs to come before, or right alongside, developing it. This has largely been overlooked so far.

Learning how to use something new and genuinely different requires trying it: seeing what it can do reliably, seeing what it's improving at, and helping organizations with money to fund something they need.

The big one, still, is that something that's not possible today has to be looked at through a lens of not if, but when. And the "when" has been arriving much quicker.

They can get much closer to working well 100% of the time.

Often by not using them at every step, and by not expecting a magic wand to figure it all out.

Human labor replacement isn't anything new. Technology in general (including LLMs) will make a dent in anything simple or repetitive.

The applications, for natural language or otherwise, are quite valid. Someone in academia approached me to design a guide for grading in a professor's personalized style. The irony was that there are ways to do it without an LLM that might equal what's possible today, but what they were asking for was valid.

One of the big issues that's improving is not just API "reliability", but the cost of running things per request through the API, or of reasonably running some or all of it locally.
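To make the per-request cost math concrete, here is a minimal sketch; the per-token prices below are invented placeholders for illustration, not any provider's published rates:

```python
def api_cost_per_request(prompt_tokens, completion_tokens,
                         price_in_per_1k=0.01, price_out_per_1k=0.03):
    """Estimate the hosted-API cost of a single request.

    The default prices are hypothetical, chosen only to show the shape
    of the calculation: input and output tokens are usually billed at
    different per-1K-token rates.
    """
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# A 1,500-token prompt producing a 500-token reply:
print(f"${api_cost_per_request(1500, 500):.4f}")  # prints "$0.0300"
```

Multiplying an estimate like this by expected request volume is what decides whether the API, or a locally hosted model, is the cheaper option.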


If other people build it using OpenAI's API, that's still B2B from OpenAI's point of view, though.


And at release, ChatGPT was meant as a marketing gimmick: a fun way to interact with a slightly fine-tuned version of GPT-3.5 to showcase how good their models had become.

If anything it's remarkable how much they leaned into this success, building an iOS and Android app, speeding up the models, adding a premium plan, lots of new features, and eventually deprecating their text-completion mode and going all in on chat as the interaction mode for their LLMs.


Meanwhile Amazon will host Llama and other models in AWS (which you are already using) at reasonable rates.


Their numbers aren't public, and I'm not at all certain that they're making significantly more money through the API than they are through paid subscribers to their products.

They have a LOT of paid subscribers, and they're signing big "enterprise" deals with companies that have thousands of seats.


I believe I read somewhere that it’s estimated only ~20% of their revenue is via the API.

Found a similar source[0] saying 15% estimated from API use.

[0] https://www.notoriousplg.ai/p/notorious-openais-revenue-brea...


Of course, you need companies to build products on top of it, and that takes time. So I would not be surprised if, in the first few years, subscriptions earn more than API usage. But in the long term, if OpenAI stays the top AI model provider, they will earn massive money through API calls.


How do you know they have a lot of paid subscribers if their numbers aren't public?

I'd guess that outside of a core "fanbase" / early adopter type they don't have that many subscribers.


Rumors. The Information had a story about this (but it's behind a paywall): https://www.theinformation.com/articles/openais-annualized-r... - their story was based on "people with knowledge of the situation" aka insider leaks.

I don't know the source for https://www.notoriousplg.ai/p/notorious-openais-revenue-brea... but it could just be that same Information article.


They leak like a sieve to their trusted testers.


> Imagine not understanding that their main way of doing money is through their API for other companies, and not through a product. They are focused on doing something they are good: good AI models, they let other companies take the risk to build product on top of it, and reap benefits from theses products.

There is no moat in an API-gated foundation model. One LLM is as good as any other, and it'll be a race to the bottom.

The only way to mint a new FAANG is to build a platform that captivates and ensnares the populace, like iPhone or Instagram.

The value in AI will be accrued at the product layer, not the ML infra tooling, not the foundation model. The product layer.

It might be too late to do this with LLMs and voice assistants, though. OpenAI is super distracted, and there's plenty of time for Google, Meta, and Apple to come in and fill the void.

Everyone was too busy selling the creation of gods, or spreading FOMO to elevate themselves to lofty valuations. At the end of the day, business still looks the same as it always has: create value for customers, ideally in a big market where you can own a large slice. LLMs and foundation models are fungible and easy.


> LLMs and foundation models are fungible and easy.

The top couple LLMs are extraordinarily expensive - will get dramatically more expensive yet - and are one of the most challenging products that have been created in all of human history.

If what you're claiming were true, it wouldn't cost so much for Meta and OpenAI to do their models, and it wouldn't take trillion dollar corporations as sponsors to make it all work.

> One LLM is as good as any other, and it'll be a race to the bottom.

Very clearly not correct. There will be very few top tier LLMs due to the extreme cost involved and the pulling up of the ladders regarding training data. This same effect has helped shield Google and YouTube from competition.

> There is no moat in an API or foundation model.

Do you have a billion dollars? No? Then you can't keep up over the next decade. Soon that won't even get you into the party. Say hello to an exceptional moat.


> The top couple LLMs are extraordinarily expensive - will get dramatically more expensive yet - and are one of the most challenging products that have been created in all of human history.

I disagree. The more we learn about LLMs the more it appears that they're not as difficult to build as it initially seemed.

You need a lot of GPUs and electricity, so you need money, but the core ideas (dump in a ton of pre-training data, then run layers of instruction tuning on top) are straightforward enough that there are already 4-5 organizations capable of training GPT-4-class LLMs, and it's still a pretty young field.
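The two-stage recipe described above can be caricatured with a toy model. Here a bigram counter stands in for the transformer; "pre-training" fits it on generic text, then "instruction tuning" continues fitting on a small instruction-formatted corpus. This is an illustrative sketch only, and nothing here reflects how any real lab trains its models:

```python
from collections import Counter, defaultdict

def fit(counts, corpus):
    # Count next-token occurrences; this plays the role of "training".
    for text in corpus:
        tokens = text.split()
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def predict(counts, token):
    # Return the most frequent continuation seen during training.
    return counts[token].most_common(1)[0][0]

model = defaultdict(Counter)
# Stage 1: "pre-training" on a generic corpus.
fit(model, ["the cat sat on the mat", "the dog ran home"])
# Stage 2: "instruction tuning" on instruction-formatted examples.
fit(model, ["User: hi Assistant: hello"] * 2)

print(predict(model, "Assistant:"))  # prints "hello"
```

The point of the caricature is that both stages are the same simple operation on different data, which is roughly why the recipe is considered reproducible by anyone who can afford the compute.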

Compared to human endeavors like the Apollo project LLMs are pretty small fry.


100%. I don't think we should at all minimize the decades of research that it took to get to the current "generative AI boom", but once transformers were invented in 2017, we basically found that just throwing more and more data at the problem gave you better results.

And not to discount the other important advances like RLHF, but the reason everyone talks about the big model companies as having "no moat" is because it's not really a secret of how to recreate these models. That is basically the complete opposite of, say, other companies that really do build "the most challenging products that have been created in all of human history." E.g. nobody has really been able to recreate TSMC's success, which requires not only billions but a highly educated, trained, and specialized workforce.


I also immediately thought about Apollo when I read "most challenging products that have been created in all of human history".


> [Large Language Models] are one of the most challenging products that have been created in all of human history.

That statement seems, er... hyperbolically grandiose.

Are there examples of other products which you think are similarly "challenging" to create?


Sort of like railroads: a super simple product, but insanely expensive to build into a network at scale, except for those who already have one.


Mistral AI has released their updated Mistral Large model, and it gets basically the same scores on the Chatbot Arena leaderboard as a GPT-4 version from the end of 2023.

OpenAI has to constantly keep moving and improving their models, with zero forgiveness for any complacency, and so far they have only achieved a lead of less than a year over an underfunded competitor.

Meanwhile Anthropic and Google are managing to build commercial models that are on par with GPT-4.



