Hacker News

I think this is the best economic function of open source: it forces innovation by eliminating rent-seeking.

If you rest on your laurels, someone will make a solution that's at least 70% as good for free.




It's a nice sentiment and I agree, but I'm not sure we'll see this applied to AI going forward. These training runs cost a lot of money - at some point, every player in the game will realise that they need to charge something or they'll be left without enough chips to play in the next round.

DeepSeek's altruism has taken them far, but they have costs, too, and High-Flyer / their personal war chest can only take them so far. And that's before any potential government intervention - it's very likely that this will become a natsec concern for all nations involved.


There are models with open training data as well as open weights. It's not ChatGPT, but such a trained model can be fully audited and reused without further training. I think for most daily use I'd be fine using a greatly outdated model. Open source has always relied on contributors eating costs—even just the opportunity cost of contributing time.

FLOSS software is slow, much slower than VC-funded explosive growth, but it's hard to compete with in the long term.


It'll get increasingly difficult for open models to keep up with proprietary frontier models as the costs involved increase. Yes, there will always be older open models available, but at some point, they just won't be as competitively useful.

Your code assistant that can output a file's worth of code will pale in comparison to systems that can create entire projects in seconds. If it costs billions to reproduce the latter, who's going to do it and give it away for free?

There's a possible solution here in the form of distributed training, but that's still tentative and will always lag behind the centralised training the big players can do.


That assumes optimistic scaling behavior between money spent on training and performance. Surely there are diminishing returns.


>who's going to do it and give it away for free

Why not DeepSeek? Sometimes rich tech bros have billion-dollar hobbies instead of billion-dollar yachts or billion-dollar foundations. It just takes a few visionary types - not monetarily motivated, with more money than sense - willing to turn their very, very expensive hobbies into charities. Now if we enter the hundreds-of-billions / trillions territory...


Facebook has also taken the approach of undercutting competitors and betting on subsidizing the work through their other products. It's a risky bet to assume the LLM tech itself will last long as a moat.


Depends on whether DeepSeek is an LLM business foremost, vs. a hobby the founder is willing to sink obscene resources into simply for personal fulfillment and the lulz. Maybe DeepSeek will become the core business / primary money maker over the High-Flyer hedge fund. Or maybe DeepSeek continues to be the expensive yacht or car collection where protecting the moat doesn't matter. Maybe the founder gets more fulfillment doing the work and inspiring others; i.e., Liang was pretty explicit that he wanted to see a world where the PRC innovates and contributes to global standards instead of merely following. In the meantime, it seems like DeepSeek isn't loss-making, so it's not even a prohibitive hobby yet. But it's not outside the realm of possibility that a rich bro simply dgaf about returns on a passion project funded by his primary income stream.


Yeah well, but you should add OpenAI to that "altruism" first. They're the ones burning money like crazy who could find themselves at the end of their runway quite quickly. It isn't crazy to think DeepSeek could break even and make a good profit before OpenAI goes broke, if the latter doesn't find another investor with even bigger pockets.


I think there's always going to be somebody that doesn't want to bother with the complexities of setting up a model to run locally.

Spending way too much time trying to track down a very particular version of a GPU driver or similar just isn't worth it if you can make an API call to some remote endpoint that's already done the heavy lifting.

Plenty of value in handling the hard part so your customer doesn't have to.

I don't know how much of the current focus on local models comes from privacy concerns, but at least some does. Once there's something like the GDPR but for data provided for inference, I think even more people will put down the Docker containers and pick up the REST endpoints.
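To illustrate the "heavy lifting" asymmetry: the client side of hosted inference is just an HTTP request, since the provider handles the GPUs, drivers, and weights. A minimal sketch, assuming a hypothetical OpenAI-style chat endpoint (the model name and payload shape here are illustrative, not any specific provider's API):

```python
import json

def build_chat_request(prompt, model="some-hosted-model"):
    """Build an OpenAI-style chat-completion payload.

    The model name is a placeholder; a real client would use the
    provider's documented model identifier.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarise this file.")
body = json.dumps(payload)
# POSTing `body` to the provider's chat endpoint is all the local
# machine has to do: no GPU drivers, no model weights on disk.
```

Compare that to the local path, where you own driver versions, quantisation choices, and VRAM budgeting yourself - that gap is the "hard part" a hosted service sells.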


Or they'll pursue regulatory capture; that seems more likely.



