I contributed a minor (but imho still neat :p) improvement to Redis under its original license, and personally moved to redict when the unexpected license change to SSPL was announced - I felt betrayed as a contributor to a properly-FOSS codebase. (Had they switched to AGPL right away, I'd have been perfectly fine with that change from a moral perspective, ftr.)
I have a great deal of respect for antirez and recognize him as a kind and benevolent member of the FOSS community, but no matter what Redis, Inc. announces or does, they have lost my trust for good, and I will continue to use Redis forks for as long as they exist.
Yeah, we just did this whole ride with Elastic [0]: company changes the license out from under the community, community revolts, company gives up and changes it back. Both companies even pulled the same "it worked" excuse ("while it was painful, it worked", "this achieved our goal").
Neither company has built in a legal safety mechanism to prevent themselves from pulling the rug again later and both companies have shown themselves to be untrustworthy stewards. They came groveling back when it turned out that community goodwill really did matter after all, but this is definitely a "fool me twice, shame on me" situation.
Dunno about Redis, but for Elastic I still feel sorry for them being thrown around like a rag doll by Amazon. On principle I will not use the Amazon fork, because I don’t want to support a company that would prefer to fork a project rather than fork over some cash. Amazon is more than willing to sell you their Elasticsearch fork at a loss as long as they can eventually recoup the losses when Elastic inevitably dies. At which point they will naturally abandon the open source side of their fork and continue development in private, at a much slower rate, while doubling the price of the AWS service. At which point you’ll have no choice but to pay up, because there aren’t any competitors left.
> On September 16, 2024, the Linux Foundation and Amazon Web Services announced the creation of the OpenSearch Software Foundation.[15][16] Ownership of OpenSearch software was transferred from Amazon to OpenSearch Software Foundation, which is organized as an open technical project within the Linux Foundation.
OpenSearch is Apache License 2.0. You can do whatever you want to/with it. How are Elastic the good guys in your mind?
Amazon's product had confusing naming and positioning, and lagged in compatibility which created a headache for Elastic.
And they were using an open source piece of software, but scaling it with closed source secret sauce on top of that. The license was saying they'd have to open up their secret sauce or pay, and their response was to leave the table instead.
OpenSearch still lacks tons of really basic functionality from Elasticsearch, and watching how far ahead Elasticsearch got as a product shows how much free lunch Amazon was getting... and how much they actually cared about the product when it was time for them to put their money where their mouth was (despite AWS pulling Elastic's yearly revenue every 4 days).
Elasticsearch has generally matured like it has a passionate builder with a vision. They pushed on two major fronts (LLMs and Observability) and managed to execute effectively without letting product quality slip.
Meanwhile OpenSearch is still the place where tickets requesting 3+ year old ES functionality go to die.
-
I'm honestly on the opposite side: how is the hyperscaler, offering a strictly inferior product with closed extensions and still managing to eat the main developers' lunch solely because they already have established sales pipelines, not the bad guy?
I guess I'm not really someone who believes in infinite scaling rules: something that can work at the individual scale doesn't have to work when hyperscalers do it (like selling hosted clusters)
> Amazon's product had confusing naming and positioning, and lagged in compatibility which created a headache for Elastic.
Irrelevant.
Elastic decided to release software to the public with a license that granted everyone under the sun the right to use it how they saw fit. This included also selling services.
That became a big part of their business model as it drove up its adoption rates and popularity. If Elastic kept things like Elasticsearch proprietary, they would hardly have the same adoption rates.
You can't have it both ways. Why do you think people drop these projects the moment these corporations decide to pull the rug from under their userbase?
Of course it is relevant. TooBigTech is killing smaller companies just because it is too big.
When there are issues like this ("we made a successful product and someone forked it and it's killing us"), the "someone" is generally a BigTech, not 3 students in a garage. Then people complain about open source working/not working, and forget that the root problem is the monopoly that is screwing them.
No, it is irrelevant. Try to think about it for a second. You release a project under a license that grants everyone the right to use it as they see fit, including building a business on it. Everyone starts using the project as they see fit.
Do you believe you now have the right to retroactively pull the license on a whim?
Try to think about it for a second: multiple problems can exist in parallel:
1. Some company uses a permissive licence, sees a competitor making a proprietary product from their base, and starts whining. Too bad, they should have used a copyleft licence from the beginning.
2. Some company makes a nice product. TooBigTech sees it and builds an alternative (be it a fork or from scratch, I don't care). TooBigTech offers it for free (because they can, because they are too big) and captures the market. The original smaller company dies, because they can't offer their product for free. Now TooBigTech can start enshittifying because they own the market, again.
> Do you believe you now have the right to retroactively pull the license on a whim?
If you own the copyright to the whole codebase, of course you can. All those contributors who signed a CLA and are now whining should think about that.
> Large companies using their scales to compete is a feature of capitalism, not a bug.
It's more fundamental than that. It's this idea that a corporation can arbitrarily change licensing terms already granted to end-users to extort them.
This is not limited to any managed service provided by a random cloud provider. The core reasoning is a corporation identifying end-users who are profiting from a service that directly or indirectly involves a project they release to the public under a FLOSS license. They see people getting paid, and retroactively change terms to coerce them into paying them. The same argument they throw at AWS providing a managed service also applies to any company using their project. How does this make any sense?
> It's this idea that a corporation can arbitrarily change licensing terms already granted to end-users to extort them.
If contributors refused to sign CLAs, then corporations would either not get contributions or not be able to arbitrarily change the licence. It's also the contributor's fault if they contribute to a permissive project and give up their copyright. Is it extortion if they agreed to it?
And then those contributors come whining because the corporation does whatever they want. Too bad, don't sign a CLA. And don't contribute to permissively-licenced projects if you don't want your code to end in proprietary products.
> Large companies using their scales to compete is a feature of capitalism, not a bug.
Large companies using their dominant position to compete is not a feature of capitalism. Antitrust laws prevent that. It's just that the US ignores them.
> AWS only gets to compete using their source code because they opened it in the first place. The competition was explicitly invited.
This part I agree with: if you build a permissively-licenced project, you don't get to whine when competitors use it in their own proprietary alternatives.
> Elastic decided to release software to the public with a license that granted everyone under the sun the right to use it how they saw fit. This included also selling services.
The original licensing decision was made well before the formation of Elastic the company. Maybe Shay didn't quite give it the full consideration. I don't know him super well, but like most developers starting open source projects, I don't think he has a legal background. I confess to a certain amount of naivety, thinking that contributors would adhere to the spirit of contribution rather than the absolute letter of the license. I don't think it's unreasonable to have societal expectations above and beyond the legal requirements.
I've been doing open source for ~25 years and it appears to me there's been a distinct shift in how companies approach open source. I feel like what I started with was almost a utopian ideal and now I'm expected to just do free support work for companies that don't want to pay anything to anyone. I don't see any problem with devs realizing their mistake or changing their mind and updating the license accordingly. For my part, I used to use ASLv2 for everything and now I default to AGPL.
> You can't have it both ways. Why do you think people drop these projects the moment these corporations decide to pull the rug from under their userbase?
Just like you have no legal obligations above what the license says, they have no legal obligations to make their work available under the license you want. It's open source, so feel free to take the code from the last commit before the license change. Of course, it turns out it's quite a bit messier to have two incompatible versions floating around, but as long as we're limiting ourselves to legal requirements that's a non-issue.
The most obvious reason these projects get dropped after a license change is because they now use a license that needs to be reviewed. It's a lot easier to justify spending on a new AWS service than it is getting a new license approved. But, that was more or less happening anyway. If you're already on AWS it's easier to add a new service than pay for hosting a service with a third party. I don't particularly fault Elastic for not wanting to be Amazon's R & D arm and support team.
We have a lot of systems and infrastructure in place that only mostly work because both parties hold up their own end of an unspoken agreement. Large tech companies have decided they can save money by breaking away from that and just sticking to the letter of the license text. Legally, they have that right, but the other parties have predictably responded. I think what it really shows is our licenses don't fit all scenarios. Maybe I'm okay with individuals and companies with a market cap < $1T using the software under terms akin to open source, but I don't want to be a de facto employee for a large company reselling my software. We're well past the point of open source being about software freedom.
Most of these projects start off as a developer choosing a license without having the legal background to truly assess it, and then they run into an 800 lbs. gorilla with loads of in-house counsel. Blaming them for not thinking it all the way through or not realizing the implications of their license choice seems counter-productive to me. Okay, so they made a mistake. What now? They don't want to continue working under a framework where they feel they're being taken advantage of. I don't think they should be obligated to continue doing so because all of the non-Amazon folks don't want to incur the headache of forking. I'd love to see some of that ire aimed at the company willing to break the illusion to make some extra money, rather than at the folks that have freely given away their work for years.
> Well they obviously do have the rights, are you saying they broke the law?
No, they do not. They can release new versions with some other license if they get all contributors to agree. They absolutely cannot pull the FLOSS licenses from existing releases.
> They can release new versions with some other license if they get all contributors to agree.
Contributors must sign a Contributor License Agreement (CLA) before a contribution will be merged. The CLA signers gave Elastic the right to change licenses, among others, when they signed the agreement. Consequently, Elastic doesn't need to get individual sign-off from each contributor before changing the project license. The code wasn't based on copyleft license, so there's no compulsion to continue using the same licensing terms for all future distributions. That's a motivating factor for many CLAs. If you made a contribution prior to the introduction of the CLA then things get murkier.
You keep spamming this to anyone who replies to you: maybe your take is the irrelevant one, since it didn't stop them from pulling the rug in the first place?
> people drop these projects the moment
Noisy people always proclaim that they will, and hyperscalers gleefully market their forks... meanwhile it's mostly hyperscaler customers that they weren't going to get in the first place that actually move.
Elasticsearch was source available for 4 years and still saw massive growth.
People tried to spin Redis db-engine rankings dropping as being due to Valkey, but their trendline doesn't seem to have been affected at all by the change: https://db-engines.com/en/ranking_trend/system/Redis
-
But to be more direct, I'm honestly unimpressed with this brand of reductionism.
It's super easy to keep yelling IRRELEVANT THEY WERE RUNNING AN OPEN LIBRARY AND SHOULD HAVE EXPECTED SOMEONE TO SHOW UP WITH A HAULER TO TAKE ALL THE BOOKS, but it's not useful.
I'd also rather have a software ecosystem that uses targeted defenses to counter hyperscalers (which are more like an invasive species) than one that throws up its hands at them.
For anything that isn't Linux-scale, the alternative is that most of the well funded development ends up being from hyperscalers and that just concentrates their power.
> Yes, anti trust rules should have stopped amazon. But they didn't, this directly hurts open source.
What absurd, twisted logic. No, it does not affect open source. What are you talking about?
> The problem is Amazon, not elastic that's trying to survive.
No, the problem is Elastic trying to coerce end-users to pay them for using FLOSS projects. It makes absolutely no difference if AWS provides a managed service or if I run Elasticsearch in a AWS EC2 instance I pay for.
If a corporation wants to build a business model around FLOSS, it's their responsibility to figure out how to do that. What they cannot do is abuse FLOSS licenses to pull bait-and-switch on their userbase to coerce them to meet their revenue targets.
They did figure it out. Contributors signed a CLA, so their license change was absolutely legal and in line with what everyone agreed upon. When you started using the product, you should have known this.
This risk has always existed. It existed when they chose to open source it in the first place.
Time and time again we see the lesson being learned the hard way that "if your business value is your codebase, it's hard to build a business whilst literally giving it away".
> "if your business value is your codebase, it's hard to build a business whilst literally giving it away".
perhaps then it comes as no surprise that some very outspoken open-source proponents do not open-source their core business components. I can understand they do it in order to exist as a company, as a business, but I don't understand why they have to endure being shamed for staying closed-source, while all their stack is open. many such companies exist.
and let's add to this all the fact that everything released in 2025 as open-source gets slurped by LLMs for training. so, essentially, you feed bro's models breakfast with your open-source, like it or not. in the very near future we'll be able to perhaps prompt a 'retell' of Redis which is both the same software, but written so that it is not.
in essence there seems to be now little reason to open-source anything at all, and particularly if this is a core-business logic. open-source if you can outpace everybody else (with tech), or else you shouldn't care about the revenue.
A sufficiently capable LLM might be good enough to do cleanroom design on its own, with little to no human assistance. That would destroy the entire idea of copyright as it exists for software.
You need one agent that can write a complete specification of any piece of software, either just by using it and inferring how it works, or by reverse engineering if not prohibited by the license. You then have a lawyer in the middle (human or LLM) review it, removing any bits that are copyrighted. You then need another agent that can re-implement that spec. You just made a perfectly legal clone.
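For what it's worth, a minimal sketch of what such a pipeline could look like, assuming hypothetical write_spec / legal_review / implement helpers (none of these are real APIs; each would wrap an agent call or a human review step):

```python
# Hypothetical cleanroom pipeline (illustrative sketch only).
# write_spec / legal_review / implement are made-up placeholders, not real APIs.

def write_spec(observed_behavior: str) -> str:
    """Agent 1: derive a behavioral spec purely from observed behavior
    (docs, black-box testing, protocol traces) -- never from the source."""
    return f"SPEC derived from: {observed_behavior}"

def legal_review(spec: str) -> str:
    """Reviewer (human or LLM): strip anything that reproduces copyrighted
    expression, keeping only the functional description."""
    return spec  # placeholder: pretend the spec passed review unchanged

def implement(spec: str) -> str:
    """Agent 2: re-implement from the vetted spec alone -- the 'clean room'."""
    return f"// fresh implementation satisfying: {spec}"

if __name__ == "__main__":
    spec = write_spec("GET/SET semantics observed via the public protocol")
    print(implement(legal_review(spec)))
```

The load-bearing property is that the implementing agent only ever sees the vetted spec, never the original code.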
Cleanroom design is a well-established precedent in the US, and has been used before, just with teams of humans instead of LLMs.
I think some companies will be completely unaffected by this, as either the behavior of their code can't easily be inferred just from API calls, or because their value actually lies in users / data / business relationships, not the code directly. Stripe would be my go-to example, you can't just reverse-engineer all their optimizations and antifraud models just by getting a developer API key and calling their API endpoints. They also have a lot of relationships with banks and other institutions, which are arguably just as important to their business as their code. Instagram, Uber or Amazon also fall into this bucket somewhat.
Because, unlike humans, LLMs reliably reproduce exact excerpts from their training data. It's very easy to get image generation models to spit out screenshots from movies.
> Can we start at "humans are not computers", maybe?
Sure. So it stands to reason that "computers" are not bound by human laws. So an LLM that finds a piece of copyright data out there on the internet, downloads it, and republishes it has not broken any law? It certainly can't be prosecuted.
My original point was that copyright protections are about (amongst other things) protecting distribution and derivative works rights. I'm not seeing a coherent argument that feeding a copyrighted work (that you obtained legally) into a machine is breaching anyone's copyright.
> So an LLM that finds a piece of copyright data out there on the internet, downloads it, and republishes it has not broken any law?
Are you even trying? A gun that kills a person has not broken any law? It certainly can't be prosecuted.
> I'm not seeing a coherent argument that feeding a copyrighted work (that you obtained legally) into a machine is breaching anyone's copyright.
So you don't see how having an automated blackbox that takes copyrighted material as an input and provides a competing alternative that can't be proven to come from the input goes against the idea of copyright protections?
> So you don't see how having an automated blackbox that takes copyrighted material as an input and provides a competing alternative that can't be proven to come from the input goes against the idea of copyright protections?
Semantically, this is the same as a human reading all of Tom Clancy and then writing a fast-paced action/war/tension novel.
Is that in breach of copyright?
Copyright protects the expression of an idea. Not the idea.
> Copyright protects the expression of an idea. Not the idea.
Copyright laws were written before LLMs. Because a new technology can completely bypass the law doesn't mean that it is okay.
If I write a novel, I deserve credit for it and I deserve the right to sell it and to prevent somebody else from selling it in their name. If I was allowed to just copy any book and sell it, I could sell it for much cheaper because I didn't spend a year writing it. And the author would be screwed because people would buy my version (cheaper) and would possibly never even hear of the original author (say if my process of copying everything is good enough and I make a "Netflix of stolen books").
Now if I take the book, have it automatically translated by a program and sell it in my name, that's also illegal, right? Even though it may be harder to detect: say I translate a Spanish book to Mandarin, someone would need to realise that I "stole" the Spanish book. But we wouldn't want this to be legal, would we?
An LLM does that in a way that is much harder to detect. In the era of LLMs, if I write a technical blog, nobody will ever see it because they will get the information from the LLM that trained on my blog. If I open source code, nobody will ever see it if they can just ask their LLM to write an entire program that does the same thing. But chances are that the LLM couldn't have done it without having trained on my code. So the LLM is "stealing" my work.
You could say "the solution is to not open source anything", but that's not enough: art (movie, books, paintings, ...) fundamentally has to be shown and can therefore be trained on. LLMs bring us towards a point where open source, source available or proprietary, none of those concepts will matter: if you manage to train your LLM on that code (even proprietary code that was illegally leaked), you'll have essentially stolen it in a way that may be impossible to detect.
How in the world does it sound like it is a desirable future?
Maybe I need to explain it: my point is that the one responsible is the human behind the gun... or behind the LLM. The argument that "an LLM cannot do anything illegal because it is not a human" is nonsense: it is operated by a human.
> I agree with the fact that LLMs are big open-source laundering machines, and that is a problem.
Why do you believe this is a problem? I mean, to believe that you first need to believe that having access to the source code is somehow a problem.
> I mostly see it as a problem for copyleft licences.
Nonsense.
At most, the problem lies in people ignoring what rights a FLOSS license grants to end users, and then feigning surprise when end users use their software just as the FLOSS license intended.
Also a telltale sign is the fact that these blind criticisms single out very specific corporations. Apparently they have absolutely no issue if any other cloud provider sells managed services. They single out AWS but completely ignore the fact that the organization behind Valkey includes the likes of Google, Ericsson, and even Oracle of all things. Somehow only AWS is the problem.
> I mean, to believe that you first need to believe that having access to the source code is somehow a problem.
How in the world did you get there from what I said? Open source code has a licence that says what the copyright owner allows or not. LLMs are laundering machines in the sense that they allow anybody to just ignore licences and copyright in all code (even proprietary code: if you manage to train on the code of Windows without getting caught, you're good).
> At most, the problem lies in people ignoring what rights a FLOSS license grants to end users
Once it's been used to train an LLM, there is no right anymore. The licence, copyright, all that is worthless.
> Also a telltale sign is the fact that these blind criticisms [...]
> LLMs are laundering machines in the sense that they allow anybody to just ignore licences and copyright in all code (...)
No. Having access to the code does that. You only need a single determined engineer to do that. I mean, do you believe that until the inception of LLMs the world was completely unaware of the whole concept of reverse engineering stuff?
> Once it's been used to train an LLM, there is no right anymore.
Nonsense. You do not lose your rights to your work just because someone used a glorified template engine to write something similar. In fact, your whole line of commentary conveys a complete lack of experience using LLMs in coding applications, because all major coding assistant services do enforce copyright filters even when asking questions.
> do you believe that until the inception of LLMs the world was completely unaware of the whole concept of reverse engineering stuff?
The scale makes all the difference! A single determined engineer, in their whole life, cannot remotely read all the code that goes into the training phase. How in the world can you believe it is the same thing?
> Nonsense. You do not lose your rights to your work just because [...]
It is only nonsense if you don't try to understand what I'm saying. What I am saying is that if it is impossible to prove that the LLM was trained with copyrighted material, then the copyright doesn't matter.
But maybe your single determined engineer can reverse engineer any trained LLM and extract the copyright code that was used in the training?
This is exactly what the AGPL was made to combat. But open source devs still choose more permissive licenses first - presumably to attract corporate clients to use their product (and because devs are suckers for large corporate interests)
This. They choose a permissive licence, proudly advertise it ("use us instead of our competitor because they are copyleft and we are not"), and then come whining when other competitors benefit from the very fact that they chose a permissive licence.
There are different FOSS communities that hold different values. I come from the copyleft camp because I want to advance Software Freedom objectives for end-users. Others are more interested in advancing software developer freedom, and they find the obligations that are designed to advance end-user rights unduly burdensome to the software developer. Articles like the one on the FreeBSD website [1] explain why they take a different position.
I choose to believe that both of these sub-communities of the larger FOSS community are principled in their beliefs. I don’t see whining from FreeBSD folks about competitors, or for-profit companies using all the permissions they give with their choice of license.
> I don’t see whining from FreeBSD folks about competitors
Sure! Then that's all good! I have nothing against the use of permissive licences (though I am on the copyleft camp too, obviously). Or put it in the public ___domain.
My problem is with those who do and then whine about it.
It especially bugs me when company blogs call out “abuse” when they only exist as a company because others gave them the permissions needed to build a business on software they did not author themselves!
The solution to the old problem of "what if someone uses my code to compete with me" is "don't open source your code".
This isn't complicated. It's trade secrets 101.
I'm being disingenuous though. Of course the bait-and-switch merchants know this, they're just banking on getting enough product momentum off the free labour of others before they cash in. That's the plan all along.
I think that is a little unfair. I don't know anything about the companies behind Redis and Elastic. But another possibility is that they want to make a good open source product and create some sort of business around it and find it difficult to make a waterproof moat. I'm sure that there are many other open source companies with the same basic strategy that are just more lucky, e.g. don't get AWS as competition.
> The problem here is that Amazon is in a position where they can fork and sell it for a loss in order to crush the competition.
Everyone can fork any FLOSS project. That's by design. There are also no restrictions on random people selling services directly or indirectly using said FLOSS projects.
If a corporation changes their mind and decide it was a mistake for them to release software as FLOSS, they have the right to do what Redis just did. What they can't do is feign ignorance or do a bait-and-switch to try to coerce their userbase to cough up licensing fees for FLOSS projects.
Selling at a loss isn’t bad. Selling at a loss to build marketshare or kill competition usually is.
The issue is that previous companies usually use this aspect to build marketshare, so there is no other way to compete. Other tactics include lobbying, patent bullying, etc.
Shoot, as bad as Uber is, just look at how taxis in major cities were formed.
Then look at labor laws in other countries.
Under the current legal/political structure for any modern country, there is no winning without exploitation, which is why I refuse to start my own business.
No - that's why their pricing is so detailed and why their services have few limits you can't request them to increase.
They can afford to maintain a better branch of Elasticsearch or Redis and so on, and drive those companies out of business by virtue of efficient free market competition. At which point they'll make sure their branch is only useful on their servers, perhaps by adding dependencies on all their internal microservices - so technically they released the code but you can only run it if you're Amazon.
> At which point they'll make sure their branch is only useful on their server
Can you think of any examples where they've done that? They certainly could, but I can't think of any times they have actually rug-pulled like that, and the motive to do so seems tenuous too. In order for Amazon to do something like this they would need established market dominance, as well as a belief that some new player will hurt their market share enough to justify alienating the FOSS community.
Sorry, I didn't want to imply that they do. This would be illegal as per antitrust regulations.
What I wanted to say is that AWS using a permissively-licenced project is fine. The project should have thought about the licence beforehand and started with copyleft from the beginning. Their problem now.
But a more general problem I see is that even if you use a copyleft license for your project, you still risk being crushed by anti-competitive behaviours coming from TooBigTech. It's not rare, they do that all the time and the US doesn't enforce anything. And when others (like the EU) try to do it, the US puts pressure on them because it defends its US TooBigTech.
The latter is not a model or licencing issue, it's an antitrust issue. Now Amazon could compete with Redis without doing it illegally, but they can only do so because they are so big. And the fact that they are so big is related to the lack of antitrust enforcement (it's documented, TooBigTech have all abused their dominance forever).
Not really. I have done multiple AWS pricing and costing exercises while launching services and there was never a direction from the VP to sell it lower than what it costs to build and operate. Cost to build and operate includes everything from salaries, infrastructure and many other line items. And usually things are projected 3-5 years into the future and P&L analysis must show that eventually the service will make a profit. It does make some assumptions about minimum customer adoption for the profit margins to materialize which eventually becomes part of the product and sales teams goals.
The costing model does allow losses to be incurred in the initial years because building the thing is more expensive at first but then it should settle down and revenue should outpace expenses.
What can happen with these open source products being launched as a service is that that initial cost can be cut down by as much as 50-75% but rarely more than that because you still need to build all the surrounding infrastructure, documentation, UI. It still gives AWS an advantage by relying on an existing body of work they can start with where many problems have been thought through and solved. Also you will likely get a good product roadmap skeleton ready to be prioritized which otherwise can be a huge time sink.
In a nutshell, no. AWS won’t sell a service at a loss (there are exceptions of course) but there is room to incur a loss at the beginning but it is priced to eventually turn a profit. Whether that happens for every service in reality is a different story.
Most (all?) open source licenses allow you to sell hosted clusters. They offered a hosted solution well before they changed its license. You can also fork it; but depending on the license, you might need to open-source any fork.
I don't know of any open source licenses that don't allow someone to sell hosted clusters. Even AGPL, which is copyleft, allows it, so long as the hosted version is either the same as the open-source version or its modified version is also open-sourced.
Nobody knows what the AGPL actually says, since it has not been determined sufficiently in court (at least in the EU) which is why the license is blacklisted by most companies.
If you want corporate adoption from anyone other than AWS, you still cannot use AGPL.
I feel like there should be a license that doesn’t allow hyperscalers alone to fork without releasing source while being palatable to smaller companies
Interesting that you use the word "free." Long before there was open-source, it was called "free software" and the restrictions on use that required derivative software to give back their changes was the entire point.
> On principle I will not use the Amazon fork, because I don’t want to support a company that would prefer to fork a project rather than fork over some cash.
That's specious reasoning at best.
The whole point of FLOSS is that anyone is free to use it how they see fit. Whether it's a hobbyist doing a pet project or a FANG using it as their core infrastructure, they are granted the right to use it how they see fit.
That's exactly why they started to use it to begin with. Isn't it?
When a random corporation decides to pull the rug on the established user base expecting to profit from the pain of migrating existing services, it is not the user base who is the bad actor.
Can you elaborate on what exactly Amazon did to Elastic? I read all of their blog posts and the only thing I really got out of it was "they sell hosted Elastic cheaper than we can", which is hardly surprising given that Elastic really just packages up AWS/GCP/Azure cloud infra. That doesn't have to be AWS selling at a loss, AWS just doesn't need to pay itself.
And by all accounts I've read Amazon did contribute back to Elastic development up until Elastic switched the license on them. At that point they forked, but it's hard to blame them when they were deliberately locked out of the original project.
Most of the arguments I've seen against Amazon with regard to Elastic have tended to be very vibe-based. Amazon bullied Elastic because that's always what Amazon does! It's plausible, but it's also plausible that Elastic thought they could use Amazon's terrible reputation as a weapon against it without there being any substance.
Elastic was in the same bind as every company which writes opensource. There's no way to monetize it when a hyperscaler can sell it at much thinner margins.
Either the product is donated to a foundation, or the parent company dies.
While amazon technically wasn't doing anything "wrong", they're effectively squeezing the oxygen out of the ecosystem.
The parameters of the problem are relatively new. SAAS - which is now the norm for how software should be delivered - and opensource - which is old - just don't mix; the incentive structure is completely misaligned. Companies are prodding to see how to get out of this conundrum. MongoDB set the tone with the SSPL, and other midcaps just kind of followed suit; what else was there to do, what other approaches could have been taken? Now, as the fallout starts to become clear, there's just a lot more information on what works and doesn't, and companies are pivoting.
There's no vibes about it, it's brutal reality out there, big tech is strangling the market, and the small guys are figuring a way out, first trying one way, now another.
Being in a position where you can copy anything and crush competition is a problem.
In terms of antitrust, I believe that if you could prove that Amazon forked and offered the service with the intent to crush the competition, it would be downright illegal. A current case is Meta: back then, Zuckerberg was happily writing (internally) that Facebook needed to buy WhatsApp and Instagram and Snapchat to prevent them from ever competing. This is anti-competitive.
Companies that build themselves on selling open source software put themselves in the position where anyone else can copy them and compete with them on price, and price alone. This is clearly the disadvantage of open source. It brings plenty of advantages, which is why people do it - but you can't have only the advantages and no disadvantages of open source.
Open sourcing your product is a risky investment, and as with all risky investments, it might pay out, or it might not.
As I said before, I believe that using permissive licences is a bad idea. I have seen multiple projects choosing a permissive licence as a way to compete against an already established copyleft project, just because it's easier to get adopted by companies. I find it unfair but also a bit stupid: by using a permissive licence, they allow anyone to compete with them with a proprietary fork.
Still, there is an antitrust question (that is slightly orthogonal): if TooBigTech can offer a similar product at a loss (e.g. for free) until they capture the market, then that's a problem. And they can only do it because they are too big, and that is an antitrust issue IMO.
I meant they have the resources to copy any product in a way that will crush the competition.
As in, they can build an alternative to an open source project, offer it for free (i.e. at a loss) for years until they capture the market, and then start enshittifying. This is an antitrust problem.
> Elastic explicitly allowed AWS to copy and use their source code, then whined about it.
Yeah at the very least they should have started with a copyleft licence.
1. People put a lot of work into building databases. The license choice is OSS / FOSS.
2. Some people in the community (original authors, community leads) make a company around the database and continue developing it for years on end. They sometimes raise venture capital to expand the business.
3. Amazon / Google / Microsoft offer managed versions of the database and make bank on it. Easily millions in revenue. Original creator / company doesn't get anything, and the hyperscaler isn't obliged to pay.
4. The company decides to change the license to force Amazon / Google / Microsoft to pitch in and pay a fee.
5. Amazon / Google / Microsoft fork the database. The community revolts. Sometimes the people revolting are employees of the hyperscalers, other times these are just FOSS fans that hate "source available" licenses or relicensing.
6. Database company is forced to walk back the changes. Still no revenue.
---
The solution is clear: start your new database with an "equitable source / source available" license from day one. Nobody will complain about a relicense since your license will handle the hyperscalers right off the bat.
Basically your license needs one of a few things if you want to prevent Amazon from walking off with your money:
- A hyperscaler clause such that any managed offering has to (1) be fully open source, (2) has to pay a fee, or (3) is prevented outright.
- A MAU / ARR clause such that hyperscalers are in the blast radius. Note that this also hits your customers.
> The solution is clear: start your new database with an "equitable source / source available" license from day one. Nobody will complain about a relicense since your license will handle the hyperscalers right off the bat.
Yes, this would be the honest thing to do, but people don't do it because using a non-FOSS license loses you adoption. The step you're missing in your little timeline is that the only reason the project takes off at all and becomes big enough that anyone is making money off of it is because it's Open Source. Proprietary databases, programming languages, and similar have lost big time and that's not changing any time soon.
So what's really happening is that these FOSS companies want to have their cake and eat it too. They want to release code for free so that other devs will use it and then make money from the project while somehow banning other companies from also making money off it.
What's missing is why open source won. It's impossible to be rug pulled. If the maintainers (this is a carefully chosen word) attempt a rug-pull you can just Elastic or Redis them. If the product is closed this is impossible and you're at the mercy of whoever you built on. Open Source is the ultimate right to repair. Everything is working as intended.
Selling software is a deadend. Nobody in their right mind will pay for it because the risks are too great. If you want to build a technology business sell services and support.
The majority of software was always proprietary. GNU was a revolutionary idea. Businesses giving away their product to gain favor is just the natural tendency of the rate of profit to fall.
This purist mindset is killing the long term viability of open source. If hyperscalers can make money from successful projects but open source founders can't, in 10 years we'll have fewer open source projects.
Licence freedoms don't have to be all or nothing, or apply to everyone in the world. It can be open source and have restrictions. Anything else turns open source into a religion.
Not a religion, but a term with a specific meaning, which meaning implies a collection of freedoms granted without discrimination. People are welcome to use licenses that revoke those freedoms, but calling them FOSS is confusing and muddies the waters.
See Llama's license: if it can be Open Source while having restrictions, then having an Acceptable Use Policy is okay, right? So Redis could create a license that bans its use if you host adult content, Star Wars fan fiction, or documents containing the letter R?
If Open Source doesn't mean a license that indiscriminately grants a set of specific freedoms then it's pretty useless as a term—all I know on hearing it is that a project's source code is available. Which reminds me, we already have a term for that: Source Available.
Sort-of, we've had a litany of Postgres devshops go bust over the past decade. Some will be for the usual reasons but I wouldn't want to discount the impact of RDS.
However I think Postgres is a bit different. It's a bigger market and more widely used. There's a need for tuning services no matter which platform provides the database servers, so there's a layer above cloud hosting where money can be made.
> So what's really happening is that these FOSS companies want to have their cake and eat it too.
talk about kicking down. So no harsh words about the big dogs who, while contributing nothing, and just due to sheer size and scale advantage, are capturing the lion's share of the enterprise value?
This is a line that gets thrown around very casually, but every assessment I've heard says that the big dogs are quite prolific contributors to these projects.
> capturing the lion's share of the enterprise value?
Ah, here's the Matt Mullenweg logic creeping in again [0]! This is basically his taunt towards DHH (though at least you're using it to cry foul rather than tease!): "although he has invented about half a trillion dollars worth of good ideas, most of the value has been captured by others."
And this is where I think most of this aggression towards cloud providers is actually coming from: a significant number of these companies have bought into the idea that they're a failure if they fail to capture the majority of the value from their Open Source project. Which is understandable when you make a VC-funded business out of it, but makes no sense given that FOSS has always been about advancing the collective good, not raking it in.
If your last sentence were true, you’d prefer a world where hyperscalers do not fork and close source a FOSS project but the FOSS project continues on to thrive
With copyleft being effectively banned in most companies, it means that there is no viable FOSS business model.
Reality is that for now, many FOSS contributors do it for free in their spare time. Comically critical pieces of infrastructure rely on such goodwill development.
But as cozy salaries and job security decline, fewer people will have time to work for free.
We will have less FOSS, less innovation, and fewer viable new projects than if there were no hyperscalers.
ok sure, but there's contributing and contributing.
> a significant number of these companies have bought into the idea that they're a failure ...
ok, fair enough. But in that equilibrium you're painting, _only_ companies like AWS will ever be able to monetize open source, especially since software delivery and SAAS are essentially equivalent in 2025, and SAAS is a margins play which small fry cannot compete with.
VC-funding is also the lever that allows for open source software to find deep and quick penetration in the industry. it's not aws driving it. So when a practitioner benefits from the mature tooling and such a wide userbase of say Redis or ElasticSearch, it is not only because it was cheap (open source), but also because it was lavishly supported (VC).
> I think most of this aggression towards cloud providers is actually coming from ...
I mean, you're certainly right about that, wouldn't call it aggression necessarily.
The software platform is different now than it was 20 years ago. If we want a thriving open source ecosystem, we will need an answer to the fact that big tech can hoover up the spoils before the upstarts - that both spearhead the project _and_ fund its expansion - have a chance.
I don't think the license pick is because of adoption - outside of a few specific cases, the license usually isn't the blocker to getting your tools/projects more widely adopted.
I've written code in the past that I put under GPL that today, I'd probably use a different license for (BSD 3-clause has my preference these days, although I'd just prefer a generic non-commercial license instead). I don't really bother relicensing cuz... it just doesn't matter in the end, these projects are super niche anyway. I picked the GPL since "everyone uses it".
There's always this backlash to demonize anyone daring to move away from the GPL when the simple fact is... maybe some products just don't work in the modern market with the GPL. The hyperscaler thing is a pretty massive issue and the fact that GPL proponents can only give platitudes of "it's working as intended"/"fuck your greed" instead of y'know, accepting that maybe the GPL doesn't work in these environments is... not great.
That isn't a defense of the SSPL or anything like that (it's quite the bad license), but there's a reason that these entities keep writing new licenses instead of going "all rights reserved, we publish source as a courtesy, you can't use it for anything" (even if they effectively close all contributions, they still want to try a permissive license).
Basically the thought that goes into picking the license often isn't nearly as complicated as you may think. It can literally just be "if everyone is doing it, maybe what they're doing is right". Not so much "the GPL gets you more contributors".
I think it might depend on the business. In some places, open source is the "safe pick" (particularly if you aren't selling software and are not worried about things like the GPL). In others, licensing concerns are huge.
I really would like to, but from working in consulting I know that AGPL is banned in most companies since its specifics are completely diffuse and need to be hashed out in court (or are unpalatable).
In spirit the AGPL is what is needed but in practice it kills any corporate adoption sadly.
A license that singles out hyperscalers or goes by revenue is probably a better bet.
Maybe a license that is more clear can be written, maybe not.
You know Amazon has a hosted Grafana service, right? The AGPL isn't a guaranteed obstacle to hyperscalers sucking up most of the value produced by maintainers.
> Amazon / Google / Microsoft offer managed versions of the database and make bank on it. Easily millions in revenue. Original creator / company doesn't get anything, and the hyperscaler isn't obliged to pay.
This isn't what is happening. A company called Garantia Data renamed themselves to Redis Labs and acquired the Redis trademark. They're not the original company, and they used a naming trick to present as if they are official (they are now, and nothing they did was illegal).
All companies should do this... if anything, so we know what not to use.
Any sort of proprietary product (be it fully closed source, source-available, or "open source with limitations") is always a risk. You were happily using database X, but they got acquired by Broadcom and now their product costs 100x what it was before. What do you do now?
That's why it is much safer to adopt open source - worst case is the company goes under, but you still keep using last released version indefinitely, and hope new entity (maybe even hyperscaler!) forks the code. Or make an in-house fork if you have enough resources.
"equitable source" license means it's not an option. That new product should really be much, much better than open-source alternatives to even be considered.
> worst case is the company goes under, but you still keep using last released version indefinitely, and hope new entity (maybe even hyperscaler!) forks the code
But it's not always the worst case, is it? If the licence is permissive, a hyperscaler can just take it and make a proprietary product out of it.
Hence copyleft licences without a CLA. Nobody could make Linux proprietary because the copyright is shared between so many people/companies.
I don't understand what you are saying. What has a "supplier" to do in this?
Copyleft is always the best for the customer. Copyleft says "the customer has rights". Permissive says "do whatever you want, as long as you keep my name somewhere in the attributions".
A product - in this case, software - is supplied from a supplier, or vendor, to their customer. Is this business 101?
The usual worst case for the supplier is that they give the product but don't get paid, and the usual worst case for the customer is that they give the money but don't get the product.
With copyleft specifically, the supplier is much more likely to not get paid, but the customer receives the benefit of being allowed to continue the maintenance of the software themselves if the supplier goes out of business. The supplier hopes this will help them acquire and keep customers, many of whom will pay, and keep them in business.
I understand what a supplier is. I don't understand how it relates to what I said.
> With copyleft specifically, the supplier is much more likely to not get paid
That doesn't make any sense. Whether the supplier open sources their code as permissive or copyleft doesn't have any impact on the likelihood of getting paid.
But if they distribute it as permissive, a competitor can just make a proprietary fork. Possibly continuously importing the new improvements from upstream and focusing on differentiating. Whereas if they distribute it as copyleft, a competitor has to share their changes, that upstream can benefit from.
So for the supplier, if that's how you want to call it, it's better to licence code as copyleft. Except if the whole idea is to be adopted by corporations, but in that case don't whine when they take your code and build a proprietary product without contributing anything back.
The original company went under, they no longer exist, so they obviously don't care.
The software user does not care either. OK, someone made a proprietary fork, and that "someone" is a hyperscaler... so what? The last released version is still out there and can still be used as-is / maintained by you / maintained by another 3rd party.
The new maintainers / fork authors should not care either. They base off the last released version, and their fork is not affected in any way by what some hyperscaler does.
(I guess you can make an argument that proprietary fork might pull users/resources from the open-source one... but I don't think it's a big issue. If the users chose open-source version to begin with, why would they switch to proprietary fork?)
This breakdown hits hard because it’s not just about business models — it’s about trust.
Open source succeeded because it created shared public infrastructure. But hyperscalers turned it into extraction infrastructure: mine the code, skip the stewardship.
The result? We’ve confused “open” with “free-for-the-powerful.”
It’s time to stop pretending licenses are enough. This is about incentives, governance, and resilience. The next generation of “open” has to bake in counterpower — or it’s just a feeding trough for monopolies.
Before moving from permissive licences to non-open-source licences (because they have exceptions for TooBigTech), an easy step would be to use copyleft licences, wouldn't it?
> Grafana's a great example of this. AWS and Azure _could_ have sold the unmodified AGPL Grafana as a service or published their modified versions, but instead, they both struck proprietary licensing and co-marketing agreements with Grafana Labs.
> The company decides to change the license to force Amazon / Google / Microsoft to pitch in and pay a fee.
Could you put a clause in the license that calls out the specific companies you're concerned about, as well as any of their subsidiaries, and makes them pay, with a list that can be changed later?
That way, smaller businesses around the software can still exist, nobody gets concerned with the license too much because it calls out specific hyperscalers (no love lost on them in the community) and you still get them to pay their fair share.
Why do people try to ruin everything by SSPL that's overly restrictive and catches everyone else in the blast area, or try to write some clever license that would apply in all cases? Just call out the exact companies that are eating your lunch!
Hyperscaler Anti-Freeloading License (HAFL): If you belong to any of the following companies, or are a subsidiary of them, or operate any of the given cloud platforms and want to offer the service there, pay up: Amazon Web Services (Amazon), Google Cloud Platform (Google), Microsoft Azure (Microsoft), Alibaba Cloud (Alibaba Group), IBM Cloud (IBM), Oracle Cloud Infrastructure (Oracle), Tencent Cloud (Tencent), SAP Cloud Platform (SAP). This list can be changed at our discretion.
No you are wrong. People want adoption by any company other than hyperscalers.
People would be happy to sell service contracts and the like while keeping their code foss, if only hyperscalers weren’t direct competitors to that.
But the reality is that current copyleft licenses means immediate blacklist by most companies legal department.
Which is why almost all copyleft companies offer copyleft plus some corporate licensing, even though in 99 percent of cases it has nothing to do with the spirit of copyleft, as companies just want to import and use the damn package in some completely unrelated internal project or whatever.
The combination of unclear legal precedent and corporate governance means that copyleft in its current state does not work the way most people want.
> But the reality is that current copyleft licenses means immediate blacklist by most companies legal department.
Because they always manage to find permissively-licenced alternatives! There are many different copyleft licences (not only the GPL family, take e.g. MPL or EUPL) that can be used and that are not "viral".
> The combination of unclear legal precedent and corporate governance means that copyleft in it’s current state exactly does not work as most people want
This is wrong. What it means is that corporate governance doesn't handle it correctly, because they don't need to.
If a project goes permissive because they want to please corporations, they shouldn't whine when they do. Otherwise they should just use a copyleft licence and teach the smaller companies that they can actually totally use that!
This is the heart of all of these stupid takes. There is no "your money" for anyone else to walk off with. Nothing was stolen.
If you want to sell software, or rent access to software, then just do that honestly from the outset. And good luck to you on that. I will not consume it unless I have no other choice, and I will not contribute to it at all, period, even tertiarily, by for instance developing things that use it or helping people work out how to solve problems with it, etc. In other words, just generally not invest in it, in all the different ways one might invest in something. But hey, maybe you will make something indispensable and do it better than anyone else can, and maybe you will get a bunch of other customers.
If you want to benefit from the adoption and goodwill and army of free work that comes with open source, then do that.
The honest reason to work on open source is because you yourself have recognized how much utility you have been given for free because of it, and wish to pay it forward and basically add to humanity as a whole. What you get back out of it is the same thing everyone else does: the use of the software itself, plus your name being on it.
But if you license something open source, and then care the TINIEST BIT what someone else does with it beyond adhering to the attribution and share-alike terms, then you have missed the point of open source. You are bent about being "robbed" of something that was never yours in the first place. You have no right to Amazon's billions, even the part of it that they made by hosting a copy of some oss software you happened to have written. Amazon is not selling your property, they are selling a managed hosting service. You have no right to the revenue from that. The software being hosted is a community resource there for everyone to use like the air or water, only even better since unlike Nestle taking the water from everyone else, everyone else still has the software.
If anything, the supposed injured parties in all of these cases are the bad community members, because they were often only doing OSS disingenuously in the first place. They start off with MIT/BSD style licenses because they know a lot of companies are allergic to GPL. But WHY are they so intolerant of GPL? Because GPL doesn't allow them to steal, but MIT allows them to steal. So they start with an MIT-type license because it's "commercial friendly" and then later cry that someone "stole" their B S freaking D licensed software.
People that do that were never writing open source for the purpose of adding to the community pool in the first place. It's either dishonest or at best, possibly honest but in that case just unbelievably incompetent and ignorant.
Yeah, it’d be great if everything was open source and everyone was happy.
But that's not really what happens. Redis and Elastic are popular exactly because there are entire companies behind them constantly maintaining them, securing them, and adding features.
If you want to use open source software at your job, that's a hard requirement. No, you can't use "MySillyKeyValueCache" that Timmy develops in his free time with no security or compatibility concerns.
Companies cost money to run. Google and AWS exploit open source work for profit, and don’t contribute back enough to ensure its support. This is what you should be really mad about. The internet giants are literally killing open source.
If they have their way you’ll be stuck with their proprietary software. In practice AWS is already there.
> Companies cost money to run. Google and AWS exploit open source work for profit, and don’t contribute back enough to ensure its support.
I don't understand this take. If Google/AWS/Microsoft decide to host your service and some vulnerabilities are discovered, then you got free security research done; your product would either have remained vulnerable or someone else would have filed the same bug.
You own the code, so you can decide that only certain people can commit PRs, and you can choose to close any feature requests or issues.
If one of the cloud providers decided to use your product it’s because they deemed it reasonable as is or they can fork / contribute upstream.
That is what OSS is, by definition.
Hell, you can post the code with a license like MIT and then never touch it again, and if someone else can monetise that code, kudos to them.
If you are an existing company and open sourced your code (e.g. Facebook with React), then you presumably already make enough money to support development yourself or intend to stop development.
If you open source code that is your core business and somebody "steals your lunch", you learned an important lesson and hopefully won't make the same mistake twice. If you then decide to relicense and the community abandons you and causes uproar on the internet, you are reaping the rewards of your actions; accept them.
Look, in principle I don’t disagree with anything you said.
However, I think OSS is a net positive to the industry, and would like to see it remain that way.
The classic way to monetize OSS has been to provide hosting and support for a price.
Now the internet giants are taking that entire pie for themselves.
If we agree that we want OSS to be a viable option going forward, and we agree that you need money to hire devs to maintain a successful large scale OSS project, then what do you suggest be done?
> and we agree that you need money to hire devs to maintain a successful large scale OSS project
Therein lies our disagreement: you don't need to. If you are Amazon and upstream is willing to accept requests, then Amazon can hire devs, and if someone eats their lunch then those devs will eventually migrate to the new ___location.
As someone who did the initial work for OSS I put the code out there, if I dislike the direction a fork is taking I can ignore said fork and keep working on my own version, why do I need to use the version they are using.
It boils down to why you even started an OSS project; if the intention was to make money or hire other employees, you fundamentally misunderstood the assignment.
If you made a project that serves your own need, decide someone else may benefit so you publish it as FOSS online, and it gets massive traction, how does that change your need? You probably have no reason to make it commercially viable; it is still serving the same need it always has for you.
Now if it gets forked and the fork proves to be of better quality and still serves your need, then switch to the fork for your own personal need; now someone else is doing the maintenance, so you end up being the "freeloader".
And I’ve already discussed the “burden on the maintainer”, if it’s such an issue close access to issues and pull requests. No more burden.
The fork is typically closed source so you do not get to benefit from it and neither does the public good.
This of course means that you should never use permissive licensing since it doesn’t provide many key benefits of what people thought oss would do.
You could use copyleft, but I guess with LLMs it kinda amounts to the same thing.
What exactly is the nature of this fantasyland? I have to have said I wanted or expected something for that to make any sense. You make silly claims based on nothing. That is not an argument.
In fact it's the other way around. Expecting to enjoy both the benefits of OSS and the benefits of collecting rent at the same time for the same thing is the fantasyland.
Then sell software. I said that first thing. Where is the fantasy in that?
If you don't understand it such that you think it's a fantasy (despite the ocean of existing software produced as proof, and countless published manifestos from people who do it describing why), well, that's a you problem, not a me or anyone else problem.
You don't get it, that's fine, then simply don't get it, and don't participate in this activity you think is insane.
You are free to have any opinion you want about what constitutes a rational use of your time and effort.
But don't pretend you understand something that the participants do not understand. We're all eating non-fantasy food just fine, somehow.
If they try to sell their software, by your own admission, you won't consume it. That's the point. The fantasy is the idea that some of these open-source products would even work as a closed-source proprietary business model.
Then don't try to sell it. Or make something else that people can't live without and are willing to pay for and can't just pay themselves to write an equivalent instead of paying rent to you forever, whatever, what's it to me or anyone else?
Do whatever you want but it's no one else's problem if you can't figure out what.
I disagree with you on a lot, but you're bang on the money here.
If you want a business, build a business. One key aspect of building a business is understanding what your IP and trade secrets are, how they affect your bottom line, and then controlling them appropriately.
The point is that if there was no business behind projects like Redis and ElasticSearch you wouldn’t be using them either. You’d be using some random Microsoft or Oracle product.
I’m guessing you don’t want that either, so come up with a way to make sure Redis has the financial support it needs to hire engineers to do the full time work that part time contributors like us don’t want to.
The point is I don't have to come up with any such thing.
It is not true that if redis didn't exist then I'd be using some MS or Oracle product. I might, if it was practical. Or someone might have invented redis, or I might if that was a space that still needed filling and somehow no one else did it.
It's like saying if linux didn't exist we'd all be using NT.
That's completely ridiculous. No we would not. BSD already existed, and if not that then a minix clone, or someone else would have started some other unix clone. There were several small unix clones and other full OSs made by completely small developer teams by then, even single people, commercially. If a single guy can do it at all (regardless that they were doing it to sell), it means the job is not infinitely big and so perfectly doable by a few self-motivated volunteers, especially given how that kind of work has no deadline.
Except "volunteer" is the wrong word because they aren't some kind of weird saint doing something just for you or me. They are doing it for themselves, and you get to have it too.
Everything is like that. Redis is no different. There is nothing magic about redis. Things exactly like that get created when the need for them arises every day. "If redis didn't exist" is a nonsense invalid premise, because 12 redis-alikes will always exist any time the need for it exists. It doesn't matter that I didn't already do it myself, and I don't have to now either, and that is not just because redis happens to exist.
This goes back to my original point. You’re living in fantasy land.
In reality, that’s not how it works. If to use Redis your company had to assign someone to do maintenance work then you just wouldn’t be allowed to use Redis. And yes, you would be using MS Redis, because none of the clones would be secure and supported enough for you to feel good using them.
> Google and AWS exploit open source work for profit, and don’t contribute back enough to ensure its support. This is what you should be really mad about. The internet giants are literally killing open source.
They would have to publish their changes if those projects used copyleft licences. Copyleft licences don't force you to "contribute back", but they do force disclosing the changes, which the community can then benefit from.
Not sure I understand your message. If you have a copyleft licence without a CLA, then the copyright is distributed between all the contributors, which makes it almost impossible to change the licence. And because it is copyleft, it means that the sources need to be distributed to the users.
So TooBigTech can build a service upon a copyleft project, but they have to distribute their changes, which means that the community can benefit from them. One example I learned about here is Grafana: AWS did not want to use their AGPL version so apparently they pay Grafana to get a commercial licence. That's of course possible only because Grafana own the copyright of the whole codebase. It wouldn't be possible with Linux, for instance, where nobody has the power to give a commercial licence and therefore it is GPLv2 for everybody.
It doesn't prevent TooBigTech from competing by serving the open source project, but that is more of an antitrust issue, I think.
Copyleft licenses are sadly legally unclear, which is why essentially no company uses them even when they should.
I mean, the intent of AGPL is exactly right, but in practice it means you can never ever use it in a company, even if you really do not want to change it or sell it or host it in isolation in any way.
That is really frustrating. Most internal licensing tools I have seen just literally blacklist any direct copyleft imports.
Essentially yes. While weak copyleft would be fine for the use cases I have seen, the distinction and the licenses have yet to be tested in an EU court.
As a consequence there are a number of legal opinions on the matter, and as a consequence of that, it is a very hard no from most compliance departments I have seen.
There is no problem to solve because there is no rug to pull out. Redis are not owed anything and no one stole anything from them.
When I try to make a business hosting copies of an http server, I am not pulling the rug out from under nginx who actually wrote the http server.
And when AWS does that better than me, they are not pulling any rug out from under me, or at least not in any way that is special to any of the software involved; it's just plain old bigger business vs. little business, no different than, say, Safelite vs. Joe the Glass Guy.
No one needs to come up with anything. OSS is already just fine. I don't know what you want, but it seems to be something you never had any right to, and no one needs to come up with anything to satisfy it.
It's easy to say everyone else is wrong, but the majority of innovation and new projects nowadays come from people wanting to do OSS while still supporting a business with it. That is, having it be more than a hobby.
Now you can say that this itself is already a misunderstanding, and maybe you are right, but I think you also forget that software reality has changed; there was indeed a time when full-time OSS was viable.
Now the reality is that there is no business model other than closed source, or OSS with some inevitable rug pull of some sort.
Everyone else is not wrong, and I did not say they were, and so any argument you try based on that is already voided nonsense before you start.
Some people are wrong. It's easy to say that because it has always been true for everything in the world. This topic is no different.
If you want to suggest that you are not wrong, or that I am, you have to present an argument that holds water for that, not anything else.
No one owes anyone a business selling OSS, not even the author of said OSS. If you write something that you want to make money selling, then you are not interested in participating in OSS. That's it. Full stop.
Sell your software honestly.
"But I can't if it's not..." So what? Ok then don't sell it. It's no one else's problem that you have a misguided notion of why OSS exists and what it is good for and why one should spend any time or effort contributing to it.
All the other wailing and crying stems from bullshit you never had any right to in the first place. Not just legally or technically but morally and common-sensely.
> "But I can't if it's not..." So what? Ok then don't sell it.
Interestingly, when it's coming from TooBigTech, apparently they're happy to say "But we can't make viable LLMs if we don't abuse copyright" (yes, they said it). And apparently it works.
Not that I disagree with your point, though: nobody is owed a business selling OSS.
the problem is the definition of Open Source is controlled by the Open Source Initiative, which has been captured by the hyperscalers
which is sort of funny because the term "Open Source" was itself coined to make it possible for people to seek funding to build companies based on the (crazy?) idea of producing software, then giving away its source code
20 years later, the structure of the industry has now changed, and now "Open Source" exists to feed Amazon, Microsoft and Google
if it's not possible to alter the definition to include licenses with terms that allow sustainable value creation for businesses other than the hyperscalers, then the term is no longer fit for purpose, and we need a new one
It's really not though. The OSI quickly loses credibility when they try to push a definition that the community doesn't like (see the Open Source AI kerfuffle).
Both the OSI and the FSF are agreed that Source Available with bans on specific use cases is not FOSS. When you've got freaking Richard Stallman opposing you you really have to do better than just scream "corporate capture". Engage with his idea of Freedom, don't set up straw men.
> Both the OSI and the FSF are agreed that Source Available with bans on specific use cases is not FOSS.
well... yes, because they decide the definition of the terms
> When you've got freaking Richard Stallman opposing you you really have to do better than just scream "corporate capture". Engage with his idea of Freedom, don't set up straw men.
Stallman has a very particular view of Freedom (itself a multifaceted term)
and he rather famously completely rejects the term "Open Source"
the situation we're finding ourselves in is one where three increasingly malevolent entities control and capture 100% of the value generated by writing and selling software with source code
if you're an employee of these entities, great for you
for the rest of us, this is a bad situation to be in
and certainly not one that could produce another Red Hat
I agree that the OSI and FSF are trapped with their most hardcore followers, and can't effectively change, assuming they even wanted to.
As for Stallman... his idea of freedom is very narrowly-scoped. In particular, it makes no distinction between hobbyists and megacorps, and is completely blind with respect to economics.
By lumping hobbyists with companies, it makes the category error of extending human rights to corporations. This of course, is nothing new in America, and hasn't been since the infamous 1886 Santa Clara County vs Southern Pacific Railroad court case, that established corporate "personhood".
Corporations are collections of humans. There are certain ways in which extending human rights to corporations is a mistake, but allowing them to use free software isn't one of them: either the individuals in the company are able to use the software or they are not, and if they are not then the software is not free.
And in this case it means that it's not AGPL, but proprietary. Which kind of proves my point: they are apparently paying Grafana Labs to avoid the constraints of AGPL. If Grafana was permissive, they would surely not pay.
You wouldn’t use the ten commandments as your only moral guide.
The landscape has changed. Google, Amazon, and Microsoft are actively trying to destroy open source business models. Don't let the leopards eat your face because you were too attached to your ideology.
> Don’t let the leopards eat your face because you were too attached to your ideology.
You're conflating ideologies. User-focused ideology doesn't care about business models. Leopards aren't eating their faces. They're perfectly fine where they are.
Business model focused ideology might care, but the AGPL exists and meets Debian's requirements. Those who care can choose to use it, or not, as they wish.
> the problem is the definition of Open Source is controlled by the Open Source Initiative, which has been captured by the hyperscalers
Thank you for pointing that out.
This is something I have noticed in the last decade:
A lot of fake, captured organisations have popped up around open source. I once went down the rabbit hole to try to make a list and quickly found dozens of them.
It is always very hard to understand what they do, and they employ a bunch of people who usually never wrote a single line of code.
One example found on the Fedora website is the "Digital Public Goods Alliance" [0]
> the problem is the definition of Open Source is controlled by the Open Source Initiative, which has been captured by the hyperscalers
I'm not sure this is true. The OSI's definition of open source doesn't seem to have changed since ~2001 [1] - before AWS was founded - and it'd been around in various forms since ~1997.
This was the era of Microsoft's 2001-era "Shared Source license" which was deliberately GPL-incompatible; Bruce Perens, author of the definition, wrote "Microsoft's Shared Source program recognizes that there are many benefits to the openness, community involvement, and innovation of the Open Source model. But the most important component of that model, the one that makes all of the others work, is freedom." [2] (Perens also judged the first version of the "Apple Public Source License" insufficiently free [3])
They've kinda always been about not just being able to view the source, but also modify it, and redistribute the modified version, merge it into other software projects, make commercial use of it, etc etc
It just so happens that this stance, adopted well before AWS existed, works extremely well for AWS.
I'd argue it doesn't "just so happen" to benefit AWS, it was causal: Open Source created AWS. AWS is structured the way that it is in order to benefit from Open Source, and it grew to its current size by so benefiting.
In a lot of ways things like AWS are what the OSI set out to create when they set out to sell Free Software as an idea to corporations. This was the pitch.
NATS almost ended up doing it recently, too. Fortunately they caved in just today, after the CNCF and the community protested. [1] While the outcome is great, it was a bunch of drama for nothing, and their reputation has been harmed.
Yep. Actually, if Redis ended up in the CNCF and Redis Labs could provide commercial hosting and extensions, that would be an outcome I would be excited about.
Interestingly, their CEO states that AWS and Google forking redis and maintaining it separately was their "goal" all along. Because fragmentation is apparently good?
Yeah, the whole "they give nothing back" line might be the worst part of the PR around these changes. It's obvious to anyone familiar with the ecosystem that it's not true, which damages the credibility of Redis's argument in the eyes of the people who matter most.
> Neither company has built in a legal safety mechanism to prevent themselves from pulling the rug again later
Previous versions are still available under the original license, right? So if you don't want to use it with a new license, you're in the same situation as if company went out of business or stopped support and development for any other reason. There are no safety mechanisms for that either.
An existing safety mechanism is to do exactly what Linux does: have a copyleft licence and no CLA. That way the copyright is shared between the contributors (so it's impossible to change the licence) and the licence enforces sharing your changes to the project.
The legal safety mechanism is the license. They gave you software under a certain license and you're in the clear as long as you follow it. You don't have to delete it if they give different software under a different license.
If you do need constant updates, you may trust forks more not to switch license, but forks tend to disappear at about the same rate that originals switch license, so why does it matter? Such is the nature of relying on free stuff - you're at the mercy of the one who hands it out.
I contributed heavily to a project during its early days and spent almost 2.5 years helping it grow. For a while I was one of the most active contributors.
Then there was talk of turning the project into an actual business, and myself and a few of the original contributors were offered extremely poor-paying jobs, which no one took. Then they got a CEO and investors, and we were basically forced out of the project unless we joined the company.
I distinctly remember being in a call where we were told they would be relicensing it eventually and launching a SaaS, to protect our work from being used by large companies. I laughed and pointed out the irony on that call: they were doing the same thing.
After that they changed their policy such that they do not accept outside PRs. It has killed any interest I had in supporting open source projects outside personal stuff.
Did you sign a Contributor License Agreement?
If not, then I'm pretty sure it's illegal to keep your changes while relicensing, without obtaining your consent.
I just wanted to make a tool to help developers. Then when the SaaS launched, they focused on adding $$$ features instead of fixing bugs, and started heavily pushing their SaaS anytime you used the tool.
They ended up switching to a terrible model with a previous release where, if you were a business or in any way making money, you now needed to pay for licenses, and it was comically expensive.
The reality is I could have forked it but I don't have the time and patience to deal with everything that comes from a massive project.
I believe AGPL + time + contributors makes that very difficult. It also means that if you have a commercial product that uses Redis you need to open source your stuff though, so it's not net better, as far as I understand. Please correct me if I am wrong.
No, it doesn't mean you need to open source your commercial product if it uses Redis. AGPLv3 doesn't work like that. You need to open source your changes to Redis if you make any, and only if users interact with your modified version over a network.
Likewise. Respect for antirez and all of that he is doing, but his hiring back feels like just trying to lure developers back after ridiculous move by the Redis corporation.
Given there are viable alternatives out there, I see no reason why someone should invest any time in Redis (we are using Valkey as a replacement).
Nothing wrong in checking other alternatives, but Redis the company didn't call me to rejoin. I approached them to do something like an evangelist and bring back some kind of community vision inside. Then... if you can code, you end up coding oftentimes, and instead of doing the evangelist work I wrote the Vector Set data type :D Just to clarify that me rejoining was not some kind of "winning back the community" plan. I wrote at large about all that, even clearly stating that the paycheck is modest (to avoid that kind of conflict of interest from economic motivation).
I agree. That ship has sailed, at least for the foreseeable future. We switched to Valkey and it's our choice for a couple upcoming projects as well. To switch back now after this whole ordeal would make no sense at all.
Microsoft made one called Garnet. I wouldn't say it's a fork though; it's basically compatible with Redis and implemented mostly in C#. It supports the RESP wire protocol from Redis for ease of compatibility.
Garnet fascinates me. Their benchmarks even claim that it is better than Redis and also Dragonfly. Are there any papers or write-ups explaining what makes Garnet fast? (I do know it's based on FASTER)
The tl;dr is it's just a lockless hashmap attached to a TCP server with a log. Simple Get/Set operations are highly optimized, so with heavy batching they are able to fetch a lot of data efficiently. The architecture scales very well when you add threads and the data access is uniform.
It struggles a bit on certain types of workloads, like hot keys; think heavy hitting of a single sorted set. It's a cool architecture.
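For intuition, here's a toy sketch of that shape in Python: a plain in-memory map behind a TCP server answering newline-delimited, possibly pipelined GET/SET commands. This is not Garnet's code (Garnet is C#, lock-free, and speaks RESP); the command framing and port are made up for illustration, and the single-threaded asyncio loop merely stands in for the lock-free concurrent index.

    import asyncio

    store: dict[str, str] = {}  # stands in for the lock-free hashmap

    async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
        # Commands arrive newline-delimited, possibly pipelined:
        # "SET key value" / "GET key"; replies go back in order.
        while line := await reader.readline():
            parts = line.decode().strip().split(" ", 2)
            if len(parts) == 3 and parts[0].upper() == "SET":
                store[parts[1]] = parts[2]
                writer.write(b"OK\n")
            elif len(parts) == 2 and parts[0].upper() == "GET":
                writer.write((store.get(parts[1], "(nil)") + "\n").encode())
            else:
                writer.write(b"ERR\n")
            await writer.drain()
        writer.close()
        await writer.wait_closed()

    async def main() -> None:
        server = await asyncio.start_server(handle, "127.0.0.1", 6380)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())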
The difference is that Redis is single-threaded, the value proposition of Redis was/is a fast (in-memory) simple (single-threaded) server to serve things so much faster than a traditional DB. It was a perfect fit and became popular with the Ruby/Rails community,etc since those environments combined with traditional SQL servers are slogs compared to what a fast server like Redis could do.
As good as Redis is, on modern multi-core machines its single-threaded design potentially leaves a lot of performance on the table. Garnet seems to be a well-designed multithreaded KV store (though sadly the benchmark page doesn't list benchmarks for more complex objects, even if the simple cases look good).
I feel like most of the obsession over Redis came from old school cgi-esque server side apps that couldn't easily maintain state themselves...
Nowadays your webapp is often a persistent, multi-threaded web server where you can cache whatever temporary state you want far more efficiently than reaching out to a KV store.
Using postgres and caching common results becomes a non-issue even at scale. The main area where a super-fast KV store shines is when you need to share a very large number of keys; where many servers only need to access a random subset and aren't interested in maintaining the full set and getting updates on a bus; where the values often change and need to be revalidated, so caching is ineffective; and where persistence is not needed, so a database doesn't matter and a lighter KV store is acceptable.
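To make the in-process caching point concrete, here's a minimal sketch (plain Python; load_user_settings and the database behind it are hypothetical): the hot result lives in the web server's own memory, with no round trip to an external KV store.

    from functools import lru_cache

    @lru_cache(maxsize=10_000)
    def load_user_settings(user_id: int) -> tuple:
        # Hypothetical expensive lookup; imagine a SELECT against Postgres here.
        return (user_id, "dark", "en")

    # The first call pays the cost; repeats within this process are memory lookups.
    load_user_settings(42)
    load_user_settings(42)  # served from the in-process cache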
Nice. I've been using Memurai (https://www.memurai.com/) for development on Windows (native, no WSL or Docker - for reasons), but this looks much better.
EDIT: Weird that being a program from Microsoft (well, it's Microsoft Research, so that probably explains it?) it has no installer and doesn't run as a service on its own.
Conventional wisdom from 10 years ago on HN is that Microsoft Research just pays some top researchers (with commercially interesting, err, interests) to keep doing their thing. I wouldn't distrust anyone from there based on their employer. That is from someone who doesn't trust MSFT very far.
Microsoft has a habit of "fake" open source. Particularly on Github.
By "fake", I mean Microsoft largely treats their Open Source codebases like they are closed-source, in-house proprietary codebases that they happen to let members of the public look at.
They'll accept a PR once in a while, but mostly it appears Issues and PR's are used as a free alternative to UserVoice.
Every decision is made behind closed doors (probably a MS Teams call actually), and you'll notice how top-heavy their staff is on these projects (half a dozen employees with "Manager" in their title on any given repo).
Beggars can't be choosers, and I'm glad they are dipping their toes into the FOSS water... but I don't really get FOSS vibes from their projects on Github.
Open source does not mean you accept contributions from others. It means that the source code is available, and that people are free to take it and modify it themselves. You are using a definition which is... not exactly incorrect, because definitions are what they are, but certainly not what everyone else uses.
I'm fairly impressed Microsoft managed not to name their Redis competitor "Cache", just to pollute the keyspace like they do with so many of their other products.
I guess it will depend on your definition of "extinguish" and a few other things but:
gestures wildly at the MIT-licenced VSCode codebase
Yes, they are not rug-pulling the VSCode source, but by locking down the marketplace (and never giving a truly open source VSCode, what developers think of as "VSCode") they are in the process of locking out forks.
Ah so this is like how Android being open source was (almost) always bullshit. You had to play all kinds of google dances to get the google apps on them.
Fair point! Good example of an attempt to extinguish.
Like seriously - I haven't heard of them doing it in quite a long time. I know they were atrocious in the 90s and 00s, I'm not disputing that at all. Are there any examples of them continuing this behavior in the last decade though?
Further, a huge percentage of the people making those decisions back then are no longer with the company, different people are in charge, etc. The industry has changed its relationship with open source: company board rooms aren't scared of the consequences of loading open source onto a server, the legality and liabilities have been hashed out, and MS isn't really even capable of pulling those fear levers anymore. MS itself has repositioned in the industry; their dreams of total computing dominance have been shattered: there's no chance of a Windows derivative owning the server market any more, there's no money in browsers or consumer OSes (heck, even MS's domination in gaming is showing cracks due to the efforts of Valve). Point being: would it even make strategic sense for them to try to EEE anything anymore?
Note: I have almost no ties to MS. I haven't used an MS os or desktop software since before covid (in any capacity, even moving the mouse on a computer running windows). I don't use any of their SaaS products personally or professionally. There are integrations between the products I help build and azure, however those are not a major source of revenue for my employer and I do very little work that even touches that stuff. Point being - I'm pretty non-MS in my life and don't have any sort of loyalty or incentive to defend them. I do abhor their EEE actions back in the 90s and 00s when they were doing them, and those still make me angry... but that's not a reason to assume that different people at a company are going to act the same as the old-school ones.
Perhaps... when I look at an Open Source codebase, I expect there to not just be source you can read, but also a way to contribute and engage with the codebase creators, beyond just bug reporting (aka. Github Issues).
While it may technically be open source due to its license, and you can literally go look at the source, Microsoft is operating these codebases as if they are proprietary. Every decision is made out of public view, and I would not be surprised to learn they have their own internal tracker for the "real" backlog items. You frequently see commands/comments by Microsoft employees which are clearly triggering workflows in their real backlog tracker, near-zero discussion happens on Github by Microsoft employees, and when it does, it's clearly through a PR filter.
Like I said, beggars can't be choosers and it's better to have this than nothing - but I don't really think Microsoft has grasped the true concept of FOSS as-of yet.
> Is SQLite not open-source?
Not really, in my opinion, either. There's no way to contribute at all... the best you can do is raise hell on the forums about a particular issue you want to see fixed. So while it's Open Source in the strict sense, it's not Open Source in the general sense.
Whether a project is developed by a community or by a single entity holding all keys is completely orthogonal to being Open Source/Free Software. There's nothing wrong in putting one kind of projects above the other, but you may want to revise "your opinion" if you want to stay communicative, because terms and definitions are only useful when people agree on what they mean.
The terminology for this is the cathedral model and the bazaar model. Under the former model, code is released from 'on high'. Under the latter, it's developed in the open and with cooperation with the community. Both count as Free and Open Source software though, provided of course that a FOSS licence is used.
I think most projects are genuinely like this though, even ones that accept outside contributions more earnestly. I understand why you'd associate being open to contributions with open source, but I think it's a mistake to treat the relationship as required rather than common.
The sibling comments contain some sharper critique of Microsoft if you haven't read them yet.
I make an open source, MIT licensed piece of software. I don't accept unsolicited contributions, but I document that people are free to fork the code and provide instructions on how to develop, test and build on your machine.
In my opinion, the spirit of open source goes beyond just tossing code over a wall for people to look at. In my opinion, it means accepting engagement from your users, their inputs and their contributions when/where warranted.
In my opinion, for something to be truly open source, I should be able to fix a bug I ran into, or implement a feature from the backlog and contribute it back upstream. If upstream is just going to ignore my contribution, pretend it doesn't exist, or reject it just because - then that codebase is just pretending to be open source.
That's not to say you are required to accept all contributions - I'm saying you should be open to contributions that A) save you time B) enhance the codebase or C) fix confirmed bugs.
In Microsoft's case - I don't see a lot of that going on. I see lots of Issues (bug reports), some PR's, but mostly opaque decision making, and complete silence on things the Corporate side of Microsoft doesn't want to comment on yet. Which is the beef - it's a corporate project run like it's proprietary but you can go look at the code. Again, better than nothing, but it's not really what I consider true open source.
> In my opinion, the spirit of open source goes beyond just tossing code over a wall for people to look at. In my opinion, it means accepting engagement from your users, their inputs and their contributions when/where warranted.
I disagree. There's nothing about open source or the various open source licenses that require accepting engagement from the community and/or contributions.
Open source means allowing modifications, and sharing those modifications. It's in most licenses that the software is provided as is and without warranty.
> In my opinion, for something to be truly open source, I should be able to fix a bug I ran into, or implement a feature from the backlog and contribute it back upstream. If upstream is just going to ignore my contribution, pretend it doesn't exist, or reject it just because - then that codebase is just pretending to be open source.
A project not accepting outside contributions is still open source, not pretending to be, and the beauty of it is - if you want it to accept outside contributions, you are able to fork it and accept contributions on your own fork, or otherwise share your modifications. But there's absolutely no obligation of the original dev/owner to accept or engage with anything from the community. It's in the license, the software is provided as is and without warranty of any kind.
Maybe it's worthwhile to coin a new term, community software, to specifically make a distinction between projects that are community developed (accept contributions) vs those that don't.
I'd say the spirit of open source is that others are free to modify the code and that's it. This requires a good license, the possibility to fork, some documentation and a way to build the project yourself.
But why would accepting contributions be required?
A lot of people are very entitled. They think that an open-source project gives the right to make request/demand of the project. Even if they are willing to write the contribution themselves, they still think they have the right to have their pull request accepted. They forget that 1) it may be outside the scope of the project, and 2) the project owners are going to be the ones that have to actually maintain the code they commit. Crazy stuff.
What if you want to implement a feature, but they don't have time to look at it and make sure it's secure, or to support future bugs? Look at the xz (IIRC) hack - not everyone has tons of free time.
How long after they release their code are they required to keep this up? Do they need to respond to your requests within 5 business days?
If they retire / move on to another project, does the source code stop being open source?
I have the same feeling. I got used to infrastructure being run as a democracy, not merely “source available under GPL/BSD/MIT”. (It’s a big thing to want, sure, but I don’t mind wanting big things.)
IMO, no. The open source definition says nothing about requiring outside contributions, and IMO the spirit of it in the beginning was never about that.
It was about being able to have access to, fork/modify, and redistribute the software. That's the important thing - that the software I use can be modified by me, and if I wish to do so, the license allows me to then redistribute those changes vs. proprietary software, which cannot be modified nor redistributed.
I'm not sure why the zeitgeist went from the above and turned into "all open source must be community software and engage with users and accept contributions", because that was never what it was about. It was a benefit of some projects, sure, but at the end of the day the important thing isn't whether or not the project accepts contributions, but whether I have the power and license to modify something to my liking, fork it if I wish, and/or redistribute my changes.
Having worked a few places that have major open source projects, it was eye-opening to see how pervasive this seems to have become. It feels like over my lifetime I've watched the value/expectations/demands/definition of OSS shift from "source is available, and I can make changes to my own copy/fork of it if required" to "source is available, and I will demand you make changes to it for me"
> source is available, and I will demand you make changes to it for me
There seems to be a disconnect here. I agree, demanding someone else make changes for you is in poor taste. The issue is, some of these projects advertise themselves as being open source while having no meaningful way to contribute changes back upstream - ie. the spirit of open source.
If I use your program for free, and encounter a bug, I should be able to fix it and contribute that back upstream so everyone benefits. That's my way of "paying back" and helping the community. Projects that reject contributions, or make contributing difficult are not in the spirit of open source, even if they are technically open source via their license.
That's what I call "fake" open source - you want the positive image of being Open Source only.
> There seems to be a disconnect here. I agree, demanding someone else make changes for you is in poor taste. The issue is, some of these projects advertise themselves as being open source while having no meaningful way to contribute changes back upstream - ie. the spirit of open source.
I mean, you're not demanding someone else make the changes. You're only demanding that they engage and build a community around their project and all that entails.
As long as the license is reasonable, anybody else could pick up the project and do the community building around it. Apache HTTP Server developed around community patches to NCSA httpd. I don't think hitch (from the people who brought you Varnish) is well known, but it started as stud with community patches. If something like this takes off, the originator will either start taking the community patches too, or not; it's their choice.
In the meantime, as a user, I get the benefit of whatever changes the origin provides, and I can patch it as I want and fix up problems that I see. That works for code-dump open source and community open source alike.
That's too vague. Free software is software that you can't take, modify, and then keep your modifications hidden or proprietary, thereby depriving your users and the community of access to those changes.
MIT license allows this, hence it's not a free software license. Because very large corporations don't like freedom they've poisoned the discourse around software freedom and launched PR campaigns trying to substitute talk about freedom with talk about openness or open source. To some extent they've succeeded, as evidenced by the chatter about it in this thread.
As an end user, I find it quite restrictive. There might be some software I won't be able to fork/modify because it was MIT. I'm not the owner of the software anymore.
"Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons..."
The MIT license always allows you to fork/modify the project. You wouldn't be the copyright owner of the existing code in your fork, but you could be the owner of the project, and the copyright owner of any new code you add.
I get the feeling. I also live in the real world and know that nobody except for a few (most notably RedHat) have figured out how to make sustainable money in open source. These closed licenses didn’t come out of nowhere. They came in response to places like AWS using the open source license to make a mint with a project — and doing so legally (it’s there in the license to do so) — but then the project suffers. So the license change is done to prevent that so the project — ostensibly — can survive. It makes sense. And so does wanting to live up to the promises of open source. It’s a tough situation for sure.
The only real reason to use non-copyleft licenses for these kinds of projects is to be able to do the rug pull, so you should have expected it instead of feeling betrayed.
I imagine they will now require copyright assignment or something like that for external contributors to be able to relicense new code under a commercial license.
A copyleft license like the AGPL didn't stop MongoDB from rugpulling. I'd argue that the AGPL, and the copyright assignment that tends to go with it, makes it easier to rugpull because forking entities would be at an extreme disadvantage in keeping the lights on compared to the closed-sourcing company. A non-copyleft license, on the other hand, makes it much easier for a forking company to cover all the same niches as the original company, making a rugpull that much more difficult.
MongoDB used to be AGPLv3. A year after their IPO they realized "aww shit, Wall Street wants continuous growth, being profitable isn't enough" and decided to migrate to a completely new license, SSPL, that's designed to put everything surrounding the software in scope of the copyleft. The implication being that if Amazon were to offer MongoDB they'd also have to release all of AWS RDS[0] as a thing you could just download and use.
The community did not like this one bit, but MongoDB doesn't need to care about what the community thinks because they had CLA'd all their contributors. That is, if you wanted something in MongoDB upstream, you had to give MongoDB full copyright ownership over the software. Which exempts them from copyleft[1]. One of the critical parts of copyleft is the "no further restrictions" rule; otherwise copyleft is just proprietary with extra steps.
[0] I don't remember if they were hosting MongoDB as part of RDS or something else.
[1] As we've seen with the Neo4J lawsuit, copyright licenses cannot tie the hands of the copyright owner. The only way for copyleft to work is to create a Mexican standoff of contributors who will sue each other to death if any one of them decides to relicense without unanimous community consensus.
AWS never offered the AGPLv3 licensed version of the MongoDB server as part of any managed service. There were large cloud providers in China that _did_ offer MongoDB as a service. They also provided the corresponding source code [1]. Despite signs that they were complying with the obligations of the license, they had the SSPL drafted anyway.
Because once it was clear that software as a service was a compelling model, it was no longer appealing to give everyone the permissions needed to offer the software as part of a service (as AGPLv3 was always designed to do).
Changing the license seemingly worked, as a partnership was eventually announced [2].
> The only real reason to use non-copyleft licenses for these kinds of projects is to be able to do the rug pull
That’s an exaggeration. The vast majority of permissively licensed projects have never “rug pulled” and never will. It might be one possible reason to choose such a license but it’s very far from the only one.
Unless a CLA transfers copyright to the project owner, the copyright owners are every historical contributor to the project. Each contribution is owned by the contributor alone and they alone are able to grant rights to it.
A CLA often tries to mitigate this by making contributors give the project owners special rights at the time of contribution.
(Note that even if relicensed, this itself can never revoke licenses granted for prior versions unless that license specifically had revocation written into it.)
Yes, a project can only be relicensed if a CLA assigning licensing rights (not ownership) is signed by all contributors, or if all code is owned by the entity relicensing it. Whether it's under a copyleft license, a permissive license, or even a proprietary license is irrelevant.
> there are open legal questions about whether the GPL and its variants are enforceable.
At this point in history, there are multiple legal cases where GPL violators were taken to court and lost or settled. See: BusyBox and Linksys/OpenWrt.
GPL v3 also has a nice clause that allows companies to "repair/cure" their non-compliance.
> Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
if that's your "good legal reason" to avoid the GPL, then it's just as much a "good legal reason" not to open source your work at all: if the GPL is not enforceable, that would mean you have used a non-copyleft license, which according to you is the thing you want to avoid for good legal reasons.
you said "avoid gpl". reason? "unenforceable". eliminating gpl from consideration, thus you advocate non-copyleft licenses instead.
with me so far?
but gpl's "unenforceability" could only mean it turns into a non-copyleft license, one which you say one should not use, so if one shouldn't use gpl because in reality it is a non-copyleft license, then you must be against non-copyleft licenses.
just to state it again for clarity: "don't use gpl because copyleft is unenforceable so gpl is just MIT underneath, and I repeat, don't use it" is a recommendation not to use MIT license.
Please link to where I said not to use copyleft licenses. Check the usernames carefully.
Note that I don't agree that GPL and MIT are equivalent, or that GPL becomes a non-copyleft open source license if not enforceable. IANAL but it might revert to the regular copyright law for wherever you publish software, not an open source license.
just be honest, you don't like GPL. The reason you don't like GPL is not because it's not enforceable, but because you don't want to enforce it, or be subject to its strictures. Your argument that you use MIT because GPL is unenforceable makes no sense, as I pointed out.
(also, it is enforceable and has been enforced, but that's a separate topic.)
All advantage accrues to hyperscaler "managed" versions. That's so much more fucked than a rug pull.
Amazon gets to make millions off of the thing you built.
"Equitable source" licenses with MAU / ARR limits, hyperscaler resale limits, and AGPL-like "entire stack must be open" clauses is the way to go. It's a "fuck you" to Amazon, Google, and Microsoft in particular and leaves you untouched.
Open source today is hyperscaler serfdom. Very few orgs are running Redis on bare metal, and an equitable source license can be made to always support the bare metal case.
If you open source something, the rich trillion dollar companies just steal it.
If you're okay with that, that's cool. But they'll profit off of your work and labor. And the worst part is that, at scale, the advantages of the sum total of open source are used to compete with you and put price pressure on your salary and career options. To rephrase that: the hyperscalers are in a position to leverage open source to take advantage of market opportunities you cannot, and they can use that to compete with your business or with competing businesses that might otherwise pay you better.
Open source needs anti-Google/Amazon/Microsoft clauses.
Yes, it does! The "problem" with AGPL3 is that it has no carve-out for companies smaller than Amazon, Microsoft, or Google. If you use AGPL, you have to open source your entire stack.
Not everyone thinks infectious copyleft / free software is a problem. But it will mean that if you use AGPL3, every part of your stack has to be open. That doesn't work for everyone.
This is why "equitable source" / "fair source" is gaining traction. You can use a license like Apache and add in clauses with MAU/ARR/hyperscaler limits that allow practically everyone else to use your software.
No, SSPL requires you to open source your entire stack. That's why the OSI and FSF rejected it.
AGPLv3 says: if you modify the software and put it on a network, you have to provide a link for anyone accessing the software to download the modified source. There are numerous drafting and technical problems with this arrangement[0], but the only parts of your stack you have to release are the parts that are actually part of the program covered by AGPLv3.
The "strong copyleft" strategy[1] is to identify a specific freedom-restricting behavior we don't like and prohibit just that. We're not saying "Amazon is not allowed to use this software", we're saying "Anyone who turns this software into a service needs to provide a way to fork the service and get the software back without losing anything". If such a copyleft license happens to scare a company into buying license exceptions, that's a happy accident.
In contrast equitable source doesn't say anything about freedom, it just says "these people need to pay a license fee". That's not FOSS, that's shareware. In FOSS, free-riding is not a bug. The problem with AWS isn't that they aren't paying a license fee, it's that they are building roach motels out of community projects.
[0] I'd link to Hector Martin's incredibly informative Mastodon posts regarding the subject, but he deleted his account after crashing out of LKML. As a substitute for that, I'll summarize my hazy memories:
- The intended compliance mechanism is to make your app a quine; but that only makes sense for webapps written in PHP/Python/etc. Someone actually put AGPLv3 on an Ethernet stack - how do you comply with that?
- It's unclear how license compliance works in a pull request driven Git workflow. If you're running the server locally for testing, and someone accesses it, have you violated the license?
- You can filter out the source offer with an HTTP proxy not covered by AGPLv3. That seems like a very wide loophole which the FSF apparently believes would work.
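To illustrate the intended compliance idea (the running service offers its own source to anyone who interacts with it over the network), here's a minimal sketch assuming a plain Python HTTP service; the /source path and tarball name are made up, and real AGPLv3 compliance is more involved than this.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    SOURCE_TARBALL = "service-src.tar.gz"  # hypothetical: the exact source this build came from

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/source":
                # Serve the (possibly modified) source of the running service.
                with open(SOURCE_TARBALL, "rb") as f:
                    payload = f.read()
                self.send_response(200)
                self.send_header("Content-Type", "application/gzip")
                self.end_headers()
                self.wfile.write(payload)
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(b'Hello! <a href="/source">Download this service\'s source</a>')

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()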
> contributed a minor improvement to Redis under its original license [...] feeling betrayed as a contributor to a properly-FOSS-codebase
How does this work legally? You write some code, contribute it under a certain license... and... a company can just re-license your code under any license they like?
They require a Contributor License Agreement [0] whereby you grant them the copyright to your contribution. Which means they become the ultimate decision-maker for all contributions and can relicense however and whenever they wish.
CyanogenMod required a CLA assigning copyright to Cyanogen Inc., only for them to basically kill the project. It was forked as LineageOS, which still requires a CLA.
IMO the anger people direct at source available licences would be better directed at CLAs. They're what hands the power away.
People, don't contribute to projects with CLAs without reading them carefully and understanding what can happen! Then you won't be surprised if a project is relicensed, because you know you signed an agreement to let them do that.
If anyone betrayed the spirit of Open Source, it is those who were trying to siphon off the entire commercial value of these projects while contributing little back to the project.
Companies have to pay people to work on it. Hell, they are even entitled to profit from it. A world with commercially viable Open Source is way better than one without it. Ditching OSS companies for trying to survive 90% of revenues going to $1T cloud vendors is counterproductive.
Due to the license rug pull, I started an org-wide movement from Redis to Valkey early this year and now there's no turning back. It also does not help Redis that Valkey is a cheaper offering from AWS (at least through ElastiCache).
I think we need to get back to a state where we take software licenses seriously, in letter and in spirit. The transitions Redis made are pristine on a legal and moral level. Feeling betrayed is absolutely uncalled for.
Not sure why Redis is blamed for the "rug pull". They didn't relicense the old versions. They just said "sorry guys, we don't want to support this project under those terms anymore". They are under no obligation to do that, legal or moral. Don't like it? No problem, fork it and maintain it yourself (as many did). But don't demand of others to continue to support the project under those terms if they do not want to. This is FOSS working as intended.
All those people that contributed, did so to the FOSS version. Their contributions live on in all the forks, both FOSS and proprietary (by Amazon & co., and Redis before today). So not sure where "betrayal" supposedly happened. Maybe when Amazon used their contributions too?
They already had made a strong statement by choosing BSD when GPL was at their disposal. That is a much stronger statement than some blog post that reflects a momentary snapshot of their plans.
It is just that people don't hear what they don't want to hear. Every BSD or MIT is a loud and clear statement to deny the guarantee that derivative works will remain free and open.
Because Redis said they would never do something and then changed their mind (which makes the whole concept of saying you will never do something useless), while Amazon never said they wouldn’t use open source software for free (which is their right).
I never really understood the hype around reproducible builds. It seems to mostly be a vehicle to enable tivoization[0] while keeping users sufficiently calm. With reproducible builds, a vendor can prove to users that they did build $binary from $someopensourceproject, and then digitally sign the result so that it - and only it - would load and execute on the vendor-provided and/or vendor-controlled platform. But that still kills effective software freedom as long as I, the user, cannot do the same thing with my own build (whether modified or not) of $someopensourceproject.
Let's turn this around. Why would you ever want non-reproducible builds?
Every bit of nondeterminism in your binaries, even if it's just memory layout alone, might alter the behavior, i.e. break things on some builds, which is just really not desirable.
Why would you ever want builds from the same source to have potentially different performance, different output size or otherwise different behavior?
IMO tivoization is completely unrelated, because the vendor most certainly does not need reproducible builds in order to lock down a platform.
> Let's turn this around. Why would you ever want non-reproducible builds?
It's not about wanting non-reproducible builds, but what am I sacrificing to achieve reproducible builds. Debian's reproducible build efforts have been going for ten years, and it's still not yet complete. Arguably Debian could have diverted ten years of engineering resources elsewhere. There's no end to the list of worthwhile projects to tackle, and clearly Debian believes that reproducible builds is high priority, but reasonable people can disagree on that.
This is not to say that reproducible builds are not worth doing, just that depending on your project/org lifecycle and available resources (plus a lot of subjective judgement), you may want to do something else first.
Debian didn't "divert engineering resources" to this project. People, some of whom happen to be Debian developers, decided to work on it for their own reasons. If the Reproducible Builds effort didn't exist, it doesn't mean they would have spent more time working on other areas of Debian. Maybe even less, because the RB effort was an opportunity to find and fix other bugs.
Yes, the system is not closed and certainly people may simply not contribute to Debian at all. However, my main point is that reasonable people disagree on the relative importance of reproducible builds among other things, so it's not about "want[ing] non-reproducible builds" even if one has unlimited resources, but rather wanting reproducible builds, just not at the expense of X, where X differs from person to person.
"It's possible to disagree on whether a feature is worth doing" is technically true, but why is it worth discussing time spent by volunteers on something already done? People do all sorts of things in their free time; what's the opportunity cost there?
For me as a developer, reproducible builds are a boon during debugging because I can be sure that I have reproduced the build environment corresponding to an artifact (which is not trivial, particularly for more complex things like whole OS image builds which are common in the embedded world, for example) in the real world precisely when I need to troubleshoot something.
Then I can be sure that I only make the changes I intend to do when building upon this state (instead of, for example, "fixing" something by accident because the link order of something changed which changed the memory layout which hides a bug).
> things like docker have been around doing just that for a while now.
That's just not enough. If you are hunting down tricky bugs, then even extremely minor things like the memory layout of your application might alter the behavior completely - some uninitialized read might give you "0" every time in one build, while crashing everything with unexpected non-zero values in another; performance characteristics might change wildly and even trigger (or avoid) race conditions in builds from the exact same source thanks to cache interactions, etc.
There is a lot of developer preference in how an "ideal" process/toolchain/build environment looks, but reproducible builds (unlike a lot of things that come down to preference) are an objective, qualitative improvement - in the exact same way that it is an improvement if every release of your software corresponds to one exact set of source code.
Docker can be used to create reproducible environments (container images), but can not be used to reproduce environments from source (running a Dockerfile will always produce a different output) - that is, the build definition and build artifact are not equivalent, which is not the case for tools like Nix.
I see reproducible builds more as a contract between the originator of an artifact and yourself today (the two might be the same person at different points in time!) saying "if you follow this process, you'll get a bit-identical artifact to what I have gotten when I followed this process originally".
If that process involves Docker or Nix or whatever - that's fine. The point is that there is some robust way of transforming the source code into the artifact reproducibly. (The fewer moving parts involved in this process the better, just as a matter of practicality. Locking up the original build machine in a bank vault and having to use it to reproduce the binary is a bit inconvenient.)
The point here is that there is a way for me to get to a "known good" starting point and that I can be 100% confident that it is good. Having a bit-reproducible process is the no-further-doubts-possible way of achieving that.
Sure it is possible that I still get an artifact that is equivalent in all the ways that I care about if I run the build in the exact same Docker container even if the binaries don't match (because for example some build step embeds a timestamp somewhere). But at that point I'll have to start investigating if the cause of the difference is innocuous or if there are problems.
Equivalence can only happen in one way, but there's an infinite number of ways to get inequivalence.
Tavis makes some good arguments, but since that post I've seen a couple of real-world situations where reproducible builds are valuable.
One is where the upstream software developer wants to build and sign their software so that users know it came from them, but distributors also want to be the ones to build and sign the software so they know exactly what it is they are distributing. The most public example is FDroid[1]. Reproducible builds allow both the software developer and the distributor to sign off on a single binary, giving users additional assurance that neither is sneaking something in. This is similar to the last example that Tavis gave, but shows that it is a workable process that provides real security benefit to the user, not just a hypothetical stretch.
The second is license enforcement. Companies that distribute (A/L)GPL software are required to distribute the exact source code that the binary was created from, and (for GPLv3) to provide the ability to compile and install a modified version in its place. However, a lot of companies are lazy about this and publish source code that doesn't include all their changes. A reproducible build demonstrates that the source they provided is what was used to create the binary. Of course, the lazy ones aren't going to go out of their way to create reproducible builds, but the more reproducible the upstream build system is, the fewer extraneous differences downstream builds should have. And it allows greater confidence in the good guys who are following the license.
And like others have said, I don't see the Tivoization argument at all. TiVo didn't have reproducible builds, and they Tivo'd their software just fine. At worst a reproducible build might pacify some security minded folks that would otherwise object to Tivoization, but there will still be people who object to it out of the desire to modify the system.
You can still slip malware into a reproducible build, but you have to do it in the open. If you do it via injecting a tampered-with artifact via some side channel which is specific to your target, they will end up with a hash that doesn't agree with the one that is trusted by rest of the community, and will have reason for suspicion.
That benefit goes away if the rest of the community all have hashes that don't agree with each other. Then the tampered-with one doesn't stand out.
It basically means that not everybody needs to build from source code if they want to verify that the binaries they're using haven't had malware injected during the build process. I.e. as long as enough people check that they can reproduce the build, and call out any case where it doesn't match, everyone else can just use the binaries without building from source. This means auditing efforts can focus just on the source code, which is a lot more tractable (but still hard, and imperfect - it means a potential attacker needs to work a lot harder, as opposed to a compromise of the build servers basically giving them free rein without much risk of detection).
It doesn't really do anything at all for tivoisation; TiVo managed it just fine without reproducible builds.
There is merit to some of the security arguments. However, one thing reproducible builds enable is to reliably identify the source code version from which a particular build was produced. If a build artifact is found to have undesirable behavior (whether malicious or just a genuine bug or misdesign), reproducible builds let you reliably trace that behavior back to the source code, and then modify only the undesired behavior. If, on the other hand, you can't identify the corresponding source code version with certainty, and therefore have to fix the behavior based on a possibly different version of the source code (or of the build environment), then you don't know that it doesn't additionally contain any new undesired behaviors.
To achieve that, it is enough to hash inputs and cache the resulting outputs. Repeating a build from scratch with an empty cache would not necessarily have to yield the same hashes all the way down to the last artifact, but that's actually a simplification of the whole process, and not a bad thing per se.
> To achieve that it is enough to hash inputs, and cache resulting outputs.
Thing is, inputs can be nondeterministic too - some programs (used to) embed the current git commit hash into the final binary so that a `./foo --version` gives a quick and easy way for bug triage to check if the user isn't using a version from years ago.
Adding the Git hash is reproducible, assuming you build from a clean tree (which the build script can check). Embedding the current date and time is the canonical cause of non-reproducibility, but that can be worked around in most cases by embedding the commit and/or author date of the commit instead.
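As a rough illustration of that approach (the function name and layout are made up, not taken from any particular project), a build script can derive its version string from the commit hash and the committer date, honouring SOURCE_DATE_EPOCH if a distributor sets it, so the embedded string depends only on the source tree rather than on when the build ran:

    # Illustrative build-script fragment: embed a version string that depends
    # only on the source tree, not on when or where the build ran.
    import os
    import subprocess
    import time

    def reproducible_version() -> str:
        # Commit hash and *committer* date are properties of the source, so
        # every build of the same commit embeds the same string.
        commit = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True).strip()
        commit_ts = int(subprocess.check_output(
            ["git", "log", "-1", "--format=%ct"], text=True).strip())
        # SOURCE_DATE_EPOCH (from the Reproducible Builds spec) lets a
        # distributor pin the timestamp explicitly; otherwise fall back to
        # the commit date.
        ts = int(os.environ.get("SOURCE_DATE_EPOCH", commit_ts))
        stamp = time.strftime("%Y-%m-%d", time.gmtime(ts))
        return f"{commit} ({stamp})"

    if __name__ == "__main__":
        print(reproducible_version())  # e.g. "1a2b3c4 (2024-05-01)"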
This is only a problem if those nondeterministic inputs are actually included in the hash. This is often not the case, because the values are included implicitly in the build rather than explicitly.
Is it possible for mortals to rebuild gcc from scratch? Can I start with some minimal, auditable compiler (tcc?) and build up to a modern gcc? Or would it be some byzantine path where I need to compile gcc v1998, then perl, then Python 1.8, enabling you to compile gcc v2005, which lets you build Python2.3, etc.
It is a byzantine path, also because gcc switched to C++ at some point (for no good reason IMHO). But there is a project that maintains such a bootstrap path: https://www.gnu.org/software/mes/
Mh. Though, if you have deterministic builds for GCC, imagine how much of a problem some nerd in northern Washington or Scandinavia with their own strange C build chain would be for anyone trying to inject something strange into these compilers during the build process.
Like, you spend millions to get that one backdoor into the compiler. And then this guy is like "Uhm. Guys. I have this largely Perl-based build process reproducing a modern GCC on a 166 MHz Pentium, swapping RAM to disk because the motherboard can't hold that much memory. But the desk fan helps with cooling. It takes about 2 or 3 months to build, but that's fine. I start it and then I work in the woods. It was identical to your releases about 5 times in the last 2 years (can't build more often), and now it isn't - something differs somewhere deep in the code sections. My Arduino-based floppy emulator is currently moving the binaries through the network"
Sure, it's a cyberpunk hero fantasy, but deterministic builds would make this kind of shenanigans possible.
And at the end of the day, independent validation is one of the strongest ways to fight corruption.
Auditors can take a copy of the source, reproducibly build it themselves, and thus prove that the binaries someone would like to run match the provided source code.
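A minimal sketch of that final comparison step, assuming the auditor has already re-run the published build recipe; the artifact path and the expected digest are placeholders:

    # Sketch of the verification step: hash the locally rebuilt artifact and
    # compare it against the digest published for the distributed binary.
    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        rebuilt, published_digest = sys.argv[1], sys.argv[2]
        ok = sha256_of(rebuilt) == published_digest.lower()
        print("MATCH: binary corresponds to the audited source" if ok
              else "MISMATCH: investigate the difference")
        sys.exit(0 if ok else 1)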
> This diagram demonstrates how to get a trusted binary without reproducible builds.
Ages ago our device firmware release processes caught the early stage of a malware infection because the hash of one of our intermediate code generators (win32 exe) changed between two adjacent releases without any commits that should've impacted that tool.
Turns out they had hooked something into Windows to monitor for exe accesses and were accidentally patching our codegen.
Eventually you just stop trusting anything and live in the woods, I guess.
> With reproducible builds, a vendor can prove to users that they did build $binary from $someopensourceproject, and then digitally sign the result so that it - and only it - would load and execute on the vendor-provided and/or vendor-controlled platform.
As long as Debian provides source packages and access to their repos, that's not an issue - a digital signature has nothing to do with Reproducible Builds; you don't actually need one to get the same bytes.
It is not that different from tamper-proofing medications. It proves that no one added poison to whatever you are consuming, after that thing left its "factory".
I use Quad9 (the winner in this benchmark for me, personally) as a fallback resolver, in case the one I operate and use primarily (https://resolv.us.to) happens to break. I created my own (very simple - there's no global unicast with geolocation involved or anything!) service and made it publicly available a few years back because I think it's important for "ordinary" Internet users to have more than a handful of alternatives for DNS resolution. Meddling with DNS, down to individual ISPs' servers, is one of the cornerstones of Internet censorship, which I believe must be opposed whenever possible.
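For illustration only, a rough sketch of that primary-plus-fallback behaviour using the dnspython package; the primary resolver address is a placeholder, not the address of the service above:

    # Rough sketch of "use my own resolver, fall back to Quad9 if it breaks",
    # using the dnspython library. The primary resolver IP is a placeholder.
    import dns.resolver

    PRIMARY = "192.0.2.53"   # placeholder for a self-hosted resolver
    FALLBACK = "9.9.9.9"     # Quad9

    def resolve_a(name: str):
        for server in (PRIMARY, FALLBACK):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [server]
            resolver.lifetime = 2.0  # seconds before giving up on this server
            try:
                return [rr.to_text() for rr in resolver.resolve(name, "A")]
            except Exception:
                continue  # primary failed; try the fallback
        raise RuntimeError(f"no resolver could answer for {name}")

    if __name__ == "__main__":
        print(resolve_a("example.com"))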
Even if WWIII is not a consequence of all this senseless and reckless militarization, each and every penny spent on it will be sorely missed in human civilization's real fight of this century: combating climate change. And it's worse than a mere opportunity cost, since this kind of spending will itself add to the problem.
I'm afraid humanity is blowing it for good with this.
The militarization is a response to the fact that a world war is already underway, and a key ally of those accelerating militarization is either bowing out or actually switching sides.
A world war will not be the result of the militarization, just like it wasn't a result of the militarization Britain undertook after the 1938 Munich crisis.
Isn't the opposite actually the case? Hegseth has called it a "division of labor." [0]
Ultimately, when war breaks out with China, and a year into that conflagration Russia (ever the opportunist) decides to take a few bites out of the Baltic states, who's going to defend Europe? The US military isn't omnipotent.
The opposite? You mean, the United States is increasing its integration with and support for its allies in the global conflict against the aggression of the Russian-centered alignment?
No, that's 100% not what is happening.
> Hegseth has called it a "division of labor."
Probably a lot more useful to look at concrete policy actions than Hegseth’s vague, content-free characterizations.
And in concrete reality, the US is cutting support for those who have been on the same side as it has, in some cases threatening them with violence and annexation, and providing aid and comfort to the aggressor that it had been working with others to defend against.
You're confusing tactics for strategy (as is the media, which is entirely necessary for the strategy to work). I could point you at the policy papers, but if you think the US can upend 70 years of foreign policy in the blink of an eye and without a fight, you can believe anything.
No, you're confusing wishful thinking fantasy justifications with concrete reality of actions and material effects.
> if you think the US can upend 70 years of foreign policy in the blink of an eye and without a fight, you can believe anything.
If you think it can't do that at the whim of the President (with or without a fight, since no opponent can win that fight), especially after a massive purge of the military leadership, you have an even more exaggerated fantasy view of the power of the "deep state" than even the paranoid fantasies -- that those presenting them don't even believe -- used as a pretext for the purge.
Yes, radical breaks in policy are possible, and the Trump Administration has been doing them in virtually every ___domain.
You're being overly dramatic. Germany has been spending low amounts of money on its military for two and a half decades. Circumstances changed and spending needs to be increased to adjust to the new situation. This is not militarization, it's the state fulfilling its fundamental purpose of providing external security for its citizens. Spending is going to be less (relative to GDP) than in the 70s and 80s.
The agenda after WW2 was to not allow Germany to militarize itself again and to keep the Soviets out. Now it seems that Germany is allowed to take its safety into its own hands again. Maybe there is more trust (western values?), maybe the Americans want to pull out? I still find it an astounding development. Maybe because it is a European project. Becoming adult. And the US has a bigger fish to fry.
There is definitely more trust toward Germany than toward the USA or Russia now. Either way, even if there were less trust, it does not matter. Russia is a very active threat and America is hostile toward Europe, threatens annexation of part of it ... while being friendly to Russia and other authoritarian regimes.
So it is not like there is another choice.
Germany at least teaches about its own atrocities as atrocities. Other countries have a massively bigger problem admitting their past might have been ugly.
The historical German inability to correctly read geopolitical currents, anticipate their domestic dynamics, or understand international perceptions of Germany seems to be intact. It's true that the Atlantic Council and other US cultural-state entities aren't making it easier, but this is egregious.
No one looks to Germany for leadership on rule of law, or human rights, least of all Germans.
I did not say Germans are perfect. They are massively better than the USA, and overall trustworthy enough to be trusted.
Your last link is about the AfD being supported by Elon Musk. And the second is about other parties not wanting to cooperate with the AfD - and while fascism is on the rise everywhere, in America there is basically no functional opposition.
Sad to meet someone who doesn't care that they're going down the same river and over the same waterfall so long as someone else is going a little faster.
I know exactly what's in the links, what I don't know is why you believe it's logical to assume that every other human being must be a cheerleader for one group of corrupt authoritarians or another. Supporting either government as such is shameful and your descendants will be as ashamed of you as you are of yours.
I believe it's fine now because there's been another, better, solution that doesn't rely on spending so many $$$ on it; the EU. USA is probably not at all as worried about Germany as a country going crazy again.
Ironically, Hitler wanted to establish a third Reich - basically a new Roman empire that spanned all of... Europe. Funny how these things go. Old wine in new bottles.
I don’t have the exact numbers, but for every billion not invested in healthcare, infrastructure, or clean energy, countless people will end up suffering or dying unnecessarily.
Take this study[0] as an example: austerity measures in Greece led to 10,000 avoidable infant deaths. Plenty of other studies show similar results, cutting social spending costs lives.
So how many deaths will come from shifting €800 billion from social programs to military budgets?
It’s a sure thing that this will cause suffering and death, while the idea that we need all that money for defense is just speculation, especially when the EU’s combined military budget is already far bigger than Russia’s.
The difference in Germany is that they made a historic change to their constitution so they are now able to grow their military by borrowing far more money. They are not moving money away from healthcare and infrastructure; they are creating a way to increase the military budget without having to do so (because in Germany, the money is sorely needed there as well).
Basically making more debt that the future generations will have to pay? The end result will be the same. These are just financial tricks, money can be created out of thin air but not resources, energy, etc.
I would even agree with this, but I wonder why I never hear anyone calling for the EU leadership to be held accountable for pursuing a policy of blind dependence on the US. We've cut ties with anyone the US disliked, only to be left alone all of a sudden - why aren't we looking for those responsible for this? I can tell you, many of them are currently in a position of power within the EU.
Right now we have top diplomats like Kallas saying publicly that we need to find a way to beat China, all while we already have problems with Russia and the USA. They are putting us in a corner against the rest of the world, and for what? Who has to benefit from this?
> I wonder why do I never hear anyone requesting for the EU leadership to be held accountable for pursuing a policy of blind dependence on the US?
The EU is a democratic institution. Its leaders are elected (or appointed by those who are elected). The way to hold them accountable is during elections.
I don't think that anyone campaigning on a platform to spend money on defense in EU would have been very successful prior to 2025. Things likely changed now.
I don't understand how Von der Leyen got "elected" again then, given her horrible performance, and that only 37% of Europeans view her favourably[0]. The catastrophic situation we find ourselves in developed under her commission, after all.
> The way to hold them accountable is during elections.
Given that VDL was elected by the EU parliament with a secret ballot, how do I know which MEPs voted for her, so that I can vote for someone else at the next EU elections?
Von der Leyen is president of the EU commission. Members of EU commission are appointed by each EU member head of state - who are all democratically elected. All EU countries should be democracies after all, although Hungary is certainly stretching this definition.
The president of the commission is appointed by the EU council (which is generally formed by the government of each member state), but has to be formally approved by the EU parliament.
If you are unhappy with your country's elected members of parliament (or with how its head of state conducts your country's position within the EU), you can vote to change it.
This includes decisions on which votes are by secret ballot.
Nothing of what I said is incorrect, yet you chose to provide a condescending and pointless explainer instead of addressing my arguments.
Why is VDL ruling the commission if her previous performance was terrible and she isn't viewed favorably by the majority of Europeans?
Why was she appointed by MPEs with secret ballot?
Given the current situation, do you really believe the democratic system in place for the EU provides efficient mechanisms for holding elected officials accountable for their actions?
I'd honestly expect better from the supposed "cradle of democracy".
> Nothing of what I said is incorrect, yet you chose to provide a condescending and pointless explainer instead of addressing my arguments.
I provided an explanation with no particularly condescending tone. I have no idea how your arguments were not addressed.
> Why is VDL ruling the commission if her previous performance was terrible and she isn't viewed favorably by the majority of Europeans?
The opportunity to change is in the regular elections, both at national and EU levels. That's when unpopular leaders are replaced.
The fact that a particular leader is unpopular does not mean much, as they are indirectly elected (as happens in parliamentarism).
> Given the current situation, do you really believe the democratic system in place for the EU provides efficient mechanisms for holding elected officials accountable for their actions?
Yes, I do. Depending on what you mean by "held accountable", of course. People sometimes use this jargon in a very loaded manner.
The main problem I see is not the "accountability", as the commissioner is accountable to the council that picked them.
The main problem is that the council is formed by national government representatives. This mixes EU politics with local national politics. I may vote for a given party in national elections for local reasons (e.g., housing and transportation), but disagree with their stance at the EU level.
I don't know a good way to resolve this dissonance outside of some sort of federalization of the EU (something I think would be positive btw).
> The main problem I see is not the "accountability", as the commissioner is accountable to the council that picked them.
You realize that cannot work when both the council and the commissioner play from the same political agenda? For the council to hold the commissioner accountable would mean admitting their own guilt. There is no incentive for them to do this, so there is no mechanism for accountability at all.
One example? VDL privately conducting EU business with Pfizer for a vaccine purchase on her phone (illicit) and refusing to provide the text messages to the EU's general court. All while her commission is supposedly based on defending standards of transparency, efficiency and so on.
What about her countless delusions about Russia's economy being about to collapse while all she accomplished was to send EU in a recession?
Or her blindly pursuing a policy of dependence on the USA and hostility towards China, only for the US to dump us as soon as they got a new president?
Please tell me where's the accountability in all of this.
Or why now, given her track record made up entirely of failures, we should trust her to guide the EU into a new very delicate historical phase.
You completely ignored the body of my message to complain about EU leadership, offering no sources, just a bunch of grievances (some of which are dubious at best).
Not sure where the conversation would go on from here. You will just keep being angry.
I keep telling you this system is broken, you keep replying that this is how things are supposed to work in this system. I agree that this isn't going anywhere.
The plot twist of the century would be if climate change is the driving force behind our bluster about annexing Canada and Greenland. He can't admit it publicly without alienating his base, so he's forced to come up with increasingly ludicrous ruses to justify his attempts to save America from its real threats.
In a way it is his very openly stated goal. He has mentioned that it's a national security issue, since Russian and Chinese ships are passing through the Arctic route, and as the ice melts Greenland and Canada will become increasingly important.
Researchers and analysts tend to agree humanity already blew it. We waited way too long to start mitigation practices and what little we’ve done has already started to be rolled back. Even if we restarted efforts it would take too long to ramp up given the snowball effect already in play. We are cooked.
So… maybe, but to my mind counterfactuals don’t really help clarify thinking about this kind of thing. The range of futures available to us today doesn’t really depend too much on alternate futures that may have been available had we done things differently in the past.
I understand the political necessity of setting a concrete 1.5 degC warming target and messaging it as an all-or-nothing kind of goal, but “we missed it” isn’t an absolute. It’s not the line between “hellfire and brimstone” and “salvation”: there’s still a range of outcomes and behavior still matters.
The fatalism strikes me as counterproductive and paralyzing: people can and will continue to try to survive, even if there are wildfires now, or hurricanes, or heat waves, or worse. Better, it seems to me, to focus on what actually is changing, how people are responding, and how to mitigate and adapt.
It's okay to have a differing opinion, but ultimately the oceans are warming much faster than simulations created by experts predicted. The runaway greenhouse effect has always been front and center of the "oh no" part of humanity warming the planet and so far it appears "runaway warming" describes the state of the ocean - which is very not good.
The reason to lift the debt limit is actually so that, at the current time, it does not become an either/or choice.
Unfortunately, Russia seems a very real threat to many European countries (particularly those with a Russian minority population). Germany needs this money e.g. to actually set up a NATO brigade in Lithuania.
While all this military spending is indeed lost money, it seems that the cost would need to be split, and there is not even consensus on that inside the EU.
Germany has already become a target, as its physical and digital infrastructure is under constant attack while we feel unable to defend ourselves. This is not about WWIII, but actually about preventing it from happening.
If Germany is conquered by Russia, there's exactly nothing done to combat climate change. Warding them off gives at least a chance to work on that and some progress.
It's much worse; just look at the map. Global warming would be massively beneficial to Russia - the Arctic sea routes would be more usable by its navy (whatever remains of it) and for shipping lanes, and disappearing permafrost makes much more land usable. They have massive oil and gas reserves that they need to sell; half of their economy runs off this.
Thawed permafrost is unlikely to be arable soil 'just so'. It tends to be rather acidic, and teems with nasty insects. Who knows what else is thawing out, too.
There's a latent conspiracy theorist deep inside me that says they've figured out the only way to stop the warming is the complete collapse of all post-conflict industrial society and start building again.
I’ve thought for over a decade that we need to plan for climate change as a certainty.
Even if the US and Europe made huge emission cuts, the rest of the world will not. I’m sure developing nations will stay poor if we ask nicely. “Hey Africa. We know about the three centuries of exploitation and colonialism but hey… could you do us a favor now and stay poor to fix the problem we mostly created? Thanks!”
There is no chance of this happening.
There’s only one realistic solution. If renewable plus storage or some other non carbon based source can actually be cheaper and easier to deploy than coal and gas, it might replace them. As it stands coal and gas remain the cheapest sources of power (though the gap has narrowed) in most regions. They are also easy to deploy with a global market in both generation equipment and fuel and a huge global talent pool of people who know how to do it.
We don’t need to invest a shit ton of money to stop climate change. We just need to decide to do it. There are cost-effective ways to do it. What we’ve been lacking is the consensus to do it.
The simplest way to stop climate change would be a massive transition to nuclear power: ideally fusion, but until the technology is there for fusion, fission. Nuclear power, great though it is, costs money - huge amounts of money.
Europe has been benefiting from being friends with the US, which allowed not spending so much money in its own defense capabilities. Money that could then be destined to welfare. Now that US threatens to cut such good terms in their relationship, EU finds itself with decades of accumulated underinvestment, and the need to get up to speed in that front for yesterday.
We're still not behaving collectively as much better than idiotic monkeys fighting in a cage. Defense from our neighbors is still a primary concern. We'll all die burnt by a heat wave.
The “benefitting from being friends with the US” is exactly the frame Trump would love for you to use. It’s a little more nuanced than that imo.
After WW2 the US wanted the European powers under its wings rather than rebuilding their own militaries to the level of a global power. There were to be two powers: the US and the Soviets.
It suited the US interests to force Europe in a dependent position. Partly for good reasons like preventing us fighting amongst ourselves starting another war. Which we’d just done twice, so fair enough.
But also to stop us from coming under communist influence voluntarily. And, just for imperialist reasons. They were just more modern and subtle about it than the empires they replaced, most notably the British Empire. The almost-century of western Europe being effectively part of an US empire culturally, financially and militarily is now coming to an end. How we’re gonna deal with that? No idea. It’s going to be pretty damn hard in a lot of ways.
European countries are going to "wait and see what happens"? You think Poland, which has only been free from Russian occupation for about 30 years, is going to wait and see for another half a decade whether it ends up occupied by Russia again?
Putin has proven time and time again that he will exploit any perceived weakness. He did so in 2014 with Crimea, and when Europe (and the US) failed to react it provoked him to try again in 2022. There is zero evidence Putin is going to wind down after Ukraine. He has stated multiple times publicly that he thinks the biggest tragedy of the last century was the collapse of the large Russian empire.
War is the consequence of a lack of defense, not the opposite. Nothing could illustrate that better than the period since 1947 and the invasion of Ukraine in 2022.
Never has the world been so heavily armed. We have nukes that can wipe each other out at the press of a button. The result? A period of peace unprecedented in human history.
Putin invaded Ukraine because he thought he could win easily. Countries don't risk their existence in that way unless led by madmen.
Would keeping a few nukes have protected Ukraine from Russia?
Likely a prolonged period of espionage and subterfuge by the Russians to gain control of those nukes would have ensued. Maintaining control would be difficult especially in Eastern Ukraine where the Russian language predominates.
Putin wants to recreate the military conquests of previous Russian leaders. He thinks he's Peter the Great but his situation is more akin to Don Quixote de la Mancha in his later years: I imagine that he sees all those nukes, regrets how much they cost to make and maintain and that he hasn't launched them (each launch moving a line item from a debit column to a credit column). He may not yet realize that the fuel from one nuclear warhead in a nuclear reactor could supply a small city with electrical power for years. IOW he's sitting on a fortune and doesn't know it. Sad.
Did you know that for the past two decades, 10 percent of all the electricity consumed in the United States has come from Russian nuclear warheads?
Russia is enormous geographically, has wonderful resources and people who are bright and creative. A shame they're getting killed in Ukraine. The only good news is that there will be more women for each young man and, once they return from war, each will be tasked with the need to create a new generation of Russians. Good work, if you can get it. If they don't, the Chinese will move in from the east (probably will anyway, already doing it de facto).
Honestly the response to climate change has been "We need to secure water and food for ourselves. Who has them and what's our invasion plan?". Or "We have water and food. What's our defense plan?"
Having fewer mouths to feed is a positive (if viewed from this selfish point of view) side effect.
From what? There's no serious indication of a russian threat to invade western europe.
> climate change can wait
That's actually the exact opposite of the truth. Like "rust never sleeps", neither does any other ongoing chemical process. Climate change will keep on chugging 24/7 and receive exactly the amount of productive attention it's already received, which is none. Every effort by equity since the broad awareness of this reality has been to obstruct any action that would disrupt current revenue streams.
Any educated, and even marginally objective, person cannot deny the ongoing disaster of human environmental devastation in all forms, and its ever-increasing impact on all human activity.
However all major investors in the status quo are also clear on another fact: they're old enough to be dead before the worst of the consequences come to bear. So they advocate burning it down for profit today, and fuck the future.
I'm also old enough to not have to face the worst of it, but I actually feel we should steward our environment, not just burn it down for this quarter's returns.
All of this is in addition to the reality that peace is more economically productive, and war is just one big murderous disaster. The use of military force through the so-called cold war, and into the present, has left us with these festering military operations that are depriving all of us of a peaceful and productive life, and for what? So some hyper-rich asshole can have more. That's what "glory of god and country" is all about. Getting the idiot herd to dismember itself to see which ruling class asshat gets more stuff.
> From what? There's no serious indication of a russian threat to invade western europe.
Well, either you are not following the openly stated intentions of the Russians or you are not taking them seriously. The former would be sheer ignorance, the latter would be a grave mistake.
> From what? There's no serious indication of a russian threat to invade western europe.
The same was said of Ukraine. It made no sense, people thought Putin wouldn't be so self destructive, but here we are.
Given the stakes and consequences of being wrong, Europe is doing the right thing. Even a 5% chance of Putin invading Europe is worth buying an insurance policy for, given that Europe's existing policy has proved worthless.
> All of this is in addition to the reality that peace is more economically productive, and war is just one big murderous disaster.
You make it sound like war is a choice one makes, and that laying down arms, appeasement or surrender is a solution. It's been proved time and again that that just leads to more war.
Europe has plenty of time for its militarization. If it had done it before then they wouldn't have to be doing it now, but it wouldn't have deterred any of those events.
Delaying was a reasonable choice given the information at the time, hoping that Putin was not a moron and that he would not take control of the US. They are reacting adequately, at a deliberate but not reckless pace, in response to the new information.
I don't care about points, but I do find it very interesting that this is getting downvoted. As someone who has traveled the world and lived on different continents, I thought my comment would be pretty obvious. But it isn't for a majority, and I truly wonder who is disagreeing then:
1- I believe that Americans overall, whether left or right, let alone far right, can't disagree with that statement - that Europe doesn't (didn't) take its security seriously and Americans don't want to be involved anymore
2- Europeans? I don't know, maybe some still live in their bubble and all is good? Reading some of the comments, I can see such people but that's certainly surprising to see a majority
3- And then the rest of the world ... the ROTW doesn't care much about what's happening in Europe, and climate change is the least of many people's problems.
Russia might not be able to invade Germany in the near future, but the much more likely and concerning scenario would be Russia nibbling away at EU countries in the east like the baltic states. If that happens and the NATO/EU states hesitate to defend that would almost certainly shatter these alliances and bring a much more uncertain future.
People claimed that Russians were stripping stolen washing machines for chips, that sanctions would collapse their economy, etc. Lots of people claim lots of things; sometimes they are right.
Based on previous actions by Russia, like the 2014 invasion of Ukraine. But mostly I took this scenario from what one expert wrote (Carlo Masala), and it is just that, one scenario.
But this is similar to what other experts say: that the more concerning weak point in NATO is political. If Russia could successfully drive a wedge between the NATO states, the alliance becomes vulnerable even if the total conventional military of its members is superior to Russian forces.
There's plenty of other experts saying that peaceful cooperation with Russia is possible. Wouldn't that be preferable to war to the last man, or a new decades long cold war?
I don't understand why we don't talk more about achieving that, instead of blindly preparing for WWIII. NATO shouldn't even exist since the USSR collapsed.
Sure. Might be. But you are here asking for Europe to preemptively roll over and give in to Russian wars of aggression.
From a game theory point of view, how is that supposed to bring peace? That just shows Russia that they can do whatever they want and reach their goals. We already had the Minsk agreement, which Russia violated. Why should Russia stop when we give in to their demands? What's the logic there?
At some point you have to show strength. And earlier is probably better if you want to prevent WWIII
> At some point you have to show strength. And earlier is probably better if you want to prevent WWIII
Sure, the EU combined already spends three times as much as Russia on "showing strength". I'm sure there must be a way to use what we have without tripling the expense. If nothing else, because showing that we need 10 times their military expense to keep up with them would only show that we are in fact weaker.
Unless the goal of rearmament is only to make a few weapon manufacturers richer, then I'd say we've found the most efficient way to do it.
I don’t think re-armament is the only or the best solution. It’s just that with the US having left the picture Europe does have to show strength if it has to have any hope of keeping Russia at bay. That‘s not just arms, that’s also credible deterrence. How can Europe achieve that absent the US without spending on arms?
I do think that Ukraine is instructive in showing that Russia is not as almighty as they might seem, but in terms of outcome Putin is scarily close to achieving practically all of his war aims short of Ukraine ceasing to exist. I learned that Putin is patient. He can take it step by step. He does not value human life. And that's dangerous.
At great cost to the Russian people, sure, but does Putin care? Another five to ten years and he can give something else a go. And suddenly he is in the Baltics or at the Polish border.
Sure, that would be preferable if Russia were willing to accept it. But they proved those experts wrong in 2022 and have not changed their ways since. Maybe you could argue in the 90s that NATO shouldn't exist, but Russia's actions proved such arguments wrong.
After how many slaps in your face will you raise your hands? Europe tried peace with Russia and Russia invaded country after country over the past decades. Where would you draw the line?
There's a saying: if you want peace, prepare for war. Especially with Russia, who seem to have a habit of cooperating with countries that can defend themselves and invading those that can't.
> There's a saying: if you want peace, prepare for war.
Sure, the EU combined already spends three times as much as Russia on "showing strength". I'm sure there must be a way to use what we have without tripling the expense. If nothing else, because if we need 10 times their military expense to keep up with them, we'd only show that we are in fact weaker.
Based on the way Russia has been gradually pushing more and more. Step by step. Slowly.
They take what they want. They are appeased. A couple years nothing happens. They take what they want. They are appeased … etc.
Invading Ukraine should be a clear warning that Russia will not just stop. For appeasement to end and for Europe to seriously look for viable paths to peace. Not just yearlong pauses in fighting that allow Russia to regain strength. That is not peace.
The allies actually did create a just peace through strength in Europe during and after WWII. So I'm not sure why you are so offended by that thought? Would there have been a better way to create a peace that, all in all, has lasted for more than three quarters of a century now? Would it have been better to further appease the fascists? (Obviously not a perfect or complete peace. Obviously the Cold War also sucked. Not disputing any of that.)
Also, obviously I hope that this time around it's not too late to prevent fascists from burning Europe to the ground before we can defeat them.
Do you dispute that showing strength is an element to peace? (I’m not talking about killing people or invading other countries. I’m talking about a demonstrated and credible willingness to defend your values and alliances.)
> The allies actually did create a just peace through strength
They won the war, the goal was clearly defeating the axis. Did you have a shower today or did you achieve a just and long lasting personal hygiene through water?
You should at least be brave enough to say it like it is: you want to win the war.
The only problem is that this time the enemy has enough nuclear weapons to trigger a new ice age, so you resort to Newspeak.
> Also, obviously I hope that this time around it's not too late to prevent fascists from burning Europe to the ground before we can defeat them.
The way I see it, we've already got them in the commission, doing all they can to burn the EU to the ground.
In general, divide and conquer (aka defeat in detail) is an excellent way to test and break the resolve of a military alliance, or a poorly organised but massively overpowering enemy in the aggregate.
I need logic. Everything else is just speculation, the equivalent of believing that since it has rained for the past few days, we're certain it will rain tomorrow as well. It all sounds perfectly reasonable, if you are ignorant.
Ok, if you want it spelt out, the logic would be something we all learnt in grade school. The only language bullies understand are consequences. Putin annexed part of another sovereign country in 2014 and faced no consequences. He therefore launched a full on occupation of the same country in 2022.
Also, there is no need for ad hominem attacks. Contrary to what you might think, they don't actually serve any purpose in putting your point across.
> The only language bullies understand are consequences. Putin annexed part of another sovereign country in 2014 and faced no consequences. He therefore launched a full on occupation of the same country in 2022.
So yesterday and the day before that were rainy, therefore we come to the conclusion that it has to rain tomorrow as well, am I right?
If only we knew a bit more about how the weather works!
> Also, there is no need for ad hominem attacks.
The "unless you are an ignorant" wasn't directed at you, I just used it to make a point.
When it was raining yesterday, and people with weather experience say "it might rain tomorrow", do you insist umbrellas should be left at home because it's just speculation?
Except in this scenario it's still raining as we speak. What data do you have that Russia will stop?
There are many experts saying that tomorrow won't rain, unless we make it rain ourselves. Don't make it sound like there's a clear consensus.
> What data do you have that Russia will stop?
I’m not the one suggesting we throw away €800 billion, excuse me for asking why. Still, I'll try to explain.
Russia has lots to lose and nothing to gain from a direct war with NATO. The last thing it needs is more land and resources, so we can exclude that as well. I also genuinely think the cause of this war is Russia feeling threatened by NATO expansion and a civil war on its border, whether you consider that legit or not.
The only reason for Russia to not stop is if we don't allow it, at this point.
This sounds exactly like what people were saying right up to February 2022: it doesn't make sense, Russia won't invade Ukraine, Russia has enough land, it is geopolitical suicide. And yet, here we are!
And does it really seem like the USA is going to come to the rescue if Russia pushes into the Baltics? Heck at this point they may very well use the distraction to invade Greenland.
So what's the deterrent but for Europe to buy its own umbrella?
> So what's the deterrent but for Europe to buy its own umbrella?
We already have an umbrella, and it costs 3 times as much as Russia's umbrella while being much less effective. So before burning another €800 billion, I wish we'd put some effort into finding ways to make our current $450 billion of yearly expense work.
Seriously, if we need 10x Russia's budget to keep up with them we've already lost.
On the other hand, if the ultimate goal isn't to make Europe safer, but just to enrich a bunch of weapon manufacturers, then I'd say this plan works perfectly.
"I think it is obvious that NATO expansion does not have any relation with the modernisation of the Alliance itself or with ensuring security in Europe. On the contrary, it represents a serious provocation that reduces the level of mutual trust. And we have the right to ask: against whom is this expansion intended?"
(Putin 2007 in Munich, Wikipedia)
This is kind of paranoid. But there are historical reasons:
Poland/Lithuania (Władysław), Napoleon, The German Kaiser, Hitler
I am referring to the Polish Russian War from 1609 to 1618.
"The King of Poland, Sigismund III Vasa, declared war on Russia in response in 1609, aiming to gain territorial concessions and to weaken Sweden's ally. Polish forces won many early victories"
"Sigismund's son, Prince Władysław of Poland, was elected tsar of Russia by the Seven Boyars in September 1610, but Sigismund refused to allow his son to become the new tsar unless the Muscovites agreed to convert from Eastern Orthodoxy to Catholicism"
In fact, I don't have an overview of all the wars and carnage in which Russia and neighboring countries were involved.
The funny thing is that the EU already spends 3 times as much as Russia on defence. Rearm EU is clearly a huge gift to weapon manufacturers and not much else. If we really cared about our defence, we'd focus first on making what we have more efficient.
If we need to spend more than a trillion € to keep up with Russia's measly $145 billion, then we've already lost.
We need to spend a lot more and do defence very differently than Russia, because Europe values human life a lot higher than Russia.
Russia can attack with ten million men armed with knives and clubs. They will march on, kill and destroy, until they starve to death or someone stops them.
We won't defend by sending twenty million similarly armed people into the meat grinder. We will want to arm and armour our troops so that there is minimal loss of life and limb.
Russia may attack with chemical weapons; we are not interested in hitting back the same way.
Russia will target hydro dams and other civilian infrastructure with potential for mass destruction. We care about protecting those things to avoid harm to our citizens, Russia does not care.
> Russia can attack with ten million men armed with knifes and clubs.
This is just baseless propaganda and you know it. What a shame it would be for the West and Ukraine if an army of men armed with clubs were able to stand its ground against them.
> Russia will target hydro dams and other civilian infrastructure with potential for mass destruction. We care about protecting those things to avoid harm to our citizens, Russia does not care.
Of course, just like we did with Iraq's infrastructure[0] for example?
I suggest you put the effort you spent writing such uninformed rhetoric into informing yourself; it will pay off at some point.
Whataboutism is of course another clear sign. Ironically your example would imply Iraq should have built up its military to defend from such an attack.
Things would have looked quite different in Iraq, if Saddam had focused his military spending on defense of civilians and civilian infrastructure!
Not making excuses for all the wrongs done by US and allies before, during and after the invasion. It was a terrible idea, badly executed. But it didn't come out of nowhere, Saddam had a history of attacking his neighbours, he and his party had it coming.
Growing up in the German-speaking part of Europe, it was impossible to not know them if you were a nerd kid in the 90s - back when you could buy retail software (not only computer games) packaged in huge, shiny cardboard boxes and delivered on physical media (3.5" floppy disks and CD-ROM) at all the fancy computer stores around town and in malls.
I remember buying a Data Becker product for cold, hard-saved-up pocket money for use with what was my first (or was it my second? Hard to tell after the decades ;)) CD writer, it was named "DER GROSSE CD-KOPIERER" ("The great CD-copier", roughly). First thing I wanted to do was use it to make a backup copy of its own installation medium, a CD-ROM containing a win32 application of only a few MBytes in size, but with the medium having had more than its nominal max. capacity of 650MB "recorded" on it regardless. It did not support that, because recording more than that amount of data mandated the use of Disc-At-Once recording mode, which "DER GROSSE CD-KOPIERER" failed to implement. So they made their amazing(ly bad) software solution to copy CDs deliberately unable to copy the very CD that program was distributed on.
Needless to say, I never considered buying any of their products again after that.
"No better place to leave for" seems an apt way to put it.
I think/fear that in the long run, there will be fewer and fewer ways to participate in activities and communities on the web on your own terms, as only a vetted, allowlisted set of client builds (that may be "open source" on the tin, but by that point it is effectively meaningless) will be able to pass CDN "anti-abuse" restrictions. It will not be a better web, but it sure will be more profitable for some.
This is an amazingly common psychological trap. You wouldn't believe the number of people, men as well as women, who end up in the therapy chair, at the police station or at the hospital A&E, because they are "stuck" with a violent and abusive partner.
The modern tech landscape is all about abuse. People use fancy names for it like "enshitification" or "rot economy" - but at the end of the day it's about domination and abusive relations.
A very common position here is that the victim sees "no alternative". And... surprise surprise, where they get that idea from is the partner, friends, group/organisation that is also toxic and colludes in gas-lighting and co-abusing the victim into a limited worldview.
Once the victim spends any amount of time outside that mental prison, they regain perspective and say... "Oh, so I actually do have choices!".
This is a poor analogy. There are thousands of people to meet and bond with, so you do have a choice. But there are less than a handful of fundamentally different browsers.
Derivative browsers don't really count here, as they depend on the upstream not to hurt them. For instance, if the parent project completely removes something essential for privacy, it is a lot of work to keep it in your code. The Manifest v2 removal is an example. Over time, as other changes are built on top of the removal, this creates an increasingly heavy burden. Eventually, the child project is starved. You simply do not want to be in this position.
> This is a poor analogy. There are thousands of people to meet and bond with, so you do have a choice. But there are less than a handful of fundamentally different browsers.
This is because users decided that they want a browser that spies on them.
At least in Germany in hacker and IT-affine circles, you will often be frowned upon if you voluntarily use Chrome or Edge (except if you have a really good reason).
> At least in Germany in hacker and IT-affine circles, you will often be frowned upon if you voluntarily use Chrome or Edge (except if you have a really good reason).
That's largely the same here, at least for anyone worth their salt. But how does that matter when Mozilla's pulling things like this?
For years now, your only browser choices have been "Google" and "funded by Google", and it shows.
I can't even give someone too hard of a time for using vanilla Chromium or similar anymore; it's not like it's any worse than literally every other browser offering nowadays, minus rare exceptions like LibreWolf or ungoogled-chromium that also add a whole host of minor technical complications to use.
I don't think the analogy is weakened by bringing numbers/quantity into it. The dynamics work for any number of principals. Take a 3-player game, where Alice trusts Bob but is better off with Bill, yet Bill is not visible to her because of chaff/disinfo/noise broadcast by Bob or Bob's confederates.
It's not about what Mozilla does, it's about what Mozilla says/claims. You only need one better browser to switch to. I guess you're getting at a Hobson's choice [0]: that there really is only one browser and all others are copies of the same harmful set of properties, so moving isn't worth the overhead (switch cost is a factor in this that we often ignore). To my mind, there must be at least one browser out there that is "less undesirable" than that case. Just iterate your way into your comfort zone.
So often arguments on this axis come down to how much convenience you are going to give up for the trust relation you desire. We get stuck if we mistake convenience for necessity, thereby bringing absolutes into a continuous trade-off problem.
I wouldn't say there's only one, but there are two main clusters for anyone not on a mac, and a handful of teams large enough to do a solid job of running their own variant. There's precious little iteration to do.
I'm not a typical user [0] but am very mindful of the typical user. Maybe I hadn't realised how much the browser space has shrunk, and that the experience of "browsing", the abstract task, now breaks down into more specialised tasks.
I'm thinking lately that the myth of the "browser" and the "web" as coherent data spaces is something even Sir Tim gave up on, right? If the centre cannot hold, constellations of specialised clients (which are already "apps" in a sense) look like they will endure in the near future at the expense of interoperability and standards. The "best browser" will be the one that strikes the most deals with the parts of the network people want to connect to. It's just like the best "game console". That seems really bleak for the Internet qua people's network.
No doubt http/s and the worlds of port 80/443 will endure eternal, but the "Universal" search and information space the pioneers and then proto-Google aspired to now seems so remote that the idea of a "browser" is itself a little ridiculous to beards like me. I think today the "browser" has become a clique of PKI suites and CAs, at the behest of banking and retail, backed by broken but well-meaning regulation, unwittingly creating this monster we still call "The Browser". Anyway, peace.
[0] I use w3m for 99% of my daily drive and a sandboxed, degoogled Chromium for any of the "messy stuff"
XDG_CACHE_HOME, as mentioned by numerous other posters, is a good remedy to deal with this kind of cached-artifacts-sprawl - but it does need a will to cooperate and care from the original software developer in the first place, which ain't a given.
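As a rough sketch of what that cooperation looks like in practice (Python; the user_cache_dir helper and the "myapp" name are purely illustrative, not any particular library's API):

  import os
  from pathlib import Path

  def user_cache_dir(app_name: str) -> Path:
      # Honour $XDG_CACHE_HOME if set and non-empty; otherwise fall back
      # to ~/.cache, as the XDG Base Directory spec prescribes.
      base = os.environ.get("XDG_CACHE_HOME")
      root = Path(base) if base else Path.home() / ".cache"
      return root / app_name

  cache_dir = user_cache_dir("myapp")  # "myapp" is a made-up example name
  cache_dir.mkdir(parents=True, exist_ok=True)

A handful of lines like that is all it takes to keep cached artifacts out of the user's home directory proper.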
I went on GitHub to see if anyone was using this cachedir tag signature. Found many results so that’s cool :)
But ironically, if I'm reading correctly, the command-line, ncurses-based disk usage tool ncdu will intentionally skip any directory carrying this cachedir tag (it has an --exclude-caches option for exactly that).
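For anyone curious, emitting the tag yourself is tiny; a sketch in Python (the directory and helper names are made up, and the signature string is the one given in the Cache Directory Tagging spec, worth double-checking against the spec itself):

  from pathlib import Path

  # Signature line from the Cache Directory Tagging spec; tools that honour
  # the spec look for a file named CACHEDIR.TAG starting with exactly this.
  SIGNATURE = "Signature: 8a477f597d28d172789f06886806bc55\n"

  def tag_cache_dir(cache_dir: Path) -> None:
      # Create the cache directory and drop a CACHEDIR.TAG into it so that
      # spec-aware backup and disk tools know they may skip its contents.
      cache_dir.mkdir(parents=True, exist_ok=True)
      tag = cache_dir / "CACHEDIR.TAG"
      if not tag.exists():
          tag.write_text(SIGNATURE + "# This directory is a cache; backup tools may skip it.\n")

  tag_cache_dir(Path.home() / ".cache" / "myapp")  # "myapp" is again just a placeholder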
If you have to run 5 different Docker images, each with its own "global shared library" set, you clearly no longer have system-wide globals. You have an island of deps potentially per program, or possibly a few islands for a few sets of programs.
Which, once again, completely defeats the entire purpose of the Linux global shared library model. It would have been much, much simpler for each program to have linked statically, or to expect programs to ship their own dependencies (like Windows does).
Containers should not exist. The fact that they exist is a design failure.
Static linking is acceptable when you can update a library version in one place and have that trigger rebuilds of all dependent software, ensuring things like security updates are delivered system-wide with confidence. The Windows every-app-is-an-island model makes you reliant on every app developer to update the entire dependency graph, which in practice means you end up with a hodgepodge of vulnerable stuff.