I've been saying this for years now: the rust4linux folks are putting so much effort into trying to upstream their work; it seems like they should instead put that effort into maintaining a fork. Arguably it would be fewer hours spent porting than arguing with humans. Certainly more pleasant!
Then one of two things will happen:
* Rust will prove its worth and their fork will be the better operating system. Distros will start switching to it one by one due to its increased performance and stability, and they will have dethroned Linus and become gods of the new world.
* The project will die because the C programmers will outcompete it with good ol' C, and all their talk about safety doesn't pan out in the real world.
If I were the rust4linux leadership, I'd roll those dice.
Sounds like Hector Martin is doing exactly that, burning the bridge along the way. Good luck! I think it's the right move (minus the bridge burning).
I think you're discounting the very real damage that would be done by Linux a project controlled by Linus and a community being replaced by Linux a project controlled by Google/Samsung/Redhat/Microsoft. I'm afraid that this is what is going to happen with the Linus tree effectively rejecting rust drivers by subjecting anyone attempting to upstream rust code to persistent bullying, but I don't want it to happen.
> I think you're discounting the very real damage that would be done by [Linux a project controlled by Linus and a community] being replaced by [Linux a project controlled by Google/Samsung/Redhat/Microsoft.]
(Brackets added for clarity)
Isn't the current Linux already Linus + communities + companies?
More to the point, any two such projects would quickly diverge. Once a particular piece of Linux is reimplemented in Rust, if the C version adds a feature it is no longer as simple as applying a patch to keep in sync.
> Isn't the current Linux already Linus + communities + companies?
Absolutely. To that point, the companies I listed are the ones I'm aware of that employ kernel developers working specifically on Rust in Linux.
The control of the project is in Linus's/community hands though, not corporate ones, and I think that's a good thing.
> More to the point, any two such projects would quickly diverge. Once a particular piece of Linux is reimplemented in Rust, if the C version adds a feature it is no longer as simple as applying a patch to keep in sync.
I don't think so. Linux is a huge modular system, and no one is really interested in rewriting the core components of it at this point. Nor maintaining their own copies of components that some other company is responsible for (like graphics drivers). Until and unless it became the dominant fork I'd expect that they'd keep merging in the mainline branch and updating their things as necessary.
> subjecting anyone attempting to upstream rust code to persistent bullying
I don’t remember seeing this bullying accusation in your original comment. Was it edited in?
Regardless, the “bullying” happened on both sides. Hector Martin started the social media brigading and quit when he couldn’t get his personal enemy suspended for CoC violations. Jonathan Corbet wrote a letter naming and shaming maintainers, in the guise of a report.
All in all, I agree with the GP. Most of the arguments against (even temporary) forking feel like excuses for a lack of will and maybe even a lack of ability. The space is open for a fork.
I may have edited the phrasing within a minute or two of posting, couldn't say. I haven't edited the comment in the 14 hours since then, and the sentiment was in the original.
(I have edited this comment on the other hand, first to explicitly disagree with various statements in the above, then to delete those disagreements since I don't really want to get baited into an argument)
Does Linux have the ability to load drivers dynamically (like Windows)? Is there a reason why developers don't develop drivers outside of the kernel and have users simply install them post-install?
Yes, it technically does (kernel modules), but the difference between Linux and NT is that Microsoft guarantees (as much as possible) that its ABIs are stable, whereas Linux explicitly does not preserve backwards compatibility in the kernel (only the userspace interface is stable).
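For anyone curious what a loadable module looks like on the Rust side, here's a minimal sketch modeled on the kernel's own Rust samples. The exact `module!` fields and APIs vary between kernel versions, and this only builds inside a kernel tree with Rust support enabled, so treat it as illustrative:

```rust
// Minimal module sketch in the style of samples/rust in the kernel tree.
// Names like HelloModule are mine; macro fields vary by kernel version.
use kernel::prelude::*;

module! {
    type: HelloModule,
    name: "hello_module",
    author: "Example Author",
    description: "Minimal illustrative module",
    license: "GPL",
}

struct HelloModule;

impl kernel::Module for HelloModule {
    fn init(_module: &'static ThisModule) -> Result<Self> {
        pr_info!("hello from a loadable module\n");
        Ok(HelloModule)
    }
}

impl Drop for HelloModule {
    fn drop(&mut self) {
        // Runs when the module is unloaded (e.g. via rmmod).
        pr_info!("goodbye\n");
    }
}
```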
It seems like a huge technical factor holding back a stable ABI is the C compiler itself: binaries change between compiler versions and with different compiler flags.
So while your code can be written such that it appears to respect the interface of an external library, the underlying binary representation might not line up correctly.
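For what it's worth, Rust makes this concern explicit: the default struct layout is deliberately unspecified and may change between compiler versions, and only an opt-in C-compatible representation pins it down. A small runnable illustration:

```rust
// With #[repr(C)], the layout is defined and foreign code can rely on it.
#[repr(C)]
struct StableHeader {
    tag: u32,
    len: u32,
}

// Without a repr attribute, the compiler is free to reorder or pad fields,
// so sharing this type across an ABI boundary would be unsound.
struct UnstableHeader {
    tag: u32,
    len: u32,
}

fn main() {
    println!("repr(C):    {} bytes", std::mem::size_of::<StableHeader>());
    println!("repr(Rust): {} bytes", std::mem::size_of::<UnstableHeader>());
}
```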
If there is agreement that modularity is good for the kernel but technical limitations prevent that from being a reality, surely the solution is to improve C's interoperability first?
---
I'm not a kernel developer and am probably naive here, but on the surface it feels like offering a stable driver ABI is one possible solution to the rust-in-linux controversy that has led to so many people exiting kernel development.
I'd imagine if projects like Asahi could simply offer out-of-tree drivers, they wouldn't need to maintain a kernel fork (at least not for drivers) or negotiate with core maintainers (which I understand is stressful).
Might also make it easier for vendors like Qualcomm/Samsung/Nvidia to distribute drivers out-of-tree, perhaps reducing the need for long-running Linux forks and allowing devices to update to modern Linux kernels.
As a novice hacker, I'd imagine the ability to reuse proprietary driver blobs would allow distros to be created targeting hardware that was otherwise impossible to access because its drivers were hidden behind custom kernel forks (e.g. install mainline Fedora on a Samsung phone, taking the GPU driver from the Samsung build of Android, or OpenWRT on an Android-powered portable 5g modem).
> It seems like a huge technical factor holding back a stable ABI is the C compiler itself.
Not really. Every OS has a stable C ABI, otherwise there would be no stable OS API functions and no application plugin APIs. The actual reason seems to be that they simply do not want to commit to a stable ABI/API so they are free to make breaking changes and remove outdated APIs. Fair enough, but don't blame it on the compiler!
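To illustrate the contrast: the userspace syscall ABI really is stable, down to the syscall numbers and register conventions, which is why a raw syscall keeps working across kernel releases while in-kernel interfaces churn. A minimal sketch (x86-64 Linux only; `raw_getpid` is just my name for it):

```rust
use std::arch::asm;

// Invoke getpid(2) directly via the x86-64 syscall calling convention.
// The syscall number and register usage are part of Linux's stable
// userspace ABI, unlike the in-kernel APIs discussed above.
fn raw_getpid() -> i64 {
    let pid: i64;
    unsafe {
        asm!(
            "syscall",
            inlateout("rax") 39i64 => pid, // __NR_getpid in, return value out
            out("rcx") _,                  // clobbered by the syscall instruction
            out("r11") _,                  // likewise
            options(nostack),
        );
    }
    pid
}

fn main() {
    println!("pid = {}", raw_getpid());
}
```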
Additionally, there is politics in play here. Not the politics that is normally discussed outside of HN, but the politics of having companies (at least) release the detailed specifications of their hardware. I cannot really state authoritatively what the Linux developers think on this side, but Linus brandishing his middle finger to Nvidia (https://youtu.be/MShbP3OpASA?si=GJ1_0B81b7bFY_iZ&t=2890) says a lot of things.
It can, but Linux does not have a stable driver ABI. Whoever wrote the out-of-tree drivers would have to constantly update them whenever there was a breaking change to the kernel, which I understand is relatively common.
nvidia's driver uses "DKMS" to rebuild itself for each running kernel.
Closed source modules like nvidia frequently have a kernel-independent proprietary piece and a kernel-specific (open source) ABI piece. Whenever you upgrade your kernel, DKMS will re-build the kernel-specific shim and re-link the proprietary blob. The result is a .ko tailor made for your running kernel, even though most of the code is in a kernel-independent blob.
It looks like nVidia is open to moving their open driver (which requires a card that has a GSP) upstream [1], but with Rust being killed from the Linux kernel that's probably dead.
There is a reason why in-tree drivers are preferred, and that's because the Linux driver interface changes along with the rest of the kernel's internal APIs. The API is considered unstable in the sense that it is not unchanging.
A driver written for one release of Linux may not work with the next release as the API changes.
To my knowledge, out-of-tree drivers are quite inconvenient to maintain compared to in-tree drivers, due to the kernel's unstable API (a driver that works on a given version of the kernel likely only works on that single version unless it has lots of ifdefs to handle all the other versions).
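As a rough sketch of what that juggling looks like, translated into Rust conditional compilation (the `kernel_ge_6_8` cfg flag is invented here; a real out-of-tree build would set something like it after probing the target kernel's headers):

```rust
// Hypothetical version-gated glue for an out-of-tree driver, analogous
// to the C ifdef dance. The cfg name is invented for illustration.

#[cfg(kernel_ge_6_8)]
fn register_device() {
    // Would call the newer registration API introduced in a later kernel.
    println!("registered via the new API");
}

#[cfg(not(kernel_ge_6_8))]
fn register_device() {
    // Would fall back to the older registration API.
    println!("registered via the old API");
}

fn main() {
    register_device();
}
```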
(Not in rust, but it's mostly assembly anyways so I'm not sure rust provides much. There is https://github.com/briansmith/ring in rust, not sure if it's sponsored by anyone)
> Sounds like Hector Martin is doing exactly that, burning the bridge along the way. Good luck! I think it's the right move (minus the bridge burning).
Unfortunately, what Hector Martin was actually doing is producing rather spectacular flame on LKML and Mastodon. And he isn't representative of other Rust developers either, at least one has voiced their disagreement with him: https://lore.kernel.org/rust-for-linux/Z6OzgBYZNJPr_ZD1@phen...
I agree maintaining a fork would've been a more productive use of Hector's time, but that's not what has been happening and I see no reason to believe it is what will be happening from now on. From my own experience, personalities like Hector quit after not getting their way, rather than looking for other constructive options.
> Now I'm left with the unlikely explanation that you just like thundering in as the cavalry, fashionably late, maximally destructive, because it entertains the masses on fedi or reddit or wherever. I have no idea what you're trying to achieve here, I really don't get it, but I am for sure fed up dealing with the fallout.
That is a perfect description of what has been happening over the years.
Even if they made a fork and made it super great from a technical perspective, it still would face an uphill battle from the perspective of adoption. The amount of work to get people to switch over would be enormous, and the fact that there's at least some willingness on the part of Linux to adopt Rust in places makes me skeptical that there really would be less work to try to compete. I'd be thrilled if you're right though!
I think a fork with a mission to aggressively rewrite the kernel into Rust would be a great experiment. Lessons learned could be applied to the more mainstream C/Rust kernel(s).
Who would use a kernel that is essentially the mainline kernel with more Rust? Would Debian or Red Hat use it? Maybe some very hard-core Rust hobbyists, but otherwise, why use the Rustier kernel?
But if they could demonstrate significant improvements in security and stability, then they would have something to say. Maybe the plan should be to rewrite a component that the mainline maintainers wouldn't agree to: something important with a significant effect on security and stability. Yes, it's a risk, but if their claims for Rust are true, then it shouldn't be a problem. If Rust doesn't offer a major improvement then it isn't worth the effort anyway. Put their money (or time) where their mouth is.
There are drivers that exist only in Rust, so any distro that wants to support that hardware will use them. Like the Apple silicon graphics driver. IIRC Red Hat and Nvidia are also working on some Rust-based drivers.
Ah, sorry, I misunderstood you. I don’t have a link handy, but Marcan had a post about how they actually re-wrote some in-progress drivers from C to Rust because it was easier to do, and so didn’t expect a C version to exist. Basically it was about how driver complexity has increased over time, and Rust helps tackle that in a way that C can’t.
Yes, I realized later that I should add 'efficiency' (for development) to security and stability, which could also be a big win for Rust in particular.
I'm a little bit skeptical as to how successful a hard fork of Linux that only differs from the mainline kernel by having a bit more Rust code actually would be.
If you're going to rewrite significant parts of the kernel, you might as well do what I've been doing and try to write what amounts to a better Linux than Linux: something that maintains compatibility but moves beyond the rather limiting conventional Unix architecture. The conventional Unix architecture was fine on something like a 70s/80s-era PDP-11/VAX, but in the modern world its limitations have been apparent for quite some time.
What I've been working on is an OS very similar to QNX Neutrino in terms of general architecture, but with a somewhat different IPC protocol layering that reduces the number of user-visible primitives and allows for more consistent management of security. Most of the functionality of the system will be implemented in user-level server processes that export their services through special filesystems, with the only special/irregular parts of the system being the microkernel, the process manager (which also contains the core VFS and memory filesystems since these will be tightly linked to the process model), and the base syscall library (vaguely akin to the vDSO under Linux). Literally everything else will just be a regular process. It's not a "Rust OS" as such, as there will still be some C (for instance, the microkernel, which was forked from an older version of seL4), although it will have a fair bit of Rust code.
IMO the issues with Linux are mostly due to a combination of poor/ad-hoc extensibility and the development model that's way too decentralized in some places but excessively centralized in others. The architecture I'm going with will allow for more experimentation, since adding functionality to it will typically just be a matter of adding a regular user program (or a plugin for a regular user program), and much of the system will be based around standardized filesystem-based RPC protocols (generic tooling for implementing RPC interfaces will of course be provided). Therefore it would be easier to maintain experimental functionality in a separate repository and merge it into the base system later on.
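To make the filesystem-as-RPC idea concrete, here's a toy userspace simulation of the pattern: services register handlers under paths, and a "call" is just a request written to a path. All the names here are invented for illustration; none of this is the actual system's API:

```rust
use std::collections::HashMap;

// Toy model of filesystem-style RPC: each service exports a handler under
// a path, and a call is a request/reply exchange against that path.
type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

struct Namespace {
    mounts: HashMap<String, Handler>,
}

impl Namespace {
    fn new() -> Self {
        Namespace { mounts: HashMap::new() }
    }

    // A server process would do this when it starts up.
    fn register(&mut self, path: &str, handler: Handler) {
        self.mounts.insert(path.to_string(), handler);
    }

    // A client's write+read on the path becomes a request/reply pair.
    fn call(&self, path: &str, request: &[u8]) -> Option<Vec<u8>> {
        self.mounts.get(path).map(|h| h(request))
    }
}

fn main() {
    let mut ns = Namespace::new();
    ns.register("/srv/echo", Box::new(|req| req.to_vec()));
    let reply = ns.call("/srv/echo", b"hello").unwrap();
    assert_eq!(reply, b"hello".to_vec());
}
```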
Currently it's still quite preliminary, and it only runs some hardcoded tests built into the process server. Despite that, some developers from a major company have taken an interest in it recently because of the possibility of using it as a replacement for QNX, both in embedded systems and on development workstations. I'm working on the VFS layer and built-in special filesystems at the moment, and hopefully should be able to get user processes running pretty soon.
Because the Rust used in the kernel is unsafe. All these people downvoting me or talking about 2000-year-old books are choosing to be maliciously ignorant, which is fine, I guess.
You have to be rage baiting to make an argument this bad.
First, no, it’s not all unsafe, it’s not even 50% unsafe.
Second, even 50% unsafe is an upgrade from 100% unsafe.
The problem is that the R4L project is raising a lot of questions about how interfaces are handled by some Linux maintainers, and there’s definitely a difference in criteria when it comes to soundness bugs.
I fall on the side that just because nobody has stepped on it in years isn't a good excuse to keep landmines in the code, but clearly a lot of people in the kernel think differently.
By your logic, the entire Rust language and ecosystem would be considered unsafe. Even the Rust standard library is full of unsafe blocks and functions. But people don't consider it so, because Rust isn't about avoiding unsafe code. It's about containing unsafe code and presenting a safe wrapper by enforcing the invariants. This philosophy would have been abundantly clear if you cared to look at the patch [1] at the center of this controversy. Perhaps you should verify the validity and harmlessness of your own claims before accusing others.
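For anyone unfamiliar with that philosophy, here's a toy example of my own (not taken from the patch in question): the unsafe block sits behind a function whose checked invariant makes it impossible for callers to trigger undefined behavior:

```rust
/// Contain unsafe code behind a safe API: the public contract (returning
/// None for an empty slice) enforces the invariant the unsafe block needs,
/// so no caller can cause undefined behavior through this function.
pub fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: the slice is non-empty, so index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"abc"), Some(b'a'));
    assert_eq!(first_byte(b""), None);
}
```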