Spending significant time adapting core kernel code or developing a safe Rust abstraction for DMA, only to be summarily shut down by a single gatekeeper who cites "not wanting multiple languages" is demotivating. It's especially incongruent given that others have championed Rust in the kernel, and Linux has begun hosting Rust modules.
If the project leadership — i.e. Linus — truly wants Rust integrated, that stance needs to be firmly established as policy rather than left up to maintainers who can veto work they personally dislike. Otherwise, contributors end up in a limbo where they invest weeks or months, navigate the intricacies of the kernel's development model, and then find out a single personality is enough to block them. Even if that personality has valid technical reasons, the lack of a centralized, consistent direction on Rust's role causes friction.
Hector's decision to leave is understandable: either you have an official green light to push Rust forward or you don't. Half measures invite exactly this kind of conflict. And expecting one massive rewrite or an all‐encompassing patch is unrealistic. Integration into something as large and historically C‐centric as Linux must be iterative and carefully built out. If one top‐level developer says "no Rust", while others push "Rust for safety", that is a sign that the project's governance lacks clarity on this point.
Hector's departure highlights how messy these mixed signals can get, and if I were him, I'd also want to see an unambiguous stance on Rust — otherwise, it's not worth investing the time only to gamble that your code, no matter how well engineered, might be turned down over personal preference.
I think Linus's initial stance of encouraging Rust drivers as an experiment, to see how they turn out, was the right decision. There should be some experience before making long-term commitments to a new technology.
But since then a lot of experience has been gained, and my impression is that the Rust drivers have been quite a success.
And now we are at a point where proceeding further does need a decision by Linus, especially as one of the kernel maintainers is actively blocking further work.
> Spending significant time adapting core kernel code or developing a safe Rust abstraction for DMA, only to be summarily shut down by a single gatekeeper who cites "not wanting multiple languages" is demotivating
Christoph Hellwig is one of the longest-serving subsystem maintainers.
Maybe the Rust developers should behave more carefully. Nobody wants to break core kernel code.
This is a fundamental misunderstanding of the structure of the Linux kernel, the nature of kernels in general, and the ways one performs automated verification of computer code.
Automated verification (including what the Rust compiler does) does not involve anything popularly known as AI, and automated verification as it exists today is more complete for Rust than for any other system, because no other widely used language places the information needed for verification into the language itself; the result is that Rust code is broadly analyzable for safety.
Human validation is insufficient and error prone, which is why automated verification of code is something developers have been seeking and working on for a long time (before Rust, even).
Having "explicit" (manual?) memory management is not a benefit to enabling verification either by humans or by machines. Neither is using a low level language which does not specify enough detail in the type system to perform verification.
Kernel modules aren't that special. One can put a _lot_ of code in them, and that code can do effectively anything (other than early boot work, because modules aren't loaded yet at that point). Kernel modules exist for distribution reasons, and do not define any strict boundary.
If we're talking about out-of-tree kernel modules, those are not something that tends to exist for long. The only real examples today of long-lived out-of-tree modules are zfs (filesystem) and nvidia (gpu driver), and those only stay out of tree because of licensing and secrecy. Getting code in-tree generally keeps it up to date with less effort from everyone involved: the people already making in-tree changes can see how certain APIs are being used, and if those in-tree folks are more familiar with an API they can improve the now-merged code. And the formerly out-of-tree folks don't have to run their own release process, don't have to deal with constant compatibility issues as kernel APIs change, etc.
>Human validation is insufficient and error prone,
Basically, if you assume that it's impossible for humans to be correct, or that it's impossible to write correct memory-safe C code, you start down the path that leads to things like Java, Haskell, and now Rust. And then when nobody takes you seriously, you wonder why - well, it's because you are telling people who know how to write correct and memory-safe C code that we are insufficient and error prone.
>Kernel modules aren't that special.
By definition, they interface with the core kernel code. They are not core kernel code.
Because it forces the developer to think about what is being written at every step of the way, instead of relying on language features that are far from complete in terms of providing memory safety.
A naive take would be that it adds another layer of abstraction you need to keep checked, in addition to the kernel code itself. I'm not making a value judgment on how much impact that actually has in practice.
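For what it's worth, here is roughly what such a layer looks like (a generic sketch I made up, not the actual DMA abstraction under discussion): the part humans still have to keep checked is concentrated in one small unsafe block with a stated contract, and everything built on top of it is checked by the compiler.

    // Generic sketch of a safe wrapper over a raw interface; not real kernel
    // code. The `unsafe` block is what reviewers must keep checked; callers
    // of `get` are verified by the compiler.
    struct Buffer {
        ptr: *mut u8, // imagine this came from some C allocation routine
        len: usize,
    }

    impl Buffer {
        fn get(&self, i: usize) -> Option<u8> {
            if i < self.len {
                // SAFETY: `i` is bounds-checked above, and `ptr` is assumed
                // valid for `len` bytes for as long as `self` exists.
                Some(unsafe { *self.ptr.add(i) })
            } else {
                None
            }
        }
    }

    fn main() {
        let mut data = [10u8, 20, 30];
        let buf = Buffer { ptr: data.as_mut_ptr(), len: data.len() };
        assert_eq!(buf.get(1), Some(20));
        assert_eq!(buf.get(5), None); // out-of-range access handled safely
    }

Whether that trade (a small reviewed boundary in exchange for machine-checked callers) pays off in practice is exactly the question being argued here.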
That's moving the goalposts.
Ergonomically, I don't know of a single language that tries to make you write secure and correct code without sacrificing ergonomics.