Hacker News

[flagged]



This is a fundamental misunderstanding of the structure of the Linux kernel, of the nature of kernels in general, and of how automated verification of computer code is performed.

Automated verification (including what Rust's compiler performs) does not involve anything popularly known as AI, and automated verification as it exists today is more complete for Rust than for any other widely used system, because no other widely used language places the information needed for verification into the language itself. That is what makes Rust code so broadly analyzable for safety.
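To illustrate the point about verification information living in the language itself, here is a minimal sketch (standard library only, names are illustrative): ownership and borrowing rules are part of every Rust program, so the compiler can reject whole classes of memory errors before anything runs.

```rust
// Ownership and borrowing are checked at compile time; no annotations
// beyond ordinary Rust are needed for the compiler to verify this.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];
    let total = sum(&data); // shared borrow: `data` remains usable afterwards
    println!("total = {}", total);

    let moved = data; // ownership moves to `moved`
    // println!("{:?}", data); // would not compile: use after move
    println!("{:?}", moved);
}
```

Uncommenting the marked line turns a latent use-after-move into a compile error rather than a runtime bug, which is the kind of verification the comment above is describing.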

Human validation is insufficient and error-prone, which is why automated verification of code is something developers have been seeking and working on for a long time (since well before Rust).

Having "explicit" (manual?) memory management does not help verification, whether by humans or by machines. Neither does using a low-level language whose type system does not carry enough detail to perform verification.
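As a concrete sketch of what "detail in the type system" means (function name is illustrative): a lifetime in a Rust signature tells the compiler exactly which input a returned reference borrows from, which is the information a verifier needs and which a C prototype does not carry.

```rust
// The signature states that the returned reference borrows from
// `haystack` (lifetime 'a), not from anywhere else. The compiler can
// therefore verify every caller keeps `haystack` alive long enough.
fn first_match<'a>(haystack: &'a [u32], needle: u32) -> Option<&'a u32> {
    haystack.iter().find(|&&x| x == needle)
}

fn main() {
    let data = vec![10, 20, 30];
    let found = first_match(&data, 20);
    println!("{:?}", found);
}
```

The equivalent C prototype, `const uint32_t *first_match(const uint32_t *h, size_t n, uint32_t needle)`, says nothing about how long the returned pointer stays valid; that knowledge lives only in documentation and the programmer's head.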

Kernel modules aren't that special. One can put a _lot_ of code in them, and that code can do effectively anything (other than early-boot work, since modules aren't loaded yet at that point). Kernel modules exist for distribution reasons and do not define any strict boundary.

If we're talking about out-of-tree kernel modules, those do not tend to exist for long. The only real examples today of long-lived out-of-tree modules are ZFS (a filesystem) and Nvidia (a GPU driver), and those stay out-of-tree only because of licensing and secrecy. Getting code in-tree generally keeps it up to date with less effort from everyone involved: the people already making in-tree changes can see how certain APIs are being used, and if those in-tree folks are more familiar with an API they can improve the now-merged code. Meanwhile the formerly out-of-tree folks no longer have to run their own release process, deal with constant compatibility breakage as kernel APIs change, and so on.


>Human validation is insufficient and error prone,

Basically, if you assume that it's impossible for humans to be correct, or that it's impossible to write correct, memory-safe C code, you start down the path that leads to things like Java, Haskell, and now Rust. And then when nobody takes you seriously, you wonder why. Well, it's because you are telling people who know how to write correct and memory-safe C code that we are insufficient and error-prone.

>Kernel modules aren't that special.

By definition, they interface with the core kernel code; they are not core kernel code.


This doesn't make sense to me. Why is a manual language that requires validation better than a language that enforces some safety on its own?


Because it forces the developer to think about what is being written at every step of the way, instead of relying on language features that are far from complete in terms of providing memory safety.


The naive take would be that it adds an abstraction layer that you need to keep checked, in addition to the kernel code itself. I'm not making a value statement on how much impact that actually has in practice.


I don't get this whatsoever. Are there memory-allocation bugs in GC languages such as Java?

Even if that is the case, Rust specifically is designed not to use a GC.

On top of that, you can do manual memory management in Rust if you want, AFAIK.
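For what it's worth, that last point is right: Rust does allow C-style manual allocation, but only inside `unsafe` blocks, where the programmer takes back responsibility for freeing memory and keeping pointers valid. A minimal sketch using only the standard library (`roundtrip` is an illustrative name):

```rust
use std::alloc::{alloc, dealloc, Layout};

// Manually allocate, write, read, and free a u64, just as one would in C.
fn roundtrip(value: u64) -> u64 {
    let layout = Layout::new::<u64>();
    unsafe {
        let ptr = alloc(layout) as *mut u64;
        assert!(!ptr.is_null(), "allocation failed");
        ptr.write(value);                 // manual initialization
        let read_back = ptr.read();       // manual read through the raw pointer
        dealloc(ptr as *mut u8, layout);  // manual free: forget this and you leak
        read_back
    }
}

fn main() {
    println!("round-trip: {}", roundtrip(42));
}
```

The `unsafe` keyword doesn't disable any checks inside the block; it just permits operations (raw-pointer dereference, manual free) that the compiler cannot verify on its own.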


Rust with manual memory management is just more cumbersome than C


That's moving the goalposts. I don't know of a single language that manages to make you write secure and correct code without sacrificing ergonomics.

Closest is most likely Ada.


The patch is just a binding / abstraction of DMA for writing Rust drivers.
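As a hypothetical sketch of the general shape such bindings take (this is not the actual kernel API; `DmaBuffer` and its methods are invented for illustration, and a plain heap allocation stands in for the kernel's DMA allocator): an unsafe allocation is wrapped in a type that owns the buffer and frees it in `Drop`, so driver code using it can't leak or double-free.

```rust
use std::alloc::{alloc_zeroed, dealloc, Layout};

// Illustrative stand-in for a DMA-buffer abstraction: owns a raw,
// zeroed, 64-byte-aligned buffer and frees it when dropped.
struct DmaBuffer {
    ptr: *mut u8,
    layout: Layout,
}

impl DmaBuffer {
    fn new(len: usize) -> Option<DmaBuffer> {
        let layout = Layout::from_size_align(len, 64).ok()?;
        // A real binding would call the kernel's DMA allocator here.
        let ptr = unsafe { alloc_zeroed(layout) };
        if ptr.is_null() { None } else { Some(DmaBuffer { ptr, layout }) }
    }

    // Safe access: the slice can't outlive the buffer that backs it.
    fn as_mut_slice(&mut self) -> &mut [u8] {
        unsafe { std::slice::from_raw_parts_mut(self.ptr, self.layout.size()) }
    }
}

impl Drop for DmaBuffer {
    fn drop(&mut self) {
        // Freed exactly once, automatically, when the owner goes out of scope.
        unsafe { dealloc(self.ptr, self.layout) }
    }
}

fn main() {
    let mut buf = DmaBuffer::new(128).expect("allocation failed");
    buf.as_mut_slice()[0] = 0xAB;
    println!("first byte: {:#x}", buf.as_mut_slice()[0]);
} // buffer freed here by Drop
```

The point of such a wrapper is that the `unsafe` code is confined to one small, auditable type, while every driver built on top of it stays in safe Rust.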





