> so long as you are diligent about checking invariants
part. Could you go through and check all the parts of a huge C++ codebase to make sure invariants are held as opposed to a few hundred lines of unsafe Rust code?
Right, I guess the question is what will that proportion be when Rust is used for things like operating systems and web browsers. 30% would be untenable but a few hundred/thousand lines of unsafe code is fairly easy to put under a microscope.
For some current day research into this, there is the paper "How Do Programmers Use Unsafe Rust?"[1] which I'll drop a quote from here:
> The majority of crates (76.4%) contain no unsafe features at all. Even in most crates that do contain unsafe blocks or functions, only a small fraction of the code is unsafe: for 92.3% of all crates, the unsafe statement ratio is at most 10%, i.e., up to 10% of the codebase consists of unsafe blocks and unsafe functions
That paper is definitely worth reading and goes into why programmers use unsafe, e.g. 5% of the crates at that time were using it to perform FFI.
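FFI is the canonical case where unsafe is unavoidable but easy to encapsulate. A minimal sketch (using libc's `strlen` as the foreign function): the unsafe block is one line, and its invariants are guaranteed by `CString` in the safe wrapper, so callers never see `unsafe` at all.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// FFI declarations are inherently unsafe: the compiler cannot verify
// what foreign code does with the pointers we hand it.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// A safe wrapper around the unsafe call. CString guarantees the pointer
// is valid, NUL-terminated, and free of interior NUL bytes.
fn c_string_len(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    // SAFETY: `c.as_ptr()` is a valid, NUL-terminated C string that
    // outlives this call.
    unsafe { strlen(c.as_ptr()) }
}
```

This is the pattern the paper's numbers reflect: the unsafe surface is a few lines inside a library, audited once, used safely everywhere.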
In writing "RUDRA: Finding Memory Safety Bugs in Rust at the Ecosystem Scale" [2], I recreated this data, and year-by-year the % of crates using unsafe is going down. And for what it's worth, crates are probably a bad dataset for this: crates tend to be libraries, which are exactly where we would expect to find unsafe code encapsulated to be used safely. There are also plenty of experimental and hobby crates. A large dataset of actual binaries would be way more interesting to look at.
Or Rust in Android: in that deep dive there were only two places of unsafe code, and a bug was found in the existing implementation precisely because being the only two unsafe spots triggered extra vetting.
As we follow the standard rust rule that "safe code should not be able to use unsafe code to do unsafe things", those unsafe bits of code have been very carefully checked, to the best of our abilities, to ensure they don't create memory safety issues. It is a lot easier to triple-check 170 lines of code than 30,000 lines.
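That rule in miniature, as a hedged sketch (hypothetical function, not from any of the projects mentioned): the unsafe operation assumes an invariant, and the safe wrapper enforces it, so no safe caller can trigger undefined behaviour no matter what arguments they pass.

```rust
// The unsafe building block (`get_unchecked`) assumes `idx < data.len()`.
// The safe wrapper checks that invariant first, so safe code cannot use
// this function to do unsafe things -- the whole audit is these few lines.
fn get(data: &[u32], idx: usize) -> Option<u32> {
    if idx < data.len() {
        // SAFETY: bounds were checked on the line above.
        Some(unsafe { *data.get_unchecked(idx) })
    } else {
        None
    }
}
```

Auditing 170 lines of this shape is tractable; auditing 30,000 lines where any indexing anywhere might be wrong is not.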
Are you using wgpu for the rendering stuff? Heard that WebGPU had to sacrifice some performance in order to make the API safer for the web (like more bounds checking and sanity checks). These kinds of issues are actually plaguing projects like Tensorflow.js (for example see https://github.com/gpuweb/gpuweb/issues/1202).
Other APIs like Vulkan and DirectX 12 are fundamentally unsafe at the API level, so using them directly leads to heaps of unsafe Rust code. Rust people have tried wrapping them in a safe way (like gfx-rs and vulkano), but nowadays most seem to have transitioned to wgpu (since the WebGPU API is safe by design, it fits the Rust ecosystem better).
Rust does sacrifice some performance in general to achieve its safety claims, but people have been happy with the tradeoff so far, since the majority of applications using Rust (like CLI apps and web servers) don't need to squeeze out that much performance (in webdev there are far bigger performance hazards than not writing it in Rust). But 3D graphics people can be more sensitive to these problems. Still, if you're not developing a triple-A game with the latest cutting-edge graphics (with techniques like "hardware ray tracing" and "bindless descriptors", both currently impossible in wgpu), writing in Rust can be a good-enough tradeoff for your needs.
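On the bounds-checking cost specifically, it's worth noting that idiomatic Rust often sidesteps it entirely: iterator-style loops carry no indices, so there are no bounds checks to pay for in the first place. A small illustrative sketch:

```rust
// Indexed loop: each `data[i]` implies a bounds check, which the
// optimizer may or may not manage to eliminate.
fn sum_indexed(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        total += data[i];
    }
    total
}

// Iterator form: no indices, hence no bounds checks to begin with.
// This is the idiomatic way to avoid paying for safety in hot loops.
fn sum_iter(data: &[u64]) -> u64 {
    data.iter().sum()
}
```

The GPU-side checks WebGPU mandates are a different story, since those happen on the driver/validation layer rather than in your Rust code.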
WGPU is just finishing up a major reorganization of locking and internal memory management, going from a global lock to fine-grained Arc reference counts.[1] Change log, just posted a few minutes ago: "Arcanization of wgpu core resources: Removed 'Token' and 'LifeTime' related management, removed 'RefCount' and 'MultiRefCount' in favour of using only 'Arc' internal reference count, removing mut from resources and added instead internal members locks on demand or atomics operations, resources now implement Drop and destroy stuff when last 'Arc' resources is released, resources hold an 'Arc' in order to be able to implement Drop, resources have an utility to retrieve the id of the resource itself, removed all guards and just retrive the 'Arc' needed on-demand to unlock registry of resources asap removing locking from hot paths."
From a performance standpoint, I'm much more concerned about being able to get all the CPUs working on the problem than slight improvements in per-CPU performance. My metaverse viewer has slow frames because loading content into the GPU from outside the rendering thread blocks the rendering thread. All that "arcanization" should fix that.
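The pattern behind that change, in a toy sketch (hypothetical `Texture` type, nothing from wgpu itself): give each resource its own `Arc` + lock rather than one global lock, so a loader thread filling one resource never blocks a render thread reading another.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for a GPU resource.
struct Texture {
    pixels: Vec<u8>,
}

// Per-resource Arc + lock instead of a single global lock: the loader
// thread fills texture 0 while the "render thread" reads texture 1,
// and the two never contend on the same lock.
fn load_without_blocking_render() -> (usize, usize) {
    let textures: Vec<Arc<Mutex<Texture>>> = (0..2)
        .map(|_| Arc::new(Mutex::new(Texture { pixels: Vec::new() })))
        .collect();

    let t0 = Arc::clone(&textures[0]);
    let loader = thread::spawn(move || {
        // Simulated content upload into texture 0.
        t0.lock().unwrap().pixels = vec![255; 1024];
    });

    // Render-thread side: reading texture 1 is never blocked by the upload.
    let render_side = textures[1].lock().unwrap().pixels.len();

    loader.join().unwrap();
    let loaded = textures[0].lock().unwrap().pixels.len();
    (loaded, render_side)
}
```

With a global lock, the upload and the read above would serialize; with per-resource locks they proceed independently, which is exactly what fixes the slow-frame problem.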
A counterpoint that makes this argument a bit weaker: Rust’s “unsafe” marker doesn’t pollute only its own scope; it effectively pollutes the whole module, because you need to make sure the invariants that unsafe code relies on are upheld even by the surrounding safe code. (The Rustonomicon explains this: https://doc.rust-lang.org/nomicon/working-with-unsafe.html)
So there’s quite a lot more code to actually check than what some Rust proponents claim. One can say that C++ is still worse in this regard (theoretically you need to check 100% of your code to be safe in C++). But for the minority of developers who frequently need to delve into unsafe code, the advantages of Rust might seem a bit more disappointing (“the compiler doesn’t really do that much for the more important stuff…”)
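The Rustonomicon's point in a concrete (hypothetical) sketch: the unsafe block below relies on the invariant `len <= data.len()`, but that invariant can only be broken by *safe* code elsewhere in the module, so the audit has to cover the whole module, not just the unsafe line.

```rust
mod buf {
    pub struct Buf {
        data: Vec<u8>,
        len: usize, // invariant: len <= data.len()
    }

    impl Buf {
        pub fn new(data: Vec<u8>) -> Self {
            let len = data.len();
            Buf { data, len }
        }

        // Entirely safe code, yet it must be audited too: if this
        // allowed `n > self.len`, `last` below would read out of bounds
        // without a single `unsafe` keyword appearing here.
        pub fn truncate(&mut self, n: usize) {
            if n <= self.len {
                self.len = n;
            }
        }

        pub fn last(&self) -> Option<u8> {
            if self.len == 0 {
                return None;
            }
            // SAFETY: relies on the module-wide invariant
            // `len <= data.len()`, plus the `len > 0` check above.
            Some(unsafe { *self.data.get_unchecked(self.len - 1) })
        }
    }
}
```

Privacy is what bounds the blast radius: because `len` is private, only code in this module can violate the invariant, which is why "check the module" rather than "check the whole program" is the actual audit unit.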
> whole point of rust is that memory safety issues are never worth the cost
I don’t think that’s quite the point of Rust — otherwise why not write Java, or any of a litany of GC’d languages instead?
Rust is a low-level/systems programming language where you have more control over the program’s execution (e.g. no fat runtime), which is a necessity in some rare, niche, but important use cases.