While I like the idea of Rust, having to (re)-learn all these subtleties seems daunting (just look at the table comparing dyn vs. box vs. arc vs. references etc.)
When I moved to Java 1.0, coming from C/C++, I hated the performance loss but happily traded that for a garbage collector.
Typically, when the code compiled, it ran.
Now with Rust I'm wondering how much time practitioners spend on analyzing compiler errors (which is a good deal better than analyzing heap dumps with gdb).
And do you get to a place where your coding naturally avoids gotchas around borrowing?
> Now with Rust I'm wondering how much time practitioners spend on analyzing compiler errors
Almost zero. Seriously. Because the rules got internalized for me pretty quickly.
If you asked me, "how much time did you spend when just starting Rust," then it would be a lot more than zero. Enough to noticeably slow me down. But it got down to near-zero pretty quickly. I'd say it took maybe a month or so of ~daily programming.
I'll caveat this by saying that to have this kind of experience, it's incredibly important to understand why you're getting these types of errors in the first place.
Rust programs require you to consider some things upfront in your design that you don't have to think about in other languages. If you internalize these requirements, designing programs in Rust can quickly become just as easy and natural as developing in other languages. But it can feel arbitrary and impossible if you just try and force your way forward by `Box`ing and `clone()`ing everything endlessly because it seems to make the annoying compiler errors go away.
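To make that concrete, here's a minimal sketch (the names are made up for illustration): the clone-everything version compiles, but the borrow-aware version is what falls out once you think about ownership up front.

```rust
// Clone-everything style: compiles, but copies the whole Vec on every call,
// and hides the design question of who actually owns the data.
fn total_clone(items: Vec<u64>) -> u64 {
    items.iter().sum()
}

// Borrow-aware style: the caller keeps ownership and we only borrow a view.
// Deciding "who owns this, who only needs to look at it" up front is the kind
// of design decision the borrow checker pushes you to make.
fn total(items: &[u64]) -> u64 {
    items.iter().sum()
}

fn main() {
    let readings = vec![3, 1, 4, 1, 5];
    // With the borrowing version we can keep using `readings` afterwards.
    println!("{}", total(&readings));
    println!("{:?}", readings);
    // The clone-heavy version forces a copy just to keep `readings` usable.
    println!("{}", total_clone(readings.clone()));
}
```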
If you're the type of engineer who learns a new language and just ends up writing programs in the style of your old language (but with different syntax), Rust is going to feel a lot harder and you may never "get" it. The difficulty curve of Rust is—I think—much steeper than other languages for this type of engineer. You can be productive writing C-style programs in golang. You can be productive writing Java-style programs in Ruby. But Rust is going to fight you much harder than other languages if you try to approach things this way.
If you're the type of engineer who strives to build idiomatic software in whatever language you're using, you'll have a much faster ramp-up to proficiency.
Personally, I find the overhead of dealing with Rust memory management to be 100% worth it when writing embedded code with no dynamic allocation. It can really help prevent bad memory management practices and, rather than just catching bugs, structurally prevent them. If you're really experienced with embedded C you were probably doing things mostly the same way anyway.
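For a rough flavor of that style, here's a minimal, hypothetical sketch (not real embedded code, and it leans on `std` for the demo): a fixed-capacity buffer that owns its storage inline, so there's no heap allocation to get wrong in the first place.

```rust
// A fixed-capacity buffer with its storage inline: no heap, no allocator,
// and ownership makes it hard to hand out dangling views of the data.
struct FixedBuf<const N: usize> {
    data: [u8; N],
    len: usize,
}

impl<const N: usize> FixedBuf<N> {
    fn new() -> Self {
        FixedBuf { data: [0; N], len: 0 }
    }

    // Pushing can fail instead of silently overflowing; the caller must decide.
    fn push(&mut self, byte: u8) -> Result<(), u8> {
        if self.len == N {
            return Err(byte);
        }
        self.data[self.len] = byte;
        self.len += 1;
        Ok(())
    }

    fn as_slice(&self) -> &[u8] {
        &self.data[..self.len]
    }
}

fn main() {
    let mut buf: FixedBuf<4> = FixedBuf::new();
    for b in [1u8, 2, 3, 4, 5] {
        if buf.push(b).is_err() {
            // Overflow is an explicit, visible case rather than silent corruption.
            break;
        }
    }
    assert_eq!(buf.as_slice(), &[1, 2, 3, 4]);
}
```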
For writing code on an operating system, I'm in the same boat as you; I would rather have GC. Haskell and Rust are actually spiritually pretty similar, with the former simplifying memory management and enriching the type system (at the cost of needing to worry about things like memory leaks), and I tend to go to Haskell for non-embedded applications most of the time.
> While I like the idea of Rust, having to (re)-learn all these subtleties seems daunting
It's not like Java is any simpler. Rust gives you the equivalent of a GC, except at compile time. And the compiler tells you when you're getting it wrong.
I've heard this idea before, that Rust has a "compile time GC" or "static GC", and while I can sympathize with wanting to leverage that term, it already has a fairly well understood meaning and it's not what Rust provides. The only GC that comes built into Rust is reference counting via Rc and Arc.
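For reference, that opt-in, reference-counted form of GC looks roughly like this; it's an explicit library type you reach for, not something the compiler inserts for you.

```rust
use std::rc::Rc;

fn main() {
    // Rc is Rust's built-in, opt-in reference counting: the value lives as long
    // as any clone of the handle does, and is freed when the count hits zero.
    let shared = Rc::new(String::from("shared value"));
    let also_shared = Rc::clone(&shared);

    println!("count = {}", Rc::strong_count(&shared)); // prints 2
    drop(also_shared);
    println!("count = {}", Rc::strong_count(&shared)); // prints 1
    // (Arc is the atomically counted version for sharing across threads.)
}
```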
With an actual GC, there is no notion of getting it wrong; the whole point of a GC is that it automates the handling of memory. A useful metaphor for a GC is that it's a system that simulates an infinite amount of memory. With a GC, at least conceptually, you don't allocate and deallocate memory, rather you declare what object you want and it stays alive forever. The GC works behind the scenes to maintain this illusion, although since it's only a simulation there are certain imperfections.
There are some languages that can do this at compile time, such as Mercury and I believe Rust took some inspiration from Mercury... but Rust does not have a compile-time GC the same way that Mercury does.
You're switching context here and doing so in a way that's fairly pedantic and not really useful.
You mentioned that Rust informs you when you're "getting it wrong" and most people who aren't being pedantic can understand the meaning of that; that there's something that would otherwise go wrong if not for a compile time check that prevents it.
In most GC'd languages, there is no notion of something that would have otherwise gone wrong if not for a compile time check (with respect to memory safety). In most GC'd languages, that very concept doesn't exist.
Another way to put it is that there's nothing for a Java compiler to tell a user about "getting it wrong" because there's nothing to get wrong in the first place (with respect to memory safety, since we're being overly pedantic now).
So, I think you've gravely misunderstood the concepts at work here.
You know NullPointerException? Does that feel like maybe "getting it wrong"? But it does still happen to Java programmers. That can't happen in (safe) Rust. If you write a program that could try to dereference a null pointer, it won't compile. You'd be getting it wrong.
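A minimal sketch of what that looks like in practice (the function here is invented for illustration):

```rust
// There is no null: absence is spelled Option, and the compiler won't let you
// use the value without saying what happens in the None case.
fn find_user(id: u32) -> Option<String> {
    if id == 1 { Some("alice".to_string()) } else { None }
}

fn main() {
    // This would not compile: `Option<String>` has no `len()` method.
    // let n = find_user(2).len();

    // You are forced to handle both cases explicitly:
    match find_user(2) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }
}
```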
Or let's try something a bit more sophisticated. Many Java data structures can be subject to a Data Race in threaded software. So you may be "getting it wrong" in the sense that on John's 16 core monster server the output is incorrect, but on your cheap 10 year old laptop it works (much more slowly but) correctly. Both outcomes were valid meanings of your Java program, and Java provides some tools you could use to protect yourself, but it won't even warn you that you were "getting it wrong"; the results are just incorrect, too bad.
In (safe) Rust, Data Races can't happen, the compiler will reject your program. Some other types of Race Condition can happen, but no Data Races.
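To make the contrast concrete, here's a hedged sketch: the synchronized version below compiles, while handing the same threads plain unsynchronized mutable access to the shared data is rejected at compile time rather than producing machine-dependent results.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable state has to be wrapped so the compiler can see it's safe
    // to touch from several threads. Dropping the Mutex and mutating the value
    // directly from every thread simply doesn't compile.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // Same answer on the 16-core monster server and on the old laptop.
    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```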
I literally state in my post that I am referring to garbage collection and memory safety. I literally even state this specifically because I knew if I didn't someone would bring up completely irrelevant details for the sake of argument. And yet... here we are.
In Rust, data races are undefined behavior, and safe Rust mostly prevents them, even though there are still, to this day, subtle questions about where the boundary between safe and unsafe Rust lies. That said, this is a great thing that Rust provides; it's a genuinely incredible step forward, but it has very little to do with this topic.
However in Java, data races are not undefined behavior; they have well specified semantics and do not result in memory errors the same way they do in Rust or C++.
Calling a NullPointerException a memory safety violation is like calling a panic on unwrap or panic on array out of bounds a memory safety violation (they're not). Both are well defined operations with specified semantics.
Are they likely bugs? Yes, absolutely, but neither Java nor Rust prevents developers from writing bugs, and the fact that you're confusing program correctness with memory safety only indicates that it's you who gravely misunderstands the concepts being discussed.
NullPointerException isn't a memory safety violation but it is getting it wrong.
That's what the original comment claimed, that Rust's compiler "tells you when you're getting it wrong".
Since you brought it up - I'd actually say the existence of unwrap() shows this trend elsewhere in Rust. Java is one of many C-style languages in which silently discarding the important result of your call is a common mistake. In some cases Java tried to mitigate this with a Checked Exception, but now you're just adding a bunch of boilerplate everywhere, and it doesn't do much to encourage a better way forward. Rust's Result and Option force a programmer to explicitly decide to discard unhandled cases (Errors and None respectively) each time, if that's what they meant. Yet another case where the Rust compiler will tell you if you're getting it wrong.
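Roughly what that looks like; `parse_port` here is a made-up example function:

```rust
// A fallible operation returns Result, and Result is #[must_use]:
// silently ignoring it produces a compiler warning, not a quiet bug.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // parse_port("80x");           // warning: unused `Result` that must be used

    // You have to say what you meant:
    let _ = parse_port("80x");       // "I'm deliberately discarding this"
    let port = parse_port("8080").unwrap(); // "crash here if this ever fails"

    match parse_port("80x") {        // or actually handle both cases
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
    println!("{}", port);
}
```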
The original comment was speaking about getting it wrong with respect to memory safety, not NullPointerExceptions, not array out of bounds accesses, not division by zero, but about memory safety.
This discussion isn't about program correctness as a general and broad concept, Rust and Java both have various strategies to eliminate many classes of errors and both languages leave the door open to many other classes of errors.
This discussion is about whether Rust uses a compile-time garbage collector to ensure memory safety. It does no such thing. Rust has a borrow checker, which ensures that syntactically valid expressions referencing memory have a correspondingly valid semantic interpretation. C++ does not have such a thing: syntactically valid expressions referencing memory may have no valid semantic interpretation, which is what's referred to as undefined behavior. This is not what a garbage collector does in any sense of the word. A garbage collector is a system that computes an upper bound on object lifetime and, when an object exceeds that upper bound, reclaims the memory associated with the object. Rust does no such thing at compile time.
Rust's system of enforcing memory safety is great, it's a step forward in language design, by all means give it the praise it deserves... just don't refer to it by a term that already has a well defined meaning and is an active area of research. Compile-time garbage collection is a separate concept from how Rust enforces memory safety, and there's not much utility in reusing the term; all it does is create confusion.
You're clutching at unrelated straws. Rather than comparing to Java, try comparing to OCaml, which is a language that's much closer to "Rust with GC". There's pretty much no safety gain from using Rust over OCaml. But if you use OCaml you don't have to worry about borrow checking.
> There's pretty much no safety gain from using Rust over OCaml.
> But if you use OCaml you don't have to worry about borrow checking.
I've never written any OCaml, when you choose not "to worry about borrow checking" how does OCaml arrange to ensure your program is free from data races in concurrent code anyway? Or do you consider that "pretty much no safety gain" ?
OCaml's memory model specifies bounded space-time SC-DRF.
What this comes down to in simple terms is that data races have well specified semantics and their effects are bounded both in terms of what is affected by a data race, and when it's affected.
Using C as a starting point: a data race can modify any region of memory, not just the memory involved in the read/write, and the modification can be observed at any time; it might be observed after the write operation of the data race executes, or it can be observed before the write operation executes (due to instruction reordering).
In Java, data races are well specified using bounded space SC-DRF. This means that unlike C, data races are NOT undefined behavior. A data race is limited to only modify the specific primitive value that was written to. However it does not specify bounded time, so when the modification of that primitive value is observed is not specified by the Java memory model, it could happen before or after the write operation.
OCaml's memory model specifies both bounded space and time SC-DRF. When a data race occurs, it can only modify the primitive value that was written to, and the modification must be observed no sooner than the beginning of the write operation and no later than the end of the write operation.
That was a very long-winded non-answer, but I think I understood it to be essentially "Yes".
I'm definitely not an expert, but to me this memory model sounds like a more circumspect attempt to carve out a set of benign data races which we believe are OK. Now, perhaps it will work this time, but on each previous occasion it has failed, exactly as illustrated by Java.
Indeed the Sivaramakrishnan slides I'm looking at about this are almost eerily reminiscent of the optimism for Java's memory model when I was younger (and more optimistic myself). We'll provide programmers with this new model, which is simpler to reason about, and so the problem will evaporate.
Some experts, some of the time, were able to correctly reason about both the old and new models, too many programmers too often got even the new model wrong.
So that leads me to think Rust made the right choice here. Let's have a compiler diagnostic (from the borrow checker) when a programmer tries to introduce a data race, rather than contort ourselves trying to come up with a model in which we can believe any races are benign and can't really have any consequences we haven't reasoned about.
Of course unsafe Rust might truly benefit from nicer models anyway, they could hardly be worse than the existing C++11 model but that's a different story.
Can attest. It's easier to learn the rules of Java the language, but way harder to learn how to write good Java software. To some extent, Rust forces you to begin learning both at the same time, which is of course more difficult.
What always surprises me is how much "good Rust software" actually coheres with "good software". I'm not saying that you should write software in any language as though it were Rust -- every language has its own effective praxis. Rather, since Rust forces you to pick up some of the rules of good design as you learn, those rules can transfer to other ecosystems, forming part of a language-agnostic basis of engineering. I think that's really cool.
A good example is handles over pointers [0]: recognizing that pointers/references embody two orthogonal concerns, address and capability, lets you see how to separate them when it benefits the design. Rust's extremely strict single-ownership model often forces you to build separate addressing systems, allowing disparate entities to address and relate to each other in complex patterns, while consolidating all capability to affect those entities into a single ownership root.
The mental model of single-ownership itself is valuable for managing the complexity of a large network of entities, and knowing when you can or should break it in other languages has been really valuable to me.
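A tiny sketch of the handle idea (all names invented for illustration): the arena owns every entity, and entities refer to each other by index-based handles rather than references, separating "can address" from "can mutate".

```rust
// A handle is just an index: it lets entities address each other without
// holding a reference, so all ownership (and all mutation) stays with the arena.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct NodeHandle(usize);

struct Node {
    name: String,
    parent: Option<NodeHandle>, // relationships are handles, not references
}

struct Arena {
    nodes: Vec<Node>, // single ownership root for every Node
}

impl Arena {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    fn insert(&mut self, name: &str, parent: Option<NodeHandle>) -> NodeHandle {
        self.nodes.push(Node { name: name.to_string(), parent });
        NodeHandle(self.nodes.len() - 1)
    }

    fn get(&self, h: NodeHandle) -> &Node {
        &self.nodes[h.0]
    }
}

fn main() {
    let mut arena = Arena::new();
    let root = arena.insert("root", None);
    let child = arena.insert("child", Some(root));

    // Complex relationships, but only one place has the capability to mutate.
    let parent = arena.get(child).parent.unwrap();
    println!("{} -> {}", arena.get(parent).name, arena.get(child).name);
}
```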