
> wrestle the compiler

This is quite literally a skill issue, no offense.

'wrestle the compiler' implies you think you know better; this is usually not the case and the compiler is here to tell you about it. It's annoying to be bad at tracking ownership and guess what: most people are. The ones who aren't have decades of experience in C/C++ and employ much the same techniques that Rust guides you towards. If you really know better, there are ways to get around the compiler. They're verbose and marked unsafe to 1) discourage you from doing that and 2) warn others to be extra careful here.

If this is all unnecessary for you - and I want to underscore that I agree it should be in most software development work - stick to GC languages. With some elbow grease they can be made as performant as low-level languages if you write the critical parts the way you'd have to write them in Rust, and you're free to write the rest in a way which doesn't require years of experience tracking ownership manually. (Note it won't hurt to track ownership anyway; it's just much less of an issue if you have to put a weakref somewhere once every couple of years vs. being aware at all times of what owns what.)






> 'wrestle the compiler' implies you think you know better; this is usually not the case and the compiler is here to tell you about it.

Well, yes and no. The way type systems work to soundly guarantee that some program property P holds is by guaranteeing some stronger property Q, such that Q => P. This is because type systems generally enforce what we call an "inductive invariant", i.e. a property that is preserved by all program statements [1], while most interesting program properties are not inductive. To give an example, suppose we're interested in the property that a program produces an even number; an inductive invariant that implies that property is one that makes sure that the outcomes of all computations in the language are even. A program that satisfies the latter property obviously satisfies the former, but the converse isn't true.

Similarly, the way Rust guarantees that all programs don't have, say, use-after-free, is by enforcing a stronger property around ownership. So all safe Rust programs don't have use-after-free, but many programs that don't have use-after-free don't satisfy the stronger ownership property. This means that sometimes (and this is true for all sound type systems) you have to "wrestle" the compiler, which insists on the stronger property, even though you know that the weaker property -- the one you're interested in -- holds. In other words, sometimes you do know better than the compiler.
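A small illustration (my own sketch, not from the original comment): the program below can never produce a use-after-free, because enough capacity is reserved up front that the second push cannot reallocate, yet safe Rust still rejects it, because it violates the stronger borrowing rules rather than the weaker property we actually care about:

  fn main() {
      let mut v = Vec::with_capacity(2); // capacity 2: the pushes below never reallocate
      v.push(1);
      let first = &v[0];  // shared borrow into the Vec's buffer
      v.push(2);          // rejected: cannot mutate `v` while `first` borrows it,
                          // even though no reallocation (hence no dangling pointer) can occur
      println!("{first}");
  }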

That is not to say that the approach where the compiler enforces stronger invariants is always right or always wrong, or even right or wrong most of the time, but that "wrestling the compiler" is something that even the most skilled programmers confront from time to time.

[1]: This is because inductive invariants are compositional, i.e. they hold for some composition of program terms t and s iff they hold for t and s and their composition operator, and type systems want to be compositional.


No, GP can just not use Rust; they don't have to use GC languages to have something that makes sense and doesn't force you to constantly debate the compiler about even simple things.

If they used Odin (or Zig) they could've looped through that dynamic array no problem, in fact:

    package example
    
    import "core:fmt"
    
    main :: proc() {
        xs: [dynamic]int
        append(&xs, 1, 2)

        for x in xs {
            fmt.println(x)
        }
        
        fmt.println(len(xs))
    }
It is ridiculous that Rust complains even about this simple for loop, and to say that this somehow comes down to "Well, everyone would do it this way if they cared about memory safety" just isn't true or valuable input. It sounds like what someone would say if their only systems programming experience came from Rust and they post-rationalized everything they've seen in Rust as being how you have to do it.

My tips to people who maybe feel like Rust seems a bit overwrought:

Look for something else; check out Odin or Zig. They've got tons of ways of dealing with memory that simply sidestep everything that Rust is about (because, inherently, Rust and everything else that uses RAII has a broken model of how resources should be managed).

I learned Odin just by reading its Overview page (https://odin-lang.org/docs/overview/) and trying stuff out (nowadays there are also good videos about Odin on YouTube), then found myself productively writing code after a weekend. Now I create 3D engines using just Odin (and we in fact use only a subset of what is on that Overview page). Things can be simple, straightforward and more about the thing you're solving than the language you're using.


I dunno; I've never tried Zig before, and it wasn't hard to check whether this kind of bug was easy to hit:

  const std = @import("std");

  pub fn main() !void {
      var gpa: std.heap.GeneralPurposeAllocator(.{}) = .{};
      const alloc = gpa.allocator();

      var list = try std.ArrayList(u8).initCapacity(alloc, 1);
      const a = try list.addOne(); // pointer into the list's backing buffer
      a.* = 0;
      std.debug.print("a={}\n", .{a.*});
      const b = try list.addOne(); // grows past capacity 1: may reallocate and invalidate `a`
      b.* = 0;
      std.debug.print("a={}\n", .{a.*}); // use-after-free through the stale pointer
      std.debug.print("b={}\n", .{b.*});
  }


  a=0
  Segmentation fault at address 0x7f9f7b240000

I think it is important to note that in 59nadir's example, the reason Rust gives an error and Odin doesn't is not memory safety. Rust uses move semantics by default in a loop while Odin appears to use copy semantics by default. I don't really know Odin, but it seems like it is a language that doesn't have RAII. In which case, copy semantics are fine for Odin, but in Rust they could result in a lot of extra allocations if your vector was holding RAII heap allocating objects. Obviously that means you would need to be careful about how to use pointers in Odin, but the choice of moving or copying by default for a loop has nothing to do with this. For reference:

Odin (from documentation):

  for x in some_array { // copy semantics
  for &x in some_array { // reference semantics
  // no move semantics? (could be wrong on this)
Rust:

  for x in vec.iter().copied() { // bytewise copy semantics (requires Copy, i.e. POD types)
  for x in vec.iter().cloned() { // RAII copy semantics (requires Clone)
  for x in &vec { // reference semantics
  for x in vec { // move semantics
C++:

  for (auto x : vec) { // copy semantics
  for (auto &x : vec) { // reference semantics
  for (auto &&x : vec) { // forwarding reference (no move happens unless you std::move the element)
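For completeness, here is my guess (a sketch, since the original snippet isn't shown in this thread) at what the Rust equivalent of the Odin example above looks like: the borrowing form compiles and the container is still usable afterwards; the moving form is what produces the error being complained about.

  fn main() {
      let xs = vec![1, 2];

      for x in &xs {              // reference semantics: `xs` is only borrowed
          println!("{x}");
      }
      println!("{}", xs.len());   // fine: `xs` is still owned here

      for x in xs {               // move semantics: this loop consumes `xs`
          println!("{x}");
      }
      // println!("{}", xs.len()); // would not compile: `xs` was moved above
  }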

And why do you think that bug is relevant in the case of a loop that prints the elements of a container? We can all see and verify at a glance that the code is valid, it's just not provably valid by the Rust compiler.

I feel like these posts trying to show possible memory issues with re-allocated dynamic arrays are missing the point: There is no code changing the underlying array, there is no risk of any kind of use-after-free error. This is exactly the kind of case where all of this jumping through hoops shouldn't be needed.


> There is no code changing the underlying array, there is no risk of any kind of use-after-free error.

There is none of this code, until there is.


Ok, so we've established that the loop can be verified as not changing the container in any way; what makes you believe this shouldn't be obvious to the Rust compiler?

When code that modifies the container is added, it should be understood and then correctly errored about. I don't get why this is such a crazy concept to people.

The point here is that you pay the cost for an error that can't happen. It's just a micro example of a much more general issue that boils down to:

The Rust compiler does a lot to find and mitigate bugs, it's amazing, but it also rejects completely valid programs and solutions because it simply isn't good enough (and it's a difficult enough problem that I'm prepared to say it will never be good enough). You can either pay that cost constantly and for certain problems be dealing with it (a lot) for no gain whatsoever (because the bugs it was trying to prevent weren't actual issues or are in fact imperatives because the thing you're doing requires them) or you can choose not to.

I don't think it's particularly useful to make excuses for the compiler not understanding very basic things in simple examples and indirectly argue that it would be too complicated to see what the loop is doing and act accordingly. Rust already signed up for a very complicated compiler that does all kinds of crazy things in order to mitigate bugs; this type of introspection would increase its accuracy a lot.


> You can either pay that cost constantly and for certain problems be dealing with it (a lot) for no gain whatsoever (because the bugs it was trying to prevent weren't actual issues or are in fact imperatives because the thing you're doing requires them) or you can choose not to.

Alternatively, you can use Rust so much that these limitations become second nature, and thus you don't run into them in the first place.

> I don't think it's particularly useful to make excuses for the compiler not understanding very basic things in simple examples and indirectly argue that it would be too complicated to see what the loop is doing and act accordingly.

Great idea, until it stops working. It runs into the paraphrased quote: "Any sufficiently complicated borrow checker is indistinguishable from Dark Magic".

First you say, well, the compiler should be sufficiently smart to figure out that case A1 should work, so you add that; but then another case A2 arises that the compiler should be smart enough to figure out, and so on.

Add a bunch of these "sufficiently smart" borrow rules, however, and you'll end up with a mess: A1 and A2 don't work if A432 is applied, but do work if A49324 is given, provided A4 and A2 are satisfied.

The harder the borrow checker is to understand, the more difficult it is to construct a mental model that's useful.

In summary: I'm not against improving the borrow checker, but improvements need to be balanced against the cost of understanding how it approximately works.


> Ok, so we've established that the loop can be verified as not changing the container in any way; what makes you believe this shouldn't be obvious to the Rust compiler?

I would be quite happy for the Rust compiler to be able to perform more powerful analysis and make writing code easier. What I object to, and what I think that quite small Zig code snippet highlights, is the claim that dealing with those shortcomings is

> for no gain whatsoever

That is plainly wrong.


I make custom 3D engines and I can tell you that it would not be a net benefit for us to use Rust. That's why I added "for certain problems" as a qualifier; there are use cases where Rust would be a net negative.

There are also plenty of use cases where Rust is actually useful and provides guarantees about things that you want guarantees about.


For anyone curious about Odin and graphics, it seems to work really well:

https://gist.github.com/NotKyon/6dbd5e4234bce967f7350457c1e9...

https://www.youtube.com/watch?v=gp_ECHhEDiA


And how should resources be managed?

In bulk, i.e. not one by one, as is implied by and most common with RAII. RAII works best for the one-by-one use case, and in well-designed, performant systems the one-by-one use case is either irrelevant, rare, or an anti-pattern.

If you want bulk, you can use arrays, vecs, arenas, etc.
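A rough sketch of what that looks like in plain Rust (my example, not the poster's): one Vec owns the whole group of objects, so they're allocated together and freed together instead of each object managing its own lifetime.

  struct Particle {
      pos: [f32; 3],
      vel: [f32; 3],
  }

  fn main() {
      // One growable allocation for the whole group...
      let mut particles: Vec<Particle> = Vec::with_capacity(10_000);
      for i in 0..10_000 {
          particles.push(Particle { pos: [i as f32, 0.0, 0.0], vel: [0.0; 3] });
      }
      println!("simulating {} particles", particles.len());
      // ...and one bulk deallocation when `particles` goes out of scope.
  }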

Rust, in many ways, is a terrible first systems programming language.

To program a system is to engage with how the real devices of a computer work, and very little of their operation is exposed via Rust or even can be exposed. The space of all possible valid/safe Rust programs is tiny compared to the space of all useful machine behaviours.

The world of "safe Rust" is a very distorted image of the real machine.


> Rust, in many ways, is a terrible first systems programming language.

Contrariwise, Rust is, in many ways, an awesome first systems programming language. Because it tells you about and forces you to consider all the issues upfront.

For instance in 59nadir's example, what if the vector is a vector of heap-allocated objects, and the loop frees them? In Rust this makes essentially no difference, because at iteration you tell the compiler whether the vector is borrowed or moved and the rest of the lifecycle falls out of that regardless of what's in the vector: with a borrowing iteration, you simply could not free the contents. The vector generally works and is used the same whether its contents are copiable or not.
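A minimal sketch of that distinction (mine, not the commenter's): the iteration syntax alone tells the compiler whether the Vec is borrowed or consumed, regardless of whether the elements own heap memory.

  fn main() {
      let v = vec![String::from("a"), String::from("b")]; // heap-allocated elements

      for s in &v {       // borrowing iteration: elements cannot be freed here
          println!("{s}");
      }
      println!("still have {} strings", v.len());

      for s in v {        // moving iteration: each String is owned by the loop body
          drop(s);        // freeing it here is fine; the element was moved out of the Vec
      }
      // `v` can no longer be used after the moving loop.
  }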


A lot of idiomatic systems code is intrinsically memory unsafe. The hardware owns direct references to objects in your address space and completely disregards the ownership semantics of your programming language. It is the same reason immediately destroying moved-from objects can be problematic: it isn’t sufficient to statically verify that the code no longer references that memory. Hardware can and sometimes does hold references to moved-from objects such that deferred destruction is required for correctness.

How is someone supposed to learn idiomatic systems programming in a language that struggles to express basic elements of systems programming? Having no GC is necessary but not sufficient to be a usable systems language but it feels like some in the Rust community are tacitly defining it that way. Being a systems programmer means being comfortable with handling ambiguous object ownership and lifetimes. Some performance and scalability engineering essentially requires this, regardless of the language you use.


None of these "issues" are systems issues, they're memory safety issues. If you think systems programming is about memory safety, then you're demonstrating the problem.

E.g., some drivers cannot be memory safe, because memory is arranged outside of the driver to be picked up "at the right time, in the right place" and so on.

Statically-provable memory safety is, ironically, quite a bad property to have for a systems programming language, as it prevents actually controlling the devices of the machine. This is, of course, why Rust has "unsafe" and why anything actually systems-level is going to have a fair amount of it.

The operation of machine devices isn't memory safe -- memory safety is a static property of a program's source code that prevents describing the full behaviour of devices correctly.


Water is wet.

Yes, touching hardware directly often requires memory unsafety. Rust allows that, but encourages you to come up with an abstraction that can be used safely and thereby minimize the amount of surface area which has to do unsafe things. You still have to manually assert / verify the correctness of that wrapper, obviously.
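As a sketch of that pattern (hypothetical device and register, nothing real): the unsafe surface is one small, auditable block behind a safe method, and the invariant is asserted once at construction.

  use core::ptr;

  struct StatusRegister {
      addr: *mut u32, // assumed to point at a memory-mapped device register
  }

  impl StatusRegister {
      /// Safety: caller must guarantee `addr` really is this device's register.
      unsafe fn new(addr: *mut u32) -> Self {
          StatusRegister { addr }
      }

      fn read(&self) -> u32 {
          // The only unsafe block; callers of `read` never write `unsafe` themselves.
          unsafe { ptr::read_volatile(self.addr) }
      }
  }

  fn main() {
      // Demonstration only: an ordinary local stands in for a real MMIO address.
      let mut fake_register: u32 = 0xABCD;
      let reg = unsafe { StatusRegister::new(&mut fake_register) };
      println!("status = {:#x}", reg.read());
  }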

> This is, of course, why Rust has "unsafe" and why anything actually systems-level is going to have a fair amount of it.

There are entire kernels written in Rust with less than 10% unsafe code. The standard library is less than 3% unsafe, last I checked. People overestimate how much "unsafe" is actually required and therefore they underestimate how much value Rust provides. Minimizing the amount of code doing unsafe things is good practice no matter what programming language you use, Rust just pushes hard in that direction.


> For instance in 59nadir's example, what if the vector is a vector of heap-allocated objects, and the loop frees them?

But the loop doesn't free them. This is trivial for us to see and honestly shouldn't be difficult for Rust to figure out either. Once you've adopted overwrought tools, they should be designed to handle these types of issues; otherwise you're just shuffling an esoteric burden onto the user in a shape that doesn't match the code that was written.

With less complicated languages we take on the more general burden of making sure things make sense (pinky-promise, etc.), and that is a burden we've signed up for, so we take care in the places that have actually been identified; but they need to be found manually, and that's the tradeoff. The argument I'm making is that Rust really ought to be smarter about this; there is no real reason it shouldn't be able to understand what the loop does and treat the iteration portion accordingly, but it's difficult to extend overcomplicated things, because they are exactly that.

I doubt that most Rust users would say this lack of basic introspection into what is happening in the loop makes sense once you actually ask them, and I'd bet money most of them feel that Rust ought to understand the loop (though in reading these posts I realize that there are actual humans who don't seem to understand the issue either, when it's as simple as reading the code in front of them and taking into account what it does).


> But the loop doesn't free them.

What if it did free them in a function you don't directly control?


> forces you to consider all the issues upfront.

Ever wonder why we do not train pilots in 737s as their first planes? Plenty of complex issues do NOT, in fact, need to be considered upfront.


YMMV, naturally, but I've found that some embedded devices have really excellent hardware abstraction layers in Rust that wrap the majority of the device's functionality in an effectively zero-overhead layer. Timers? GPIO? Serial protocols? Interrupts? It's all there.

- https://docs.rs/atsamd-hal/

- https://docs.rs/rp2040-hal/


> It's annoying to be bad at tracking ownership and guess what: most people are. The ones who aren't have decades of experience in C/C++ and employ much the same techniques that Rust guides you towards.

You wouldn't need to do that here, in SPARK, or Oberon, or just about any other memory safe language. This is unique to Rust and its model - and it absolutely is not required for safety. It's not a skill issue. It's a language design problem.


Doesn't SPARK do something inspired by Rust to get safe dynamic memory allocation? https://docs.adacore.com/spark2014-docs/html/ug/en/source/ac...

What does Oberon do?


SPARK had it before Rust existed. However, gnatprove doesn't require you, the programmer, to change anything. The compiler does the work to ensure safety, not you.

Oberon is similar. The typesolver will determine if something is safe, without the need for explicitly borrowing anything.


> Oberon is similar. The typesolver will determine if something is safe, without the need for explicitly borrowing anything.

So what does Oberon do to prevent you from resizing a possibly-reallocating array while holding a reference into it?


It doesn't allow those operations to occur at the same time. If you can't meet compile time guarantees, then it does not compile.

Pointers aren't the same as in C. A pointer has an explicit type, not just a size. A pointer cannot change where it is located, whilst in scope anywhere else.

If you then make your pointer local, it will get cloned. As there's no concept of a void pointer, every type supports cloning, and so your thread-local variable will have nothing to do with the parent any longer.

So if you try to grab a local reference, to something in another thread, you'll get a copy, or a compile time error if you don't copy it.

If you try to modify something you're looping over, it won't compile at all.

However, in all of this, there's no extra syntax. The compiler can deal with what is permitted. The programmer can just write.

This is multithreaded (by compiler switch):

        module example610a;
        type
          Vector = array * of integer;
        var
          i, n: integer;
          a: Vector;
        begin
          a := new Vector(n);
          for i := 0 to len(a) - 1 do
            write("a[", i:2, "]: "); read(a[i])
          end;
          writeln;
          for i := 0 to len(a) - 1 do
            write(a[i]:3);
          end;
          writeln;
        end example610a.

That systems languages have to establish (1) memory safety, (2) statically, (3) via the highly specific kind of type system given in Rust, and (4) with limited inference -- suggests a lack of imagination.

The space of all possible robust systems languages is vastly larger than Rust.

Its specific choices force confronting the need to statically prove memory safety via a cumbersome type system very early -- this is not a divine command upon language design.


> The space of all possible robust systems languages is vastly larger than Rust.

The space of all possible CVEs is also vastly larger outside of Rust.

My biggest takeaway from Rust isn't that it's a better C++, but that it's an extremely fast (no runtime-limited GC), less footgunny Java.


Sure, Rust is not the final answer to eliminating memory safety bugs from systems programming. But what are the alternatives that aren't even more onerous and/or limited in scope (ATS, Frama-C, proofs)?

My preference is to have better dynamic models of devices (e.g., how many memory cells does RAM have, how do these work dynamically, etc.) and line them up with well-defined input/output boundaries of programs. Kinda "better fuzzing".

I mean, can we run a program in a well-defined "debugging operating system" in a VM, with simulated "debugging devices" and so on?

I don't know much about that idea, or the degree to which that vision is viable. However, it's increasingly how the most robust software is tested -- by "massive-scale simulation". My guess is it isn't a major part of, say, academic study because it means building tools over years rather than writing one-off papers over months.

However, if we had this "debuggable device environment", I'd say it'd be vastly more powerful than Rust's static guarantees and allow for a kind of "fearless systems programming" without each loop becoming a sudoku puzzle.


This is yet another issue with Rust: nowhere in my post have I mentioned C++, and I made no effort to compare the two languages. I just pointed out poor developer ergonomics in Rust, and you're instigating a language flame war as if you took valid criticism as a personal attack; you can do better than that.

> poor developer ergonomics

I don't think it's poor developer ergonomics. The compiler tells you "Hey, try adding &x at this position".

It's unfamiliarity with Rust's type system and syntax sugars.

<HYPERBOLE>

I'll take 1000 compiler errors over a single non-deterministic bug that only happens on ARM at 6 o'clock on Blue Moon when Mercury is in Orion (it's UB).

And I'd ritually sacrifice my first unborn werekid to the Dark Compiler Gods for a compiler error, that actually suggests a correct fix.

</HYPERBOLE>


My 2¢: it’s perfectly reasonable to bring up other languages in response to criticism, because it explains why these decisions were made in the first place. GP literally said that Rust isn’t a good fit for you if you’re in a position to use a GC. The comparison to C++ is important because it’s one of very, very few contemporary languages that also doesn’t require a GC/refcounting everywhere. So it’s useful to compare how C++ does it.

Yet another issue with people who criticize Rust: they don’t want anyone to defend Rust, and complain loudly about anyone defending Rust as if that were a literal problem with the language. You can do better than that.



