But here's the thing: what you said and what the author said don't conflict with each other, and it has been on my mind for a while.
People who write similar code, or work on the same things for decades, usually don't really think through what "sketch out some code" looks like. They spend most of their time refactoring things that have clear use cases but not well-defined API boundaries, either within a component or between components. So ownership, nullability checks, and data-race checks all come very naturally from the start.
But there is another side of the world, where people are constantly sketching things out: creative arts, high-level game logic, data analysis, machine learning, etc. Put yourself in that position, and the syntactic noise is actively in the way of this type of programming. Ownership, even nullability checks, are not helpful if you just want to get partial code running and check whether it draws part of the graph. This is a world where Python excels, and people constantly complain about why a given piece of Python code doesn't have type annotations.
We may never be at peace between these two worlds, and this manifests itself somewhat in the "two-language problem". But that, to me, is what someone means by "development velocity is faster".
> Ownership, even nullability checks, are not helpful
Memory management does get in the way. But you are wrong about algebraic data types; they will help you sketch something.
Ideally, if you don't know what you want, you will want extendable¹ algebraic types, more like TypeScript than Rust, but what you call the "nullability check" is a benefit from the very beginning.
1 - Where you can say "here comes a record with these columns" instead of "here comes exactly this record". You can write the former in Rust, but it's easier to just define everything completely.
In your framing there's a sort of implicit downplaying of the frequency of exploratory work and an implicit promotion of stricter work.
> Something like using MATLAB for exploratory research is probably another decent example. Or maybe hackathon games. But for most games, data analysis, machine learning etc. then being stricter pays for itself almost immediately.
(Emphasis mine)
This is where the viewpoints differ. Some people spend a lot more time on the exploratory aspect of coding. Others prefer seeing a program or a system to completion. It largely depends on what you work on and where your preferences lie.
Years ago I wrote a script that grabs a bunch of stuff from the HN API, does some aggregation and processing, and makes a visualization out of them. I wrote it because the idea hit me on a whim while intoxicated, and I wrote the whole thing while intoxicated. The script works and I still use it frequently. I haven't made any changes to it because it just does what it needs to. It has no types. It's written decently because I've been coding for a long time but I was intoxicated when I wrote it. The important thing is it's still providing value.
There's a surprising amount of automation and glue code that doesn't need the correctness of a type system. I've written lots of stuff like this over the years that I use weekly, sometimes daily, and that I've never had to revisit because it just works. I suspect it's a matter of personal preference how much time a person spends on that kind of work vs building out large, correct systems. I suspect there's a long tail of quality-of-life tooling that is simple and exploratory in nature, much like large, strict systems are much bigger than most people expect at first blush because of how many cases they handle.
I think trying to say that one is more common than the other, without anything approaching the rigor of at least a computing survey, is really just using your gut to make generalizations. Which is what the strict-vs-loose typing online debates really are: a popularity contest of what kind of software people like to write, given the forum the question is being discussed on.
> rust demands that I cross every last t before I can run it at all. which is great if you already have a crystal notion of what you are building
Maybe I'm a weirdo, but I don't find this to be the case for me.
When I'm knocking things together in Rust I use a ton of unwrap() and todo!() and panic!() so I can figure out what I'm really doing and what shape it needs to have.
And then when I have a design solidified, I can easily go in and finish the todo!() code, remove the panic!() and unwrap() and use proper error types, etc.
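For instance, a first pass might look like this (an illustrative, made-up sketch; `parse_pair` and `persist` aren't from any real codebase):

```rust
// First pass: unwrap() everywhere while the shape is still in flux.
fn parse_pair(line: &str) -> (String, i64) {
    let mut parts = line.splitn(2, '=');
    let key = parts.next().unwrap().trim().to_string(); // fine while sketching
    let value: i64 = parts.next().unwrap().trim().parse().unwrap();
    (key, value)
}

// Not decided yet; todo!() lets the rest of the program compile and run.
#[allow(dead_code)]
fn persist(_pair: &(String, i64)) {
    todo!("pick a storage format once the shape settles")
}

fn main() {
    let (k, v) = parse_pair("retries = 3");
    assert_eq!((k.as_str(), v), ("retries", 3));
}
```

A later pass turns `parse_pair` into `fn parse_pair(&str) -> Result<(String, i64), ParseError>` and fills in `persist`, with no change to the overall design.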
> rust demands that I cross every last t before I can run it at all.
It's worse than that IMO. Rust makes it very awkward/impractical to have cyclic data structures, which are necessary to write a lot of useful programs. The Rust fans will quickly jump in and tell you that if you need cycles, your program is wrong and you're just not a good enough programmer, but maybe it's just that the Rust borrow checker is too limited and primitive, and it really just gets in the way sometimes.
Some of the restrictions of the Rust borrow checker and type system are arbitrary. They're there because Rust currently can't do better. They're not gospel; they aren't necessarily inherent properties that must always be satisfied for a program to be bug-free. The Rust notion of safety is not an absolute. It's a compromise, and a really annoying, tiresome drain on motivation and productivity sometimes.
The basic model of Rust is to move use-after-free from a dynamic, runtime check to a static, compile-time check. But to keep the static checks from being Turing-complete, you need to prohibit arbitrary cycles while something like a tree (or other boundable recursion) is doable. So Rust not being able to check cyclic data structures isn't a "Rust currently can't do better" situation, it's a "Rust just can't do better" situation.
Rust's intended solution is that you add data structures that do the dynamic checking for you in those cases. But the Rust library doesn't provide anything here that's useful (RefCell is the closest alternative, and that's pretty close to a this-is-never-what-you-want datatype), which means your options are either to use integers, roll your own with unsafe, or try hard to rewrite your code to not use cycles (which is usually a euphemism for using integers anyway). The problem here, I think, is that there is a missing data-structure helper that can sit between integers and references, namely something akin to handles (with a corresponding allocator that allows concurrent creation/deletion of elements).
> missing data structure helper

Didn't you already just name-check that though, since that's basically RefCell... or, if you're willing to roll the dice, UnsafeCell (aka "trust me, I know what I'm doing")?
&'a RefCell<T> is pretty close to a definition of Handle<'a, T>, except that Rust provides no implementations of allocate and deallocate that take a const instead of a mut reference for self. Trying to make an allocator that lets you safely deallocate something requires a completely different implementation of Handle<'a, T> than what RefCell can provide, and even if you're fine without deallocation, allocation with a const ref still requires unsafe to get the lifetime parameter right.
I'm not in the habit of regularly following new Rust RFCs, so I'd have no way of knowing about something made just last week. :-) But I'm taking a look now.
I don't tend to follow them either, but I've been frustrated by the lack of progress on allocator_api, and I came across this yesterday after looking into that. I only mention it because the Handle stuff in there looked tangentially related, though it's talking about something quite a bit different than you were.
There are several different ways you can implement a Handle, depending on what features you want; the most important part of its implementation is that `fn is_valid(handle: Handle) -> bool` is possible. The simplest implementation is a (pointer, generation) pair, which can be packed into a u64 pretty easily even for 64-bit systems; every allocation and deallocation increments the generation counter in the allocator, and is_valid is thus implemented by checking if the allocator's generation matches the claimed generation for a Handle. This kind of Handle is effectively a Copy implementation (not merely Clone!).
Effectively, handles are like weak pointers in that you can detect when the underlying object has been freed, but unlike weak pointers, there's no need for a reference counter to know when to deallocate the object--the object is freed when the allocator itself dies, or it can manually be freed earlier. It is possible to write code that will attempt to use the freed object, and the compiler will be happy, but the runtime will detect that it has been freed and panic instead. (RefCell does something similar, except it only detects violations of the multiple-readers-xor-one-writer requirement, not overall lifetime.) You can also add other wrappers around Handles to automatically free those Handles on scope exit, but the point is you can now have multiple references to an object that can be upgraded to a mutable reference if you desire.
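A minimal sketch of that design (toy code: `Arena`, `Handle`, and `is_valid` are illustrative names, not a real crate API, and it uses a Vec index rather than a packed pointer, so no unsafe is needed):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Handle {
    index: u32,
    generation: u32,
}

struct Slot<T> {
    generation: u32,
    value: Option<T>,
}

struct Arena<T> {
    slots: Vec<Slot<T>>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Handle {
        // For brevity this always appends; a real allocator would reuse freed slots.
        let index = self.slots.len() as u32;
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Handle { index, generation: 0 }
    }

    fn remove(&mut self, h: Handle) {
        if let Some(slot) = self.slots.get_mut(h.index as usize) {
            if slot.generation == h.generation {
                slot.value = None;
                slot.generation += 1; // invalidate all outstanding handles to this slot
            }
        }
    }

    fn is_valid(&self, h: Handle) -> bool {
        self.slots
            .get(h.index as usize)
            .map_or(false, |s| s.generation == h.generation && s.value.is_some())
    }

    fn get(&self, h: Handle) -> Option<&T> {
        self.slots
            .get(h.index as usize)
            .filter(|s| s.generation == h.generation)
            .and_then(|s| s.value.as_ref())
    }
}

fn main() {
    let mut arena = Arena::new();
    let h = arena.insert("node");
    assert!(arena.is_valid(h));
    arena.remove(h);
    assert!(!arena.is_valid(h)); // use-after-free is caught at runtime, not compile time
    assert_eq!(arena.get(h), None);
}
```

`remove` bumps the slot's generation, so every outstanding Handle to that slot fails `is_valid` from then on; slot reuse and packing into a u64 are left out for brevity.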
> The Rust fans will quickly jump in and tell you that if you need cycles, your program is wrong and you're just not a good enough programmer
I have absolutely never heard a Rust fan say this. AFAICT the fact that cyclic data structures are hard to write is widely accepted within the community as one of the negative tradeoffs of the language.
If you’re talking to people who claim that any language is better than all others in every possible way, for every possible use case, then they are zealots whose opinion can be ignored.
I would never tell you that you are wrong to have cyclic data structures. But there are reasonable workarounds like using handles into an array to do it, which of course re-creates some of the same problems as pointers, but not the worst ones, and is often a positive for performance on modern hardware due to improved data locality.
Or you can use reference counted types and take a small performance hit.
The limitations are an inherent consequence of basic tenets of Rust's design. Rust wouldn't be Rust anymore if you fixed them.
> Some of the restrictions of the Rust borrow checker and type system are arbitrary. They're there because Rust currently can't do better. They're not gospel; they aren't necessarily inherent properties that must always be satisfied for a program to be bug-free. The Rust notion of safety is not an absolute. It's a compromise, and a really annoying, tiresome drain on motivation and productivity sometimes.
Yeah, but this actually seems consistent with the philosophy behind Rust: take away the tools a programmer needs for creativity, so they can't make potentially costly mistakes, as applicable to big teams in huge corporations. Another commenter in this thread put it nicely: the borrow checker is a straitjacket for the programmer.
It's not meant to foster creativity, it's meant to be safe for big business and novice employees.
> It's not meant to foster creativity, it's meant to be safe for big business and novice employees.
Interestingly, my experience is the opposite.
I find that the "straitjacket" is extremely precious during refactorings – in particular, the type of refactorings that I perform constantly when I'm prototyping.
By comparison, I'm currently writing Python code, and every time I attempt a refactoring I waste considerable amounts of time before I can test the interesting new codepath, because I end up breaking hundreds of other codepaths that get in the way, and I need to go through the testsuite (and pray that it contains a sufficient number of tests) hundreds of times until the code is kinda stable.
Which is not to say that Rust matches every scenario. We agree that it doesn't, by design. But I don't think that the scenarios you sketch out are the best representation of what Rust can/should be used for and can't/shouldn't be used for.
Basically it should be left for scenarios where any kind of automatic memory management isn't allowed, either for technical reasons, or because it is a lost battle trying to change the mindset of the target group.
For everything else there are more productive options.
Cyclic data structures are implemented easily with unsafe, like non-cyclic ones (Vec, for example). The difficult part is making a safe API for them. These difficulties are not syntactic in nature but design difficulties: you need to think through your use cases for such a struct and devise an API that supports them.
This is more difficult than the C++ way of "just do it". With C++ you will solve the same problems, but on a case-by-case basis as they come into view. With Rust you need to solve these problems upfront, or do a lot of refactoring later. There are upsides and downsides to both approaches, but it is clear that Rust is not good for sketching out some code quickly to see how it will do.
It is still possible to do it quickly in Rust in a C++ style, by leaking unsafety everywhere and passing raw pointers, but I think it is still easier to do it in C++, which was designed for this style of coding.
This is definitely true, but I also don't know what a reasonable alternative is at this point for systems dev (aka places where a GC is a Bad Idea). I wouldn't unleash C or C++ onto a new project like that? I'd just feel icky. And Zig's type system IMHO isn't good enough, I'd really miss pattern matching for one.
I do think many people are using Rust in the Wrong Places(tm). It seems like torture to me to be applying it for general application development (though because I basically now "think" in it, I can see I myself would be tempted to do so).
And for things with complicated ownership graphs or nested interrelated data? It's just... no. Dear god, Iterator in Rust is an ownership and type-traits nightmare, let alone anything more complicated.
So I think people should just use a hybrid approach and keep Rust where it belongs down in the guts and use something higher level and garbage collected higher up.
Here's another thing about Rust that's driving me batty: it is nominally positioned as a "systems" programming language, but key things that would make it more useful there are being neglected, while things that I would consider webdev/server programming aspects are being highly emphasized.
Examples I would give that have driven me nuts recently: allocator_api / pluggable per-object allocators ... stuck in nightly since 2016(!). A full set of SIMD intrinsics, and broader SIMD support generally ... also stuck. generic_const_exprs ... still not there.
Meanwhile, async this and async that and things more useful to the microservice crowd proliferate and prosper.
Async is badly needed in systems programming, more so than at the application level: handling events in embedded/low level components is incredibly tedious without it.
I think I agree with most of what you write, but note that async has lots of applications beyond microservices. In particular, writing anything that uses the network (e.g. a web browser), which definitely feels system-y to me.
This is the nice thing about TypeScript—you can type what you want. As you iterate you can ramp your type checking up or down. This is outside the realm of memory management, of course.
And new to JS/TS land is the separation of pure data structures from resources. Something a sibling commenter brought up.
Yea, different languages for different purposes. Rust is for finished products, not so much for experimentation. When you want to play or experiment you should use Lisp.
Your Lisp program will be entirely usable once you have experimented and found the right way to do it. Lisp compilers are really good, and they support gradual typing: you can write your program with no explicit type information, and then speed it up by adding type information in the hot spots. You can deploy that to production and it will serve you well.
At some point your Lisp program will be mature, you will have implemented most of the features you know you will need, and you will know that any new features you add in the future will not alter the architecture. Once you understand the problem and have established the best architecture for the program, you can consider rewriting it in Rust. Lisp's GC does have a run-time cost, and you can measure it to figure out how much money you will save by eliminating it. If you will save more money than the cost of the rewrite, then go for it. Otherwise you can go on to work on something more cost-effective.
Note that you might not need to rewrite the whole program; it might be more effective to rewrite the most performance-critical portion in Rust, and then call it from your existing Lisp program. This can give you the best of both worlds.
That is hardly a reason, given that Common Lisp also supports value types, and whole OSes were once upon a time written in Lisp variants whose main features landed in Common Lisp.