One of the things that made me so excited about Rust when I first used it was the capabilities of the Rustdoc system. The built-in examples that are also unit tests, the ease of just using Markdown... and now the linking is even simpler. It's one of my favorite things about the language, and I think it's why so many crates have such good documentation: it's easy to do (and it's tested and validated, so you know it's right!)
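For anyone who hasn't seen it, here's a minimal sketch of what that looks like (the crate name `demo` is made up): the doc comment is plain Markdown, the fenced example runs as a unit test under `cargo test`, and with the intra-doc links stabilized in this release you can link to other items by path.

```rust
/// Doubles a value.
///
/// With intra-doc links you can reference other items by path,
/// e.g. [`triple`], instead of hand-writing HTML URLs.
///
/// # Examples
///
/// ```
/// assert_eq!(demo::double(2), 4);
/// ```
pub fn double(x: u32) -> u32 {
    x * 2
}

/// Triples a value. See also [`double`].
pub fn triple(x: u32) -> u32 {
    x * 3
}

fn main() {
    // The fenced example above runs under `cargo test` as a doctest;
    // here we just exercise the functions directly.
    assert_eq!(double(2), 4);
    assert_eq!(triple(2), 6);
}
```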
Not directly related, but when I was learning programming back in high school (before the Internet), what made it easy was the built-in help in Turbo Pascal. You could press F1 over any function or keyword and get a detailed description and example of usage. Learning C later using the K&R book and Google was a huge downgrade. Even today I think language help built into the IDE should be basic functionality.
In vim if you press K it will bring up the man page for the word under your cursor. It was very useful when learning C, considering that most libc functions have helpful man pages.
And for the sake of completeness, it is not just for man pages either. vim simply calls 'keywordprg' with the word under the cursor, so you can use whatever you feel like. Open a browser tab on your favourite documentation, display an ERD, pop open a picture of a kitten, …
Many filetype plugins come with pre-configured 'keywordprg' settings, including some for non-programming filetypes like git, which will execute `git show <word>`. That's great if you're the type of person who often references commits in other commit messages.
I've seen that with Delphi. The unique factor was that the examples were not a basic call of the function but an actual real world practical sample that usually solved the problem one was looking for.
I was a Delphi programmer for a few years, professionally, when it first came out in the 90s. I worked with their developer products (and for Borland directly on Delphi/Kylix/C++Builder, eventually) until the mid-2000s. I've never seen the match of Borland's docs, before or since, particularly as integrated with the IDE's coding features.
There are a number of programming languages and APIs with excellent documentation, but theirs were above and beyond.
This is done quite nicely with clojuredocs being integrated into Cursive / IntelliJ.
I hover over a Clojure function in IntelliJ and I get a pop-up of the clojuredocs with description and example usages for that function.
It's great and I don't know why more IDEs don't do this. Why isn't VS linked to MSDN for C#/.NET, so I can get the information for that function/class/library straight in my IDE?
Bringing it back, this works great with Rust and VSCode and rust-analyzer: Any function in any crate shows its doc comment complete with markdown formatting when you hover over it.
Have you actually tried that? I just installed it and it's just a bunch of help files that link to the web. F1 continues to work the same way, bringing up websites, often useless ones like this one for a for loop:
I also like Rustdoc, using markdown and the new linking improvements. It's much better than learning yet another DSL.
However, there's one thing I find annoying, and that's the lack of a structure for parameters.
Following the `# Arguments:` convention is redundant (I get it's petty, but I'm annoyed every time I write it), but more than that it's error-prone and limiting.
Because argument names in docs are just a convention that isn't strictly enforced, nothing automatically checks that the naming is correct, and it limits the ability of tools like cbindgen and flapigen (love them both) to transform parameter-specific docs.
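For context, the convention being discussed looks roughly like this (the function and its names are hypothetical). Note that nothing ties the bullet names to the actual parameters — rename a parameter and the docs silently rot:

```rust
/// Computes the weighted sum of a sample.
///
/// # Arguments
///
/// * `values` - the raw samples
/// * `weights` - one weight per sample; must be the same length as `values`
///
/// Nothing checks that the bullet names above match the real parameter
/// names below — the `# Arguments` section is pure convention.
pub fn weighted_sum(values: &[f64], weights: &[f64]) -> f64 {
    values.iter().zip(weights).map(|(v, w)| v * w).sum()
}

fn main() {
    assert_eq!(weighted_sum(&[1.0, 2.0], &[0.5, 0.5]), 1.5);
}
```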
I've seen the argument (not sure if this factored into rustdoc's design or if it's just someone's opinion) that structured per-parameter docs tend toward useless boilerplate, e.g. "foo: a foo object," while a holistic description of the function itself is easier to make useful.
I don't know if it really factored into rustdoc's design since it was already built when I came to Rust, but when I was part of the team responsible for rustdoc, I did object to suggestions we do this on that basis.
On top of that, extending the underlying language (that is, Markdown) has to be done really carefully, so being conservative about how it's done matters. This was a huge part of figuring out the design of intra-doc links in the first place.
(All of this great work is being done by others, and I don't know what their opinion on it really is, so my opinion being -1 doesn't really matter these days.)
Agreed. I find per-parameter docs useful in dynamic languages such as Python, or to a lesser extent when the type system is weaker (as in C), but in Rust I seldom find them useful. They can be useful to express constraints not present in the type system; for instance, `fn matmul(a: Matrix<f64>, b: Matrix<f64>) -> Matrix<f64>` will benefit from documentation describing the constraints and guarantees on the number of rows and cols of each matrix, since this cannot be expressed in the type system (yet).
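A hedged sketch of that matmul point, with a minimal made-up `Matrix` type: since the type system can't carry the dimension constraint, it lives in the docs and a runtime assert.

```rust
// Hypothetical minimal stand-in for a real matrix type.
struct Matrix<T> {
    rows: usize,
    cols: usize,
    data: Vec<T>, // row-major
}

/// Multiplies `a` by `b`.
///
/// # Panics
///
/// Panics unless `a.cols == b.rows`; the result is `a.rows x b.cols`.
/// The type system can't express this constraint (yet), so it's
/// documented and checked at runtime.
fn matmul(a: &Matrix<f64>, b: &Matrix<f64>) -> Matrix<f64> {
    assert_eq!(a.cols, b.rows, "inner dimensions must match");
    let mut data = vec![0.0; a.rows * b.cols];
    for i in 0..a.rows {
        for j in 0..b.cols {
            for k in 0..a.cols {
                data[i * b.cols + j] += a.data[i * a.cols + k] * b.data[k * b.cols + j];
            }
        }
    }
    Matrix { rows: a.rows, cols: b.cols, data }
}

fn main() {
    let a = Matrix { rows: 1, cols: 2, data: vec![1.0, 2.0] };
    let b = Matrix { rows: 2, cols: 1, data: vec![3.0, 4.0] };
    let c = matmul(&a, &b);
    assert_eq!(c.data, vec![11.0]); // 1*3 + 2*4
}
```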
I also love that it is consistent between projects. The fact that I can just go to https://docs.rs/chrono for any public project and have a consistent interface for reading and navigating is huge.
Of course this is somewhat brittle and makes it harder for competing documentation generators to get started, but as a user, when rustdoc is really good it's a nice benefit.
Yeah, this is great. Julia has had this for a couple of years in the Documenter package, plus basic Markdown rendering in the REPL when you hit `?func`. Markdown plus an extension that understands language-specific cross-references is powerful! [1]
I've always found crate documentation to be the worst thing about Rust. Because it auto-generates some documentation, people just assume that's good enough, and you end up with tons of crates that seemingly only have a list of functions and structs and what arguments they take, but very little information about how you're supposed to plug everything together.
At least you get that. In the (untyped) Python ecosystem, you're lucky to get "this parameter is a file-like object" even though "file-like" doesn't tell you if it just supports read() and write() or also seek() or close() or truncate(). You have to dig into the source code, which likely just passes the parameter into another function which passes it into another and then across a library boundary and so on. And again, that's the best-case scenario. Just having correct type information is 80% of the battle IMHO.
Really? I've found that in general about the same share of people bother to go deep into explaining as in, e.g., JS, and always having rustdoc for the ones that don't is far better than reading the source or TypeScript definitions.
Both are true, in my experience. Lots of good documentation for some crates, while others (usually with fewer maintainers or less intention of reuse) just have the autogenerated index.
Go is the same. Everyone, including the stdlib maintainers, seems to think a few lines of comments per method is the same as documentation on how to use the package, best practices, pitfalls, etc.
This is the part that blew my mind. For simple functions ("how do I read from a buffer again?"), the code example is more useful than any number of paragraphs of description, especially because its correctness is enforced by the compiler. Switching to learning Node after Rust has been a major step backwards and involves a lot more time on Stack Overflow.
(Unrelated gripe: if I could ban w3schools from my search results, that would be great.)
If you can’t tell a story with text, you won’t be able to tell it with code either. The thing that makes people want better documentation is the same thing that makes it difficult to get.
I remember years ago using Ant as a build tool and the only way I figured it out was to Googlestalk the author. Given enough answers to questions in enough places I finally developed a theory of the system. It was still supremely weird to convince it to do certain things but at least I was able to.
You’re not telling a story to a computer. You’re telling it to your coworkers. The people whose code is most frustrating to work with share your philosophy. They are often so convinced they’re right that they can’t even hear constructive criticism. That’s not my opinion; that’s the consensus view, shared over lunches and coffees with their coworkers, across many jobs.
Is it possible to link your local core and library docs yet?
I have my dependencies documented locally, I have the standard library documented locally, both of these work well with the ability to do searches just like the online docs. The problem is they're separate. The local docs for one of my crates cannot link to my local standard library docs; instead, I have to jump around different browser tabs and manually look things up.
There used to be some hacks that could work around this, but those hacks stopped working.
Interesting question. Does `RUSTDOCFLAGS="--extern-html-root-url std=file:///path/to/std/docs" cargo doc` work? You can repeat the flag for std, core, alloc, proc-macro, etc.
This is a great feature, but it is also present in many languages. For example, doctest[1] is included with Python, and allows for tests in docstrings.
I mean doctests are not exactly new. The `doctest` module was added to Python in 2.1, back in 2001. And even that is largely just a weak shade of semi-literate programming, to say nothing of actual literate programming.
This is one of the things I really love about Rust, and also one of the things that's most challenging for its adoption. Not a ton of stuff in Rust is new, but it does present a new mix of old things, and often, those things are new to many people, even if they're actually old.
Rust took the things considered best practice and provided them to users by default, or checked for them by default. This includes simple things like explicit names for types, e.g. u8 instead of char (you're going to assume the size of char anyway, let's be honest... most code won't work on platforms with a 16-bit char). But it also includes things like unified naming conventions and a good package manager. C++ has multiple package managers, as does JS, but Rust had a great one from the start, and so far there doesn't seem to be any need for alternatives, because nobody had to come up with them. So no balkanization of standards and none of the costs for users that this entails. Maybe with age this will change, but for now Rust is doing quite well.
I had to suffer through coworkers who would write doctests: bad docs and bad tests, together at last!
The solution here is to write real tests and hyperlink them in a marked up form to the functions they actually test. Coding inside of a doc string is an epic troll.
This is a warning to anyone getting seduced by doctests: stay away! They invert the problem. When one should just write tests, it's a problem with documentation and code-discovery tools that makes this seem like a good idea.
I think there's another perspective on doctests, that seems more convincing to me: doctest functionality allows testing for the documentation, not a way of writing tests in documentation.
In particular, it's great if the documentation includes clear and simple examples for learning from, and it's even better if these are validated as working. This means that the focus of the code in documentation is typically different to a test (it doesn't need to include such precise validation or look at all the edge cases or regressions), but it's still really useful to have them automatically run.
I'd guess their coworkers decided to write all tests as doctests, rather than use doctests to ensure examples run fine?
Python's doctests are also formatted, and in some ways behave, like an interactive shell session, with

    >>> code here
    output here

rather than just "literate code", so the execution context is a bit strange. This makes complicated doctests hard to inspect and debug. Doubly so because there's almost no tooling that understands doctests.
And of course on the flip side the best tests make for absolutely terrible examples since they try to exercise weird corner cases.
I think the comparison is between a test written inside a comment (where smart IDE features don't apply) and `#[test]` in normal code, where you have a rich editing environment and instant feedback (including `cargo build` failing immediately on errors).
Perhaps it's just my perception, but doctests (in comments) seem slower to run.
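To make the contrast concrete, a small sketch of the two styles side by side (the crate name `demo` is hypothetical):

```rust
/// Adds one to `x`.
///
/// A doctest lives inside a comment, so most editor features
/// (completion, go-to-definition, instant diagnostics) don't
/// fully apply while you write it:
///
/// ```
/// assert_eq!(demo::add_one(1), 2);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}

// A regular #[test] is ordinary code: full editor support, and
// `cargo build` surfaces errors in it immediately.
#[test]
fn add_one_works() {
    assert_eq!(add_one(1), 2);
}

fn main() {
    assert_eq!(add_one(41), 42);
}
```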
Ah; rust-analyzer syntax highlights doc tests (though not perfectly), but the higher order bit of "IDE features don't work as well" makes tons of sense, thanks!
This is also part of my complaint. At least in Python, doctests limit the ability for the IDE to be effective.
They hinder the writer as well as the consumer, since they make it harder to write effective tests. What I am saying is, that all tests should be first class wrt the documentation.
I had no idea that Python had a library for this, nice! Skimming that doc, it's especially impressive how well it looks to handle exception tracebacks.
Indeed. I can't wait for const generics to become available outside of std; I've been bumping into that limitation since pre-Rust-1.0 days, and it'll be amazing to finally be able to rewrite all that hacky code correctly.
you can't pass it a slice. This would let you pass a vector to this function.
Now, because this wasn't possible previously, many things take a slice and then check that the length is the length they expect, because that ends up being easier for folks. Stuff like this helps it be done properly.
(This specific function is one example that is done in this style, and it's a pain. I have real-world code that looks like

    let foo = u16::from_be_bytes([some_slice[0], some_slice[1]]);

This gets even worse with, say, u128::from_be_bytes.)
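For what it's worth, the array `TryFrom` impls let you convert a slice prefix into a fixed-size array, which cleans this pattern up a bit. A sketch (assuming the slice is long enough; `read_u16` is a made-up helper name):

```rust
use std::convert::TryInto;

fn read_u16(some_slice: &[u8]) -> u16 {
    // Convert the first two bytes into a fixed-size array instead of
    // indexing each element by hand; panics if the slice is too short.
    let bytes: [u8; 2] = some_slice[..2].try_into().expect("slice too short");
    u16::from_be_bytes(bytes)
}

fn main() {
    assert_eq!(read_u16(&[0x12, 0x34, 0x56]), 0x1234);
}
```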
steveklabnik is (of course) correct, but you also have to consider the performance cost of a dynamically sized slice versus a statically sized array.
A situation I've encountered several times already is implementing statically sized FIFOs. At the moment in Rust I can't implement a type "FIFO of depth N" where N is a generic, static parameter. My only choices are implementing "FIFO of depth n" where n is provided dynamically at runtime (and implemented internally using something like a VecDeque) or a completely fixed-depth FIFO type that I need to duplicate for every depth (FIFO32, FIFO16, FIFO10 etc...).
If you require very high performance a dynamically checked FIFO can incur a fairly large overhead when a well optimized static FIFO can implement most operations in a couple of opcodes at most.
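A minimal sketch of what such a statically sized FIFO could look like once const generics are usable (illustrative, not optimized; this particular version uses `std::array::from_fn`, which needs a newer toolchain than the release being discussed):

```rust
// A ring-buffer FIFO whose depth N is a compile-time parameter.
struct Fifo<T, const N: usize> {
    buf: [Option<T>; N],
    head: usize, // index of the next element to pop
    len: usize,
}

impl<T, const N: usize> Fifo<T, N> {
    fn new() -> Self {
        Fifo { buf: std::array::from_fn(|_| None), head: 0, len: 0 }
    }

    /// Returns the value back if the FIFO is full.
    fn push(&mut self, x: T) -> Result<(), T> {
        if self.len == N {
            return Err(x);
        }
        let tail = (self.head + self.len) % N;
        self.buf[tail] = Some(x);
        self.len += 1;
        Ok(())
    }

    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        let x = self.buf[self.head].take();
        self.head = (self.head + 1) % N;
        self.len -= 1;
        x
    }
}

fn main() {
    // One type, any depth: no FIFO32/FIFO16/FIFO10 duplication.
    let mut q: Fifo<u32, 4> = Fifo::new();
    assert!(q.push(1).is_ok());
    assert!(q.push(2).is_ok());
    assert_eq!(q.pop(), Some(1));
    assert_eq!(q.pop(), Some(2));
    assert_eq!(q.pop(), None);
}
```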
and then back it with an array. I learned this trick from whitequark.
(Though maybe if you want something other than a slice of bytes, this gets harder... I've used this trick for ringbuffers/mmio only, personally, so YMMV.)
That being said real const generics will make this way nicer, eventually.
I wasn't aware of this trick, thank you for that. I guess if I have to implement another FIFO before const generics land I'll have an opportunity to try it out...
I don’t think so - you can only use static functions to declare static variables. But you could do it with a macro. It’s debatable whether that would be readable and predictable, though.
On the one hand it's not hugely thrilling for the headline features of a new release to be improvements to doc tooling and a stabilized trait impl but on the other hand it's good to see the language settling down and maturing.
The 1.49 release will have a new tier 1 target (aarch64-unknown-linux-gnu) as well as apple silicon as a tier 2 target. The 1.50 release will have min const generics as well as stable backtraces.
As the releases are every 6 weeks, an individual one might seem small. But over time they add up.
Note though that I do consider the rustdoc improvements to be major. Previously I wasn't bothering with directly linking to referenced items because you had to figure out html names. Now it's very easy and I plan to write more links.
These may be major changes for devs, but for beginners who want to learn Rust without it changing under them all the time, these are not major changes anymore. Which is a great thing!
There have been releases where ARM (maybe I was using armv7 rather than aarch64 then, but I'm on aarch64 now) was totally broken, and now I know that won't happen on 1.49 or beyond.
Min const generics...I'm not sure I'll find much use for it until const_evaluatable_checked happens, but I'm glad to see progress.
Stable backtraces will mean I can stop using the deprecated failure crate without giving up my quality diagnostics.
There are big changes under the hood. And those regularly make HN front page. Like the recent Cranelift codegen backend to help with the coding-compiling cycle time.
Similarly, there are regularly ~350 PRs merged into Rust each week. (The libification and chalkification are ongoing: Chalk is the next-gen solver for the type/trait system, and at the same time the compiler is being refactored to be more like a usable library, so rust-analyzer can use it to provide more immediate, incremental feedback during development.)
I wish they had used a different word than `const` for "not necessarily constant, but it can be called at compile time". This is bound to confuse newcomers. Note this is unrelated to this release, but I just realized how confusing it could be.
Would you consider this function pure? I don't think many would. Also, it may be pure given Rust's semantics, but it kind of goes against the intuitive, usual way people talk about purity, and that makes it hard.
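To illustrate the naming complaint: `const fn` means "callable in const contexts", not "always constant". The same function can be evaluated at compile time or called like any ordinary function at runtime:

```rust
// `const fn`: *may* run at compile time, but is not restricted to it.
const fn square(x: u32) -> u32 {
    x * x
}

// Forced compile-time evaluation: the initializer of a `const` item.
const SIXTEEN: u32 = square(4);

fn main() {
    // Ordinary runtime call with a value the compiler can't know ahead of time.
    let n = std::env::args().count() as u32;
    let _at_runtime = square(n);
    assert_eq!(SIXTEEN, 16);
}
```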
A few months ago I tried Rust but found that tooling like auto-completion and highlighting was still a bit alpha. IIRC I used VSCode with the most popular Rust plugin at the time.
Did I miss something or is there some progress being made in that respect?
Sounds a bit strange. I'm using rust-analyzer in combination with TabNine (which both have integrations in every major editor), and it's among the best completion I've encountered across languages (though there are some limitations when it comes to proc-macros).
Maybe it was still downloading some required binaries in the background when you were trying it out?
I don't want this to sound like a criticism, but I would like to understand your point of view: do you really consider this a reason for choosing a language over any other?
Even if you had to use Notepad to program, or had very basic syntax highlighting, wouldn't what the language provides be a more compelling basis for these choices?
Absolutely. I don't think developer ergonomics should take priority over absolutely everything (the modern web arena shows why this is bad) but there are some great technologies that have lost out or are relegated to incredibly niche use cases because their ergonomics are bad to the point where you just can't afford to deal with them unless you really have to.
Ada/SPARK/Ravenscar are a perfect example -- they provide incredibly powerful tools for proven correct programming. They are open source. The ergonomics are nowhere near what you're probably used to having, and that's why odds are very high you've never used them.
I'm currently working my way into Rust, and yes, it is certainly a mighty inconvenience. Probably because of my setup, I can't tell yet.
In Java with a good IDE, you can ^Space yourself through a lot of problems without knowing the libraries too well. And even if you don't just pick the first thing that sounds right, scrolling through the documentation of the auto-suggested methods is very convenient.
On the other hand, my dev-laptop currently has about 20 different rust-doc pages open at the moment to keep track of ... the methods Iter<T> has, the methods IterTools has, Vec has, Slices have, what package FromStr was in again, what methods a str::fmt::Formatter has to implement Display to implement Error (that needs to be imported), where HashMap is, what methods those have...
I've been there multiple times over the last decade or two with multiple languages and rust is compelling enough to work through that.
But if you compare the ergonomics of modern Java, Go, or Python IDEs with my current vscode+rust-analyzer state, Rust isn't winning on IDE ergonomics. At least in my setup.
Since I got a bit engaged there, this has led to my Rust lesson of the day. Or the first one: there is the "rust" extension for vscode. This extension can be configured to use rust-analyzer as its backend, and I figured that was what people meant by "use rust-analyzer".
However, there is an extension called "rust-analyzer" and from a few hours of writing rust, it's so much better. Inline type annotations, good type ahead and completion.
It's certainly an interesting position, from my perspective, as I (and probably OP) have worked our entire professional lives without a syntax checker or any other form of IDE other than simple things like autoindent.
Yes. I too have been pretty anti-IDE for most of my history. But I think a lot of it comes down to tools and culture; Ruby (which I did for the first half of my professional life) is extremely hard to build good tooling for. So the tools are bad, so they aren't useful. So you don't use them.
Contrast with a language like Java, which is very amenable to a lot of tooling, and kinda sorta needs it to make the language more usable. If that's your background, you can't imagine not having that tooling, because, well, you're used to it.
I will say that I have been playing around with using more IDE-like things, and even though it's culturally pretty foreign to me, I understand the appeal. And as I'm getting more used to it, I get annoyed when it's not there, and then remember how I used to pooh pooh people who required all those fancy features. So... I get it. These days at least.
The gap between classic IDEs like Xcode, Visual Studio, or JetBrains' IDEs and text editors is getting smaller each day.
I'm not keen on the classic IDE experience either. I like VSCode because it's a simple, lightweight text editor which helps me with small tasks like auto-imports.
I feel like these things are especially helpful when you’re a beginner and just started learning a new language.
Anyways, thanks for the hard work Steve and I’ll give it a second try with RA soon!
> I don't want this to sound like a criticism, but I would like to understand your point of view: do you really consider this a reason for choosing a language over any other?
Absolutely yes.
I no longer consider a programming language finished unless it has at least tab-complete and proper debugging support. Inline popup doc-comments are nearly mandatory too.
Not designing a new language for modern tooling is a cardinal sin. It's giving up decades of progress for... what exactly? Some sort of ascetic purity?
One disappointing design aspect of Rust specifically is that it has a tendency to "hide" functions unless explicitly imported. This makes things difficult for IDE tab-completion. It gives the false impression that some functions are missing, when in fact they were simply not imported into scope.
The language seems to be designed for developers that know ahead of time what they will or will not use, to the finest detail, before they even start typing code.
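A concrete instance of that "hiding" is trait methods, which are only callable once the trait itself is imported; a small sketch with made-up names:

```rust
mod shapes {
    pub trait Area {
        fn area(&self) -> f64;
    }

    pub struct Square(pub f64);

    impl Area for Square {
        fn area(&self) -> f64 {
            self.0 * self.0
        }
    }
}

fn main() {
    // Importing only `Square` would make `sq.area()` a compile error
    // ("method not found"); bringing `Area` into scope reveals it,
    // which is exactly what trips up IDE tab-completion.
    use shapes::{Area, Square};
    let sq = Square(3.0);
    assert_eq!(sq.area(), 9.0);
}
```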
I've only ever met one programmer that could do that: start at the top of the file and type continuously, left-to-right and top-to-bottom without editing, without pause, without having to go back and fill in anything he had missed. Let's just say he was a "special sort" and leave it at that.
The rest of us are much more productive with IDEs and debuggers.
Once you get to larger programs, having nice tooling for things like language-aware autocomplete, language-aware refactoring, etc. is an absolute must. It reduces the amount of time spent on manual labour, allowing you to actually think about the problem at hand.
As others have said, I'd be curious what plugin you use. The state of things is pretty good. I'm using the IntelliJ Rust plugin and love it. It also provides pretty good inline annotation of inferred types, which I find to be a big time saver.