That is to come! Once I get it working the way I want it haha.
Wifi is tricky and I seem to have lost the code that made that video work and I reflashed with old code that doesn't work :(
I've seen a Massachusetts public library with an associated hacker space (sewing machines; 3D printers), and another with a "tool lending library" (including a battery drill, IIRC?). And have read of European libraries with a "be a general community space" vibe. So that seems plausible at least.
It only takes one student to lose an arm, face, life and their parents will sue the school into oblivion and start PAPT (parents against power tools) or something "for the kids!"
I'm on your side in that I'd rather see the tools at the school. At the same time, I took an auto-shop class in high school and the majority of students in the class were dipshits and were lucky not to get more hurt. The teacher managed to stop them just in time from trying to turn over an engine on a rack that would have crushed them if he had been five seconds later.
We actually did have an unprecedented accident a couple years back and as a result some new safety requirements were put in place. But no tools or access to tools were removed.
I wonder if the culture here is different than the U.S.? Dipshit students are removed from these classes before they can get anywhere near tools. It’s definitely a privilege to be in a shop class.
I don't want to generalize it but as soon as you said "if the culture here is different than the U.S" I thought "Okay yep that's why."
I don't even know if it counts as culture, but the US inevitably seems to have such a refined sense of litigation, and an insurance mindset for lack of a better term.
Anecdotally, schools are indeed a prime example. We had cooking classes in high school that were stopped because of the cost of insuring them.
> I wonder if the culture here is different than the U.S.?
There's definitely going to be cultural differences between the US and wherever you're located, but be careful not to generalize too much from one comment you read on the Internet.
VSC is the least bad Electron app I’ve ever used, but (heavily subjective) it pales in comparison to Neovim + Tmux. It’s not even close.
Related: I was looking at WinRAR’s site last week after reminiscing about it with coworkers, and found that a. They haven’t really updated their UI since I last used it a decade+ ago b. The download is still 4 MB. THAT is why native is superior – if you know what you’re doing, you can get incredible performance with absurdly low filesizes.
Because I need the space for videos and games and that's why I have large storage. Not for tiny applications wasting 300 MBs because someone thought that an electron app would reduce engineering cost.
Aside from the fact that it shouldn’t take hundreds of MB to launch the simplest of apps, and that it’s incredibly wasteful on its face? Memory. Where do you think those bytes end up when you launch it?
I use VSC because some mature plugins only exist for VSCode (Rust, for example), and because Microsoft pushes it for things like PowerShell, having killed their ISE IDE.
And it only performs that well, because all critical code is written in a mix of C++ and Rust running in external processes, and they have ported text rendering into WebGL.
It's an interesting example, because the fact that it is js makes it trivial for most developers to make modifications to it by opening the Chrome DevTools. Even if you're not a js dev, you probably occasionally write some js.
I'm arguing that it's successful because any of its users can trivially hack something on top of it and distribute it, including things the original devs never intended or think is a good idea. In that way, its success mirrors Excel.
> From a documentation, examples, accessibility, tooling, and number of people you can get help from, JS wins
Maybe as a general-purpose language, but for this specific comparison (extending editors), Elisp and Emacs win. Vimscript is not the best plugin language, but the iteration process is way better than VSC's.
Google Maps allows you to declare your allegiance. You can mark a business as LGBTQ+ friendly (why should I have to declare that, and it not just be assumed?).
So, I'm very visibly queer and from the south. I have always been appreciative of gestures like this or in the parent comment - because it is not a safe thing there to assume that people would be accepting.
> why should I have to declare that and it not just be assumed?
That’s an easy question with an easy answer.
Because it can’t be assumed. Because there are people (who own businesses) who are not friendly to LGBTQ+ people. And people (such as LGBTQ people) may want to find or avoid certain places.
Is a good-faith interpretation of such a signal that it would be some sort of silly performative measure?
Thank you for posting this. For those who didn't click through, it's an article headlined "Wyoming bar calls for murder of gay people as "cure for AIDS" and are selling it as merch", and it shows a picture of the bar's horrifically bigoted t-shirt.
People who dismiss labels like "LGBTQ-friendly" as "performative moralism" (to use the term Paul Graham used multiple times in his article) have clearly never had their very existence threatened on a frequent basis simply because of who they are.
MUJI used to have lots of that (20-25yrs ago). Shelves made from cardboard tubes, etc... You could tell, one bump and it would be destroyed. I think they got rid of most of them.
> #1 Understand the system: Read the manual, read everything in depth, know the fundamentals, know the road map, understand your tools, and look up the details.
Maybe I'm misunderstanding, but "Read the manual, read everything in depth" sounds like: oh, I have a bug in my code; first read the entire manual of the library I'm using, all 700 pages, then read 7 books on the library's details, and now that a month or two has passed, go look at the bug.
I'd be curious if there's a single programmer that follows this advice.
This was written in 2004, the year of Google's IPO. Atwood and Spolsky didn't found Stack Overflow until 2008. [0] People knew their references by name, like the "Camel book" [1], and generally just knew things.
I think we have a bit different interpretations here.
> read everything in depth
Is not necessarily
> first read the entire manual of the library I'm using, all 700 pages
If I have a problem with "git bisect", I can either go to Stack Overflow, try several snippets, and see what sticks, or I can also go to https://git-scm.com/docs/git-bisect to get a bit deeper knowledge on the topic.
Essentially yes, that's correct. Your mistake is thinking that the outcome of those months of work is being able to kinda-probably fix one single bug. No: the point of all that effort is to truly fix all the bugs of that kind (or as close to "all" as is feasible), and to stop writing them in the first place.
The alternative is paradropping into an unknown system with a weird bug, messing randomly with things you don't understand until the tests turn green, and then submitting a PR and hoping you didn't just make everything even worse. It's never really knowing whether your system actually works or not. While I understand that is sometimes how it goes, doing that regularly is my nightmare.
P.S. if the manual of a library you're using is 700 pages, you're probably using the wrong library.
- a very peculiar way of writing, with such an impressive amount of unnecessary description of minute detail that the reader starts counting not only sheep but also breaths before reaching the end of a sentence (i.e. unnecessarily verbose)
- very extensive docs describing things from various angles including references, topic based how tos and such.
(I agree that the last one is the least likely, but there is always hope)
I mean, he has a point. Things are incredibly complex nowadays; I don't think most people have time to "understand the system."
I would be much more interested in rules that don't start with that... Like "Rules for debugging when you don't have the capacity to fully understand every part of the system."
Bisecting is a great example here. If you are bisecting, by definition you don't fully understand the system (or you would know which change caused the problem!).
I've been on the internet as long as you and from my POV it changed a ton. The biggest changes to me are
(1) social media via smartphones - letting everyone trivially post to everyone else on the planet.
This used to be a nerd activity (blogs) and the audience was other nerds. First social media sites, then the smartphone completely changed this.
(2) follows from 1, influencer culture, by which I mean, Instagram, TikTok, X, Youtube all incentivize people performing to try to get as many viewers/followers as possible. Thinking back to the 70s/80s, even the top movie stars just got some fan mail. They didn't have 400-600 MILLION FOLLOWERS to whom they could say anything they wanted. A celebrity had at most a TV show with a crew and editors and a strong chance of getting fired/banned if they got too crazy. Now, any high school kid can have 100+ million followers.
It's not just people with followers, 20% of my youtube feed is people trying desperately to have something to talk about. Some news happens, thousands of people "report it" on their "channel". The scale of it is insane to me.
Thanks for the 1-hour video. Could you link to the timestamp of the strongest argument(s) you see in the video that are relevant in the current discussion (i.e. the existing error models we're talking about in Rust and C++, rather than a hypothetical future one)?
Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc. In a discussion like this, those two are probably the worst examples of exceptions. They're the most severe exceptions, the ones the fewest people care to actually catch, and the ones that error codes are possibly the worst at handling anyway. (Do you really want an error returned from push_back?) The most common stuff is I/O errors, permission errors, format errors, etc., which aren't well represented by resource exhaustion at all, much less memory exhaustion.
P.S. W.r.t. "the top C++ gurus/leaders" - Herb is certainly talented, but I should note that the folks who wrote Google's style guide are... not amateurs. They have been involved in the language development and standardization process too. And they're just as well aware of the benefits and footguns as anyone.
The general problem cited with exceptions is that they're un-obvious control flow. The impact it has is clearer in Rust, because of the higher bar it sets for safety/correctness.
As a specific example, and this is something that's been a problem in the std lib before. When you code something that needs to maintain an invariant, e.g. a length field for an unsafe operation, that invariant has to be upheld on every path out of your function.
In the absence of exceptions, you just need to make sure your length is correct on returns from your function.
With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function, but it needs to deal with fixing up your invariant wherever the exception occurred (e.g. if the fix-up operation that needs to happen is different based on where in your function the exception occurred).
To avoid that you can wrap every call that can cause an exception so you can do the specific cleanup that needs to happen at that point in the function... But at that point what's the benefit of exceptions?
> With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function [...] To avoid that you can wrap every call [...]
That's the wrong way to handle this though. The correct way (in most cases) is with RAII. See scope guards (std::experimental::scope_exit, absl::Cleanup, etc.) if you need helpers. Those are not "way harder" to deal with, and whether the control flow out of the function is obvious or not is completely irrelevant to them -- in fact, that's kind of their point.
In fact, they're better than both exception handling and error codes in at least one respect: they actually put the cleanup code next to the setup code, making it harder for them to go out of sync.
None of those are easier than not needing to do it at all, though; if your function's exits are only where you specify, you can clean up only once on those paths.
> None of those are easier than not needing to do it at all, though; if your function's exits are only where you specify, you can clean up only once on those paths.
and unlike the others, it avoids repeating the same code three times.
(Ironically, I missed the manual cleanups before the final returns in the last two examples right as I posted this comment. Edited to fix now, but that itself should say something about which approach is actually more bug-prone...)
I can't parse this super well on mobile, but what invariant is this maintaining? I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.
The gnarliest scenario I recall was a ring-buffer implementation that relied on a field always being within the valid length, and a single code path not performing a mod operation, which was only observably a problem after a specific sequence of reserving, popping, and pushing.
EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?
> I can't parse this super well on mobile, but what invariant is this maintaining.
The stack length (and contents, too). It pushes, but ensures a pop occurs upon returning. So the stack looks the same before and after.
> I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.
That is exactly what the code is doing.
> EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?
Both. First it manipulates the stack (pushing onto it), then it does some stuff. Then before returning, it validates that the last element is still the one pushed, then pops that element, returning the stack to its original length & state.
> The gnarliest scenario I recall was a ring-buffer implementation that [...]
That sounds like the kind of thing scope guards would be good at.
Then I think the counter-example is where function calls that can't fail are interspersed. Those are the cases where with exceptions (outside checked exceptions) you have to assume they could fail, and in a language without exceptions you can rely on them not to fail, and skip adding any code to maintain the invariant between them.
E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.
> E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.
I have no idea what you mean here. Everything in the comment would be exactly the same even if stack.push_back() was guaranteed to succeed (maybe due to a prior stack.reserve()). And those calls aren't occurring in sequence, one is occurring upon entrance and the other upon exit. Perhaps you're confused what absl::Cleanup does? Or I'm not sure what you mean.
I think you're going to have to give a code example if/when you have the chance, to illustrate what you mean.
But also, even finding "a counterexample" where something else is better than exceptions just means you finally found a case where there's a different tool for a (different) job. Just like how me finding a counterexample where exceptions are better doesn't mean exceptions are always better. You simply can't extrapolate from that to exceptions being bad in general, which is kind of my whole point.
Apologies, I believe I meant if the foo/bar/baz calls couldn't fail. If there's no exceptions, you don't need the cleanup block, but in the presence of exceptions you have to assume they (and all calls) can fail.
The problem re. there being a counter-example to exceptions (as implemented in C++) is that they're not opt-in or out where it makes sense. At least as I understand it, there's no way for foo/bar/baz to guarantee to you that they can't throw an exception, so you can rely on it (e.g. in a way that if this changes, you get a compiler error such that something you were relying on has changed). noexcept just results in the process being terminated on exception right?
> I meant if the foo/bar/baz calls couldn't fail. If there's no exceptions, you don't need the cleanup block
First, I think you're making an incorrect assumption -- the assumption that "if (foo())" means "if foo() failed". That's not what it means at all. They could just as well be infallible functions doing things like:
    if (tasks.empty()) {
      printf("Nothing to do\n");
      return 1;
    }

or

    if (items.size() == 1) {
      return items[0];
    }
Second, even ignoring that, you'd still need the cleanup block! The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.
Finally, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.
What you're doing is simplifying code by making very strong and brittle -- not to mention unguaranteed in almost all cases -- assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code. In that context, putting them together seems "unnecessary", yeah. But point-in-time programming is not software engineering. The situation is radically different when you factor in what can go wrong during updates and maintenance.
> Moreover, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.
In a language without exceptions, I'm also assuming that a function conveys whether it can fail via its prototype; in Rust, changing a function from "returns nothing" to "returns a Result" will result in a warning that you're not handling it.
> What you're doing is simplifying code by making very strong assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code.
But this is where the burden of exceptions is most pronounced; if you code as if everything can fail, there's no "additional" burden, you're paying it all the time. The case you're missing is in the simpler side, where it's possible for something to not fail, and that if that changes, your compiler tells you.
It can even become quite a great boon, because infallibility is transitive; if every operation you do can't fail, you can't fail.
No. I've mentioned this multiple times but I feel like you're still missing what I'm saying about maintainability. (You didn't even reply to it at all.)
To be very clear, I was explaining why, even if you somehow have a guarantee here that absolutely nothing ever fails, this code:
The reason, as I explained above, is the following:
>> The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.
Fallibility is absolutely irrelevant to this point. It's about not splitting the source of truth into two separate spots in the code. This technique kills multiple birds at once, and handling errors better in the aforementioned cases is merely one of its benefits, but you should be doing it regardless.
Without infallibility, you need a separate cleanup scope for each call you make. With this, the change to the private variable is still next to the operation that changes it, you just don't need to manage another control flow at the same time.
EDIT: sorry, had the len's in the wrong spot before
> I do, but I'm still expecting things to be more complicated than that example.
They're not. I've done this all the time, in the vast majority of cases it's perfectly fine. It sounds like you might not have tried this in practice -- I would recommend giving it a shot before judging it, it's quite an improvement in quality of life once you're used to it.
But in any large codebase you're going to find occasional situations complicated enough to defeat whatever generic solution anyone made for you. In the worst case you'll legitimately need gotos or inline assembly. That's life; nobody says everything has a canned solution. You can't make sweeping arguments against entire coding patterns just because you can come up with edge cases.
> Without infallibility, you need a separate cleanup scope for each call you make.
So your goal here is to restore the length, and you're assuming everything is infallible (as inadvisable as that often is)? The solution is still pretty darn simple:
We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).
> We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).
Your parenthetical is kind of my point though. It's rare to need mid-function cleanups that somehow contradict the earlier ones (because logically this often doesn't make sense), and when that is legitimately necessary, those are also fairly trivial to handle in most cases.
I'm happy to just agree to disagree and avoid providing more examples for this so we can lay the discussion to rest, so I'll leave with this: try all of these techniques -- not necessarily at work, but at least on other projects -- for a while and try to get familiar with their limitations (as well as how you'd have to work around them, if/when you encounter them) before you judge which ones are better or worse. Everything I can see mentioned here, I've tried in C++ for a while. This includes the static enforcement of error handling that you mentioned Rust has. (You can get it in C++ too, see [1].) Every technique has its limitations, and I know of some for this, but overall it's pretty decent and kills a lot of birds with one stone, making it worth the occasional cost in those rare scenarios. I can even think of other (stronger!) counterarguments I find more compelling against exceptions than the ones I see cited here, but even then I don't think they warrant avoiding exceptions entirely.
If there's one thing I've learned, it's that (a) sweeping generalizations are wrong regardless of the direction they're pointed at, as they often are (this statement itself being an exception), and (b) there's always room for improvement nevertheless, and I look forward to better techniques coming along that are superior to all the ones we've discussed.
>Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc.
Those are specific scenarios that are a major issue, yes. But as the title of the video implies, the problem with exceptions runs far deeper. Imagine being a C++ library author who wants to support as many users as possible: you simply couldn't use exceptions even if you wanted to, even if most of your users are using exceptions. The end result is that projects that use exceptions have to deal with two different methods of error handling, i.e. they get the worst of both worlds (the binary footprint of exceptions, the overhead of constantly checking error codes, and the mental overhead of dealing with it all).
C++ exceptions are a genuinely useful language feature. But I wish the language and standard library weren't designed around exceptions. C++ has managed to displace C almost everywhere except embedded and/or kernel programming, and exceptions are a big reason it hasn't displaced C there.
> Imagine being a C++ library author who wants to support as many users as possible, you simply couldn't use exceptions even if you wanted to
I'm pretty sure that (much) less than 50% of the C++ code out there is "a C++ library that wants to support as many users as possible" -- I imagine most code is application code, not even C++ library code in the first place. It's perfectly fine to throw e.g. a "network connection was closed" or "failed to write to disk" exception and then catch it somewhere up the stack.
> The end result is that projects that use exceptions have to deal with two different methods of error handling, i.e. they get the worst of both worlds
No, that's not true. You might get a bit of marginal overhead to think about, but it's not the worst of both whatsoever. If you want to use exceptions and your library doesn't use them, all you gotta do is wrap the foo() call in CheckForErrors(foo()), and then handle it (if you want to handle it at all) at the top level of your call chain. It's not the worst of both worlds at all -- in fact it's literally less work than simply writing
    std::expected<Result, std::error_code> e = foo();
and on top of that you get to avoid the constant checking of error codes and modifying every intermediate caller, leaving their code much simpler and more readable.
And of course if you don't want to use exceptions but your library does use them, then of course you can do the reverse:
    std::expected<Result, std::error_code> e = CallAndCatchError(foo());
Nobody is claiming every error should be an exception. I'm just saying you're exaggerating and extrapolating the arguments too far. A sane project would have a mix of different error models, and that would very much still be the case if none of the problems you mentioned existed at all, because they're different tools solving different problems.
> Do you really want an error returned from push_back?
For most people, no: you definitely want it to just work or explode. That is indeed what happens in normal Rust, and, not coincidentally, it is the actual effect when this exception happens in your typical C++ application after it is done with all the unwinding and discovers there is no handler (or that the handler was never tested and doesn't actually somehow cope).
But, sometimes that is what you wanted, and Linus has been very clear it's what he wants in the kernel he created.
For such purposes Rust has Vec::try_reserve() and Vec::push_within_capacity() which let us express the idea that we'd like more room and to know if that wasn't possible, and also if there was no room left for the thing we pushed we want back the thing we were trying to push - which otherwise we don't have any more.
There is no analogous C++ API, std::vector just throws an exception and good luck to you AFAIK.
> For such purposes Rust has Vec::try_reserve() and Vec::push_within_capacity() [...] There is no analogous C++ API, std::vector just throws an exception and good luck to you AFAIK.
I guess this is an attempt at Vec::push_within_capacity? Your function takes a reference and then tries to copy the referenced object into the growable array. But of course nobody said this object can be copied; after all, we want it back if it won't fit, so perhaps it's unique or expensive to make.
> I guess this is an attempt at Vec::push_within_capacity?
Sure, yes. It's trivial to change to try_reserve if that's what you want. (There are other solutions for that as well, but they're more complicated and better for other situations.)
> Your function takes a reference and then tries to copy the referenced object into the growable array. But of course nobody said this object can be copied - after all we want it back if it won't fit so perhaps it's unique or expensive to make
Just extend it to allow moves then? It's pretty trivial. (Are you familiar with move semantics in C++?)
But how? I did attempt this before I replied, but of course before long I had inexplicable segfaults, and we're not in a thread about those problems with C++.
I can't see how to make that work, but I also can't say for sure it's impossible all I can tell you is that I was genuinely trying and all I got for my trouble was a segfault that I don't understand and couldn't fix.
Edited to add: In case it helps the signature we want is:
If you're not really a Rust person, this takes a value T, not a reference, not a magic ultra-hyper-reference, nor a pointer, it's taking the value T, the value is gone now, which just isn't a thing in C++, then it's returning either Ok(()) which signifies that this worked, or Err(T) thus giving back the T because we couldn't push it.
I'm sorry I don't think I understand the problem you're trying to illustrate. I'm not sure why you're emphasizing value vs. reference, but even if that's what you want, this works just fine: https://godbolt.org/z/P8EGPYWW5
Well the good news is that now I realise the biggest problem in my previous attempt was that I forgot C++ types which can't be copy constructed also by default can't be moved, so I'd actually made it impossible to use my example type. I still don't know why I had segfaults, but I don't care now.
I agree that your new code does roughly what you'd do in C++ if you wanted this, but you get to the same place as before -- if for example you try commenting out your allocation failure boolean, the code just blows up now.
There are lots of APIs like this which make sense in Rust but not in C++ because if you write them in Rust the programmer is going to handle edge cases properly, but in C++ the programmer just ignores the edge cases so why bother.
> I agree that your new code does roughly what you'd do in C++ if you wanted this, but you get to the same place as before -- if for example you try commenting out your allocation failure boolean, the code just blows up now. There are lots of APIs like this which make sense in Rust but not in C++ because if you write them in Rust the programmer is going to handle edge cases properly, but in C++ the programmer just ignores the edge cases so why bother.
Almost, it panics because we didn't handle the error case. Of course this won't pass review because we explicitly just said "I won't handle this" and the reviewer can see that - whereas the C++ programmer wordlessly allowed this. Subtle, isn't it.
"But I can write correct C++" is trivially true because it's a Turing Complete language, and at the same time entirely useless unless you're playing "Um, actually".
> Almost, it panics because we didn't handle the error case. Of course this won't pass review because we explicitly just said "I won't handle this" and the reviewer can see that - whereas the C++ programmer wordlessly allowed this. Subtle, isn't it. "But I can write correct C++" is trivially true because it's a Turing Complete language, and at the same time entirely useless unless you're playing "Um, actually".
I'm sorry, what? How in the world did you go from "exceptions are worse than error codes" to "that's why Linus doesn't like C++, he wants to write push_within_capacity() in C++ without exceptions and it's impossible" to "oh but your version doesn't move" to "oh I guess moving is possible too... but if you modified it to be buggy then it would crash" to "oh I see Rust would crash too... but it's OK because Rust programmers wouldn't actually let .unwrap() through code review"?? Aren't there .unwrap() calls in the standard library itself, never mind other libraries? So next we have "Oh I guess .unwrap() actually does get through code review... but it's OK because Rust programmers wouldn't write such bugs, unlike C++ programmers"?
I don't remember telling you "Exceptions are worse than error codes", as these both seem like bad ideas from people with either a PDP-11 or no imagination or both. Result isn't an error code. std::expected isn't an error code either.
Among the things Linus doesn't like about C++ are its quiet allocations and its hidden control flow, both of which are implicated here - I think those are both bad ideas too, but in this case I'm just the messenger, I didn't write an OS kernel (at least, not a real one people use) so I don't need a way to handle not being able to push items onto a growable array.
The problem isn't that "if you modified it to be buggy then it would crash" as you've described, the problem is that only your toy demo works, once we modify unrelated things like no longer setting that global to true the demo blows up spectacularly (Undefined Behaviour) whereas of course the Rust just reported an error.
> Aren't there .unwrap() calls in the standard library itself
Unsurprisingly, an operating system kernel does not use std, only core and some of alloc. So we're actually talking only about core† and alloc, not the rest of std. There are indeed a few places where core calls unwrap(): cases where we know that'll do what we meant, so that if you wrote the same thing out by hand, Clippy (at least if we weren't in core) would tell you to just write unwrap instead.
† As a C++ person you can think of core as equivalent to the C++ standard library "freestanding" mode. This is more true in the very modern era because reformists got a lot of crucial improvements into this mode whereas for years it had felt abandoned. So if you mostly work with say C++ 17, think "freestanding" but actually properly maintained.
We can't write unwrap here because it's not what we meant, so that's why it shouldn't pass review.