I honestly love Ruby and Ruby on Rails, but I can't understand why companies like Shopify and GitHub go through so much effort to scale Ruby, especially at their size. Maybe I am wrong, but couldn't this effort be put toward rewriting parts of it in a more performant language like Go or Rust? One has to imagine they have a large codebase; how much developer time is spent writing tests for Ruby? How much time was spent debugging odd monkey-patching gems over the life of the codebase?
I do get that developer time was/is more expensive than servers. But I am not so sure at some level of scale, when you need 100 servers vs 5 and have to spend so much testing effort dealing with a dynamic language. And then you build custom compilers, special tools for tracing, entire architectures to deal with the single-threaded model, etc. Between GitHub and Shopify alone, they could probably have built a very Ruby-on-Rails-like framework on a language more suited to the size and scale of these platforms.
> they could probably have built a very Ruby-on-Rails-like framework on a language more suited to the size and scale of these platforms.
I have a hunch they would rather have tens of thousands of other folks using a framework that has massive community support and folks other than them directly maintaining it.
Also, being able to Google almost any problem in Rails and find multiple really good answers is worth paying 5 or 10 times more in compute costs on just your application servers, because dev time is expensive at any scale.
If you're paying 2,000 developers $150k+ a year, that's 300 million dollars before accounting for anything that scales off base salary (bonuses, 401k matching, etc.). If you can save each developer 5 hours a week because the Rails community exists, that's 10,000 dev-hours saved per week. An average person might work, let's say, 1,900 hours a year; that's roughly 5.2 years' worth of dev time saved per week by using Rails, in opportunity costs and direct costs. The direct cost alone is ~$790k per week. I don't know what Shopify or a bigger place is paying on just application server costs, but I'm guessing it's well worth hosting Rails instead of building their own framework in a more computationally efficient language.
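To spell the arithmetic out (a quick sketch; every input is this comment's assumption, not a real Shopify/GitHub figure):

fn main() {
    let devs = 2_000.0;
    let salary = 150_000.0; // base salary per dev per year, USD
    let hours_per_year = 1_900.0; // working hours per dev per year
    let hourly_rate = salary / hours_per_year; // ~$79/hour
    let hours_saved_per_week = 5.0 * devs; // 10,000 dev-hours/week
    println!("payroll: ${}M/year", devs * salary / 1e6); // 300
    println!("dev-years saved/week: {:.2}", hours_saved_per_week / hours_per_year); // ~5.26
    println!("direct savings/week: ${:.0}k", hours_saved_per_week * hourly_rate / 1e3); // ~789
}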
I think these numbers are really generous too. I'm guessing using Rails is saving a lot more than 5 hours a week of dev time per developer.
I keep hearing that argument my entire career... and I've witnessed at least 7 instances of it being very false.
The point is, nobody is even trying to do the thing you dismiss -- hence there's an inherent confirmation bias in claims like "rewrites are hard" or "in-house frameworks fail". I think we should recognize that.
There are ways to surgically and gradually migrate away from a slow technology. I've done it several times in my career, and my only failure came from not knowing all the business requirements -- lesson learned; I never made the same mistake again.
I am pretty sure that in orgs like GitHub and Shopify the business requirements can be gathered and catalogued. It's all in there.
> Now modern data eng/warehouse stacks on the other hand, those are money fires
No, but I totally need a 100k+ contract (plus extra for runtime costs) to do… database stuff that… uhh, no other column-based database is, uhh, totally capable of. Yeah. /s
Also we totally gotta buy these other data tools, because all of our “data” people refuse to learn even basic code, and can only write sprawling notebooks.
Oh, plus gotta pay for some Databricks/dbt to run my sprawling notebooks on huge machines, because our Python-based runtime is single-threaded, can't utilise the full CPU properly, and requires silly amounts of memory. But it's ok, because "developer velocity" lets us put an incomprehensible notebook into production a whole 15 seconds faster.
How much did they spend to write this compiler? Also, I would guess they spend at least 5 or more hours per week writing tests to handle all the edge cases for any new code they commit, due to the language being dynamic. And what kind of community is there for a custom compiler, or for the other non-standard high-performance things they do on top of CRuby? I think it is easy to assume developer productivity is increased--and I think it is for small/medium projects. But at this scale and complexity, my assumption is that they are likely spending more development resources to support the language in their high-performance environment.
Having a statically typed language doesn't mean you don't write tests.
Sure, there's a subset of tests you can avoid writing, but you're still going to write a ton of tests in both kinds of language. I also think this point gets blown out of proportion. For example, I don't write a ton of boilerplate "what if I pass X type to Y variable" tests in Rails, because I trust the database and Rails' validations. If I have a datetime field in the DB, which is automatically defined as a datetime in Rails, I'm not going to write separate tests for what happens if I pass in a string, integer, boolean, list, or hash. I'd write a test making sure the date is within a specific range if I hooked up a validation rule to limit the range, but I'd write that same test in a statically typed language too.
Also, Stripe has a good write up[0] on how they use Sorbet to add type checking for Ruby and how they applied it to a multi-million line code base without disrupting developers too much. Basically a small team did most of the work, including writing the tool and migrating the code base. I haven't used it personally but it exists and can be used successfully at scale.
The "parts of it" in your question might be the clue to an answer: you start implementing some often-used code path only to realize that they share code with all the rarely used code paths (permissions/authentication come to mind),
So you'd need to reimplement a whole bunch of your codebase, at which point you'll have two versions of everything that you need to keep in sync.
That's exactly why most COBOL rewriting projects in banking have failed miserably. It is much more complex than just rewriting a complex CRUD app. It is not just about the main program; it is also about everything that connects to it and expects it to behave in super-specific ways.
I mean, yes, it's an investment for sure, but the whole YJIT team is 6 people. They are well compensated; let's say they cost 2.5 million dollars a year. That's still peanuts compared to a rewrite. Ruby is serving them well, they have tons of Ruby experts, and their codebase is optimized, etc. Why would anyone want to ruin that?
But compare the long-term costs. There have been countless examples of going from Ruby to Go and eliminating 80 to 90% of the needed infrastructure. I can't begin to estimate their infrastructure costs, but if they are spending $1M/mo and you can eliminate 90% of that, that is not peanuts.
Their monolith is millions of lines of code, one of the biggest in the world, so I don't think a two-year-old startup writing a "how we moved from Ruby to Go" blog post is quite the same as what Shopify would have to do to rewrite theirs.
Add to that the cost of lost productivity, since they have so many Ruby experts.
This would be completely disruptive at a time when Shopify has to grow like crazy and add new features.
So I think they know what they're doing.
Also, I'm somewhat skeptical of all those rewrite success stories. It seems to me that a CTO or principal engineer who decided on a rewrite will never admit it was a bad decision, since his job depends on it. So of course he will make the case for how great and beneficial the rewrite was. I bet there are many stories where the rewrite was detrimental to the business.
I can easily turn your argument exactly in the opposite direction:
Since there's always huge conservatism around rewriting or building in-house frameworks, there's a confirmation bias in these stories too: "whew, we dodged a bullet by not even trying thing X".
Yeah, everyone could have said that.
I've attended and participated in at least 7 successful rewrites. You don't hear about them, though, because people read HN and think "I am not willing to engage with biased people, so let's keep it to ourselves".
That's an aspect of these conversations that a lot of people around here don't account for: the people who get stuff done are quiet. This should be included in analyses but often isn't.
---
...And finally, millions of lines of code in a monolith isn't that scary. Find a part that has minimal dependencies on everything else, rewrite it, put a reverse proxy in front of your service that points a particular endpoint to the new code, test for a bit, done. Rinse and repeat. The process itself is trivial, not especially creative, and more laborious than anything else.
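To make "the process itself is trivial" concrete, here is a toy sketch of that routing step in Rust, using only the standard library (the ports, the /billing prefix, and the naive single-read request handling are all made up for illustration; a real setup would just use nginx or an off-the-shelf proxy):

use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut client = stream?;
        let mut buf = vec![0u8; 8192];
        let n = client.read(&mut buf)?;
        // Route on the request line, e.g. "GET /billing/invoices HTTP/1.1".
        let head = String::from_utf8_lossy(&buf[..n]).to_string();
        let backend = if head.starts_with("GET /billing") {
            "127.0.0.1:9001" // the rewritten service
        } else {
            "127.0.0.1:9000" // the legacy monolith
        };
        let mut upstream = TcpStream::connect(backend)?;
        upstream.write_all(&buf[..n])?;
        let mut resp = Vec::new();
        upstream.read_to_end(&mut resp)?;
        client.write_all(&resp)?;
    }
    Ok(())
}

Once the new service handles an endpoint correctly, you flip the route and delete the old code path.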
I feel obliged to point out that "I trust that $BIG_COMPANY knows what they are doing" is very often, in reality, "there are gatekeepers inside the tech teams who are custodians of tradition". I've been in plenty of companies, and that's often the non-romantic truth.
I'd just end by saying that a lot of teams don't make their calls in as scientific and objective a manner as you seem to imply. I wish that were the case, but it's not what I've seen. Bad luck, or me sucking at picking employers, I suppose.
This isn't a counterargument as such, but you could apply similar logic to, say, banks and COBOL? I'm not in banking, but I imagine there are many good reasons to keep writing and maintaining COBOL, and these companies may have similar stories to tell about Ruby.
COBOL and many legacy systems have the nasty problem of everything being inter-connected: a textbook example of spaghetti code.
So in such systems, in order to even begin properly, you have to rewrite a sizeable chunk of the legacy code before you could even showcase a first demo / MVP.
Needless to say, the relentless culture of our modern times means the businessmen will never approve such projects.
> Maybe I am wrong, but couldn't this effort be put to rewriting parts of it in a more performant language like Go or Rust?
This is absolutely a valid point, and I'm slightly perplexed as well.
Pragmatic (and smaller) companies like GitLab have indeed approached the problem by rewriting parts of their product in more performant languages (Go, in GitLab's case).
Another pragmatic approach is Stripe's AOT compiler, which is (I suppose) much less resource-intensive to develop, and it's, in a way, the "optimize-the-bottleneck" approach, rather than trying to improve the performance of the whole language.
> they could probably have built a very Ruby-on-Rails-like framework
This would cost a lot in terms of community and support, it would take a very long time to release, and it would also cost them a lot to migrate to.
All in all, it's perfectly possible that Rails itself is hard to optimize for, and that for large-scale monoliths, splitting out microservices (with moderation!) may not be effective and/or efficient. But I'm still curious why GitLab's (moderate!) microservice approach wasn't chosen by Shopify.
> I honestly love Ruby and Ruby on Rails, but I can't understand why companies like Shopify and GitHub go through so much effort to scale Ruby, especially at their size. Maybe I am wrong, but couldn't this effort be put toward rewriting parts of it in a more performant language like Go or Rust? One has to imagine they have a large codebase; how much developer time is spent writing tests for Ruby? How much time was spent debugging odd monkey-patching gems over the life of the codebase?
Me neither, especially since there is Crystal, with a Ruby-like syntax, that runs 3 times faster than Go. Invidious (a YouTube proxy) is written in it. Ruby developers ought to just switch to that for speed.
Crystal is not production ready. It doesn't even have production-ready parallelism (and there has been no discussion about it for a couple of years), so it can't be compared to Go.
I'd love to see large-scale adoption, but it's stuck in the typical vicious circle of no users <> little development.
Additionally, if they don't release parallelism support quickly, they'll be forever stuck with an ecosystem designed around the assumption that only one thread runs at a time¹. This was a deal-breaker for me when I evaluated it for use at my company.
¹ This is a very serious problem. Ruby has implemented parallelism (via Ractors), but the vast majority of libraries assume a single thread running at a time; a project that uses parallelism will likely hit many subtle bugs caused by those libraries.
I think GitHub at least is spinning off parts (not sure about Shopify), but as others are getting at, rewriting a big, business-critical application is risky.
I worked at a place that was completely rewriting a PHP site in PHP (moving from a no-framework, no-test spaghetti codebase to something structured and tested), and that was a six-year project just to reach feature parity with the old site. During four of those years, features were being added to both sites in parallel too, which doubled the cost of every feature.
Glad to see an explanation for why they bypassed Cargo. I had wondered about that when the PR was posted, as it didn't contain any documentation for why they were going off the beaten path. However, I'm a bit confused by the reasoning. Because a solution felt sub-optimal (vendoring an optional dependency), they went the route of invoking rustc directly? That seems a bit scorched-earth, especially when they later talk about needing another dependency offline and, instead of vendoring both, worked around that in a different way.
I've been tickled pink to watch this work happen. A wild fusion of my past and my present. And of course, much cleaner and properly done, as opposed to fun experiments I used to like to do, like https://github.com/steveklabnik/ruby/tree/rust
> Unlike C, Rust won’t automatically promote integer types to wider types. It forces you to manually cast any mismatching integer types for every operation
> Rust's insistence on manual casting everywhere encourages people to write inefficient code, because writing verbose code feels uncomfortable and adds friction.
This reads a lot to me like "we like how easy it is to create bugs in C. We'd like to be able to do that in Rust, too." But maybe I'm misunderstanding. I'm sure these people know a lot more about the consequences of implicit integer casting than I do.
> ..there's no reason why you couldn’t safely promote a u8, u16, or u32 into a usize, and asking programmers to manually cast integers everywhere makes the code more noisy and verbose.
They have a valid point. In C#, numeric conversions that won't lose data are implicit, and any that could are explicit. I think that's a nice compromise. So, int to uint requires a cast, but int or uint to long doesn't:
int myInt = /* ... */;
long myLong = myInt;    // OK: implicit widening conversion
uint myUInt = myInt;    // error CS0266: Cannot implicitly convert type 'int' to 'uint'. An explicit conversion exists (are you missing a cast?)
ulong myULong = myInt;  // error CS0266: Cannot implicitly convert type 'int' to 'ulong'. An explicit conversion exists (are you missing a cast?)

uint myUInt2 = /* ... */;
int myInt2 = myUInt2;     // error CS0266: Cannot implicitly convert type 'uint' to 'int'. An explicit conversion exists (are you missing a cast?)
long myLong2 = myUInt2;   // OK: implicit widening conversion
ulong myULong2 = myUInt2; // OK: uint always fits in ulong
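For contrast, a sketch of what the equivalent conversions look like in Rust today: the widening one is infallible but still explicit, and the sign-changing ones have to go through the fallible TryFrom route:

use std::convert::TryFrom;

fn main() {
    let my_int: i32 = 42;
    let my_long = i64::from(my_int);      // infallible widening, but still spelled out
    let my_uint = u32::try_from(my_int);  // fallible: Ok(42) here, Err for negatives
    let my_ulong = u64::try_from(my_int); // fallible for the same reason
    println!("{} {:?} {:?}", my_long, my_uint, my_ulong);
}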
> there's no reason why you couldn’t safely promote a u8, u16, or u32 into a usize
The people running Rust on a 16-bit architecture (e.g. AVR) would disagree. On those platforms, usize is 16-bit, so promoting u32 to usize would not be possible.
I _do_ wish there was a nicer way to declare up-front "I *never* want this crate to compile on something smaller than 32-bit (or 64-bit), so assume that".
IIRC, similar ideas have floated around under the name of a "platform compatibility lint".
But whether implicitly or explicitly converted, the compiler would reject the code all the same, no? (Unless an explicit conversion would actually succeed, which is even worse. I'm not a Rust programmer so don't know. But this is why explicit casts are frowned upon in C.)
The proposal is that by default the compiler would refuse to do an infallible conversion from `u32` to `usize`, but if you declare up front "this code assumes at least a 32-bit target", then you gain the ability to do infallible conversions from u32 to usize. And if you declare up front "this code assumes at least a 64-bit target", you gain the ability to do infallible conversions from u64 to usize.
Also, you can already convert u8 and u16 to usize infallibly, today. Only u32 and u64 currently require fallible conversions via try_from.
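Concretely, this is what stable Rust accepts today:

use std::convert::TryFrom;

fn main() {
    let small: u16 = 500;
    let a: usize = usize::from(small); // OK: usize is guaranteed to be at least 16 bits
    let big: u32 = 500;
    // let b: usize = usize::from(big); // won't compile: no From<u32> for usize
    let b: usize = usize::try_from(big).unwrap(); // the fallible route is required
    println!("{} {}", a, b);
}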
That wouldn’t be an issue. The whole claim is that this:
let a: u8 = /* */;
let b: u16 = a; // today: a type-mismatch error
push_u16(a);    // today: a type-mismatch error
can easily be inferred into: (no pun intended)
let a: u8 = /* */;
let b: u16 = a.into();
push_u16(a.into());
But the Rust language has decided that implicit upconversion that is already apparent is a bad idea because “bugs”. And I’d agree with them, but it’s stupid in this case. The issue at hand isn’t asking for ‘a’ to be upconverted to a u16 in the middle of an expression like this:
let b: u16 = a + 1;
…because you'd be left with the question of when to upconvert ("implicitly changing behavior", as you call it). No, it would apply only where it shouldn't need to be said, such as a lone assignment or function call.
And what if the Rust language's argument is that it'd be unclear when it's allowed? Because I've seen many RFCs where someone does what you just did: "What about this weird corner case, u8 max + 1? Should that be a u8 or u16? Maybe we should toss the whole proposal!" To that I'd say: start where it's clear (like my examples above). It's not a backwards-incompatible change.
I get that there has to be some indication to the compiler to distinguish between doing 8-bit unsigned math vs 64-bit unsigned math to evaluate these expressions. But having such a mish-mash in a stdlib function makes new users think, "There must have been a good reason to do it like this since it's in stdlib, but dang if I know what it is!"
I ran into this myself a couple of years ago when looking at Nim and agree it is annoying, verbose, and ugly to be forced to specify types for obvious integer things.
It's more a complaint that Rust is verbose when one wants to store small integers in structs but have intermediate computation use machine-width integers.
Implicit widening doesn't tend to cause bugs. I don't think it'd be so bad if `vec[i]` with a u8 index worked implicitly, without needing `vec[i as usize]`.
FWIW, you can have a newtype wrapper around Vec that provides indexing access with any type you want. If you are porting or writing code that uses i32 for indices for some reason, then you use the wrapper instead of Vec with no runtime cost and no inline type conversions.
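A minimal sketch of that wrapper idea (IdxVec and the choice of i32 are made up for illustration):

use std::convert::TryFrom;
use std::ops::Index;

struct IdxVec<T>(Vec<T>);

impl<T> Index<i32> for IdxVec<T> {
    type Output = T;
    fn index(&self, i: i32) -> &T {
        // The conversion happens once, in here, instead of at every call site.
        &self.0[usize::try_from(i).expect("index must be non-negative")]
    }
}

fn main() {
    let v = IdxVec(vec![10, 20, 30]);
    let i: i32 = 1;
    println!("{}", v[i]); // prints 20, with no `as usize` at the call site
}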
This hasn't been solved at the language level yet because if we `impl Idx for u8` then inference in `vec[42]` breaks. Making inference support that in a principled manner will require a lot of work, and doing it in a hacky way might lock us into a suboptimal place that we haven't fully explored yet. I want to see some solution here at some point.
I wonder what the author would have preferred to the visual litter of `unsafe` blocks? Can you use `unsafe` to mark off entire scopes to avoid having to redeclare it? Seems unavoidable in a project that is meant to heavily interoperate with a C99 code base like Ruby.
Cargo having to phone home all the time sounds annoying, though. I hope that gets improved, but I know the Cargo team is swamped right now.
> Can you use `unsafe` to mark off entire scopes to avoid having to redeclare it?
Yes.
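For example, a single block can cover a whole stretch of FFI work (a sketch; the extern functions are hypothetical stand-ins for whatever C API is being wrapped):

extern "C" {
    fn ffi_init();
    fn ffi_alloc(n: usize) -> *mut u8;
    fn ffi_process(p: *mut u8);
    fn ffi_free(p: *mut u8);
}

fn run() {
    // One unsafe block for the whole scope, rather than one per call.
    unsafe {
        ffi_init();
        let p = ffi_alloc(64);
        ffi_process(p);
        ffi_free(p);
    }
}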
> Cargo having to phone home all the time sounds annoying, though.
It does not have to phone home, ever. They for some reason didn't seem to want to use the right workflow, but that workflow does exist, and is an important and well-supported use-case.
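For reference, that workflow is roughly this; the TOML below is what `cargo vendor` itself prints when it finishes:

$ cargo vendor
# then add its suggested snippet to .cargo/config.toml:

[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"

# after which builds never touch the network:
$ cargo build --offline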
(It seems from the bug report that they did hit a confusing error message, which is unfortunate.)
I read the piece and the ticket. I still don't understand. That's fine; it's not my responsibility anymore. I'm glad they found a solution they're happy with.
Specifically, cargo vendor was suggested in the ticket, but there was no response as to why it wasn’t adequate for this case, just that it doesn’t “feel” good. I’m probably missing something.
The reason is right there in the ticket. They have exactly one dependency, and it's optional. They don't want to vendor it just so they can build the no-capstone artifact in CI without having to phone home to crates.io or whatever. You don't have to agree with her, but you can't pretend she didn't give a clear reason.
Saying “we don’t want to” is technically a reason, sure, but it doesn’t mean I understand the why. Why is that a problem?
(Depending on that why, I have an idea or two for a workaround, but you can’t suggest solutions until you understand the actual issue. I have my own laundry list of issues with Cargo that I have to work around; it’s painful and I have a lot of empathy for that.)
For all the same reasons that vendoring isn't the default option in any dependency management system in any language: because there is a cost to vendoring. She didn't say that, but I wouldn't think she'd have to. You don't vendor all of your dependencies either: why not? There's your answer.
I'm wondering if maybe you've missed the part where they don't need this single dependency in the builds they're talking about doing.
It seems like the Rust community people responding on the ticket had no trouble understanding what the issue is. There's a difference between acknowledging something and conceding that it needs to be changed.
I don't, because I don't need to worry about offline builds; if I did, I'd vendor them. Hence my not understanding. (And even if it literally is only that, I have at least one idea for what they could do to make this work. But maybe that solution would fail for other reasons, and I want to acknowledge that it's not like I have the full requirements list and am saying "oh gosh, just do this", because I do not.)
That it's a single dependency is why it feels extreme to me; wouldn't that be a very, very tiny thing to vendor?
Anyway I don’t really know why you’re being so aggro here. I don’t know how many times I can say “I’m probably missing something” and “they know better than I do.” I really like you, Thomas, but I also hear the passive aggression, and don’t like it. It makes me want to return it, but that doesn’t benefit anyone.
You said "They for some reason didn't seem to want to use the right workflow", which by my lights manages to be both loaded and dismissive (if that was the goal, well played). I pointed out that the "some reason" is right there in the post, and you doubled down. I'm not sure why you did; you could have just said, "oh, interesting, thanks" and I'd have said "pip pip! cheerio!" but if you want instead to... uh... explore the contours of this controversy? I'm always game, which is what you're experiencing now.
If you want to keep me invested in a programming discussion, one very good way to do it is to imply that you can't for the life of you understand why someone wouldn't want to vendor Capstone.
Do you mean this like Python's pip? Or a performance improvement? ...or?
(Sorry, just trying to understand your remark. FWIW, I'm a bit disappointed with steveklabnik in this chain; why so pedantic, self-centered, and close-minded?)
No problem. And I am serious about the "for some reason" bit, I don't know why `cargo vendor` isn't acceptable for them, but they know their requirements better than I do.
> Can you use `unsafe` to mark off entire scopes to avoid having to redeclare it?
Yes, but, you should avoid labelling large blocks of code "unsafe" when really only a small amount is unsafe or there are disconnected unsafe bits scattered through a function.
Also, though the compiler can't tell, stylistically it's proper in Rust to justify each unsafe block, and larger blocks mean the justification is often too vague ("Access the data" thanks, but why is this OK?) or falls out of date.
The idea is: Unsafe does not mean "I expect this is wrong but I want to do it anyway" rather "I am sure this is right but the compiler doesn't understand why" and the justification explains to other humans, maintainers, reviewers, and your future self, why you believe it's right.
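In code, the convention looks something like this (a small sketch; the bounds check above the block is what justifies the claim in the comment):

fn get_or_zero(values: &[u32], idx: usize) -> u32 {
    if idx < values.len() {
        // SAFETY: idx was checked against values.len() just above, so
        // get_unchecked cannot read out of bounds.
        unsafe { *values.get_unchecked(idx) }
    } else {
        0
    }
}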
> The idea is: Unsafe does not mean "I expect this is wrong but I want to do it anyway" rather "I am sure this is right but the compiler doesn't understand why" and the justification explains to other humans, maintainers, reviewers, and your future self, why you believe it's right.
I'd add one more aspect to yours: "get back to this in the future and see if we can make it not use `unstable`". It's a very good marker in the code, and I hope Rust doesn't move to implicitness.
Unstable aka Nightly features in Rust also need flagging: you must annotate to tell the compiler you want this feature (and so your code won't compile in stable Rust, or in any future nightly Rust which lacks the feature).
No, I don't think Rust would choose to make either nightly flags or the unsafe marker implicit.
In fact the opposite is likely: today, unsafe functions have their body implicitly treated as unsafe, because you said the unsafe word at the start of the function. For short functions this seems convenient; in longer functions it would be nice to distinguish "actually needs its own safety rationale" from merely "happens to be in a function marked as unsafe".
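That is, this compiles today with no inner block, which is exactly the convenience/ambiguity trade-off described above:

// The body of an unsafe fn is implicitly one big unsafe block, so the
// raw-pointer dereference below needs no `unsafe { }` of its own.
unsafe fn read(p: *const u32) -> u32 {
    *p
}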
For what it's worth: the complaint here makes sense, but "unsafe" isn't just telling the compiler what's going on, it's (probably more importantly) telling other programmers what's going on.
Strong disagree: C is a very poor language that does not provide a proper stdlib to reuse -- no collections/containers/functors. No OOP means no efficient code reuse, zero syntactic sugar, manual memory-management noise, etc. etc.
C toy programs can be short though.
Disagree all you like. The Rust port apparently didn't hear you, because the line count grew by nearly 50%, and I can post similar comparisons from other codebases and languages too.
I see about 1k lines of new tests, probably 500-600 lines of build system integration across various files, nearly 1k lines of Cargo lock-file information for dev dependencies, and the Rust code appears to be more heavily commented than the C code was.
Sounds to me like you already decided what your conclusion was and are just looking for data to back it up without actually analyzing it.
But this Rust port isn't idiomatic Rust, so it doesn't take much advantage of the improved idiom. Its authors call that out early.
Example: in C you're obliged to write a lot of counter loops, because the language does not have iterators. The idiomatic Rust is both smaller and easier to read, because the counters weren't really for anything; they're just implementation details:
for (k = 0; k < size(geese); ++k)  // C: we need the counter k to index geese

for goose in geese                 // Rust: we can iterate the geese directly
But if you do a non idiomatic port, you carry across that loop as it was written, plus you gain the extra conversion noise. So this is a much less useful metric.
Fair; it was the first example of a verbose C idiom that came to mind, and it will easily spill onto multiple lines for complex cases (depending on your style rules), but it isn't a good example for this purpose.
I did also think about switch versus match, but in that case you're more or less obliged to do the extra lifting when translating to Rust and so I expect the breaks (which are extraneous in match) would be elided rather than translated.
Perhaps this is silly reasoning, but my first impression is that a good JIT compiler requires a fast compiler, so Go meets that requirement better than Rust. Obviously Rust has a more advanced/featureful compiler, but in a JIT for Ruby the optimizations are mostly going to target short stretches of code (no heavy math or long loops), so the JIT could try to find places where the code uses vectors, maps, or lists of same-type elements, to avoid having to determine the dynamic type. That is, my impression is that in web-related tasks the greater benefit comes from detecting parts of the code that can be transformed into statically typed code and then applying a fast compiler to those parts.
But perhaps Rust was chosen for its safety model rather than its performance, in which case I agree. Still, I think a variant of Go with borrow/owner semantics for variables could be the best route to a great YJIT.
As far as I understand it, the JIT compiler is written in Rust; that doesn't mean it produces Rust. The output of the JIT can be assembly, C, or something else that compiles fast and runs fast.
I haven't had a look at the YJIT design, but your reasoning:
> detecting parts of code [...] and then apply a fast compiler to that part
applies to the previous JIT (MJIT) design, which created C snippets and compiled them on the fly. I don't think this is the case for YJIT, although I'm not 100% sure.
Surely this isn't the case, for example, for TenderJIT, which builds assembly code for the bytecode instructions and (optionally) invokes the original interpreter functions.
I find your remark strange: the author is mainly comparing the scalability of Rust vs C.
C++ does provide tools to build scalable software, and was already doing so in 1998 (few new features on this front since '98, I guess? Maybe modules, but using them for portable development in 2022 looks difficult given their state in compilers other than MSVC). I don't think this is a reason to prefer C++ over Rust (if anything, the tools for encapsulation and modularization provided by C++ are way less convenient and efficient than Rust's--speaking from experience with C++14 and Rust here). Also, C++ has the massive drawbacks of lacking ADTs, forcing headers on users (which is elaborated in the post, I believe), and not offering a "safe mode" like Rust does (the tooling is also massively better, but somehow Cargo felt like a hindrance to the author of that post, so there's that).
It did surprise me that the author wouldn't elaborate on why C++ was not chosen, when her argument made it look like it could provide some of the advantages she found in Rust. However, she doesn't *have to* elaborate on every possible subject in that post, and implying she doesn't know the language (anymore) because she didn't focus on the reasons she didn't use it looks like a dubious stretch.
The author does, in fact, bring up C++ herself. It is very strange to have dismissed it while knowing so little about its current status, and so worth bringing up. She didn't have to mention C++ at all.
If you don't see C++20 as a very, very different language from C++98, it can only be because you know practically nothing about the topic. That is totally allowed, but expressing opinions based on knowing nothing produces worthless opinions.
C++ had a major change of direction with C++11, to the extent that a lot of previous design patterns and constraints are now completely obsolete. C++98 compared to C++20 is probably equivalent to Python 2.2 vs 3.9, and someone wouldn't claim to "know python" if they only used 2.2.
You can believe whatever you like, but the fact is that changes in C++11 and C++20 change the tradeoff between C++ and Rust in fundamental ways. The only way anyone could believe otherwise would be by knowing precious little about the topic.
While I did not use C++20, I did follow its new feature set, and did not see anything able to address the issues of C++:
- no way to express "lifetime relationships": they remain in comments, or force you to work around the problem by using copied objects or, worse, shared pointers. This resulted in very real bugs in our code, due to a library returning structs that had to live shorter than some other object, without that being clear from the struct's name. Coincidentally, this same library has a Rust implementation, where keeping instances of this struct for too long is a compiler error (see the sketch after this list).
- no blessed way to express thread safety in the type system
- terrible build system without a blessed package manager, which is not about to change because:
- there are still headers everywhere. While modules are technically in C++20, tooling is not ready today (save for MSVC). The module implementation is also dubious, in that it seems to cater to every niche use case in existence, which adds a lot of complexity (and new opportunities for ODR violations...), instead of going "clean slate" and providing a simple system (preventing the split between declaration and definition, and not being orthogonal to namespaces, would have been a good start)
- no ergonomic sum types. Sum types are very important for modeling business logic, and it is hard to go back to languages lacking them. There are discussions of pattern matching for C++, but it's not in C++20.
Several of these issues are impossible to fix in an existing language without breaking an unreasonable amount of existing code, so I don't see future versions of C++ reversing the status quo. I also don't really trust the committee to produce simple, ergonomic solutions when I look at, e.g., what was done with modules or coroutines (there, I've heard the solution is to use third-party libraries, but while that is tractable in Rust, it is much less so in C++ due to the lack of satisfying tooling).
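To make the first bullet concrete, here is a small sketch (Parser and Token are made up; the point is that the Rust version of that library bug fails to compile):

struct Parser { src: String }
struct Token<'a> { text: &'a str }

impl Parser {
    fn next_token(&self) -> Token<'_> {
        Token { text: &self.src }
    }
}

fn main() {
    let tok;
    {
        let p = Parser { src: "let".to_string() };
        tok = p.next_token();
    } // error[E0597]: `p` does not live long enough
    println!("{}", tok.text); // a Token must not outlive its Parser
}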
You're welcome to elaborate if you disagree though.
In modern C++, you get the same benefits as are claimed for "lifetime annotations" by operating at a higher level. In effect, a library may adopt responsibility for what the Rust compiler would have enforced, and where it matters, they routinely do. This is no substantial burden. As a result, it has been a very long time since I last encountered a lifetime problem in my C++ code. Sum types work fine in C++. Pattern matching is an extra notational convenience on top of them.
Rust has many nice convenience features, but lacks numerous other powerful features C++ provides. But this is not a good place to go into detail. If you care, there is plenty of material online. The point remains that comparisons, if done at all, should be done based on knowledge about current facts, not suppositions and hearsay.
Yes. People are "only allowed to claim" to know about the merits of a language that they actually know. C++20 is the C++ to compare against, unless she wants to compare against, say, Rust as it was in 2014. C++98 is simply not an honest basis for comparison.
I don't need "thinly veiled insults" when the facts speak for themselves. The author had no need to mention C++ at all. Then, she would not have found herself making ignorant comparisons.
Admitting that each ISO standard is a distinct language (so C++ 20 is distinct from C++ 17 and C++ 03, and so on) and insisting upon naming them this way would of course satisfy this concern. But doing so would also highlight the uncomfortable reality that C++ 20 isn't a real language programmers can use yet - compiler vendors are still implementing this standard and won't be done for a while. It would also highlight the fragmentation: you would need to pick one of these different C++ languages to write some software, and in doing so rule out using the others.
And that's a pretty poor alternative to Rust, which is one real language Maxime and her team were able to use to write this software.
ncmncm, what is your specific definition for "actually" knowing a programming language?
If it's something like "mastery over most or all facets and nuances", perhaps there are a few thousand human beings in total who meet this qualification. Across all programming languages across the entire Earth.