Anyone else getting concerned about the rate of development of the C++ Standard vs compiler implementation? We don't even have feature complete C++20 on the major compilers yet. C++23 is even less implemented. How will the committee handle this? Will they reduce feature additions at some point in the future?
FWIW, I can’t ever remember a time when the most widely used compilers in practice weren’t around 5-10 years behind changes in the standard. Part of the issue is that adoption takes time, and always will. A lot of people using C++ professionally don’t want to be on the bleeding edge of language changes. Some do, but a lot don’t, for good reasons.
When I was in college in the 90s, templates were new and had iffy support. In the 2000s, the STL was still problematic; I remember the Microsoft compiler spitting out error messages on STL types that were so long it couldn't handle the string length and truncated the message, so you couldn't see what the problem was. In the late 00s and early 10s I worked in games, where we wanted stability and code that was as non-tricky as possible (for example by banning exceptions, STL containers, and most casual heap allocation, among other things), so we used older compilers by choice. My current job publishes an SDK and APIs that people have been using for many years. They need to be backward compatible, and we can't impose recent language features on our users because not everyone uses them; we have to stick to features that compile everywhere, so I still see lots of C++{11,14,17}.
Perhaps you are right, and I had rose-tinted glasses. I seem to remember C++17 being completed (all major features) around 2018 on at least one or two major compilers. Meanwhile, C++20 still doesn't have modules properly implemented, which is such a huge feature IMO that I count it as a major deficit. But maybe C++17 was the exception rather than the rule.
I think what annoys me most is when a standard is implemented except for an important feature. E.g. ranges were unusable for ages on Clang, and modules are still not quite there. I want to use the latest standard to get these cool features so I can write code like it's 2025, not 1980. If the biggest features are unimplemented, then why upgrade? My concern is that this trend continues, and we get cool things like reflection, but it takes 10 years for them to be implemented properly.
Wow, the list of papers for Modules is large. I can understand why it is taking so long to implement. Also, there need to be "fingers to do the typing"; I assume most of the commercial devs (Google, Apple) just don't think modules are important enough. It looks like Concepts also took a very long time to be fully implemented. Hell, that feature was discussed for 10 years (a dream of Bjarne) before being approved into a standard... then maybe 5 years to implement!
Yes, and it changed my point of view: it is about time WG21 went back to the old ways of only standardising existing practice, or at the very least, proposals without preview implementations shouldn't even get discussed in the first place.
The problem is that with the new wind C++ got from C++11, its role in GCC and LLVM, and domains like HPC, HFT, and GPGPU, WG21 grew to some 300 members, everyone wanting to leave their name on a C++ standard, many proposing features in PDF form only.
And since in ISO-driven languages what gets into a standard does so by votes, not technical implementation merit, it is about running a proper campaign to get the base to vote for your feature, while being persistent enough to keep the process going. Some features have more than 20 revisions of their proposal.
But this would do nothing to introduce safety-related features, which are still sorely missing after 2+ decades. In light of upcoming regulation to exclude unsafe languages from certain new projects, maybe those features wouldn't be that unimportant after all.
The other side of that coin is that if you required "technical implementation merit", then only people or groups who have strong experience with C++ compilers would be able to propose things.
I'm not saying that the existing situation is ideal and it's certainly not a dichotomy, but you have to consider the detriments as well as the benefits.
You could have a two-stage process. Stage 1 would be the proposal, where you have the discussion on whether the feature is desirable or not, which can then be provisionally accepted. At this point it is not in the standard.
Then you have Stage 2, which would require an actual working implementation, so it can be properly tested and such. Acceptance in Stage 2 would result in the feature being in the standard.
It is worth noting that one of the main reasons C++ standard evolution is faster nowadays is that the "bare minimum" for a feature to be considered for acceptance into the standard is working examples on a fully functional compiler. This tends to make it a lot easier for other compilers to implement those features, as there is at minimum a working reference to compare against (unlike older, long-unimplemented features such as modules, where nobody really ironed out how to implement them properly until after they were shoehorned into the standard).
Maybe knowing where a language is going will help them implement older features? Also, some things are technically easy once all the conceptual wrinkles are ironed out. There is no reason some of these can't be added before C++20 is 100% supported.
Yeah, I've been very worried about this. C++23 is basically unimplemented as far as I'm concerned. Working on C++26 makes no sense to me. They gotta space it out more than 3 years, if nothing else.
I cannot believe how, after all these years, we don't even have `restrict` in the language, nor uniform function call syntax (i.e. x.foo(y) being interchangeable with foo(x, y)); but we do get the ultimate bells and whistles in the form of reflection and reflection-based code generation, and the half-baked coroutines mechanism.
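For reference, a quick sketch of both ideas. The `__restrict` spelling is a common non-standard compiler extension (GCC, Clang, MSVC), and the UFCS part is hypothetical syntax, not valid C++ today:

```cpp
// `restrict` is not in ISO C++, but GCC, Clang and MSVC accept the
// non-standard __restrict qualifier, borrowed from C99's restrict:
void scale(float* __restrict dst, const float* __restrict src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * 2.0f;  // compiler may assume dst and src don't alias
}

// Uniform function call syntax (NOT valid C++ today): the idea is that
//   x.foo(y);
// and
//   foo(x, y);
// would be interchangeable, so free functions and member functions could
// be called the same way.
```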
> uniform function call syntax and a `restrict` mechanism have not made it into the standard after so many years
That is more on the fact that less investment is made in C++ compilers nowadays. Companies are migrating away from C++ and it shows in compiler development.
I write C++, and complain about still not having a portable C++17 implementation that fully supports the parallel STL, not having a fully compliant C++20 compiler, embedded compilers still kind of catching up with C++14, and I could rant on.
Are there any embedded compilers left that try to implement their own C++ frontend?
To me it looks like everyone gave up on that and uses the clang/gcc/EDG frontends.
Yes, and even those that have compiler forks aren't on vLatest.
Interesting that you mention EDG, as it is now famously known as the root cause of why the Visual Studio development experience lags behind cl.exe, pointing out errors in code that compiles just fine, especially when using anything related to C++20 modules.
Apparently, since modules support was introduced in VS 2019, there have been other priorities on their roadmap.
I always like these new compile-time features getting into the C++ spec.
I'm actually looking forward to the related reflection features that I think are currently in scope for C++26. I've run into a number of places where the combination of reflection and constexpr could be really valuable... the current workarounds often involving macros, runtime tricks, or both.
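As one illustration (my own, not necessarily the parent's use case) of the kind of boilerplate that reflection plus constexpr should remove: today an enum-to-string mapping has to be maintained by hand or generated with macros, e.g.:

```cpp
#include <string_view>

enum class Color { Red, Green, Blue };

// Pre-reflection workaround: keep the names in sync with the enum by hand
// (or via an X-macro). Reflection + constexpr would let this be derived.
constexpr std::string_view to_string(Color c) {
    switch (c) {
        case Color::Red:   return "Red";
        case Color::Green: return "Green";
        case Color::Blue:  return "Blue";
    }
    return "?";
}

static_assert(to_string(Color::Green) == "Green");
```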
The D language basically does that. You can write D programs that evaluate D code at compile time to generate strings of new D code which you can then basically compile-time eval into your code as needed. Combined with the extremely powerful compile-time reflection capabilities of D it's the closest thing I've seen to Lisp metaprogramming outside of that family of languages and it's easier to read than Rust macros or C++ template metaprogramming.
Every time I hear about D it sounds awesome. I actually used it to prototype an image collage-composing algorithm which I then rewrote in Scala[1], and the D version might have been nicer to write.
The only reason I didn't write more stuff in D was that the stack traces from my programs were pretty much useless. Maybe I was supposed to set a --better-stack-traces flag when I compiled it or something idk.
> Combined with the extremely powerful compile-time reflection capabilities of D it's the closest thing I've seen to Lisp metaprogramming outside of that family of languages ...
Scala gets pretty close to LISP-level of metaprogramming support between its intrinsic support for macros[0] (not to be confused with the C/C++ preprocessor of the same name), the Scalameta project[1], and libraries such as Shapeless[2].
Not comparing Scala to D, just identifying a language with similar functionality.
D is really, really good. I hope it gets more love soon. D's focus on just getting shit done, lightning builds, QOL improvements all over the place, actually good modules, templates and metaprogramming that work, simpler more regular syntax, any memory management paradigm you want, being fully batteries included, being super easy to cross compile, being able to span all the way from Python/C# slop all the way down to tight-as-you-like C code... It's an amazing language and is getting better all the time. A real C++ successor. It has become my secret weapon! Maybe I actually don't want it to blow up soon, since it gives me a huge edge on anyone stuck with C++, which gets worse every release (how slow do builds have to get before people lose it completely?).
Many of the functions in Phobos are not `@nogc` compatible, yes.
That being said, I can't think of many scenarios in which an application whose user code is all `@nogc` would be hindered by the occasional GC'ed stdlib method.
One standout example of viability is the "dplug" library for realtime audio processing plugins, and the commercial AuburnSounds VSTs written by its author.
In D, you can write Python-style slop code and/or C#-style slop code. Two very different styles of slop code, both of which can be written in D. Or even mixed and matched.
It would be cool, except that the entire language available at compile time is C++, and thus entirely unsuitable for manipulating C++ programs.
Not at all, originally template metaprogramming was discovered by accident.
Cannot recall any longer if the original article on the matter appeared on The C/C++ Users Journal or Dr. Dobbs.
Eventually it started to get abused, and its Turing completeness was discovered.
Since C++11, the approach to a more sane way to do metaprogramming with templates has been improving.
Instead of tag dispatch, ADL, and SFINAE, we can make use of concepts, if constexpr / consteval / constinit, type traits, and eventually reflection, rather than the old clunky ways.
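A minimal illustration of that shift (a toy constraint, not anyone's real code): the same "integers only" restriction written with C++11-era SFINAE and with a C++20 concept:

```cpp
#include <concepts>
#include <type_traits>

// Old way: SFINAE via std::enable_if
template <class T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
constexpr T twice_old(T x) { return x + x; }

// New way: the same constraint expressed with a standard concept
template <std::integral T>
constexpr T twice_new(T x) { return x + x; }

static_assert(twice_old(21) == 42);
static_assert(twice_new(21) == 42);
```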
Templates are semi-accidentally Turing-complete. They were intended for writing compile-time-generic, run-time-concrete functions and types - but it turned out you could use them, along with the overload resolution mechanisms, to compute things. The Turing-completeness involves recursive use.
Computing things using templates is not intuitive. Many of us have gotten used to it - but that's because that's all we had for many years. It's a different sub-language within C++. As constexpr capabilities widen, we can avoid "tortured" templates and just write our compile-time checks and figurings in plain C++ - more or less.
Sometimes, enhanced language features in C++ allow us to actually throw away and forget about other existing features, or at least - complex and brittle idioms using existing features. Like SFINAE :-)
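To make the contrast concrete, a small sketch of the "accidental" recursive-template style next to the plain constexpr version (toy example, assuming nothing beyond C++14):

```cpp
// Computing with recursive template instantiation: the accidentally
// Turing-complete sub-language.
template <unsigned N>
struct Factorial { static const unsigned value = N * Factorial<N - 1>::value; };

template <>
struct Factorial<0> { static const unsigned value = 1; };

// The same computation as ordinary code, just marked constexpr.
constexpr unsigned factorial(unsigned n) {
    unsigned result = 1;
    for (unsigned i = 2; i <= n; ++i) result *= i;
    return result;
}

static_assert(Factorial<5>::value == 120, "template recursion");
static_assert(factorial(5) == 120, "plain constexpr");
```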
C++23 doesn't have full reflection yet. That's coming in C++26.
I've seen the vast majority of build time in a very large C++23 project be taken up by reflection in fmtlib and magic_enum because both have to use templates (I think).
Recent C++ changes seem like polishing your firewood before burning it. Yes, they make perfect sense, but often they are a fix to a problem the committee itself introduced by cutting down previous proposals (e.g. forcing C++11 constexpr functions to be a single return statement, then relaxing that later).
Half of the new features feel like "how to make the STL implementation less embarrassing".
Meanwhile there still is no language support for e.g. debugging constexpr, or printing internal private state of objects in 3rd party code.
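On the "cut down, then relax" point above, a tiny example of what the C++11 restriction meant in practice and what C++14 allowed (illustrative only):

```cpp
// C++11: a constexpr function body was limited to (essentially) one return statement.
constexpr int square(int x) { return x * x; }

// C++14 relaxed this, so loops and local variables are allowed.
constexpr int sum_of_squares(int n) {
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i * i;
    return total;
}

static_assert(sum_of_squares(3) == 14, "evaluated at compile time");
```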
> there still is no language support for e.g. debugging constexpr, or printing internal private state of objects in 3rd party code.
Actually, reflection might make it easier to do that. Supposedly, you should be able to get a member pointer to the private member you're interested in (or even do it dynamically by iterating over all members, and figuring out which one you like), from that and an actual object obtain a regular pointer, and finally dereference it.
How could a debugger make sense of "internal private state of objects in 3rd party code"? Only a portion of the stack frames of linked functions (input parameters coming from known code, maybe expected return values) has a presumable type.
No, you can't do that either: https://godbolt.org/z/vzdTMazx7 : error: '__builtin_bit_cast' is not a constant expression because 'char' is a pointer type
Here the `constexpr` keyword means the function might be called in a constant-evaluated context. f doesn't need to have all of its statements be evaluable as constant expressions; only the ones that are actually executed do. You need to initialize a constexpr variable with the call to actually test this.
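A minimal sketch of that behaviour (f here is a made-up function for illustration):

```cpp
#include <cstdlib>

constexpr int f(bool runtime_only) {
    if (runtime_only)
        return std::rand();  // not a constant expression, but only an error if this
                             // path is actually taken during constant evaluation
    return 42;
}

int a = f(true);             // fine: ordinary run-time call
constexpr int b = f(false);  // fine: forces constant evaluation, rand() branch never reached
// constexpr int c = f(true);   // would fail: constant evaluation hits std::rand()
```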
The consteval specifier declares a function or function template to be an immediate function, that is, every potentially-evaluated call to the function must (directly or indirectly) produce a compile time constant expression.
It's possible that the compiler just doesn't bother as long as you aren't actually calling the function.
every release adding more constexpr stuff honestly helps me - been burned by old template hacks enough lol. you think we ever get to a real point where all this new power just makes stuff easier, or is it always tradeoffs?
C++ is, should be, like COBOL. A very important language because of the installed base. But why the continual enhancements? Surely there are better uses of all those resources?
As a specific example, expanding constexpr means a codebase I recently worked on can move away from template metaprogramming magic to something that is more idiomatic. That means iterating on that code will be easier, faster, and less error-prone. I've already done static dispatch using constexpr and type traits that would have taken longer to do with templates.
Currently, constexpr programming requires you to know the specifics of what is supported; ideally you'd be able to infer that from first principles, from the invariants that are available at compile time. That leads to faster, more confident development.
It's a similar story for reflection: we were using custom scripts and soon won't have to. The changes usually come out of the problems people are already finding solutions for in the real world, rather than gilding a lily.
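For the static dispatch mentioned above, the pattern is roughly this (a generic sketch, not the poster's codebase):

```cpp
#include <string>
#include <type_traits>

// Dispatch on a property of T at compile time with `if constexpr` and a type
// trait; no SFINAE, no overload set tricks, and the untaken branch is never
// instantiated.
template <class T>
std::string describe(const T& value) {
    if constexpr (std::is_arithmetic_v<T>)
        return "number: " + std::to_string(value);
    else
        return "something else";
}

// describe(3)                 -> "number: 3"
// describe(std::string{"hi"}) -> "something else"
```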
Although there are excellent alternatives to C++ such as Rust, C++ is still widely used as many open-source and commercial codebases are built with it.
Adding features to a language that is still actively used does not seem like a bad thing.
The amount of C++ written at my company every day is… a lot. We are slowly fighting our way from it towards memory safety, but it is hard. It will take a decade.
At the company I currently work for we also use C++, and I am quite proficient in it. But the number of times I have slowed myself down with simple lifetime issues makes me want to switch to a more memory-safe language, whether that is C++ with profiles or a whole new language such as Rust.
For example, a week back I lost a few hours finding a segfault bug in C++ code, which ended up being a trivial lifetime error: I used a reference after it was invalidated due to a std::vector resize.
These kinds of errors should be compile-time errors, rather than hard-to-trace runtime errors.
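For anyone who hasn't been bitten by this, a minimal reconstruction of that class of bug (not the actual code in question):

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(1);

    int& first = v.front();   // reference into the vector's current buffer

    for (int i = 0; i < 1000; ++i)
        v.push_back(i);       // growth can reallocate, invalidating `first`

    std::cout << first << '\n';  // undefined behaviour: dangling reference
}
```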
How does your company go about changing to memory safety? Are new projects / libraries written in Rust for example? Do projects / libraries get (partially) rewritten?
It is very hard to reason about lifetimes and they can eat you. We have a lot of guidelines and strategies to simplify it, but it still just isn’t amazing.
Not exactly. There's a lot of C++ code that still can't be rewritten into cool languages overnight without risking correctness, performance and readability.
I'm always happy to see C++ pushing itself and the compiler backends as it benefits the victims of lame codebases and also the cool kids using the improved compiler backends.
I definitely had my eyes on slint for quite some time (pretty much since it was announced - I highly value the technical skill of their team) but it's still quite far from the whole QWidget offering.
I don't know anything else that is even remotely in the same ballpark - certainly not Tauri or egui, for instance. Anything that is GPU-based is blacklisted from the get-go (including QtQuick) - people who preach GPU-based GUIs have definitely never tried to have a quick debug feedback loop with an app built using AddressSanitizer, where just opening the simplest GL or Vulkan context takes up to 20 seconds on a good day.
Only the GUI part is relevant for this discussion, as that is what Qt is known for.
Other modules either have equivalent Rust crates, or are really niche and nobody uses them.
Qt's value is as an app framework where you can assume interoperation across the same async runtime and enjoy a cohesive set of APIs for all your app's needs.
https://en.cppreference.com/w/cpp/compiler_support