I recently discovered WhiteBox (https://whitebox.systems/), a local app with a $69 one-time charge. It only really works with "C With Classes" style functions, but it looks promising as another productivity boost.
I'll add to that my own Visual Studio extension that improves runtime debugging by showing you which lines are actively executing and what changed line by line: https://d-0.dev It integrates fully with the VS debugger and does not require any changes to your codebase.
It's also available on the marketplace if you want to try it out: https://marketplace.visualstudio.com/items?itemName=donadigo...
Hey, thanks for the mention of WhiteBox! I'm the lead dev on it.
The currently-available version is focused on a "write-based" workflow for C and a subset of C++: JIT-compiling, running and auto-debugging the function you're working on in the background whenever you edit it.
It shows a timeline of how the data in your function changes, with the intent being to give you feedback as immediately as possible.
We're currently working on a "read-focused" workflow, which is basically about making WhiteBox also work as a timeline debugger (for EXEs compiled from any language) with a few extra tricks up its sleeve...
Just open the drop-down to select the C++ standard. Below, there is a section "More Transformations". Enable "Show coroutine transformation", and you will get the transformation you're looking for: https://cppinsights.io/s/4d8c3fc1
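If you want something to paste in locally, a tiny generator like the sketch below (written from scratch here, not the code behind that link) is the kind of source the coroutine transformation expands into its promise/awaiter state machine:

    // Minimal C++20 generator coroutine; compile with -std=c++20.
    #include <coroutine>
    #include <cstdio>
    #include <exception>

    struct Generator {
        struct promise_type {
            int current = 0;
            Generator get_return_object() {
                return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            std::suspend_always yield_value(int v) noexcept { current = v; return {}; }
            void return_void() noexcept {}
            void unhandled_exception() { std::terminate(); }
        };

        std::coroutine_handle<promise_type> handle;
        explicit Generator(std::coroutine_handle<promise_type> h) : handle(h) {}
        ~Generator() { if (handle) handle.destroy(); }

        bool next() { handle.resume(); return !handle.done(); }
        int value() const { return handle.promise().current; }
    };

    Generator counter(int n) {
        for (int i = 0; i < n; ++i)
            co_yield i;
    }

    int main() {
        auto g = counter(3);
        while (g.next())
            std::printf("%d\n", g.value());
    }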
My ancient h/w and s/w engineering friends and I have been having a discussion recently about how modern C++ language features generate code that is not readily debugged at the source level.
The compiler has always created assembly that implements the feature described by the high-level language, but historically these have had a relatively one-to-one relationship with the source code.
However, in the post-C++11 world many more abstracted features are supported, and they no longer have any clear relation to the source in terms of something you could inspect with a debugger (except in assembly).
So it seems a tool like this is really useful, but unfortunately almost inherently compiler specific.
Don't debug -O3, or -O2 for that matter. Debug a separate debug build without optimizations, or don't heavily optimize your release build if you want to debug that (you know that obviously).
Even with optimizations disabled, debugging C++ can be quite painful. Add just a little template stuff to the source code, plus a few "zero-cost" methods like indexing operators or cast operators, and it quickly becomes painful to step through the code.
There should be a way to avoid stepping through boilerplate methods, but I don't know of one apart from specifying string patterns to exclude methods by name -- not inline with the method, but in a separate debugging configuration (which is very, very annoying).
The only solution I know is to use C++ features very lightly. Any other practical options?
Can Just My Code help if it is precisely my code that I want to skip over? I believe it does not -- IIUC it works based on modules, i.e. the DLL that contains the machine code. IOW it doesn't even help if I move the "zero-cost" stuff to a separate source file -- templates and other stuff that have to be instantiated or inlined won't be separable from the code that I want to debug based on the module they're running in.
> Also, there are scenarios where debugging without optimizations isn't an option.
There are scenarios where debugging isn't an option in the first place. Doesn't mean that I should have to suffer in the typical case where debugging with no or only light optimizations is perfectly viable.
An acceptable solution I guess would be to have a [DebuggerHidden] function attribute as I found for C#. Didn't find one for C++.
Well, if we are going into tiny details about what to skip, there is lots of C preprocessor stuff, standard library functions implemented in crazy ways for ISO/POSIX compliance, and one's own abstractions/functions that I might also want to skip over.
Well, don't go down that path. Stop bringing up less and less relevant points. You are too argumentative.
"standard library functions implemented in crazy ways" is what I though about once in my life, when writing a toy compiler whose output was linked to libc. Such "crazy functions" do exist but but apart from libc having little relevance in this context it's not at all like debugging a line my_arr[i] = make_arr_elem(). That is a reasonably looking line but can easily contain two more function calls than is immediately visible, and it's very very tiring in actual practice to step through such code. The only solution I've found is to be careful to use very little such magic.
The general way that C is written in practice is that you don't have 3 function calls per line. So even if you have lots of abstractions and macros in place that you would like to ignore (which can happen, but it's more rare than common), you can still skip over them (e.g. F10 in VS).
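To make the my_arr[i] = make_arr_elem() point above concrete, here is a minimal sketch; all the types are invented here purely for illustration:

    // Hypothetical types, just to show what a debugger ends up stepping into.
    #include <cstddef>

    struct Elem { int v; };

    struct SmallVec {
        Elem data[16];
        // "zero-cost" indexing operator: one extra call to step through
        Elem& operator[](std::size_t i) { return data[i]; }
    };

    struct ElemBuilder {
        int v;
        // "zero-cost" cast operator: another call hidden in the assignment
        operator Elem() const { return Elem{v}; }
    };

    ElemBuilder make_arr_elem() { return ElemBuilder{42}; }

    int main() {
        SmallVec my_arr{};
        std::size_t i = 3;
        // One innocent-looking line; stepping into it visits make_arr_elem(),
        // ElemBuilder::operator Elem() and SmallVec::operator[] before the store.
        my_arr[i] = make_arr_elem();
        return my_arr[3].v;
    }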
The complaint above isn't that gdb is hard, it's that the binary with debug info doesn't correlate very well with the source code after optimisations.
Bugs tend to disappear on me when I change the optimiser flags or run the thing under gdb though. I've also had a valgrind-clean program segfault when run outside valgrind. It's a confusing world out there.
Debugging optimized binaries is harder than debugging non-optimized ones, and requires more advanced debugging skills and gdb knowledge.
There is a reason why crash dumps on optimized builds used to be called "guru meditation".
One useful skill to get started is learning how to get at data and print it through various levels of complex C++ data structures, which might require leaning on the python API to stay sane.
> Bugs tend to disappear on me when I change the optimiser flags or run the thing under gdb though
A lot of that shouts UB or compiler bug to me. I don't get this in my code unless there is some UB I missed. Unfortunately there are places where UB is unintuitive and may not even be present in newer versions of the language.
Not a fact. It's typical to have debug and release builds, and possibly other flavors.
Of course sometimes all you have is a stack trace from a release build, and then you need to debug that. But if you can, debug the debug build, it's much easier.
Frankly, that's stupid advice. Typically you have at least two build modes, debug and release. Debugging without optimization is easier because the source code maps to the compiler output; also, a program without optimizations behaves the same as one with optimizations, unless you put undefined behaviour into your code (which should be rare for an experienced programmer) or hit a compiler bug (also quite rare).
I'm not sure where you got these ideas, but there are tons of benefits to debug builds like catching out of bounds lookups on vectors the moment they happen and faster compilation.
It's bizarre that you don't realize everyone works this way.
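To illustrate the out-of-bounds point: a minimal sketch assuming libstdc++'s _GLIBCXX_ASSERTIONS switch (MSVC's debug runtime has an equivalent check); the file name and flags are just an example:

    // Build the debug flavor with something like:
    //   g++ -std=c++17 -O0 -g -D_GLIBCXX_ASSERTIONS oob.cpp
    // The out-of-bounds write below then aborts immediately with a message,
    // instead of silently corrupting memory as an unchecked release build may.
    #include <vector>

    int main() {
        std::vector<int> v(4, 0);
        int idx = 4;   // one past the end
        v[idx] = 7;    // debug build: assertion failure right here
        return v[0];
    }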
Not when you want to debug it.
I think you're mistaking debug information not lining up with your program for different behavior, but these are not the same thing.
Crashes in production happen with optimized builds. You need to be able to inspect the core and figure out what happened from there, as usually you can't reproduce the scenario.
What about just compiling and running your program after you make a change or get a crash?
Crashes in production happen with optimized builds. You need to be able to inspect the core and figure out what happened from there, as usually you can't reproduce the scenario.
Right... What's your point? This scenario doesn't overlap with regular iterations. Normal workflow and a crash after a program is distributed are two separate things.
The crash when your program is distributed is the normal workflow. What matters is that your program runs flawlessly in its distributed environment, not that your test suite passes locally.
Testing is but a proxy to achieve the true goal, and is far from perfect.
Your normal workflow is only finding bugs once you've released your program to other people? You don't find any bugs while you work on it? You write a program and if it compiles you immediately assume everything is fine, release it and wait for someone to complain?
Do you realize that this thread was about someone saying that debug builds are a legacy holdover from the 80s?
Testing is but a proxy to achieve the true goal, and is far from perfect.
What is it exactly that you think you're replying to? All this was about was someone thinking there was no use for debug builds. You're hallucinating some sort of discussion or argument about distributed software, services, updates; none of it is even relevant to what is being talked about.
Did you get mixed up and think that people saying that debug builds are crucial for iterations means that they were saying no one ever needs to debug an optimized build?
Postmortem debugging on production builds without debug info is just harder (it depends on how much postmortem info your bug report system provides; at least on Windows it still makes sense to create a PDB file and archive it in-house so you can associate it with the minidump you get from the bug report -- slightly mismatched debug info is still much better than no debug info at all). A production build crash doesn't mean that the symptoms are not reproducible on a debug build, and investigating the debug build after the bug has been reproduced is a lot more convenient.
Also, a lot more bugs show up and are already fixed during development and never even make it to CI, let alone out into the wild. That's where debug builds and debuggers are most useful (during the initial development phase).
Sometimes I really have a feeling that software development is moving backward in time (shakes head). Debuggers are incredibly powerful tools, use them!
Doesn't change the fact it's completely unrealistic to expect that you can do that with a full debug build.
The industry has moved towards software as a service, and sometimes your service crashes and you need to figure out why.
Even if you want to turn the problem into a regression test that you can run a debug build against, you'll still need to look at the core of the optimized build to figure out what happened to begin with.
GDB behaves like any other debugger when it comes to optimized versus unoptimized builds. Debugging an optimized build is possible in any debugger, but your debugging information no longer maps exactly to compiler output, making source-level debugging harder.
Extremely bad advice. There is no point in debugging something that bears no relation to the software you release, and compiler optimizations are the only reason to use C or C++ in the first place.
It's extremely good advice. Debugging is all about simplifying things and finding the problem. Deal with your program without optimizations first. Once that is set, then you can deal with any differences due to optimizations, which should be extremely rare, because that would be a bug anyway.
Mixing two sources of complexity is extremely bad advice.
The readme indicates that it uses system headers (G++ libstdc++) by default.
Since it's source-to-source, I don't think being based on clang actually matters for anything unless you happen to use GCC-specific extensions. I would also expect clang and gcc to behave similarly when cutting through abstraction.
I'm not sure, but it also seems like the output code should be able to compile? Then you can probably use g++ for that step.
> My ancient h/w and s/w engineering friends and I have been having a discussion recently about how modern C++ language features generate code that is not readily debugged at the source level.
What specific features do you have in mind? I generally believe this is because DWARF and debuggers haven't been keeping up, not because of inherent limitations.
gcc lacks usable tooling; in the past it was a conscious decision not to allow any other program to "steal" gcc's work.
Every single tool for C++ is built around Clang, because Clang is actually built to allow other tools to use it.
Honestly, from the title, I expected something that gave actual 'insights' into the compiler, rather than the code. For example, indicating applied and potential (missed) optimizations. But seeing what it was, I was definitely not unpleasantly surprised :)
Being able to cut through abstraction is very nice and quite important when you want to understand WHAT you are writing from the machine's perspective (or even in general when you don't know the abstraction yet). I love using Haskell, but have no idea what kind of machine code GHC spits out at the end (something based on the spineless tagless G-machine, an abstraction I barely understand by itself).
> I love using Haskell, but have no idea what kind of machine code GHC spits out at the end
There is GHC's Core, an intermediate representation of your code (mostly "just" desugared Haskell), which is comparable to Andreas' C++ Insights. You can get it by passing `-ddump-simpl`, plus additional flags to suppress some of the noise in the output.
But getting an idea of what assembly is finally generated is still hard (at least for me). Or, to rephrase: knowing whether "everything" ends up unpacked and strict or not, and if not, why not.
When you say the compiler explorer can do it, do you mean that it will 'inline' the output in a similar way to the assembly i.e. you can view the results 'within' the source code?
I think Haskell creators never meant to have users care about those abstractions. It's more about what, not how, and given its terseness and power, I'd be happy to be a normal user.
Yeah, I like this, but the tool I wish existed the most is some kind of pointer aliasing checker. Something that would tell me when I've violated strict aliasing or, conversely, when I've forgotten to put in a restrict and now the compiler is reloading things from memory unnecessarily.
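For a sense of what such a checker would have to catch, here's a minimal sketch (written for this comment, not the output of any existing tool) covering both halves of the problem:

    #include <cstdint>
    #include <cstring>

    // UB: reading a float's bytes through an unrelated pointer type violates
    // strict aliasing; the optimizer may assume f and *p never alias.
    std::uint32_t bits_bad(float f) {
        std::uint32_t* p = reinterpret_cast<std::uint32_t*>(&f);
        return *p;
    }

    // Well-defined alternative: memcpy the object representation instead.
    std::uint32_t bits_ok(float f) {
        std::uint32_t u;
        std::memcpy(&u, &f, sizeof u);
        return u;
    }

    // The opposite problem: without restrict the compiler must assume dst and
    // src may alias, so it may reload src[i] on every iteration.
    // (__restrict is a common compiler extension; ISO C++ has no restrict.)
    void add_in_place(float* __restrict dst, const float* __restrict src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] += src[i];
    }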
It is also available at the touch of a button within the most excellent https://godbolt.org/
alongside the button that takes your code sample to https://quick-bench.com/
Those sites and https://cppreference.com/ are what I'm using constantly while coding.