The problem is that the various `__has_foo` operators aren't actually reliable in practice - they tell you the compiler recognizes the attribute, builtin, include, etc., but not whether it actually works the way it's supposed to without bugs, or whether it supports a particular feature (accepts a new optional argument, allows new values for an existing argument, etc.).
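For example, even if `__has_builtin(__builtin_mul_overflow)` passes, that doesn't rule out the builtin being broken in some releases, so in practice you often end up pairing the check with compiler-version guards anyway. A rough sketch (the Clang version bound here is made up for illustration, not a real bug range):

#if defined(__has_builtin)
#  if __has_builtin(__builtin_mul_overflow) && \
      !(defined(__clang__) && __clang_major__ < 8)  /* hypothetical buggy range */
#    define HAVE_MUL_OVERFLOW 1
#  endif
#endif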
You should use double underscores on attribute names to avoid conflicts with macros (user-defined macros beginning with double underscores are forbidden, since identifiers beginning with double underscores are reserved for the implementation).
#if __has_attribute(__cold__)
# warning "This works too"
#endif
static void __attribute__((__cold__))
foo(void)
{
    // This works too
}
Yep. C's really come a long way with the special operators for checking whether attributes exist, whether builtins exist, whether headers exist, etc.
They cover a very large part of what is needed, so fewer and fewer things have to end up in configure scripts. I think most of what's left is checking for the existence of items (types, functions) and their shape, as you were doing :). I can dream about getting a nice special operator to check for fields/functions, which would let us remove even more from configure time, but I suspect we won't get one because that requires type resolution, and none of the existing special operators do that.
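For the curious, here's roughly what those operators look like in use (`__has_include` is standard as of C23, the rest are widespread extensions; the nested #ifdef keeps older preprocessors happy, and the UNREACHABLE name is just for illustration):

#ifdef __has_include
#  if __has_include(<threads.h>)
#    include <threads.h>  /* use C11 threads only when the header exists */
#  endif
#endif

#ifdef __has_builtin
#  if __has_builtin(__builtin_unreachable)
#    define UNREACHABLE() __builtin_unreachable()
#  endif
#endif
#ifndef UNREACHABLE
#  define UNREACHABLE() ((void)0)  /* fallback when the builtin is absent */
#endif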
You still need a configure step for the "where are my deps" part of it, though both autotools and CMake would be way faster if all they were doing was finding dependencies, not testing them.
That said, that part (determining the cflags and ldflags for dependencies) is something that might be mixed into compilation a bit more than it is now. You could imagine that if we annotated which compilation units need a particular system library, we could start building code that doesn't depend on that library while determining the library ___location/flags (i.e., running pkg-config or doing other funny business) at the same time, as in the sketch below.
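A toy sketch of that idea, assuming libcurl as the example dependency (file names invented): flag discovery runs in the background while a unit that doesn't need the library compiles.

pkg-config --cflags libcurl > curl.cflags &   # discover flags concurrently
cc -c util.c -o util.o                        # needs no libcurl, starts immediately
wait                                          # flag discovery done
cc $(cat curl.cflags) -c fetch.c -o fetch.o   # now uses the discovered cflags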
Or, since we're in the connected era, perhaps we download the library we require if it's not found, and build it as an embedded component.
With that type of design, it becomes clearer why moving as much as possible to the build stage (where we can better parallelize, because most of the work is in that stage) and describing dependencies more accurately (so we don't block work that could run sooner) can be helpful in builds.
Doing that type of thing requires a more flexible build system, though: the pieces of "work" run by the build system would need to be able to dynamically add additional work for the build system to schedule. I'm not sure there are many build systems that support this.
Download/build on demand is cute when it works, but it's a security nightmare and a problem for Nix, which runs the build in an environment that's cut off from the network.
This is already a problem for getting Bazel builds to run nicely under Nix, and the current solution (predownload everything into a single giant "deps" archive in the store, then treat that as a fixed-output derivation with a known hash) is deeply non-optimal. Basically, I hope that any such schemes have a well-tested fallback path for bubbling the "thing I would download" information outward, in case there are reasons to want to separate those steps.
I agree that there are problems when layering multiple build systems on top of one another, and I see that often as a user of Nix (it's also bad with Rust projects that use Cargo; folks have written a variety of options, but they all have tradeoffs).
To some extent, the issue here is caused by just what I was discussing above: Nix derivations can't dynamically add additional derivations (i.e., build steps not being able to dynamically add additional build steps makes things non-optimal).
I am hopeful that Nix's work on dynamic derivations will improve the situation (with respect to Bazel, Cargo, and others) over time, and that other build systems will recognize how useful dynamically adding build steps can be.
It's true: fundamentally, nothing about a build realizing partway through that it needs more stuff breaks the Nix philosophy, assuming the build holds a hash for whatever it wants to pull, so that everything stays hermetic. It's a bit annoying not to know upfront exactly what your build graph looks like, but honestly it's not the worst; you already don't know how long each derivation is going to take, for example.
In any case, the tvix devs have definitely understood the assignment here: they're making IFD a first-class citizen, and also tackling the larger issue of letting the evaluation step decompose so the decomposed pieces can run in parallel with each other and with builds. That really is the game-changer, particularly with a cluster-backed build: being able to start work immediately rather than waiting out a 30-60 second single-threaded eval.
You might be interested in rawhide[1] or fselect[2]. (Note: I don't really use them myself, but they seem to offer something like what you're suggesting.)
Also, this is still find-style syntax, but my bfs utility supports -exclude [3]. So you can write something like:
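bfs -name config -exclude -name .git

to find files named "config" while pruning anything under .git (an illustrative invocation based on bfs's documented -exclude usage).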