That the annotation applies to variables and not types is surely an oversight or mistake, right? It seems like it could have been easier to implement that way initially, but it just doesn't seem to fit with how the C type system works. (Yes, it will make declarations uglier to do it on types, but that ship sailed long ago; see cdecl.org)
And how would you type a string vs. a byte array then? C doesn't even have proper string support yet, i.e. Unicode strings. Most wchar functions don't care at all about Unicode rules. Zero-terminated byte buffers are certainly not strings, just garbage.
C will never get proper string support, so you'll never be able to separate strings from zero-terminated byte buffers vs. plain byte buffers in the type system.
So annotating vars is perfectly fine.
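For what it's worth, here is a minimal sketch of what the variable-level annotation looks like in practice, using GCC's "nonstring" variable attribute (which, as I understand it, the kernel wraps as __nonstring); the exact diagnostics depend on GCC version and warning flags:

#define __nonstring __attribute__((nonstring))

char tag[4] __nonstring = "ABCD"; // all four bytes used, no NUL: intentional, so no warning expected
char name[4] = "ABCD";            // also legal C, but GCC 15 can warn that the initializer leaves no room for a NUL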
The problem was that the PM and release manager was completely unaware of the state of the next branch, of its upcoming problems and fixes, and just hacked around in his usual cowboy manner. Entirely unprofessional. A release manager should have been aware of Kees' gcc15 fixes.
But they have no tooling support, no oversight, just endless blurbs on their main mailing list. No CI for a release candidate? Reminds us of typical cowboys in other places.
But you would still need to change it everywhere, right? Like, instead of changing the annotation everywhere you have to change the type everywhere. Doesn't seem like a huge difference to me.
If the CI system didn't get the Fedora upgrade, then it would not have caught it. Aside from that, the kernel has a highly configurable build process, so getting good coverage is equally complex.
Plus, this is a release candidate, which is noted as being explicitly targeted at developers and enthusiasts. I'm not sure the strength of Kees' objections is well matched to the size of the actual problem.
> That the annotation applies to variables and not types is surely an oversight or mistake right?
I don't think so. It doesn't make sense on the type. Otherwise, what should happen here?
char s[1];
char (__nonstring ns)[1]; // (I guess this would be the syntax?)
s[0] = '1';
ns[0] = '\0';
char* p1 = s; // Should this be legal?
char* p2 = ns; // Should this be legal?
char* __nonstring p3 = s; // Should this be legal?
char* __nonstring p4 = ns; // Should this be legal?
foo(s, ns, p1, p2, p3, p4); // Which ones can foo() assume to be NUL-terminated?
// Which ones can foo() assume to NOT be NUL-terminated??
By putting it in the type you're not just affecting the initialization, you're establishing an invariant throughout the lifetime of the object... which you cannot enforce in any desirable way here. That would be equivalent to laying a minefield throughout your code.
Perhaps unsigned could help here with understanding.
unsigned means, don't use an integer's MSB as a sign bit. __nonstring means, the byte array might not be terminated with a NUL byte.
So what happens if you use integers instead of byte arrays? I mean, cast away unsigned or add unsigned. Of course these two areas are different, but one could try to design such features so that they behave in similar ways where it makes sense.
I am unsure, but it seems that if you cast to a different type you lose the guarantees of the previous type. And as for "should this be legal": you can cast away a lot of things and it's legal. That's C.
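To make the analogy concrete, here is a small standard-C sketch (names purely illustrative): the cast is perfectly legal, but the guarantee the original type expressed is gone.

#include <stdio.h>

int main(void)
{
    unsigned int u = 3000000000u; // fine as unsigned
    int i = (int)u;               // legal cast, but the "MSB is not a sign bit" property is lost;
                                  // the result is implementation-defined if int is 32 bits
    printf("%u %d\n", u, i);
    return 0;
}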
But whatever because it's not implemented. This all is hypothetical. I understand GCC that they took the easier way. Type strictness is not C's forte.
> Perhaps unsigned could help here with understanding.
No, they're very different situations.
> unsigned means, don't use an integer's MSB as a sign bit.
First: unsigned is a keyword. This fact is not insignificant.
But anyway, even assuming they were both keywords or both attributes: "don't use an MSB as a sign bit" makes sense, because the MSB otherwise is used as a sign bit.
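A tiny illustration of that (assuming a typical two's-complement machine): the same bit pattern reads as a large positive value when unsigned and a negative value when signed.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t bits = 0x80;               // MSB set
    printf("%u\n", (unsigned)bits);    // prints 128: the MSB is just another value bit
    printf("%d\n", (int)(int8_t)bits); // prints -128 here: the MSB acts as the sign bit
    return 0;
}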
> __nonstring means, the byte array might not be terminated with a NUL byte.
The byte array already doesn't have to contain a NUL character to begin with. It just so happens that you usually initialize it somewhere with an initializer that does, but it's already perfectly legal to strip that NUL away later, or to initialize it in a manner that doesn't include a NUL character (say, char a[1] = {'a'}). It doesn't really make sense to change the type to say "we now have a new type with the cool invariant that is... identical to the original type's."
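To make that concrete, both of the following are valid C today, and neither array is NUL-terminated:

char a[1] = {'a'}; // no NUL anywhere
char b[3] = "abc"; // legal in C: the literal's terminating NUL is dropped because it doesn't fit

(As I understand it, declarations like b are exactly what GCC 15 newly warns about, and what the annotation is meant to silence when the missing NUL is intentional.)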
> I understand GCC that they took the easier way. Type strictness is not C's forte.
People would want whatever they do to make sense in C++ too, FWIW. So if they introduce a type incompatibility, they would want it to avoid breaking the world in languages that do enforce such incompatibilities, even if C doesn't.
No, actually, that was the point. I was asking what you think should happen if you store a NUL when you're claiming there isn't one, or if you don't store a NUL when you claim there is.
Well, as a human compiler, I said "Hey, you've non-NUL-terminated a NUL-terminated string." If that was what you intended, you should use the type annotation for that, so I think that case worked as intended.
EDIT:
> what do you think should happen if you store a NUL when you're claiming you're not
I don't believe nonstring implies it doesn't end with a NUL, just that it isn't required to.
But char[] already isn't required to be NUL-terminated to begin with. char a[1] = {'a'} is perfectly fine, as is a[0] = '1'. If all you want to do is to document the fact that a type can do exactly what it already can... changing the type to something new doesn't make sense.
Note that "works as intended" isn't sole the criterion for "does it make sense" or "should we do this." You can kill a fly with a cannon too, and it achieves the intended outcome, but that doesn't mean you should.