My case is a bit outside that, so I don't think the compiler can deduce it. I have a file format which tells me the expected number of fields for a category, and I throw an error and abort if the number is not exactly that.
Also, these data structure fields are always passed in as const variables, so they are never modified (making them "sealed" in a sense); hence I don't need bounds checks on the arrays and vectors storing them.
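As a minimal sketch of the shape of it (the names and the `f64` payload are made up; the real structures are more involved):

```rust
/// Hypothetical sketch: the field count is validated exactly once,
/// at construction; after that the data is never modified.
pub struct CategoryFields(Box<[f64]>);

impl CategoryFields {
    pub fn new(expected: usize, values: Vec<f64>) -> Result<Self, String> {
        if values.len() != expected {
            return Err(format!("expected {expected} fields, got {}", values.len()));
        }
        Ok(CategoryFields(values.into_boxed_slice()))
    }

    /// Read-only access; the length can never change after `new`.
    pub fn get(&self, i: usize) -> f64 {
        // Still bounds-checked unless the compiler can prove `i` is in range.
        self.0[i]
    }
}
```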
That sounds trivial enough that the compiler would remove the bounds checks, assuming I'm understanding correctly that you have a condition that validates the number of fields at some point before an invalid access would occur.
But if it's possible for someone to muck with the file contents and lie about the number of fields, which would cause a bounds error, then that's exactly what bounds checking is supposed to prevent. So either the bounds checks will be removed, or they're necessary.
I think it won't be able to, because the creation and consumption of these data structures are three files apart.
> But if it's possible for someone to muck with the file contents and lie about the number of fields.
You can't. You can say you'll have 7 but provide 8, but as soon as I encounter the 8th one during parsing, everything aborts. Same for saying 7 and providing 6: if the file ends after parsing the 6th one, I report an error in your file and abort. Everything has to check out and be sane before anything can start; otherwise you'll get file format errors all day.
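In sketch form it's something like this (hypothetical names, and the real format has more than a flat list of numbers):

```rust
/// Hypothetical parser sketch: the file declares a field count up
/// front, and anything other than exactly that count is a hard error.
fn parse_fields(
    declared: usize,
    tokens: &mut impl Iterator<Item = f64>,
) -> Result<Vec<f64>, String> {
    let mut fields = Vec::with_capacity(declared);
    for i in 0..declared {
        match tokens.next() {
            Some(v) => fields.push(v),
            // Said 7, provided 6: the file ends early, so we abort.
            None => return Err(format!("file ended after {i} of {declared} fields")),
        }
    }
    // Said 7, provided 8: an extra field shows up, so we abort.
    // (In the real format the next token would be the next section's
    // header rather than plain end-of-input.)
    if tokens.next().is_some() {
        return Err(format!("more than {declared} fields provided"));
    }
    Ok(fields)
}
```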
The rest of the pipeline is unattended completely. It's bona fide number crunching (material simulation to be exact), so speed is of the essence. Talking about >1.5 million iterations per second per core.
> I think it won't be able to, because the creation and consumption of these data structures are three files apart.
Strictly speaking I don't think the distance between creation and consumption matters. It all comes down to what the compiler is able to prove at the site where the bounds check may go.
For example, if you're iterating over a Vec using `for i in 0..vec.len() { ... }`, then the amount of code between the creation and consumption of that Vec doesn't matter, as the compiler has all the information it needs to eliminate the bounds check right there.
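To illustrate (a toy example, not your code):

```rust
fn sum_indexed(vec: &[f64]) -> f64 {
    let mut total = 0.0;
    // The compiler can prove `i < vec.len()` on every iteration,
    // so the bounds check on `vec[i]` can be optimized out.
    for i in 0..vec.len() {
        total += vec[i];
    }
    total
}

fn sum_iter(vec: &[f64]) -> f64 {
    // The iterator version sidesteps the question entirely:
    // no index, so there is no bounds check to eliminate.
    vec.iter().sum()
}
```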
If it's a vector that you basically just iterate over, yes. However, thinking about what I've developed, I have offset- or formula-determined indexes that I hit constantly, and not strictly in a loop. Those might prove harder. I need to implement these and see what the compiler(s) do in those cases.
The code I have written is 3D materials software which works on >(3000x3000) matrices, and I do a lot of tricks with what I get from them. However, since everything that creates them is validated at creation time, nothing breaks and nothing requires checks, because most of the data is read-only (enforced by const correctness throughout the code).
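Schematically, the access pattern is something like this (a toy stand-in; the real formulas are more involved):

```rust
/// Toy stand-in for the real thing: a dense row-major matrix with
/// formula-derived indexes.
struct Matrix {
    n: usize,       // side length, >3000 in practice
    data: Vec<f64>, // n * n elements, validated at construction
}

impl Matrix {
    fn at(&self, row: usize, col: usize) -> f64 {
        // Formula-determined index: whether the bounds check here gets
        // eliminated depends on what the compiler can prove about
        // `row` and `col` at the call site.
        self.data[row * self.n + col]
    }

    fn neighbor_sum(&self, row: usize, col: usize) -> f64 {
        // One trick to try: a single up-front assert can sometimes let
        // the compiler drop the per-access checks below. Whether it
        // actually does here is the "implement and see" part.
        assert!(row >= 1 && col >= 1 && row + 1 < self.n && col + 1 < self.n);
        self.at(row - 1, col)
            + self.at(row + 1, col)
            + self.at(row, col - 1)
            + self.at(row, col + 1)
    }
}
```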
> However, thinking about what I've developed, I have offset- or formula-determined indexes that I hit constantly, and not strictly in a loop. Those might prove harder.
I think at that point it'll come down to the compiler's value-range analysis, as well as how other parts of the program affect inlining, etc. Hard to say exactly what will happen.
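As a toy illustration of the value-range side (not your code, obviously): when the range of an index is easy to establish, the check can disappear; when it isn't, it's anyone's guess.

```rust
fn pick_fixed(table: &[f64; 8], i: usize) -> f64 {
    // `i % 8` is provably in 0..8, so against a fixed-size array
    // the compiler can drop the bounds check entirely.
    table[i % 8]
}

fn pick_dynamic(table: &[f64], i: usize) -> f64 {
    // Against a dynamically sized slice, `i % table.len()` is also
    // in range (for a non-empty slice), but whether the optimizer
    // proves it can depend on inlining and surrounding context.
    table[i % table.len()]
}
```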