The Thompson attack sounds scary, but it's not a real practical concern.
Look, as this article demonstrates, it's not hard to build a backdooring compiler. Even if you want to build it in a more robust way than checking filenames, it's really not difficult: it's (admittedly complex) pattern matching, and quite a lot of optimization in fact boils down to pattern matching. The problem is that the pattern matching you'd need to do to get the everything-is-backdoored scary effect is brittle as fuck.
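To make the brittleness concrete, here's a hedged C sketch of what such a pattern-keyed pass might look like. The pattern, the payload, and the function name are all made up for illustration; the point is that a textual match like this dies the moment anyone reformats the code or renames a variable.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: instead of keying on a filename, key on a code
 * pattern in the source text and splice in a magic-password bypass.
 * Any refactor that changes the matched text defeats it -- that's the
 * brittleness problem. */
static bool inject_backdoor(const char *src, char *out, size_t outsz)
{
    const char *pattern = "strcmp(pw, correct) == 0";
    const char *payload = "(strcmp(pw, correct) == 0"
                          " || strcmp(pw, \"MAGIC\") == 0)";
    const char *hit = strstr(src, pattern);
    if (!hit)
        return false;   /* no match: compile honestly */
    /* Emit everything before the match, the payload, then the rest. */
    snprintf(out, outsz, "%.*s%s%s",
             (int)(hit - src), src, payload, hit + strlen(pattern));
    return true;
}
```

Even this toy version only survives as long as the target source spells the comparison exactly that way.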
Compiler output tends to be effectively nondeterministic. I mean, the goal of the compiler is to produce completely deterministic output, but very subtle changes can have cascading consequences. (I say this as I am trying to fix a test for LLVM's opaque pointer changes.) Even something as seemingly simple as making bit-identical builds took the Reproducible Builds initiative a few years to really get going, since there are so many things that are effectively random that you wouldn't consider at first (e.g., the order in which you iterate over the files in a directory).
It's possible to make a compiler backdoor that is "updatable" and therefore a lot less brittle. And yes, this does make the backdoor easier to detect, since it's now communicating over the network. But such flexibility could really future-proof the backdoor and let it evolve over time as the target language changes.
For example, you could also make a compiler compile certain other software incorrectly in order to introduce exploitable vulnerabilities in the binaries. When I was working on convincing people of the importance of reproducible builds, I used to use an example where changing a single bit in the binary could introduce a fencepost error by turning one conditional branch instruction into another. If the conditional branch governed overwriting memory and incrementing pointers (for example), that could make the resulting binary exploitable even though there was no fencepost error in the original source code.
(My examples on x86 involved changing JGE to JG, or JL to JLE, corresponding to changing >= to >, and < to <=, in loop conditions.)
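A hedged C sketch of that one-bit flip (the function names are made up): the two loops below differ only in the loop comparison, the way JGE differs from JG in the emitted exit test, and the second one writes one byte past what was asked for.

```c
/* Correct version: copies exactly n bytes. The loop's exit test
 * compiles to something like a JGE-style conditional branch. */
static void copy_ok(char *dst, const char *src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i];
}

/* Same source, but written as if one bit flipped JGE to JG in the
 * binary: the loop runs one extra iteration and writes dst[n]. */
static void copy_bugged(char *dst, const char *src, int n)
{
    for (int i = 0; i <= n; i++)   /* off-by-one: n+1 bytes written */
        dst[i] = src[i];
}
```

The source never contained the bug; the single changed comparison is enough to turn a bounded copy into an out-of-bounds write.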
Combining this with the trusting trust attack, you could have a self-perpetuating bug in the compiler plus a bugdoor in other software. The pattern match for the other software does not necessarily have to be super-specific in that case.
I would definitely agree that this wouldn't survive that many generations of software evolution without active intervention. It definitely wouldn't survive a change of programming language or target machine architecture, for example.
It is brittle, but an evil compiler can account for that by attempting to compile the hacked program and falling back to real compilation if it gets a compilation error. It could even try downloading an update and recompiling, though that introduces other ways it can get caught.
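That fallback logic is simple to sketch. Everything here is hypothetical (the names `evil_compile` and `compile_fn` are made up), but it shows the shape of the idea: try the patched source first, and if the real compiler rejects it, silently compile the original.

```c
/* Hypothetical sketch of the fallback. real_compile stands in for
 * invoking the genuine compiler; 0 means success. */
typedef int (*compile_fn)(const char *source);

int evil_compile(const char *original, const char *patched,
                 compile_fn real_compile)
{
    if (real_compile(patched) == 0)
        return 0;                   /* backdoored build went through */
    return real_compile(original);  /* pattern rotted: fall back quietly */
}
```

The victim sees a clean build either way; the only externally visible difference is whether the payload made it in.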