Unit, integration, and e2e tests, plus types and linters, would catch most of the things you mention.
Not all software is mission critical; often the most important thing is to move as fast as possible and iterate quickly. Good enough is better than very good in many cases.
Lots of people do. For certain types of software (ISO) they are required.
But I'm in the camp (and have also experienced it many times first hand) that all those tests you write will, by definition, never catch that first production bug you get :)
My point was not to question whether people write tests; the point I'm making is that once you start using an LLM to generate large swaths of code, it's tempting to have it generate the tests too, and then the assurance those tests give you goes out the window.
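To make that concrete, here's a contrived sketch (the function, values, and bug are all invented for illustration) of how a test generated alongside the code can simply enshrine a bug instead of catching it:

    # Hypothetical LLM-generated code and test; names and values are made up.
    def apply_discount(price, percent):
        # Bug: treats the percentage as an absolute amount to subtract.
        return price - percent

    def test_apply_discount():
        # The test was derived from the same flawed code, so it asserts the
        # buggy behaviour: 10% off 50 should be 45, not 40. It passes anyway.
        assert apply_discount(50, 10) == 40

A test written from the spec by someone who understands the ___domain would assert 45 and fail; a test generated from the same code just locks the mistake in while keeping the suite green.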
I'm not convinced that using AI as more than auto-complete is really a viable solution, because you can't shortcut an understanding of the problem ___domain and still be assured of the correctness of the code (and at that point the AI has mostly saved you some typing). The theory-crafting process of building software is probably its most important aspect. It not only provides assurance of the correctness of what you're building, it also feeds back into product development (restrictions, pushback that suggests alternate ways of doing things, etc.).