It seems to me like you are assuming ChatGPT is always going to be wrong (which in itself is not an unreasonable place to start), which is coloring your use of the tool.
I mean, I did say that assuming it was wrong was not an unreasonable place to start, which I think makes my position quite clear. But I'll clarify: I review all the code ChatGPT produces before I use it. Sometimes that review ends with me asking it to revise the code, sometimes with me abandoning the code entirely, and sometimes with me learning something, because it produced better code than what I had in mind for the problem, or it used code or libraries that were a "blind spot" for me.

I've been programming in Python since the late '90s, and programming in general since the early '80s, but I'm not a programmer by day, so I definitely have blind spots. Even if I were a programmer by trade, I'd admit I'd have blind spots.
But the LLMs can produce rather good code. They can cover a lot of the typing and the low- to mid-level design work from a short description. I'm bringing value to my company and to my life by using this tool, plain and simple.
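To give a concrete sense of what I mean by "low- to mid-level design work from a short description": a one-sentence prompt like "group these expense records by category and return the totals, largest first" will typically get you code in roughly this shape. This is my own sketch of the kind of output I'm describing, not verbatim ChatGPT output, and the names (Expense, totals_by_category) are made up for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Expense:
    category: str
    amount: float


def totals_by_category(expenses: list[Expense]) -> list[tuple[str, float]]:
    """Sum expenses per category, returned largest-first."""
    totals: dict[str, float] = defaultdict(float)
    for expense in expenses:
        totals[expense.category] += expense.amount
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    sample = [
        Expense("travel", 120.0),
        Expense("meals", 45.5),
        Expense("travel", 80.0),
    ]
    print(totals_by_category(sample))  # [('travel', 200.0), ('meals', 45.5)]
```

Nothing profound, but it is exactly the kind of tedious-but-necessary code I'd rather review than type, and reviewing it takes me far less time than writing it.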