Nobody is going to read them, and the "thoughts" they do output are hardly ever particularly coherent or insightful (DeepSeek is just semi-unhinged continuous rambling, and OpenAI of course hides most of the reasoning).
Even if the steps/explanations were actually useful and insightful, though, IMHO that's not remotely the same thing as figuring out the steps on your own.
Presumably you knew how to write before ChatGPT was a thing, though?
> we should ignore
Never implied anything of the sort. But it can be a bit like kids not learning basic math and skipping straight to using calculators for everything, just 10x worse.
Well, I just used an LLM to write a systemd unit for me. One attempt kinda works, but it doesn't do what I wanted; the other would do what I wanted if it worked at all.
Every line is explained in detail. They just don't help :)
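For context, the shape of what I was asking for is roughly this: a minimal sketch of a oneshot unit, with hypothetical names and paths, not the LLM's actual output:

    [Unit]
    Description=Nightly backup job (hypothetical example)
    # Wants= pulls the target in; After= only orders us behind it
    Wants=network-online.target
    After=network-online.target

    [Service]
    # Run once and exit rather than staying resident
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    [Install]
    WantedBy=multi-user.target

Even a unit this small has subtleties (oneshot vs. simple, ordering vs. dependency) that a line-by-line explanation can describe confidently without actually getting right.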
I don't really understand this fear, particularly now with the reasoning models that explain why they did something.