
> this makes me nervous for proper learning of anything in the future

I don't really understand this fear, particularly now with the reasoning models that explain why they did something.

Nobody is going to read them, and the "thoughts" they do output are hardly ever particularly coherent or insightful (DeepSeek is just semi-unhinged continuous rambling, and OpenAI of course hides most of the reasoning).

Even if the steps/explanations were actually useful and insightful, IMHO that's not remotely the same thing as figuring out the steps on your own.


When ChatGPT corrects my writing, the explanations are actually quite helpful.

> IMHO that's not remotely the same thing as figuring out the steps on your own.

This is true with pretty much anything, but it doesn't mean we should ignore tools that can do those steps for you.


Presumably you knew how to write before ChatGPT was a thing, though?

> we should ignore

Never implied anything of the sort. But it can be a bit like kids not learning basic math and skipping straight to using calculators for everything, just 10x worse.


Well, I just used an LLM to write a systemd unit for me. One attempt kinda works but doesn't do what I wanted; the other would do what I wanted if it worked at all.

Every line is explained in detail. They just don't help :)
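
For what it's worth, the units in question usually look something like this. A minimal hypothetical example (the service name and ExecStart path are made up for illustration, not from the parent's actual unit):

    [Unit]
    # Human-readable name shown by `systemctl status`
    Description=Example one-shot task (hypothetical)
    # Order after the network is actually up
    After=network-online.target
    Wants=network-online.target

    [Service]
    # Run once and exit, rather than staying resident
    Type=oneshot
    # Placeholder command; substitute the real script here
    ExecStart=/usr/local/bin/example-task.sh

    [Install]
    # Enable via `systemctl enable` to run at normal boot
    WantedBy=multi-user.target

The usual failure mode in LLM output is plausible-looking directives that don't match each other (e.g. Type=oneshot combined with Restart=always, which systemd will reject), which is exactly the "explained in detail, still doesn't work" experience.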
