
AI has changed how we learn by making it much easier to improve a piece of work. Normally, learning involves writing a draft, finding mistakes, and fixing them over time. That process builds critical thinking. AI, trained on vast amounts of refined data, can produce polished work right away. While this seems helpful, it lets students skip the important step of learning through trial and error.

The question is: Should we limit AI to keep the old way of learning, or use AI to make the process better? Instead of fixing small errors like grammar, students can focus on bigger ideas like making arguments clearer or connecting with readers. We need to teach students to use AI for deeper thinking by asking better questions.

We need to teach students that asking the right questions is key. By teaching students to question well, we can help them use AI to improve their work in smarter ways. The goal isn’t to go back to the old way of iterating but to change how we iterate altogether.






I would argue that if you are losing a considerable amount of your time fixing grammar, then it sounds like you need to spend that time improving your grammar skills.

> We need to teach students to use AI for deeper thinking by asking better questions.

Same thing here: the whole point of learning critical thinking is that you don't need to ask someone/something else. Teaching you how to ask the LLM to do it for you is not the same as teaching you how to actually do it.

In my opinion, we need to make students realise that their goal is to learn how to do it themselves (whatever it is). If they need an LLM to do it, then they are not learning. And if they are not learning, there is no point in going to school; they can go work in a field.


You’re getting to the crux of the argument: knowing when to use AI. Doing or learning “it” in 2025 means using AI, whether to understand it better or to get better grades.

My take is: teach them to get better at asking questions, and then teach them when to use their own understanding to improve the answer. How many times has an AI’s answer been a 5/10 that, with a few fixes, becomes a 9/10? That comes with time. Getting them asking questions first and learning the “when” later is better, at least to me.


> Doing or learning “it” in 2025 means using AI, whether to understand it better or to get better grades.

Depends, I think. If we are talking about writing an essay (and I believe we are), then the LLM is somewhere between useless and counter-productive.

Of course, if the LLM is used to understand an RFC (I would debate how useful it is for that, but that's another discussion), then it's different. The goal was to understand the RFC; it doesn't really matter how you got there. But the goal of writing an essay is not to end up with a written essay at all. Nobody cares about the essay itself; you can burn it right after it's graded.



