
> Some of these students are dishonest. Many aren't.

If they are using LLMs to deliver final work, they are all posers. Some are aware of it; many aren't.

> Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.

But I'm talking about a very specific intentionality in using LLMs, which is to "help us understand what's missing in our understanding of the problem, if our solution is plausible and how we could verify it".

My model of intention, and the distinction I'm drawing, relies on that. You have a great opportunity to show your students that LLMs aren't designed to be used like that, as a proxy for yourself. After all, it's not realistic to think we can forbid students from using LLMs; better to try to incentivise the development of a healthy relationship with them.

Also, LLMs aren't a panacea. Maybe in learning languages you should stay away from them, although I'd be cautious about drawing that conclusion; either way, it doesn't mean LLMs are universally bad for learning.

In any case, if you use LLMs not as a guide but as a proxy, then sure, it's a guaranteed path to brain rot. But just as a knife can be used both to heal and to kill, an LLM can be used both to learn and to fake. The distinction lies in knowing yourself, which is a constant process.






> You have a great opportunity to show your students that LLMs aren't designed to be used like that, as a proxy for yourself. After all, it's not realistic to think we can forbid students to use LLMs, better to try to incentivise the development of a healthy relationship with it.

LLM use is the absolute last thing I want to discuss with students. I can think of few worse ways I could spend my limited time with them.

Educators can, should, and must forbid students from using tools that do their work for them -- i.e. cheating.

LLMs are always bad for learning. Always. They offload and bypass mental work.


I promise not to bother you anymore, but I thought I'd share an example of what I mean in practice: https://g.co/gemini/share/7ad2880e8d5a

It's no bother! I appreciate the civil, engaging back and forth.

This is fascinating and compelling. My concern with using LLMs for this sort of thing is similar to my concern with using them in a business setting: it provides a direct line from a single idea to a finished product. That isn't a great way to generate a product or to learn. The messy process of discovering other avenues, hitting dead ends, working through them, and finding your way to a solution to your problem will leave you in a very, very different, and much better, position than the one you find yourself in after taking the LLM's advice. You'll have a richer understanding of the terrain of the problem (edit: even when your problem is "tell me about this problem"), and, as a result, you'll have provided yourself with new, improved, or refined tools for the next problem you encounter.

The blinkered solution that the LLM gives you results in something narrower and much worse. You may gain something helpful from the sources it suggests. Its proposals may lead you to an interesting solution. But you've lost out on everything else -- and I'd argue it's the "everything else" that is most essential to learning, not the specific solution or source you've chosen.

To put this another way, I've been thinking lately about what I think I'm teaching my students, and what I think experts have that amateurs and novices lack. It isn't possession of the specific knowledge or reasoning that constitutes correct understanding or a correct answer to a given question (e.g. the stuff you'd find in a good written source). It's not the ability to solve difficult problems more easily. It's not any of the concrete stuff that one associates with a certain field (e.g. a Roman historian knows a lot about Rome). It's more nebulous. It's probably best described as fluency with the problem space, which enables a person to retrieve, use, present, reason about, and creatively rework the material. Having a skill and knowing something don't boil down to the stuff you're able to do or the facts you can trot out. Those are the superficial parts, incidental to learning itself.

When you rely on LLMs, I think you're essentially shearing away the entire problem space and replacing the general problem-solving skill with the superficial stuff to which that skill applies. You wind up with a literally trivial version of the understanding you'd have if you'd worked through the issues and problems yourself.

I don't know whether this makes sense. It's a thought I've only recently been turning over in my head, and I'm not sure I can articulate it well. It started with my watching an interview with a historian of authoritarianism, marveling at the polish of her answers, and realizing that her expertise has nothing to do with the specific insights or historical facts she was trotting out in answer to viewer questions. It has everything to do with a deeper, subtending faculty that one develops through years of intensive study.

Edit: here's the video -- https://m.youtube.com/watch?v=vK6fALsenmw . The historian is clearly incredibly, incredibly smart, and the questions aren't terribly difficult, but I was still bowled over by just how good her answers are -- the diverse base of knowledge that she draws on, the fluent ease with which she ties together the historical, psychological, and sociological threads.


Woah, there's a lot to unpack here; I'll need some time. And it's difficult to keep coming back to this discussion. If you happen to have a Mastodon handle, could we exchange messages there? My handle is @[email protected]




