I have mixed feelings. Generally I don’t think that LLM output should be used to create anything that a human is supposed to read, but I do carve out a big exception for people using LLMs for translation/writing in a second language.
At the same time, however, the people who most need an LLM for this are the worst positioned to identify weaknesses in its output; e.g., just as I couldn’t write Spanish text myself, I couldn’t evaluate the quality of a Spanish translation an LLM produced. Taken to an extreme, students today could rely on LLMs, trust them without knowing any better, and grow to trust them for everything without knowing anything, never able to evaluate their quality or performance.
The one area where I disagree with the author, though, is coding. As much as I like algorithms, code is written to be read by computers, and I see nothing wrong with computers writing it. LLMs have saved me tons of time writing simple functions, so I can speed through a lot of the boring legwork in projects and focus on the interesting stuff.
I think Miyazaki said it best: “I feel… humans have lost confidence”. I believe that LLMs can be a great tool for automating a lot of boring and repetitive work that people do every day, but thinking that they can replace the unique perspectives of people is sad.
I actually feel strongly that code is very much written for us humans. Sure, it's a set of instructions intended to be machine-read and executed, but so much of _how_ code is written is focused on the human element that's always been part of software development. OOP, design patterns, etc. don't exist because there's some great benefit to the machines running the code; we humans benefit as the ones maintaining and extending the functionality of the application.
I'm not making a judgement about the use of LLMs for writing code, just noting that code serves the purpose of expressing meaning to machines as well as to humans.
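To make that concrete, here's a contrived Python sketch (the names and data are made up purely for illustration): both functions do exactly the same thing as far as the interpreter is concerned, and only a human reader cares about the difference.

    # Both functions are identical to the machine; the difference
    # exists purely for the humans who have to maintain this code.

    def f(a, b):
        return [x for x in a if x[1] > b]

    def filter_orders_over_threshold(orders, threshold):
        """Return (customer, total) pairs whose total exceeds threshold."""
        return [order for order in orders if order[1] > threshold]

    orders = [("alice", 120.0), ("bob", 35.5), ("carol", 87.25)]
    assert f(orders, 50.0) == filter_orders_over_threshold(orders, 50.0)

An interpreter happily takes either; the second version exists for the person reading it six months later.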