> They are invariably verbose, interminably waffly, and insipidly fixated on the bullet-points-with-bold style.
No, this is just the de facto "house style" of ChatGPT / GPT models, in much the same way that that particular Thomas Kinkade-like style is the de facto "house style" of Stable Diffusion models.
You can very easily tell an LLM in your prompt to respond using a different style. (Or you can set it up to do so by telling it that it "is" or "is roleplaying" a specific type-of-person — e.g. an op-ed writer for the New York Times, a textbook author, etc.)
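For example, with OpenAI's Python SDK the persona lives in the system message. A minimal sketch (the model name and the persona wording here are illustrative placeholders, not a recommendation):

```python
# Minimal sketch: steering style via a system-message persona.
# Assumes the OpenAI Python SDK v1+ and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model works the same way
    messages=[
        # The persona/style directive goes here, not in the user prompt.
        {
            "role": "system",
            "content": (
                "You are an op-ed columnist for a major newspaper. "
                "Write in flowing prose: no bullet points, no bold text, "
                "no headings."
            ),
        },
        {"role": "user", "content": "Why do database indexes speed up reads?"},
    ],
)
print(response.choices[0].message.content)
```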
I was surprised to see such world-weary criticism of the bullet-points-with-bold style in TFA: it's long been what I've reached for when writing for a technical audience, whether that's in a wiki page, a design doc, a README, a PR, or even a whole book.
I feel like for most of my audiences it provides the right anchor points for effective skimming, while still giving me room for the further detail and explanation that some readers want.
(And responding to my sibling comment, I also use em dashes and semicolons all the time. Has my brain secretly always been an LLM??)
One of my issues with LLMs is how closely they match the academic, technical, and corporate styles of speaking I've learned over the years. Now when I write, people ignore me because they assume I'm just pasting LLM output.
You are not alone.
Nowadays, I'm ashamed of using words like "moreover", "firstly", and "furthermore". Pre-LLM, people used to compliment me on my writing style.
That was not a good attempt at changing the style.
You can't just say "don't sound like an LLM." The LLM does not in fact know that it is "speaking like an LLM"; it just thinks that it's speaking the way the "average person" speaks, according to everything it's ever been shown. If you told it "just speak like a human being"... that's what it already thought it was doing!
You have to tell the LLM a specific way to speak. Like directing an image generator to use a specific visual style.
You can say "ape the style of [some person who has a lot of public writing in the base model's web training corpus — Paul Graham, maybe?]". But that coverage will be spotty, and it's also questionably ethical (just like style-aping in image generation).
But an LLM will do even better if you tell it to speak in some "common mode" of speech: e.g. "an email from HR", "a shitpost rant on Reddit", or "an article in a pop-science magazine."
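Concretely, that just means swapping the style label in the system message. A sketch building on the snippet above (the labels are the ones from this comment; the helper name and prompt are my own):

```python
# Sketch: the same request rendered in different "common modes" of speech.
from openai import OpenAI

client = OpenAI()

def in_style(style: str, prompt: str) -> str:
    """Ask for a response written as the named 'common mode' of speech."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": f"Respond as {style}. Stay fully in that register.",
            },
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

for style in ("an email from HR",
              "a shitpost rant on Reddit",
              "an article in a pop-science magazine"):
    print(f"--- {style} ---")
    print(in_style(style, "The office coffee machine is broken again."))
```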
People just don't ever bother to do this.