Longform creative writing is possibly the least valid or interesting use case for LLMs.
Why should I be bothered to read what nobody could be bothered to write?
The point of writing is communication, and creative writing is specifically about the human experience… something an LLM can mimic but never speak to authoritatively.
> Why should I be bothered to read what nobody could be bothered to write?
There are some long-abandoned fanfictions that nobody is going to continue, and I would have liked to see a continuation of sufficient quality.
Well, LLMs aren't just randomly spitting out garbled text either. Humans create and curate the training data, humans design the models, and humans train them.
Yes, but then again, completely no. I mean, a procedural generation algorithm for a game world is an artwork tuned to produce results the artist likes, usually with care to allow some cases and exclude others and meet a particular vision. It's not very different from a pre-made world, except using the RNG as one of its building blocks.
You (the ostensible writer or publisher) have to lie forever for me to continue reading your content in this case. The moment it comes out that any of an author's or outlet's writing is AI-generated, I will blacklist them from my reading list forever. The point of reading (to me) is to commune with fellow humans and learn from their lived experience, not just to consume prose.
It's not the most enticing thing. If the prompt is nicely crafted, you get more intentional results, and maybe that's interesting. Seems a tough sell. I suppose there are authors like Simenon and Pratchett who would churn out books at the rate of at least one a year, and fans might wish for increased output, so maybe there's a niche for a slop-assisted author.
It's just one pretty trivial example, but many months ago I had to develop some backstory for my character in a D&D campaign. I had some character background and some paragraphs I wrote to start off with, but I wasn't sure what direction to go exactly and was trying to come up with different ideas. I dropped what I had into a prompt built to set up a pseudo-text-adventure game using a local WizardLM model running through KoboldCpp, then ran through it interactively until it got incoherent, about 100 times in a row. This yielded some new ideas, some old ones, and overall an interesting distribution of outcomes, where 2 or 3 happened most commonly and all 100 were basically variations on about 10.
I won't argue that any of what the LLM came up with would stand on its own as particularly interesting - often quite the opposite - but it showed me all the "obvious directions to go" as well as some other RPG clichés that I could either adopt or choose to avoid. Ultimately it served as an excellent brainstorming assistant, with interesting ideas and the ability to carry them through and embellish them with enough detail to get a sense of what works and what doesn't.
What was the process you went through interactively? Regenerate until something interesting comes out and then partially edit it? I'm curious about what you found works.
The KoboldCpp UI I was using had a pretty straightforward way of setting up a chatbot-style dialogue interface that would send the dialogue context so far with each message input, along with some supporting prompt infrastructure reminding the model of how it's supposed to reply based on that context. It also allowed editing the context directly; but I didn't often do that except to correct simple one-off inaccuracies, instead usually opting to restart from the beginning when things would go off-track.
When the model would reply as multiple characters at once, or interleave narration with the dialogue, I'd usually just play along - most commonly it would decide for 2 or 3 steps to carry on the dialogue just hallucinating the things I would say.
I did a lot of experimenting with different ways to frame the dialogue, from a Zork-like adventure game, to an instant messenger chat, to just paragraphs of prose with dialogue mixed in as if from a novel. I found all of these methods to have different strengths or weaknesses, but the model also tended to blur between them after a few messages so I'd just play along as far as I could each time.
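The loop described above - resend the whole dialogue so far with each input, plus a standing reminder of how the model should reply - can be sketched in a few lines. This is a minimal illustration of the general pattern, not KoboldCpp's actual API; the function names, the reminder text, and the character-based trimming are all assumptions.

```python
# Standing instruction prepended to every prompt, reminding the model of
# the framing (here, the Zork-like adventure framing mentioned above).
REMINDER = ("[You are the narrator of a text adventure. Reply in second "
            "person, one short paragraph per turn.]")

def build_prompt(history, user_input, max_chars=4000):
    """Join the reminder, all prior turns, and the new input into one
    prompt, dropping the oldest turns first if the context grows too long.
    (Real front-ends trim by tokens, not characters; this is a sketch.)"""
    turns = history + [f"Player: {user_input}", "Narrator:"]
    while len("\n".join([REMINDER] + turns)) > max_chars and len(turns) > 2:
        turns.pop(0)  # oldest turn falls out of the context window
    return "\n".join([REMINDER] + turns)

def step(history, user_input, generate):
    """One interactive step: build the prompt, call the model, and record
    both sides of the exchange so later turns see the full context."""
    reply = generate(build_prompt(history, user_input))
    history.extend([f"Player: {user_input}", f"Narrator: {reply}"])
    return reply
```

Editing `history` directly corresponds to editing the context to fix one-off inaccuracies, and restarting from the beginning is just clearing it.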
> Why should I be bothered to read what nobody could be bothered to write?
In an art-criticism sense I broadly agree with you, but I think you go too far from a reader-experience point of view.
> The point of writing is communication and creative writing is specifically about the human experience
That's a point of writing, but the point of reading is only sometimes about communication. It's also about entertainment, enrichment, or expressing half-formed thoughts and feelings.
Take a step back from the autonomously-written novel and imagine something a bit more collaborative. Many players of open-world-ish games develop an emergent story; what if that story could be semi-automatically novelized to document the unique narrative of the playthrough?
SimCity 2000 had "mad-lib" newspaper articles that commented on the city's status; consider ones written by an LLM with full knowledge of the game's state and context.
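To make the contrast concrete, here is a toy sketch of both approaches: the mad-lib style fills slots in a fixed template, while the LLM variant would serialize the full game state into a prompt and let a model write the article. Everything here is hypothetical - the field names, the template, and the prompt wording are illustrations, not anything from SimCity 2000.

```python
def madlib_article(state):
    """Classic mad-lib: a canned template with blanks filled from state."""
    return (f"{state['city']} Tribune: Population hits "
            f"{state['population']:,} as mayor {state['mayor']} "
            f"battles a {state['crisis']}.")

def llm_article_prompt(state):
    """LLM variant: hand the whole state to the model as context, so it
    can comment on any combination of facts rather than one fixed slot
    per sentence."""
    facts = "\n".join(f"- {k}: {v}" for k, v in sorted(state.items()))
    return ("Write a short, wry newspaper article about this city, "
            "grounded only in these facts:\n" + facts)

state = {"city": "Port Vale", "population": 48210,
         "mayor": "R. Diaz", "crisis": "budget shortfall"}
```

The difference is that the template can only ever say what its author anticipated, while the prompt-based version scales with whatever state the game exposes.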
> something a LLM can mimic but never speak to authoritatively.
Supposing a reader can't tell the difference between an average human-written novel and an average LLM-written novel, where does this authority lie?
Why limit yourself only to experiences curated by humans? You wouldn't do that outside of reading; it'd be an impossible limitation to impose. The only reason reading has been that way is because only humans could write. Now machines can write, and I can tinker and play with reading the same way I would with anything else.
Wait, I wouldn't want to get recipe or furniture suggestions from a human? It would be impossible? I can see how it could be a burden, but can't see how it would be impossible... (Or am I misunderstanding?)
OP said "why should I be bothered to read what nobody could be bothered to write," which I think is silly, because I enjoy all kinds of things that nobody bothered to create. Things that exist naturally, randomly, or arise through rules or physical laws. Waves on the beach, a fractal, a game of sudoku, a campfire at night.
Until recently, humans were the only source of writing. If you wanted to read something, it had to be something written by a human. Now you can read things that weren't written by a human, just like you can watch or smell or feel things not created by humans. I think that's neat. I don't think it devalues writing, at least not inherently.
I guess I agree with your overall point, but I see things like music/tv/movies/images differently from programs.
The purpose of a program (for me) is to solve a particular problem, or help me in some other way. Programs usually fall into "Works well enough" or "Not for me", and it's easy to tell, even without using them, which category I'll file them into.
But media is different. It's not supposed to "solve" anything, just make me feel something for a duration of time, and after that it's "consumed" until I forget about it, or until I engage with it again. I usually don't know how I feel about the thing until after I've consumed it.
Personally, I've found LLMs to greatly help with creating small utility programs for myself, which I've been doing since I started programming. Now I spend maybe 20-30 minutes before a utility is helpful (mostly on refactoring by hand, which takes time), versus the many hours it used to take to put together the same thing.
Media that is 100% created with ML tooling tends to be very different in quality from human-made media. I'm not 100% convinced that's because of the tooling itself, so much as because the people using the tooling don't have enough prior experience creating media to know what to create, or what it should be.
Personally what I find interesting is getting insight into the trajectory of model abilities over time. Over the time I've been running these benchmarks, the writing has gone from pure slop, to broadly competent (at short form at least) and occasionally compelling.
I don't think it will be much longer before they're generating content you'll actually want to read.
Meanwhile, a lot of people are finding LLMs useful for partner-writing, lower-stakes prose, or roleplay.
As a follow-up, this is true not only at the level of individual experience but at the societal level.
"AI" is supposed to make our lives better. Proponents want it to replace soul-crushing, boring clerical work, to free us up for things more meaningful to us, which for many people is to create and consume art.
Why do tech people keep having the impulse to replace the best parts of being human?
Are you really telling me that the vast corpus of human-generated creative writing isn't enough for you, and you have an itch to read something that can only be written by an AI? That seems crazy to me.
...or are you just a publisher, and wish you could make more money without needing to pay those pesky human authors? And in that case, I won't try to argue with you -- just give you a heartfelt middle finger.
I think some (a lot?) of people perceive “draw the rest of the fucking owl” as “soul-crushing, boring clerical work”, and actively seek out AI vibe-drawing as an alternative (either ignorant or uncaring of the limitations thereof). It’s not specific to tech people, really.