I've always wished for something like that for image generation AI. It'd be much cooler/more interesting to watch AI try to draw/paint pictures with strokes rather than just magically iterating into a fully-rendered image. I don't know what kind of dataset or architecture you could possibly apply to accomplish this, but it would be very interesting.
I get what you’re saying, but if you watch Stable Diffusion at each denoising step it’s at least kind of similar. If you keep the same seed but change a detail of the prompt, the broad “strokes” often come out exactly the same.
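A minimal sketch of why that happens, using plain NumPy rather than an actual Stable Diffusion pipeline (the function name and latent shape here are illustrative assumptions): diffusion samplers start from random Gaussian noise, and with a fixed seed that starting latent is bit-for-bit identical, so the early denoising steps lay down the same coarse composition.

```python
import numpy as np

def initial_latent(seed, shape=(4, 64, 64)):
    # A diffusion run begins from pure Gaussian noise in latent space;
    # fixing the RNG seed fixes this starting point exactly, which is
    # why the broad "strokes" match across runs with the same seed.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_latent(42)
b = initial_latent(42)   # same seed
c = initial_latent(43)   # different seed

print(np.array_equal(a, b))  # same seed -> identical starting noise
print(np.array_equal(a, c))  # different seed -> different starting noise
```

The prompt only steers how that shared noise gets denoised, so small prompt tweaks on the same seed tend to change details while the overall layout stays put.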