The real innovation will come once someone uses a Generative AI to make something, then uses a predictive AI to rate its accuracy, looping back until the output passes the predictive AI.
Basically a form of adversarial training/generation.
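That loop can be sketched in a few lines. This is a toy illustration, not a real model: `generate` and `critic_score` are hypothetical stand-ins (here just random bits and a bit-counting score) for a generative model and a learned predictive scorer.

```python
import random

def generate(attempt):
    """Stand-in generative model: proposes a random candidate (a list of bits)."""
    return [random.randint(0, 1) for _ in range(8)]

def critic_score(candidate):
    """Stand-in predictive model: a toy 'accuracy' score (fraction of 1-bits)."""
    return sum(candidate) / len(candidate)

def generate_until_accepted(threshold=0.75, max_attempts=1000):
    """Generate, score with the critic, and retry until a candidate passes."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(attempt)
        if critic_score(candidate) >= threshold:
            return candidate, attempt
    raise RuntimeError("critic never accepted a candidate")

best, tries = generate_until_accepted()
print(f"accepted after {tries} attempts, score {critic_score(best):.2f}")
```

In a real system the critic would be a trained classifier or reward model, and its rejections could also be fed back as training signal, which is what makes the setup adversarial rather than just rejection sampling.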
Bilateral "thinking" makes sense, and you can even feed a generative AI's output back into itself for simple error correction.
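The self-feedback idea looks something like this. Again a toy sketch: `model` is a hypothetical stand-in for one generative model used in two modes, where "review" mode re-reads its own draft and fixes one simple error class (doubled words).

```python
def model(text, mode="generate"):
    """Stand-in for a single generative model used in two modes."""
    if mode == "review":
        # Self-correction pass: re-read own output and drop doubled words.
        words = text.split()
        fixed = [w for i, w in enumerate(words) if i == 0 or w != words[i - 1]]
        return " ".join(fixed)
    # "Generate" mode: a deliberately flawed first draft.
    return "the the cat sat sat on the mat"

draft = model("describe the cat")
revised = model(draft, mode="review")
print(revised)  # "the cat sat on the mat"
```

The point is that the same model plays both roles, so no second network is needed for this weak form of error correction; the stronger setups in the rest of the thread add a separate critic.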
I believe we'll see the most success/accuracy once you have a generative AI compare itself to itself, monitored by a GAN, which then spits out its answer while retaining some knowledge of how it came to that conclusion. A tricameral mind.
I hadn't thought about human feedback being an adversarial system, but I guess that makes sense, since it's basically a classifier saying "you got this wrong".