Hacker News

The real innovation will come once someone uses a generative AI to make something, then uses a predictive AI to rate its accuracy, looping until the output passes the predictive AI.

Basically a form of adversarial training/generation.
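A minimal sketch of the loop the comment describes, in Python. The `generate` and `critic` callables are hypothetical stand-ins for a generative model and a predictive/scoring model; the names, threshold, and retry cap are all assumptions for illustration, not any specific system's API.

```python
def refine(prompt, generate, critic, threshold=0.9, max_attempts=10):
    """Regenerate until the critic's score clears the threshold.

    generate(prompt, attempt) -> candidate text (stand-in for a generative model)
    critic(candidate) -> score in [0, 1]      (stand-in for a predictive model)
    Returns the best candidate seen and its score.
    """
    best, best_score = None, float("-inf")
    for attempt in range(max_attempts):
        candidate = generate(prompt, attempt)
        score = critic(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if score >= threshold:
            break  # the candidate passed the predictive AI
    return best, best_score
```

With a toy generator and critic, the loop stops as soon as a candidate passes, otherwise it returns the best attempt within the retry budget.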




Bilateral "thinking" makes sense, and you can even feed generative AI back into itself for simple error correction.

I believe we'll see the most success/accuracy once a generative AI compares its output against itself, monitored by a GAN-style discriminator, and then spits out its answer while retaining some record of how it reached the conclusion. A tricameral mind.
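The simpler half of that idea (feeding a generative model back into itself for error correction) can be sketched as a critique-and-revise loop. Everything here is hypothetical: `model` stands in for any text-generation function, and the prompts are illustrative, not a real system's interface. The returned `trace` is the retained record of how the answer evolved.

```python
def self_correct(model, prompt, rounds=2):
    """Ask the same model to critique and revise its own draft.

    model(prompt) -> text   (stand-in for a single generative-model call)
    Returns the final draft plus the trace of intermediate drafts.
    """
    draft = model(prompt)
    trace = [draft]  # retain how the answer came to its conclusion
    for _ in range(rounds):
        critique = model(f"Find errors in: {draft}")
        draft = model(f"Rewrite '{draft}' fixing: {critique}")
        trace.append(draft)
    return draft, trace
```

The design choice is that the critic and the generator are the same model in different roles; the GAN-monitored variant would replace the critique step with a separately trained discriminator.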


Isn't this exactly how GANs work already?


Yes. But from what I've seen, no one has applied it to the latest generative AIs.


I’m pretty sure Anthropic’s Claude is doing that.

https://scale.com/blog/chatgpt-vs-claude


Maybe an adversarial approach was used in training these models in the first place?


It was: they were trained using reinforcement learning from human feedback (RLHF), with the human feedback used to create the critic.


I hadn't thought about human feedback as an adversarial system, but I guess that makes sense, since it's basically a classifier saying "you got this wrong".





