
I agree -- though I'm not sure what the right name for this technique would be. Maybe "relative-probability GAN" is best.
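For context on why "relative-probability" fits: in a relativistic GAN the discriminator doesn't score a sample in isolation, but estimates the probability that a real sample is more realistic than a fake one, via the difference of critic scores. A minimal NumPy sketch of the losses, assuming a critic `C` that outputs unbounded scores (the function and variable names here are illustrative, not from any particular library):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_d_loss(c_real, c_fake):
    # Discriminator maximizes the probability that real samples
    # are more realistic than fakes: sigmoid(C(x_real) - C(x_fake)).
    return -np.mean(np.log(sigmoid(c_real - c_fake)))

def relativistic_g_loss(c_real, c_fake):
    # Generator plays the mirrored objective: it wants fakes to be
    # judged more realistic than real samples.
    return -np.mean(np.log(sigmoid(c_fake - c_real)))

# Example: critic scores for a small batch.
c_real = np.array([1.5, 2.0, 0.5])
c_fake = np.array([-0.5, 0.0, 1.0])
d_loss = relativistic_d_loss(c_real, c_fake)
g_loss = relativistic_g_loss(c_real, c_fake)
```

Note the symmetry: the generator's loss is just the discriminator's loss with the roles of real and fake swapped, which is what makes the "relative" framing natural.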

I also wondered before clicking what a "relativistic" GAN would be: maybe that as a neuron's activation grows larger, it becomes harder and harder for it to increase further? But that's already true of sigmoid activation functions.




I was excited to see how the fabric of space and time was being used to generate adversarial examples. =)




