I agree -- though I'm not sure what the right name for this technique would be. Maybe "relative-probability GAN" is best.
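For concreteness, here's a minimal sketch of what I understand the core idea to be: instead of the discriminator scoring each sample in isolation, it scores the *difference* between a real and a fake sample's critic outputs, i.e. the relative probability that the real one is more realistic. (This is my paraphrase of the relativistic loss, with made-up variable names; the paper's averaged variant differs slightly.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def standard_d_loss(c_real, c_fake):
    # Ordinary GAN discriminator loss: each sample is judged on its own,
    # as "real" (score high) or "fake" (score low), independently.
    return (-np.mean(np.log(sigmoid(c_real)))
            - np.mean(np.log(1.0 - sigmoid(c_fake))))

def relativistic_d_loss(c_real, c_fake):
    # "Relative-probability" discriminator loss: the critic outputs are
    # compared pairwise, so the loss measures how much MORE realistic
    # the real sample looks than the paired fake sample.
    return -np.mean(np.log(sigmoid(c_real - c_fake)))
```

When the critic gives real and fake samples identical scores, the relativistic loss sits at log 2 (a coin flip on which is more realistic), regardless of the absolute score level -- which is the sense in which only *relative* judgments matter.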
I also wondered, before clicking, what a "relativistic" GAN might be: perhaps, as a neuron's activity grows larger and larger, it becomes harder and harder for it to increase further? But that's already true of sigmoidal activation functions.