year: 2014
paper: https://arxiv.org/pdf/1406.2661.pdf
website:
code:
status: reference
type: revolutionary


Takeaways

Minimax loss 1

In the paper that introduced GANs, the generator tries to minimize the following function while the discriminator tries to maximize it:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

In this function:

  • $D(x)$ is the discriminator’s estimate of the probability that real data instance $x$ is real.
  • $\mathbb{E}_x$ is the expected value over all real data instances.
  • $G(z)$ is the generator’s output when given noise $z$.
  • $D(G(z))$ is the discriminator’s estimate of the probability that a fake instance is real.
  • $\mathbb{E}_z$ is the expected value over all random inputs to the generator (in effect, the expected value over all generated fake instances $G(z)$).
  • The formula derives from the cross-entropy between the real and generated distributions, made explicit below. 2
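
One way to see the cross-entropy connection (a sketch; footnote 2 has the full derivation): label real samples $y = 1$ and generated samples $y = 0$, and write the discriminator’s binary cross-entropy loss over both sources. It is exactly the negative of the value function above:

$$\mathcal{L}_{\text{BCE}}(D) = -\mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] - \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] = -V(D, G)$$

So the discriminator maximizing $V(D, G)$ is the same thing as minimizing a standard binary cross-entropy classification loss.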

The generator can’t directly affect the $\log D(x)$ term in the function, so, for the generator, minimizing the loss is equivalent to minimizing $\log(1 - D(G(z)))$.
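
Below is a minimal PyTorch-style sketch of both sides of the objective. It assumes `D` and `G` are modules, that `D` ends in a sigmoid (so its output is a probability), and that `z` is a batch of noise vectors; the names `discriminator_loss`, `generator_loss`, and `real_batch` are illustrative, not from the paper:

```python
import torch

def discriminator_loss(D, G, real_batch, z):
    # D maximizes E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    # so we minimize the negative of that sum.
    real_scores = D(real_batch)        # D(x), probabilities in (0, 1)
    fake_scores = D(G(z).detach())     # D(G(z)); detach() keeps gradients out of G
    return -(torch.log(real_scores).mean()
             + torch.log(1.0 - fake_scores).mean())

def generator_loss(D, G, z):
    # G minimizes E_z[log(1 - D(G(z)))], the original minimax form.
    fake_scores = D(G(z))
    return torch.log(1.0 - fake_scores).mean()
```

The paper itself notes that $\log(1 - D(G(z)))$ saturates early in training, so implementations often have the generator maximize $\log D(G(z))$ instead (the non-saturating loss).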

[figure: GAN]

Footnotes

  1. https://developers.google.com/machine-learning/gan/loss#minimax-loss

  2. GPT-4 derivation of the equivalence between cross-entropy and the minimax loss.