Summary of "Conditional Generative Adversarial Nets" Paper

Conditional Generative Adversarial Nets

Introduction

  • Standard GANs provide no control over the modes of the data being generated. The paper extends GANs to a conditional model by conditioning both the generator and the discriminator on extra information y, such as class labels or data from other modalities, so that generation can be directed.
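
For reference, the paper's objective is the standard two-player minimax game with both terms conditioned on y:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x \mid y)\right] +
  \mathbb{E}_{z \sim p_z(z)}\!\left[\log\bigl(1 - D(G(z \mid y))\bigr)\right]
```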

Architecture

  • Feed y into both the generator and the discriminator as an additional input layer, so that y and the input (z for the generator, x for the discriminator) are combined in a joint hidden representation (a minimal sketch follows below).
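
As a concrete illustration, here is a minimal PyTorch sketch of this conditioning scheme; it is not the paper's code. All layer sizes are illustrative, and y is merged by plain concatenation, whereas the paper's networks embed the two inputs separately before joining them (see the unimodal sketch below).

```python
import torch
import torch.nn as nn

z_dim, y_dim, x_dim = 100, 10, 784  # assumed dimensions for illustration

# Both networks receive y as an additional input, here via concatenation.
G = nn.Sequential(nn.Linear(z_dim + y_dim, 256), nn.ReLU(),
                  nn.Linear(256, x_dim), nn.Sigmoid())
D = nn.Sequential(nn.Linear(x_dim + y_dim, 256), nn.ReLU(),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()

def d_loss(x, y, z):
    # Discriminator sees (x, y) pairs: real images with their labels
    # versus generated images with the labels they were conditioned on.
    real = D(torch.cat([x, y], dim=1))
    fake = D(torch.cat([G(torch.cat([z, y], dim=1)).detach(), y], dim=1))
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def g_loss(z, y):
    # Generator tries to make D accept its samples for the given y.
    fake = D(torch.cat([G(torch.cat([z, y], dim=1)), y], dim=1))
    return bce(fake, torch.ones_like(fake))
```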

Experiment

Unimodal Setting

  • MNIST images are generated conditioned on their class labels y.
  • In the generator, z (random noise) and y are mapped to ReLU hidden layers of size 200 and 1000 respectively, and the two are combined into a joint ReLU layer of dimensionality 1200.
  • The discriminator maps x (the input image) and y to separate maxout layers; the joint maxout layer is fed to a sigmoid output layer (see the sketch after this list).
  • The results do not outperform the state of the art but serve as a proof of concept.
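
A sketch of these MNIST networks, assuming PyTorch: the 200/1000/1200 generator sizes come from the bullets above, while the 100-dim noise, 784-dim image output, and the maxout widths and piece counts are assumptions the summary does not specify.

```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """Maxout unit: linear map to out_dim * pieces units, then max over pieces."""
    def __init__(self, in_dim, out_dim, pieces):
        super().__init__()
        self.out_dim, self.pieces = out_dim, pieces
        self.lin = nn.Linear(in_dim, out_dim * pieces)

    def forward(self, x):
        h = self.lin(x).view(-1, self.out_dim, self.pieces)
        return h.max(dim=2).values

class Generator(nn.Module):
    def __init__(self, z_dim=100, y_dim=10):  # z_dim is an assumption
        super().__init__()
        self.z_layer = nn.Sequential(nn.Linear(z_dim, 200), nn.ReLU())   # 200-unit ReLU for z
        self.y_layer = nn.Sequential(nn.Linear(y_dim, 1000), nn.ReLU())  # 1000-unit ReLU for y
        self.joint = nn.Sequential(nn.Linear(200 + 1000, 1200), nn.ReLU(),  # joint 1200-dim ReLU
                                   nn.Linear(1200, 784), nn.Sigmoid())      # 28x28 image (assumed)

    def forward(self, z, y):
        return self.joint(torch.cat([self.z_layer(z), self.y_layer(y)], dim=1))

class Discriminator(nn.Module):
    def __init__(self, x_dim=784, y_dim=10):
        super().__init__()
        self.x_layer = Maxout(x_dim, 240, 5)  # maxout over x (width/pieces assumed)
        self.y_layer = Maxout(y_dim, 50, 5)   # maxout over y (width/pieces assumed)
        self.joint = nn.Sequential(Maxout(240 + 50, 240, 4),  # joint maxout layer
                                   nn.Linear(240, 1), nn.Sigmoid())

    def forward(self, x, y):
        return self.joint(torch.cat([self.x_layer(x), self.y_layer(y)], dim=1))
```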

Multimodal Setting

  • Images (from Flickr) are mapped to labels (user tags) to obtain a one-to-many mapping from images to tags.
  • Image features are extracted using a convolutional network and text features using a language model.
  • Generative Model
    • Maps noise and the convolutional image features to a single 200-dimensional representation (the generated tag's word vector).
  • Discriminator Model
    • Combines the representations of word vectors (corresponding to tags) and image features (see the sketch after this list).
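
A hedged sketch of this pipeline, assuming PyTorch: the 4096-dim image features, 100-dim noise, and hidden-layer sizes are assumptions (the summary only fixes the 200-dim output), and precomputed image features stand in for the convolutional model.

```python
import torch
import torch.nn as nn

img_dim, z_dim, word_dim = 4096, 100, 200  # image features, noise, tag word vector

class TagGenerator(nn.Module):
    """Maps noise + precomputed convolutional image features to a single
    200-dimensional representation in the tag word-vector space."""
    def __init__(self):
        super().__init__()
        self.z_layer = nn.Sequential(nn.Linear(z_dim, 500), nn.ReLU())
        self.img_layer = nn.Sequential(nn.Linear(img_dim, 500), nn.ReLU())
        self.joint = nn.Linear(500 + 500, word_dim)

    def forward(self, z, img_feat):
        h = torch.cat([self.z_layer(z), self.img_layer(img_feat)], dim=1)
        return self.joint(h)

class TagDiscriminator(nn.Module):
    """Combines the word-vector and image representations and scores
    whether the (image, tag) pair looks real."""
    def __init__(self):
        super().__init__()
        self.word_layer = nn.Sequential(nn.Linear(word_dim, 500), nn.ReLU())
        self.img_layer = nn.Sequential(nn.Linear(img_dim, 500), nn.ReLU())
        self.joint = nn.Sequential(nn.Linear(500 + 500, 1), nn.Sigmoid())

    def forward(self, word_vec, img_feat):
        h = torch.cat([self.word_layer(word_vec), self.img_layer(img_feat)], dim=1)
        return self.joint(h)
```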

Future Work

  • While the results do not match the state of the art, they demonstrate the potential of conditional GANs, particularly in the multimodal setting.