An implementation of InfoGAN.

@ketyi ketyi commented Feb 1, 2017

Hi Arthur,

I'm wondering whether this:

q_grads = trainerG.compute_gradients(q_loss, tvars)

is supposed to be:

q_grads = trainerQ.compute_gradients(q_loss, tvars)

If not, could you please explain a bit how the mutual-information term is worked into the GAN in your implementation?
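One thing worth noting here (a hedged aside, not from the gist itself): in TF1, `Optimizer.compute_gradients` essentially just calls `tf.gradients`, so the gradient values it returns don't depend on which optimizer object you call it on; the choice of optimizer only matters at `apply_gradients`, where learning rate and internal state (e.g. Adam moments) are used. A toy pure-Python stand-in, with finite-difference gradients, illustrates the distinction:

```python
# Toy stand-in (not TensorFlow): shows that "compute_gradients" is
# optimizer-independent, while "apply_gradients" uses optimizer state.

def numeric_grad(loss_fn, theta, eps=1e-6):
    """Central-difference gradient of a scalar loss at theta (a float)."""
    return (loss_fn(theta + eps) - loss_fn(theta - eps)) / (2 * eps)

class ToyOptimizer:
    def __init__(self, lr):
        self.lr = lr

    def compute_gradients(self, loss_fn, theta):
        # Gradient computation does not touch self.lr or any optimizer state.
        return numeric_grad(loss_fn, theta)

    def apply_gradients(self, grad, theta):
        # Only here does the optimizer's own state (here: lr) matter.
        return theta - self.lr * grad

q_loss = lambda th: (th - 3.0) ** 2   # toy "Q loss" in one parameter

trainerG = ToyOptimizer(lr=0.001)
trainerQ = ToyOptimizer(lr=0.0002)

gG = trainerG.compute_gradients(q_loss, 1.0)
gQ = trainerQ.compute_gradients(q_loss, 1.0)
print(abs(gG - gQ) < 1e-9)  # True: same gradients from either optimizer
```

So if this intuition carries over to the gist, `trainerG.compute_gradients(q_loss, tvars)` and `trainerQ.compute_gradients(q_loss, tvars)` would return identical gradients; what matters is which optimizer later applies them.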


@hoqqanen hoqqanen commented Feb 10, 2017

Love the minimalism of this example.

One question -- it seems that G is only being optimized to fool D, and not at all to maximize mutual information via Q. Is that the case, or is G getting gradients from somewhere I'm missing?

For reference, the InfoGAN paper suggests (right under Eq. (5)) that "LI can be maximized w.r.t. Q directly and w.r.t. G via the reparametrization trick", and lines 82 and 97 of https://github.com/openai/InfoGAN/blob/master/infogan/algos/infogan_trainer.py add the Q losses to the generator loss, which presumably propagates Q's errors back through G as well.
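The mechanism described above can be sketched numerically (a toy one-parameter model, not the gist's or the paper's actual code): when the Q loss is added to the generator loss, G's parameter picks up an extra gradient contribution from Q, verifiable by finite differences.

```python
# Toy sketch: adding the MI/Q term to the generator loss makes Q's error
# produce gradients for G's parameter.  All names here are illustrative.

def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

c = 0.5                      # latent code fed to the toy generator
G = lambda theta: theta * c  # toy generator output (one parameter)
Q = lambda x: x              # toy Q "network": identity reconstruction

g_adv_loss = lambda theta: (G(theta) - 1.0) ** 2   # stand-in adversarial loss
q_loss     = lambda theta: (Q(G(theta)) - c) ** 2  # stand-in MI / Q loss
lam = 1.0                                          # weighting coefficient

total = lambda theta: g_adv_loss(theta) + lam * q_loss(theta)

# With the Q term included, G's gradient picks up an extra contribution:
g_only  = numeric_grad(g_adv_loss, 2.0)
g_total = numeric_grad(total, 2.0)
print(g_total - g_only)   # nonzero: Q's loss reaches G's parameter
```

If the gist's generator loss never includes the Q term, that extra contribution is exactly what's missing.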


@hoqqanen hoqqanen commented Feb 10, 2017

Also, it looks like the continuous Q loss here is a plain reconstruction error rather than a conditional log-likelihood (conditional entropy) term. Where are the log-likelihoods, as in https://github.com/openai/InfoGAN/blob/master/infogan/algos/infogan_trainer.py#L87?
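One possible reconciliation (my reading, not confirmed by the gist): for a Gaussian Q(c|x) with the standard deviation fixed at 1, maximizing the log-likelihood is equivalent, up to an additive constant, to minimizing the squared reconstruction error, so the two formulations coincide in that special case. A small check:

```python
import math

# Gaussian log-likelihood for a continuous code c with predicted mean mu.
# With sigma fixed at 1, this differs from -0.5 * (c - mu)^2 only by a
# constant, so maximizing it == minimizing squared reconstruction error.

def gaussian_log_likelihood(c, mu, sigma):
    """log N(c | mu, sigma^2) for scalars."""
    return (-0.5 * math.log(2 * math.pi) - math.log(sigma)
            - 0.5 * ((c - mu) / sigma) ** 2)

const = -0.5 * math.log(2 * math.pi)
for c, mu in [(0.3, 0.1), (-1.2, 0.4), (2.0, 2.0)]:
    ll = gaussian_log_likelihood(c, mu, 1.0)
    print(abs(ll - (-0.5 * (c - mu) ** 2 + const)) < 1e-12)  # True
```

The paper's trainer instead uses the full log-likelihood with a predicted stddev, which a fixed-variance reconstruction loss cannot capture.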
