@jeffling
Created July 16, 2019 00:10
Toy BigGAN-Deep with CIFAR-10 for CompareGAN
# Toy BigGAN-Deep settings on CIFAR-10 for compare_gan, adapted from the
# BigGAN ImageNet 128 configuration.
# BigGAN: http://arxiv.org/abs/1809.11096
dataset.name = "cifar10"
options.z_dim = 120
options.architecture = "resnet_biggan_deep_arch"
ModularGAN.conditional = True
options.batch_size = 128
options.gan_class = @ModularGAN
# Penalty weight; "lamba" (not "lambda") is the parameter name as spelled in compare_gan.
options.lamba = 1
options.training_steps = 1000
weights.initializer = "orthogonal"
spectral_norm.singular_value = "auto"
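# Orthogonal weight initialization and spectral normalization (in both G and D) follow BigGAN.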
# Generator
G.batch_norm_fn = @conditional_batch_norm
G.spectral_norm = True
ModularGAN.g_use_ema = True
resnet_biggan_deep.Generator.embed_y = True
standardize_batch.decay = 0.9999
standardize_batch.epsilon = 1e-5
standardize_batch.use_moving_averages = False
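# The generator uses class-conditional batch norm with a learned class embedding (embed_y),
# and an exponential moving average of its weights is kept for evaluation (g_use_ema).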
# Discriminator
options.disc_iters = 2
D.spectral_norm = True
resnet_biggan_deep.Discriminator.project_y = True
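# Two discriminator updates per generator update; project_y enables the
# projection-discriminator conditioning used by BigGAN.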
# Loss and optimizer
loss.fn = @hinge
penalty.fn = @no_penalty
ModularGAN.g_lr = 0.0002
ModularGAN.g_optimizer_fn = @tf.train.AdamOptimizer
ModularGAN.d_lr = 0.0005
ModularGAN.d_optimizer_fn = @tf.train.AdamOptimizer
tf.train.AdamOptimizer.beta1 = 0.0
tf.train.AdamOptimizer.beta2 = 0.999
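# Hinge loss with no gradient penalty; Adam with beta1 = 0 (the SAGAN/BigGAN setting)
# and a higher learning rate for D (5e-4) than for G (2e-4).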
z.distribution_fn = @tf.random.normal
eval_z.distribution_fn = @tf.random.normal
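# Latent vectors for training and evaluation are drawn from tf.random.normal (standard normal by default).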
run_config.iterations_per_loop = 500
run_config.save_checkpoints_steps = 2500
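
The file above is an ordinary gin configuration: each line binds a value to a parameter of a @gin.configurable function or class inside compare_gan. The usual way to consume it is compare_gan's main.py entry point (invoked with --gin_config and --model_dir; check the repository README for the exact command). The snippet below is only a minimal, self-contained sketch of the binding mechanism itself, using the gin-config package alone; the options() function here is a hypothetical stand-in, not compare_gan's real configurable.

# Minimal sketch of how gin bindings like the ones above are consumed.
# Assumes only `pip install gin-config`; `options` is a stand-in, not a compare_gan symbol.
import gin


@gin.configurable
def options(batch_size=64, z_dim=64, architecture="dcgan_arch"):
    """Receives whatever values the parsed config binds to its parameters."""
    return {"batch_size": batch_size, "z_dim": z_dim, "architecture": architecture}


# A few bindings copied from the config above, parsed from a string instead of a file.
gin.parse_config("""
options.batch_size = 128
options.z_dim = 120
options.architecture = "resnet_biggan_deep_arch"
""")

print(options())
# {'batch_size': 128, 'z_dim': 120, 'architecture': 'resnet_biggan_deep_arch'}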