@zredlined
Created January 31, 2020 22:07
import logging

import tensorflow as tf

logging.info("Utilizing differential privacy in optimizer")

# Wrap the standard RMSProp optimizer in a differentially private Gaussian
# optimizer. make_dp_gaussian_optimizer is defined elsewhere in this project
# (a factory in the style of TensorFlow Privacy's optimizer wrappers).
RMSPropOptimizer = tf.compat.v1.train.RMSPropOptimizer
DPRmsPropGaussianOptimizer = make_dp_gaussian_optimizer(RMSPropOptimizer)
optimizer = DPRmsPropGaussianOptimizer(
    l2_norm_clip=store.l2_norm_clip,
    noise_multiplier=store.noise_multiplier,
    num_microbatches=store.microbatches,
    learning_rate=store.learning_rate)

# Compute a vector of per-example losses rather than their mean over a
# minibatch, so gradients can be manipulated (clipped and noised) per
# training point.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.losses.Reduction.NONE)
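To illustrate what the DP optimizer's parameters do, here is a minimal numpy sketch (not the gist's actual implementation) of one DP-SGD step: each per-example gradient is clipped to an L2 norm of `l2_norm_clip`, the clipped gradients are summed, and Gaussian noise scaled by `noise_multiplier * l2_norm_clip` is added before averaging. The gradient values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
l2_norm_clip, noise_multiplier = 1.0, 1.1

# Four synthetic per-example gradients, each of dimension 3.
grads = rng.normal(size=(4, 3))

# Clip each example's gradient so its L2 norm is at most l2_norm_clip.
norms = np.linalg.norm(grads, axis=1, keepdims=True)
clipped = grads / np.maximum(1.0, norms / l2_norm_clip)

# Sum the clipped gradients and add Gaussian noise whose scale is
# noise_multiplier * l2_norm_clip, then average over the batch.
noised_sum = clipped.sum(axis=0) + rng.normal(
    scale=noise_multiplier * l2_norm_clip, size=grads.shape[1])
avg_grad = noised_sum / len(grads)
```

The clipping bounds each example's influence on the update, and the noise magnitude relative to that bound is what the `noise_multiplier` setting controls.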
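The `reduction=NONE` setting matters because DP-SGD needs one loss value per example, not a single batch mean. A small numpy-only sketch (assumed helper, mirroring sparse categorical cross-entropy on logits) shows the shape of the result:

```python
import numpy as np

def per_example_sparse_ce(logits, labels):
    # Numerically stable log-softmax, computed row by row.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # One cross-entropy value per example, with no reduction over the batch.
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
labels = np.array([0, 1])
losses = per_example_sparse_ce(logits, labels)
# losses has shape (2,): one loss per training example.
```

Because each example keeps its own loss, the optimizer can compute, clip, and noise each example's gradient separately before aggregating.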