@wut0n9
Created February 13, 2019 08:48
Learning rate scheduling #learning_rate
import tensorflow as tf

learning_rate = 0.1
decay_rate = 0.96
global_step = tf.Variable(0, trainable=False)  # passed to the optimizer's minimize() below; incremented by 1 on every training step

# new_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
# The learning rate is decayed every decay_steps iterations.
# staircase=True floors global_step / decay_steps, so the rate drops in discrete steps.
learning_rate_decay_scheduler = tf.train.exponential_decay(learning_rate=learning_rate,
                                                           global_step=global_step,
                                                           decay_steps=400,
                                                           decay_rate=decay_rate,
                                                           staircase=True)
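# Worked illustration (an addition, not part of the original gist): with the values above,
# staircase decay gives lr = 0.1 * 0.96 ** (global_step // 400), so the rate stays at 0.1
# for steps 0-399, drops to 0.096 at step 400, to 0.09216 at step 800, and so on.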
# Model defined elsewhere in this gist's context; x, seqlen, weights and biases are assumed inputs.
pred = BiMultiLayerDynamicRnn(x, seqlen, weights, biases)

# Define loss and optimizer.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
# Passing global_step to minimize() makes the optimizer increment it on every update, which advances the schedule.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate_decay_scheduler).minimize(
    cost, global_step=global_step)
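
# Minimal training-loop sketch (an addition, not part of the original gist). It assumes x, y and
# seqlen are tf.placeholder tensors and that next_batch() is a hypothetical helper yielding
# (batch_x, batch_y, batch_seqlen). Because minimize() receives global_step, every run of the
# train op advances the step counter and, with it, the decayed learning rate.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2000):
        batch_x, batch_y, batch_seqlen = next_batch()  # hypothetical data helper
        _, lr = sess.run([optimizer, learning_rate_decay_scheduler],
                         feed_dict={x: batch_x, y: batch_y, seqlen: batch_seqlen})
        if step % 400 == 0:
            print("step %d, learning rate %.5f" % (step, lr))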