@d0znpp
Created December 12, 2017 00:46
def get_reward(self, action, step, pre_acc):
    # Split the flat action vector into chunks of four hyperparameters,
    # one chunk per convolutional layer.
    action = [action[0][0][x:x+4] for x in range(0, len(action[0][0]), 4)]
    # The fourth value in each chunk is that layer's dropout rate.
    cnn_drop_rate = [c[3] for c in action]
Here we reshape the flat action into a batch of per-layer hyperparameter groups in "action" and build cnn_drop_rate, a list of the dropout rates for every layer.
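As a quick illustration of that slicing, here is a minimal sketch; sample_action and its concrete numbers are made up for this example, not taken from the original code:

# Hypothetical action with three layers of four hyperparameters each.
sample_action = [[[3, 32, 2, 0.5, 5, 64, 2, 0.3, 3, 128, 2, 0.4]]]
layers = [sample_action[0][0][x:x+4] for x in range(0, len(sample_action[0][0]), 4)]
# layers == [[3, 32, 2, 0.5], [5, 64, 2, 0.3], [3, 128, 2, 0.4]]
drop_rates = [layer[3] for layer in layers]
# drop_rates == [0.5, 0.3, 0.4]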
Now let's create a new CNN with the new architecture:
    with tf.Graph().as_default() as g:
        # Give each experiment its own container so variables from
        # successive architectures do not collide with one another.
        with g.container('experiment' + str(step)):
            model = CNN(self.num_input, self.num_classes, action)
            loss_op = tf.reduce_mean(model.loss)
            optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
            train_op = optimizer.minimize(loss_op)
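From here, a hedged sketch of how the reward itself could be produced: open a session on g, train the candidate for a short budget, and compare its accuracy against pre_acc. Every name below that is not defined above (model.X, model.Y, model.cnn_dropout_rates, model.accuracy, self.dataset, self.batch_size, and the 0.01 improvement threshold) is an assumption for illustration, not the author's implementation.

            # Sketch only: names not defined earlier (model.X, model.Y,
            # model.cnn_dropout_rates, model.accuracy, self.dataset,
            # self.batch_size, the 0.01 threshold) are assumptions.
            init = tf.global_variables_initializer()
            with tf.Session(graph=g) as sess:
                sess.run(init)
                for _ in range(100):  # short, fixed training budget
                    batch_x, batch_y = self.dataset.next_batch(self.batch_size)
                    sess.run(train_op, feed_dict={model.X: batch_x,
                                                  model.Y: batch_y,
                                                  model.cnn_dropout_rates: cnn_drop_rate})
                acc = sess.run(model.accuracy,
                               feed_dict={model.X: batch_x,
                                          model.Y: batch_y,
                                          model.cnn_dropout_rates: cnn_drop_rate})
    # Reward the controller only if the new architecture improves on the
    # previous accuracy; otherwise return a small constant reward.
    if acc - pre_acc > 0.01:
        return acc, acc
    return 0.01, acc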