@DominicBreuker
Created June 16, 2016 16:30
Simple example of gradient descent in TensorFlow
import tensorflow as tf

# Variable to optimize, starting at x = 2
x = tf.Variable(2, name='x', dtype=tf.float32)

# Loss function: log(x)^2, which has its minimum at x = 1
log_x = tf.log(x)
log_x_squared = tf.square(log_x)

# Plain gradient descent with learning rate 0.5
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(log_x_squared)

init = tf.initialize_all_variables()

def optimize():
    with tf.Session() as session:
        session.run(init)
        print("starting at", "x:", session.run(x), "log(x)^2:", session.run(log_x_squared))
        for step in range(10):
            session.run(train)
            print("step", step, "x:", session.run(x), "log(x)^2:", session.run(log_x_squared))

optimize()
@gridcellcoder

Great example. How do you get the optimized variables/parameters from this? I.e., not the loss, but the parameters that result in the lowest loss?
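One way to read the trained parameter back out is simply to fetch the variable itself with `session.run(x)` after the training loop, since gradient descent updates `x` in place. A minimal sketch (the `tf.compat.v1` shims are an assumption so the gist's graph-mode code also runs on newer TensorFlow installs):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # run graph-mode code under TF 2.x

# Same setup as the gist: minimize log(x)^2, starting at x = 2
x = tf.Variable(2, name='x', dtype=tf.float32)
loss = tf.square(tf.math.log(x))
train = tf.compat.v1.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.compat.v1.Session() as session:
    session.run(tf.compat.v1.global_variables_initializer())
    for _ in range(10):
        session.run(train)
    # Fetch the optimized parameter (not the loss): the final value of x
    best_x = session.run(x)
    print("optimized x:", best_x)
```

Since the minimum of log(x)^2 is at x = 1, `best_x` should come out very close to 1.0.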

@ShangxuanWu

Great example! Thanks!

@fabiobento

Thank you! Simple and straightforward.

@IshJ

IshJ commented Nov 23, 2018

Thank you

@cottrell

cottrell commented Apr 6, 2019

Anyone have this for TensorFlow 2.0?
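A sketch of the same optimization in TensorFlow 2.x, where eager execution replaces `Session` and gradients come from `tf.GradientTape` (the `SGD` optimizer is the TF2 analog of `GradientDescentOptimizer`; this is an adaptation, not the gist author's code):

```python
import tensorflow as tf

# Variable to optimize, starting at x = 2; loss log(x)^2 has its minimum at x = 1
x = tf.Variable(2.0, name='x')
optimizer = tf.keras.optimizers.SGD(learning_rate=0.5)

for step in range(10):
    # Record the forward pass so the tape can differentiate the loss w.r.t. x
    with tf.GradientTape() as tape:
        loss = tf.square(tf.math.log(x))
    grads = tape.gradient(loss, [x])
    optimizer.apply_gradients(zip(grads, [x]))
    print("step", step, "x:", x.numpy(), "log(x)^2:", loss.numpy())
```

After ten steps `x` should have converged close to 1.0, matching the TF1 version's output.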
