import tensorflow as tf

# (out is assumed to be the network's output and out_ the target placeholder)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
mse = tf.reduce_mean(tf.square(out - out_))
train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse)
First of all, tf.train.GradientDescentOptimizer is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers, including tf.train.AdagradOptimizer and tf.train.AdamOptimizer, and these can be used as drop-in replacements (a minimal sketch follows the example below).

However, the learning_rate argument to the tf.train.GradientDescentOptimizer constructor can be a Tensor object. This allows you to compute a different value for the learning rate in each step, for example:

learning_rate = tf.placeholder(tf.float32, shape=[])
# ...
train_step = tf.train.GradientDescentOptimizer(
    learning_rate=learning_rate).minimize(mse)

sess = tf.Session()

# Feed different values for the learning rate to each training step.
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.01})
sess.run(train_step, feed_dict={learning_rate: 0.01})
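As noted above, the adaptive optimizers can be swapped in without restructuring the graph. A minimal sketch of the drop-in replacement, assuming the same mse loss from the question:

# Adam keeps per-parameter statistics, so a fixed initial rate
# (0.001 is the documented default) usually works well.
train_step = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mse)
sess.run(tf.initialize_all_variables())  # Adam adds slot variables that need initializing
sess.run(train_step)  # (data feeds omitted for brevity, as in the example above)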
Alternatively, you could create a scalar tf.Variable that holds the learning rate, and assign it each time you want to change the learning rate, for example:
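A minimal sketch of that variable-based approach, again assuming the mse loss from the question:

lr = tf.Variable(0.1, trainable=False)  # learning rate stored in a variable
train_step = tf.train.GradientDescentOptimizer(lr).minimize(mse)
update_lr = tf.assign(lr, 0.01)         # op that lowers the rate when run

sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(train_step)   # trains with lr == 0.1 (data feeds omitted)
sess.run(update_lr)    # change the learning rate
sess.run(train_step)   # trains with lr == 0.01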
The gradient descent algorithm uses the constant learning rate that you provide during initialization. You can pass various learning rates in the way shown by mrry.
Instead, you can also use more advanced optimizers that have a faster convergence rate and adapt to the situation.
Here is a brief explanation based on my understanding:
Adam, or adaptive momentum, is an algorithm similar to AdaDelta, but in addition to storing a learning rate for each of the parameters, it also stores momentum changes for each of them separately.
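A rough NumPy sketch of the update rule commonly given for Adam, to illustrate the idea (this is not TensorFlow's internal code, and the helper name is hypothetical):

import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # m: running mean of gradients (the momentum term), kept per parameter
    # v: running mean of squared gradients (per-parameter step-size scaling)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # bias correction for the zero-initialized averages
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v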
If you want the learning rate to decay on a fixed schedule instead, TensorFlow provides tf.train.exponential_decay:

global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
# Decay the rate by a factor of 0.96 every 100000 steps.
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           100000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
    tf.train.GradientDescentOptimizer(learning_rate)
    .minimize(mse, global_step=global_step))  # substitute your own loss for mse
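Each run of the training op then advances global_step automatically, so the schedule needs no manual bookkeeping. A short usage sketch under the same assumptions:

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for _ in range(1000):
    sess.run(learning_step)  # increments global_step as a side effect (data feeds omitted)
# With staircase=True the current rate is
# starter_learning_rate * 0.96 ** (global_step // 100000),
# so it stays at 0.1 until step 100000, then drops to 0.096.
print(sess.run(learning_rate))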