
Error with TensorFlow RNN implementation

  • I'm building an RNN model for image classification, and I'm using an input pipeline to feed in the data. However, it returns

    ValueError: Variable rnn/rnn/basic_rnn_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

    I wonder what I can do to fix this, since there aren't many examples of implementing an RNN with an input pipeline. I know it would work if I used a placeholder, but my data is already in the form of tensors, so unless I can feed a placeholder with tensors I'd prefer to just use the pipeline.

    def RNN(inputs):
        with tf.variable_scope('cells', reuse=True):
            basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=batch_size)

        with tf.variable_scope('rnn'):
            outputs, states = tf.nn.dynamic_rnn(basic_cell, inputs, dtype=tf.float32)

        fc_drop = tf.nn.dropout(states, keep_prob)

        logits = tf.contrib.layers.fully_connected(fc_drop, batch_size, activation_fn=None)

        return logits

    #Training
    with tf.name_scope("cost_function") as scope:
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=train_label_batch, logits=RNN(train_batch)))
        train_step = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(cost)


    #Accuracy
    with tf.name_scope("accuracy") as scope:
        correct_prediction = tf.equal(tf.argmax(RNN(test_image), 1), tf.argmax(test_image_label, 0))
        accuracy = tf.cast(correct_prediction, tf.float32)
      June 11, 2019 4:00 PM IST
    0
  • You need to use the reuse option correctly; the following changes should solve it. For prediction, you need to reuse the variables that already exist in the graph.

    def RNN(inputs, reuse):
        with tf.variable_scope('cells', reuse=reuse):
            basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=batch_size, reuse=reuse)

    ...

    ...
    #Training
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=train_label_batch, logits=RNN(train_batch, reuse=None)))

    #Accuracy
    ...
    correct_prediction = tf.equal(tf.argmax(RNN(test_image, reuse=True), 1), tf.argmax(test_image_label, 0))
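
    For completeness, here is a minimal, self-contained sketch of this pattern (an illustration, not the original code): the dummy input tensors and num_classes below are assumptions standing in for the poster's input pipeline and label count. The training graph is built first with reuse=None, which creates the variables; the evaluation graph is then built with reuse=True, so it shares those variables instead of trying to create them a second time.

    import tensorflow as tf

    batch_size = 128
    num_classes = 10       # assumed number of label classes
    keep_prob = 0.5
    learning_rate = 0.01

    # Dummy tensors standing in for the input pipeline's output.
    train_batch = tf.random_normal([batch_size, 28, 28])
    train_label_batch = tf.one_hot(tf.random_uniform([batch_size], maxval=num_classes, dtype=tf.int32), num_classes)
    test_image = tf.random_normal([batch_size, 28, 28])
    test_image_label = tf.one_hot(tf.random_uniform([batch_size], maxval=num_classes, dtype=tf.int32), num_classes)

    def RNN(inputs, reuse):
        # reuse=None creates the variables on the first call;
        # reuse=True makes later calls share those same variables.
        with tf.variable_scope('rnn_model', reuse=reuse):
            basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=batch_size, reuse=reuse)
            outputs, states = tf.nn.dynamic_rnn(basic_cell, inputs, dtype=tf.float32)
            fc_drop = tf.nn.dropout(states, keep_prob)
            logits = tf.contrib.layers.fully_connected(fc_drop, num_classes, activation_fn=None, scope='fc', reuse=reuse)
        return logits

    # Training graph: first call, so the variables are created here.
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=train_label_batch, logits=RNN(train_batch, reuse=None)))
    train_step = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(cost)

    # Evaluation graph: reuse=True, so the same weights are used for prediction.
    correct_prediction = tf.equal(tf.argmax(RNN(test_image, reuse=True), 1), tf.argmax(test_image_label, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))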
      June 11, 2019 4:05 PM IST
    0
  • The reuse mechanism is very useful for the prediction phase of a network. Since you need the learned parameters for prediction, and in this case you are building on the same graph, setting reuse=True gives you access to the already-learned variables. You can read more about this at tensorflow.org/programmers_guide/variable_scope
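
    As a bare-bones illustration of that mechanism (independent of the RNN code above), tf.get_variable inside a variable_scope either creates a variable or returns the existing one, depending on the reuse flag:

    import tensorflow as tf

    # First use of the scope: reuse is off, so the variable is created.
    with tf.variable_scope('demo'):
        w1 = tf.get_variable('w', shape=[2, 2])

    # Second use with reuse=True: the existing variable is returned
    # instead of raising "Variable demo/w already exists, disallowed".
    with tf.variable_scope('demo', reuse=True):
        w2 = tf.get_variable('w', shape=[2, 2])

    print(w1.name, w2.name)   # both print 'demo/w:0' -- the same variable

    # Opening the scope again without reuse=True would reproduce the
    # ValueError from the question. In TF 1.4+, reuse=tf.AUTO_REUSE
    # creates the variable if it does not exist and reuses it otherwise.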
      June 14, 2019 1:01 PM IST
    0
  • I finally found the problem. My cuDNN version was lower than the version recommended by TensorFlow, which is >= 7.4.1. When I upgraded it to the latest release, the issue was fixed.
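
    If you want to double-check which cuDNN version your TensorFlow build expects, one option (assuming a TF 2.3+ install; this API is not available in older builds) is:

    import tensorflow as tf

    # Reports the CUDA/cuDNN versions the installed wheel was built against.
    build = tf.sysconfig.get_build_info()
    print(build.get('cuda_version'), build.get('cudnn_version'))

    # Confirm the GPU is actually visible after upgrading cuDNN.
    print(tf.config.list_physical_devices('GPU'))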

     
      August 26, 2021 2:10 PM IST
    0