
Deep-Learning Nan loss reasons

  • Perhaps too general a question, but can anyone explain what would cause a Convolutional Neural Network to diverge?
     
    Specifics:
     
    I am using Tensorflow's iris_training model with some of my own data and keep getting
     

    ERROR:tensorflow:Model diverged with loss = NaN.

    Traceback...

    tensorflow.contrib.learn.python.learn.monitors.NanLossDuringTrainingError: NaN loss during training.

     
    Traceback originated with the line:
     

    tf.contrib.learn.DNNClassifier(
        feature_columns=feature_columns,
        hidden_units=[300, 300, 300],
        # optimizer=tf.train.ProximalAdagradOptimizer(
        #     learning_rate=0.001,
        #     l1_regularization_strength=0.00001),
        n_classes=11,
        model_dir="/tmp/iris_model")

     
    I've tried adjusting the optimizer, setting the learning rate to zero, and using no optimizer at all. Any insights into network layers, data size, etc. are appreciated.
      September 14, 2021 4:30 PM IST
  • Most of the points have already been discussed, but I would like to highlight one more cause of NaN that is missing.

    tf.estimator.DNNClassifier(
        hidden_units, feature_columns, model_dir=None, n_classes=2, weight_column=None,
        label_vocabulary=None, optimizer='Adagrad', activation_fn=tf.nn.relu,
        dropout=None, config=None, warm_start_from=None,
        loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, batch_norm=False
    )

     

    By default the activation function is "Relu". It is possible that an intermediate layer generates negative values and "Relu" converts them all to 0, which gradually stops training (the "dying ReLU" problem).

    I observed that "LeakyRelu" is able to solve such problems.
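
    To illustrate the difference with a minimal NumPy sketch (relu and leaky_relu below are hand-rolled stand-ins, not the TensorFlow ops):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Negative inputs keep a small slope alpha instead of going to 0.
    return np.where(x > 0, x, alpha * x)

# Pre-activations that have drifted negative (e.g. after a bad update).
z = np.array([-3.0, -1.5, -0.2, 0.5])

# ReLU zeros every negative unit; the gradient there is also 0,
# so those units receive no further weight updates ("dying ReLU").
print(relu(z))

# Leaky ReLU keeps the units alive with a small negative output,
# so their gradient is alpha rather than 0 and they can recover.
print(leaky_relu(z))
```

    With the estimator above, this corresponds to passing activation_fn=tf.nn.leaky_relu instead of the default tf.nn.relu.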

      September 27, 2021 2:10 PM IST
  • I'd like to add some (shallow) causes I have run into:

    1. We may have updated our dictionary (for NLP tasks), but the model and the prepared data were built with a different one.
    2. We may have reprocessed our data (binary tf_record) but loaded an old model; the reprocessed data may conflict with the previous format.
    3. We may have meant to train the model from scratch but forgot to delete the old checkpoints, so the model automatically loaded the latest saved parameters.

    Hope that helps.

      September 28, 2021 1:55 PM IST
  • There are lots of things I have seen make a model diverge.

    Too high a learning rate. You can often tell this is the case if the loss begins to increase and then diverges to infinity.
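
    That "increase, then diverge" pattern can be reproduced on a toy quadratic loss; a small plain-Python sketch (nothing TF-specific, the threshold 1.0 is specific to this toy loss):

```python
# Gradient descent on f(w) = w^2, whose gradient is 2w.
# For this loss, any learning rate above 1.0 makes each step
# overshoot the minimum, so |w| grows geometrically -- the same
# way a training loss climbs toward inf and then NaN.
def descend(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(abs(descend(0.1)))  # ~0.0115 -- converges toward 0
print(abs(descend(1.5)))  # 1048576.0 -- overshoots and explodes
```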

    I am not too familiar with the DNNClassifier, but I am guessing it uses the categorical cross-entropy cost function. This involves taking the log of the prediction, which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the TensorFlow op for it. Probably not the issue.

    Other numerical stability issues can exist, such as division by zero, where adding an epsilon can help. Another, less obvious one is the square root, whose derivative can diverge if it is not properly simplified when dealing with finite-precision numbers. Yet again, I doubt this is the issue in the case of the DNNClassifier.

    You may have an issue with the input data. Try calling assert not np.any(np.isnan(x)) on the input data to make sure you are not introducing the NaN yourself. Also make sure all of the target values are valid. Finally, make sure the data is properly normalized: you probably want the pixels in the range [-1, 1] and not [0, 255].
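
    Those checks can be sketched in a few lines of NumPy (x here is synthetic stand-in data, not the asker's dataset):

```python
import numpy as np

# Synthetic stand-in for raw pixel data in [0, 255].
x = np.random.RandomState(0).randint(0, 256, size=(4, 8)).astype(np.float32)

# Catch NaNs and Infs before they ever reach the network.
assert not np.any(np.isnan(x)), "NaN in input data"
assert np.all(np.isfinite(x)), "Inf in input data"

# Rescale [0, 255] pixel values into [-1, 1].
x = x / 127.5 - 1.0
print(x.min(), x.max())  # both now within [-1, 1]
```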

    The labels must be in the domain of the loss function, so if you are using a logarithm-based loss function all labels must be non-negative (as noted by evan pu and the comments below).
      September 15, 2021 12:43 PM IST
  • If you're training with a cross-entropy loss, you want to add a small number like 1e-8 to your output probability.

    Because log(0) is negative infinity, and when your model is trained enough the output distribution will be very skewed. For instance, say I'm doing a 4-class output; in the beginning my probabilities look like

    0.25 0.25 0.25 0.25
    

     

    but toward the end the probabilities will probably look like

    1.0 0 0 0
    

     

    If you take the cross entropy of this distribution, everything will explode. The fix is to artificially add a small number to all the terms to prevent this.
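
    A small NumPy sketch of that fix (cross_entropy here is a hand-written helper, not TF's implementation):

```python
import numpy as np

def cross_entropy(p, q, eps=0.0):
    # H(p, q) = -sum_i p_i * log(q_i + eps)
    with np.errstate(divide='ignore'):
        return -np.sum(p * np.log(q + eps))

target = np.array([0.0, 1.0, 0.0, 0.0])
# A skewed prediction that puts probability 0 on the true class.
pred = np.array([0.5, 0.0, 0.25, 0.25])

print(cross_entropy(target, pred))            # inf: log(0) blows up
print(cross_entropy(target, pred, eps=1e-8))  # ~18.42: finite again
```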

     
      September 16, 2021 1:31 PM IST
  • If using integers as targets, make sure they aren't symmetric around 0.
    I.e., don't use classes -1, 0, 1; use 0, 1, 2 instead.
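
    For instance, a one-line remap (plain NumPy, assuming the labels really are {-1, 0, 1}):

```python
import numpy as np

labels = np.array([-1, 0, 1, 1, -1])

# Shift symmetric integer classes {-1, 0, 1} into {0, 1, 2} so they
# are valid class indices for sparse cross-entropy-style losses.
remapped = labels + 1
print(remapped)  # [0 1 2 2 0]
```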
      September 19, 2021 12:27 AM IST