
Epoch vs Iteration when training neural networks

  • What is the difference between epoch and iteration when training a multi-layer perceptron?
      September 9, 2020 11:25 AM IST
    0
  • Epoch

    An epoch describes the number of times the algorithm sees the entire data set. So, each time the algorithm has seen all samples in the dataset, an epoch has completed.

    Iteration

    An iteration describes the number of times a batch of data is passed through the algorithm. In the case of neural networks, that means one forward pass and one backward pass. So, every time you pass a batch of data through the network, you have completed an iteration.

    Example

    An example might make it clearer.

    Say you have a dataset of 10 examples (or samples). You have a batch size of 2, and you've specified you want the algorithm to run for 3 epochs.

    Therefore, in each epoch, you have 5 batches (10/2 = 5). Each batch gets passed through the algorithm, therefore you have 5 iterations per epoch. Since you've specified 3 epochs, you have a total of 15 iterations (5*3 = 15) for training.
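
    The same counting can be written as a short Python sketch; the variable names are illustrative and the numbers are just the ones from the example above:

    ```python
    # Numbers from the example above (10 samples, batch size 2, 3 epochs).
    num_samples = 10
    batch_size = 2
    num_epochs = 3

    iterations_per_epoch = num_samples // batch_size      # 10 / 2 = 5
    total_iterations = iterations_per_epoch * num_epochs  # 5 * 3 = 15

    print(iterations_per_epoch, "iterations per epoch,", total_iterations, "in total")
    ```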

      September 9, 2020 12:08 PM IST
    2
  • Many neural network training algorithms involve making multiple presentations of the entire data set to the neural network. Often, a single presentation of the entire data set is referred to as an "epoch". In contrast, some algorithms present data to the neural network a single case at a time.

    "Iteration" is a much more general term, but since you asked about it together with "epoch", I assume that your source is referring to the presentation of a single case to a neural network.

      September 9, 2020 12:08 PM IST
    1
    To understand the difference between these terms, you must understand the gradient descent algorithm and its variants.

    Before I start with the actual answer, I would like to build some background.

    Batch is the complete dataset. Its size is the total number of training examples in the available dataset.

    Mini-batch size is the number of examples the learning algorithm processes in a single pass (forward and backward).

    Mini-batch is a small part of the dataset of given mini-batch size.

    Iterations is the number of batches of data the algorithm has seen (equivalently, the number of forward/backward passes, and hence weight updates, the algorithm has performed).

    Epochs is the number of times a learning algorithm sees the complete dataset. This need not equal the number of iterations, because the dataset is usually processed in mini-batches: a single pass handles only part of the data, so it takes several iterations to complete one epoch.

    In the case of batch gradient descent, the whole dataset is processed in each training pass, so one iteration equals one epoch. The optimizer therefore converges more smoothly than with mini-batch gradient descent, but each update takes longer to compute. With a suitable learning rate, batch gradient descent is guaranteed to converge to the optimum of a convex loss (and to a local optimum otherwise).

    Stochastic gradient descent is a special case of mini-batch gradient descent in which the mini-batch size is 1.
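
    To make the relationship concrete, here is a rough NumPy sketch (my own illustration, not code from the answer above) in which the same loop becomes batch, mini-batch, or stochastic gradient descent purely through the choice of mini-batch size; the toy dataset, learning rate, and epoch count are made up for the example:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                      # 100 examples, 3 features
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

    def gradient_descent(mini_batch_size, lr=0.05, num_epochs=50):
        w = np.zeros(3)
        n = len(X)
        for epoch in range(num_epochs):                 # one epoch = one pass over the dataset
            order = rng.permutation(n)                  # shuffle once per epoch
            for start in range(0, n, mini_batch_size):  # one iteration per mini-batch
                idx = order[start:start + mini_batch_size]
                grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
                w -= lr * grad                          # one weight update per iteration
        return w

    print(gradient_descent(mini_batch_size=100))  # batch GD:      1 iteration per epoch
    print(gradient_descent(mini_batch_size=10))   # mini-batch GD: 10 iterations per epoch
    print(gradient_descent(mini_batch_size=1))    # stochastic GD: 100 iterations per epoch
    ```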

      September 9, 2020 12:13 PM IST
    1
  • In the neural network terminology:

    • one epoch = one forward pass and one backward pass of all the training examples
    • batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
    • number of iterations = number of passes, each pass using [batch size] number of examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).

    Example: if you have 1000 training examples, and your batch size is 500, then it will take 2 iterations to complete 1 epoch.

    FYI: Tradeoff batch size vs. number of iterations to train a neural network

    The term "batch" is ambiguous: some people use it to designate the entire training set, and some people use it to refer to the number of training examples in one forward/backward pass (as I did in this answer). To avoid that ambiguity and make clear that batch corresponds to the number of training examples in one forward/backward pass, one can use the term mini-batch.

      September 9, 2020 12:06 PM IST
    0
    You have training data, which you shuffle and from which you pick mini-batches. When you adjust your weights and biases using one mini-batch, you have completed one iteration. Once you run out of mini-batches, you have completed an epoch. Then you shuffle your training data again, pick your mini-batches again, and iterate through all of them again. That would be your second epoch.
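
    A small sketch of that bookkeeping (illustrative only; the data is a toy list) shows how reshuffling at the start of each epoch produces different mini-batches while the iteration and epoch counts stay the same:

    ```python
    import random

    data = list(range(10))   # toy training data
    batch_size = 2

    for epoch in range(2):
        random.shuffle(data)                           # reshuffle at the start of each epoch
        for start in range(0, len(data), batch_size):  # each mini-batch = one iteration
            mini_batch = data[start:start + batch_size]
            # adjusting weights and biases with this mini-batch completes one iteration
            print(f"epoch {epoch + 1}, mini-batch {mini_batch}")
        # all mini-batches used up -> one epoch completed
    ```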
      September 9, 2020 12:16 PM IST
    0
    Typically, you'll split your training set into small batches for the network to learn from, and let training proceed step by step: each batch is passed forward through the layers and the weights are updated by gradient descent via backpropagation. Each of these small steps can be called an iteration.

    An epoch corresponds to the entire training set going through the entire network once. It can be useful to limit the number of epochs, e.g. to fight overfitting.

      September 9, 2020 12:17 PM IST
    0