X, y = load_data('train_data')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=12)

datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True)

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)

history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
                              steps_per_epoch=len(X_train) // batch_size,
                              validation_data=(X_test, y_test),
                              epochs=n_epochs,
                              callbacks=[early_stopping_callback])
I use model.fit_generator with this EarlyStopping callback. It stops training after epochs_to_wait_for_improve epochs without improvement, but the weights I end up with come from the last epoch that ran, and I want to save the model with the minimum val_loss instead. Does that make sense, and is it possible?
The ModelCheckpoint callback will save the model to file; a path and filename must be specified via its first argument. With save_best_only=True it only overwrites that file when the monitored metric improves:

mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
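For completeness, a minimal sketch of loading that file once training finishes; this assumes the model was compiled with an accuracy metric, and X_test/y_test are the held-out split from the question:

from keras.models import load_model

# Restore the checkpoint with the lowest val_loss seen during training.
best_model = load_model('best_model.h5')
loss, acc = best_model.evaluate(X_test, y_test, verbose=0)
print('best checkpoint: loss=%.4f, acc=%.4f' % (loss, acc))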
In this tutorial we look at how to save the best model during training, using two different callbacks: ModelCheckpoint and EarlyStopping.

Right now your code saves the last model that achieved the best result on the dev set before training was stopped by the early-stopping callback.

If you want to save the best model during training, you have to use the ModelCheckpoint callback class. It can save the model weights at given points during training, which lets you keep the weights from exactly the epoch where the validation loss was at its minimum.

You can use it in conjunction with model.fit() to save a model or its weights in a checkpoint file, so the model or weights can be loaded later to continue training from the saved state.
checkpoint_filepath = 'weights.{epoch:02d}-{val_loss:.2f}.h5'

# Save a checkpoint only when val_accuracy improves on the best value seen so far.
model_checkpoint_callback = keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    monitor='val_accuracy',
    mode='max',
    save_best_only=True)

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    validation_data=(x_test, y_test),
                    epochs=epochs,
                    callbacks=[model_checkpoint_callback])
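If you later want to continue training from the state saved, here is a sketch, assuming at least one checkpoint file was written. Because the epoch number in the filename is zero-padded, a lexicographic sort picks the most recently written (i.e. best) checkpoint for runs under 100 epochs:

import glob

# The exact filename depends on when ModelCheckpoint fired, e.g. 'weights.07-0.31.h5'.
latest_checkpoint = sorted(glob.glob('weights.*.h5'))[-1]
model.load_weights(latest_checkpoint)

# Resume training from the restored state.
history_resumed = model.fit(x_train, y_train,
                            batch_size=batch_size,
                            validation_data=(x_test, y_test),
                            epochs=epochs,
                            callbacks=[model_checkpoint_callback])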
# Stop training after 2 epochs without val_loss improvement and restore the
# weights from the best epoch before returning.
callback = keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0, patience=2, verbose=2, mode='auto',
    baseline=None, restore_best_weights=True)

history_2 = model.fit(x_train, y_train,
                      validation_data=(x_test, y_test),
                      batch_size=batch_size,
                      epochs=epochs,
                      callbacks=[callback])
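Because restore_best_weights=True rolls the model back to its best epoch when training stops, the weights held in memory after fit() returns are already the best ones, so you can persist exactly those (the filename here is just an example):

# The in-memory weights are from the epoch with the lowest val_loss.
model.save('early_stopped_best.h5')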
Check these two callbacks together:
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
checkpoint_callback = ModelCheckpoint(model_name+'.h5', monitor='val_loss', verbose=1, save_best_only=True, mode='min')
history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
                              steps_per_epoch=len(X_train) // batch_size, validation_data=(X_test, y_test),
epochs=n_epochs, callbacks=[early_stopping_callback, checkpoint_callback])
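Note that fit_generator is deprecated in recent versions of Keras, where model.fit accepts generators directly. A sketch of the same setup with fit, keeping the names from the question:

history = model.fit(datagen.flow(X_train, y_train, batch_size=batch_size),
                    steps_per_epoch=len(X_train) // batch_size,
                    validation_data=(X_test, y_test),
                    epochs=n_epochs,
                    callbacks=[early_stopping_callback, checkpoint_callback])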