There are many decisions to make when designing and configuring your deep learning models. In this lesson you will discover a few ways to evaluate model performance using Keras. After completing this lesson, you will know:
- How to evaluate a Keras model using an automatic verification dataset.
- How to evaluate a Keras model using a manual verification dataset.
- How to evaluate a Keras model using k-fold cross validation.
1.1 Empirically Evaluate Network Configurations
Deep learning is often used on problems that have very large datasets. With so many configuration decisions to make, you need a reliable, empirical way to evaluate each candidate network on your data.
1.2 Data Splitting
The large amount of data and the complexity of the models require very long training times. As such, it is typical to use a simple separation of the data into training and test datasets, or training and validation datasets. Keras provides two convenient ways of evaluating your deep learning algorithms this way:
- Use an automatic verification dataset.
- Use a manual verification dataset.
1.2.1 Use an Automatic Verification Dataset
Keras can separate a portion of your training data into a validation dataset and evaluate the performance of your model on that validation dataset at the end of each epoch. You can do this by setting the validation_split argument on the fit() function to a fraction of the size of your training dataset (e.g. 0.33 for 33%).
The example below demonstrates the use of an automatic validation dataset on the Pima Indians onset of diabetes dataset:
# MLP with automatic validation set
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load pima indians dataset
dataset = np.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model, holding out the last 33% of the training data for validation
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10)
1.2.2 Use a Manual Verification Dataset
Keras also allows you to manually specify the dataset to use for validation during training.
We can use the handy train_test_split() function from the Python scikit-learn machine learning library to separate our data into training and validation datasets, using 67% of the data for training and the remaining 33% for validation. The validation dataset can then be passed to the fit() function in Keras via the validation_data argument, which takes a tuple of the input and output datasets.
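A minimal sketch of this approach is shown below, reusing the same Pima Indians dataset and model architecture as the previous example (the exact split and hyperparameters are illustrative assumptions, not the only reasonable choices):
# MLP with manual validation set
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
import numpy as np
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load pima indians dataset
dataset = np.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
# split into 67% for training and 33% for validation
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.33, random_state=seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model, evaluating on the held-out validation data at the end of each epoch
model.fit(X_train, Y_train, validation_data=(X_val, Y_val), epochs=150, batch_size=10)
Because the split is made explicitly, the same validation set can be reused across runs and model variants, which makes comparisons between configurations more consistent than the automatic split.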
1.3 Manual k-Fold Cross Validation
The gold standard for machine learning model evaluation is k-fold cross validation. It provides a robust estimate of the performance of a model on unseen data. However, cross validation is often not used for evaluating deep learning models because of the greater computational expense.
In the example below we use the handy StratifiedKFold class from the scikit-learn Python machine learning library to split the training dataset into 10 folds. The folds are stratified, meaning that the algorithm attempts to balance the number of instances of each class in each fold. The example creates and evaluates 10 models using the 10 splits of the data and collects all of the scores. The verbose output for each epoch is turned off by passing verbose=0 to the fit() and evaluate() functions on the model. The performance is printed for each model and stored. The average and standard deviation of the model performance are then printed at the end of the run to provide a robust estimate of model accuracy.
# MLP for Pima Indians dataset with 10-fold cross validation
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import StratifiedKFold
import numpy as np
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load pima indians dataset
dataset = np.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
# define 10-fold cross validation test harness
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X, Y):
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    # Fit the model on the training folds
    model.fit(X[train], Y[train], epochs=150, batch_size=10, verbose=0)
    # evaluate the model on the held-out fold
    scores = model.evaluate(X[test], Y[test], verbose=0)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))
    cvscores.append(scores[1] * 100)
# report the mean and standard deviation of the per-fold scores
print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))
Notice that we had to re-create the model in each loop iteration and then fit and evaluate it with the data for that fold. In the next lesson we will look at how we can use Keras models natively with the scikit-learn machine learning library.
1.4 Summary
In this lesson, you discovered the importance of having a robust way to estimate the performance of your deep learning models on unseen data. You learned three ways that you can estimate the performance of your deep learning models in Python using the Keras library:
- Automatically splitting a training dataset into training and validation datasets.
- Manually and explicitly defining a training and validation dataset.
- Evaluating performance using k-fold cross validation, the gold standard technique.
1.4.1 Next
You now know how to evaluate your models and estimate their performance. In the next lesson you will discover how you can best integrate your Keras models with the scikit-learn machine learning library.