1. Print the classification report (using a function from the scikit-learn library)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
Example:
import matplotlib.pyplot as plt
from sklearn import datasets, svm, metrics
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix
from sklearn.model_selection import train_test_split
digits = datasets.load_digits()
"""
The data that we are interested in is made of 8x8 images of digits, Let's
have a look at the first 4 images, stored in the 'images' attributes of the
dataset. If we were working from image files, we could load them using
'matplotlib.pyplot.imread'. Note that each image must have the same size.
For these images, we know which digit they represent: it is given in the 'target' of the dataset.
"""
_, axes = plt.subplots(2, 4)
images_and_labels = list(zip(digits.images, digits.target))
for ax, (image, label) in zip(axes[0, :], images_and_labels[:4]):
    ax.set_axis_off()
    ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    ax.set_title('Training: %d' % label)

# Flatten each 8x8 image into a 64-dimensional feature vector
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

classifier = svm.SVC(gamma=0.001)
X_train, X_test, y_train, y_test = train_test_split(
    data, digits.target, test_size=0.5, shuffle=False)
classifier.fit(X_train, y_train)
predicted = classifier.predict(X_test)

# With shuffle=False the test set is the second half of the dataset,
# so the matching images start at index n_samples // 2
images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
for ax, (image, prediction) in zip(axes[1, :], images_and_predictions[:4]):
    ax.set_axis_off()
    ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    ax.set_title('Prediction: %i' % prediction)

print('Classification report for classifier %s:\n%s\n'
      % (classifier, metrics.classification_report(y_test, predicted)))

cm = confusion_matrix(y_test, predicted, labels=digits.target_names)
print(cm)
disp = ConfusionMatrixDisplay(cm, display_labels=digits.target_names)
disp.plot()
plt.show()
Output:

Classification report for classifier SVC(gamma=0.001):
              precision    recall  f1-score   support

           0       1.00      0.99      0.99        88
           1       0.99      0.97      0.98        91
           2       0.99      0.99      0.99        86
           3       0.98      0.87      0.92        91
           4       0.99      0.96      0.97        92
           5       0.95      0.97      0.96        91
           6       0.99      0.99      0.99        91
           7       0.96      0.99      0.97        89
           8       0.94      1.00      0.97        88
           9       0.93      0.98      0.95        92

    accuracy                           0.97       899
   macro avg       0.97      0.97      0.97       899
weighted avg       0.97      0.97      0.97       899
Reference: https://scikit-learn.org.cn/view/45.html
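Besides printing the formatted text report, classification_report can also return the same metrics as a Python dictionary by passing output_dict=True, which is convenient for logging or further processing. A minimal sketch, reusing y_test and predicted from the example above (the specific keys accessed here are just for illustration):

from sklearn.metrics import classification_report

# output_dict=True returns a nested dict instead of a formatted string:
# {'0': {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}, ...}
report = classification_report(y_test, predicted, output_dict=True)

# Access a single per-class metric, e.g. the recall of digit 3
print('Recall for digit 3:', report['3']['recall'])

# Aggregated rows are available under 'accuracy', 'macro avg' and 'weighted avg'
print('Weighted F1:', report['weighted avg']['f1-score'])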
2. Compute accuracy, precision, recall, F1 score, etc.
from sklearn.metrics import accuracy_score, recall_score, f1_score, precision_score
y_test = [0, 3, 4, 12, 5, 5, 4, 1, 2]
y_pred = [0, 1, 2, 2, 3, 3, 3, 2, 2]
print('Accuracy score:', accuracy_score(y_test, y_pred))
print('Recall:', recall_score(y_test, y_pred, average='weighted'))
print('F1-score:', f1_score(y_test, y_pred, average='weighted'))
print('Precision score:', precision_score(y_test, y_pred, average='weighted'))
Output:
Accuracy score: 0.2222222222222222
Recall: 0.2222222222222222
F1-score: 0.15555555555555556
Precision score: 0.1388888888888889
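Note that the average parameter controls how the per-class scores are combined: 'macro' is the unweighted mean over classes, 'weighted' weights each class by its support, and average=None returns one score per class. A minimal sketch reusing the toy y_test and y_pred above; zero_division=0 (available in recent scikit-learn versions) suppresses the warning for classes that never appear in y_pred:

from sklearn.metrics import precision_score

# One precision value per class; the class order is the sorted union
# of the labels seen in y_test and y_pred
print(precision_score(y_test, y_pred, average=None, zero_division=0))

# Unweighted mean over classes vs. support-weighted mean
print(precision_score(y_test, y_pred, average='macro', zero_division=0))
print(precision_score(y_test, y_pred, average='weighted', zero_division=0))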
You can also use precision_recall_fscore_support(y_true, y_pred, average='weighted'), which returns the same weighted precision, recall, and F1 values in a single call (see the sketch after the references below). References:
- https://scikit-learn.org/stable/modules/model_evaluation.html#
- https://github.com/wmn7/Traffic-Classification/blob/master/TrafficFlowClassification/utils/evaluate_tools.py
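As mentioned above, precision_recall_fscore_support bundles the three metrics (plus support) into one call; with average='weighted' it should reproduce the weighted scores printed earlier. A minimal sketch using the same toy labels:

from sklearn.metrics import precision_recall_fscore_support

# Returns (precision, recall, fbeta_score, support); support is None when
# an average is requested, since the per-class counts are already aggregated
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average='weighted', zero_division=0)
print('Precision:', precision)
print('Recall:', recall)
print('F1-score:', f1)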