KNN (K-Nearest Neighbor) is one of the simplest machine learning methods.
Features:
1. The idea is extremely simple and requires almost no mathematics, which makes it friendly to people less comfortable with math.
2. It is well suited to the automatic classification of class domains with large sample sizes.
Algorithm flow:
① Prepare the data (including cleaning and train/test splitting).
② Compute the distance from the test sample (the point to be classified) to every other sample.
③ Sort the distances and select the K points with the smallest distances.
④ Compare the classes of those K points and, by majority vote, assign the test sample to the class that appears most often among the K points.
scikit-learn's KNN API, using the built-in iris dataset as an example (no cleaning or splitting involved):
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
if __name__ == '__main__':
    iris = load_iris()
    # Standardize features to zero mean and unit variance
    transformer = StandardScaler()
    x_ = transformer.fit_transform(iris.data)
    estimator = KNeighborsClassifier(n_neighbors=3)
    estimator.fit(x_, iris.target)
    result = estimator.predict(x_)
    print(result)
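The four steps above can also be sketched without scikit-learn. Below is a minimal NumPy implementation; `knn_predict` is a hypothetical helper name introduced here for illustration:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify one query point by majority vote among its k nearest neighbors."""
    # Step 2: Euclidean distance from the query point to every training sample
    distances = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))
    # Step 3: indices of the k smallest distances
    nearest = np.argsort(distances)[:k]
    # Step 4: majority vote over the k neighbors' labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Usage: two well-separated clusters
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.5, 0.5])))  # → 0
print(knn_predict(X, y, np.array([5.5, 5.5])))  # → 1
```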
How do we choose the best K? sklearn provides grid search via GridSearchCV. Commonly used arguments: param_grid takes the candidate K values as a dict; cv sets the cross-validation strategy (default None, which uses 5-fold); verbose controls logging verbosity (0: no output during training, 1: occasional output, >1: output for every sub-model). Code:
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
iris = datasets.load_iris()
x, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
transform = StandardScaler()
X_train = transform.fit_transform(X_train)
# Reuse the scaler fitted on the training set; do not refit on the test set
X_test = transform.transform(X_test)
n_neighbors_list = {'n_neighbors': [3, 4, 5, 6, 7]}
estimator = KNeighborsClassifier()
estimator = GridSearchCV(estimator, param_grid=n_neighbors_list, verbose=0, cv=5)
estimator.fit(X_train, y_train)
print('Best parameters:', estimator.best_params_, 'Best CV score:', estimator.best_score_)
print('Test set accuracy:', estimator.score(X_test, y_test))
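Under the hood, the grid search above amounts to scoring each candidate K with cross-validation and keeping the best one. A sketch of that equivalence using `cross_val_score`:

```python
from sklearn import datasets
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=0)
X_train = StandardScaler().fit_transform(X_train)

# Score every candidate K with 5-fold cross-validation on the training set
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                             X_train, y_train, cv=5).mean()
          for k in [3, 4, 5, 6, 7]}
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```

The K printed here should match `best_params_` from GridSearchCV when the same folds are used.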