Background:
1. I need to do cross-validation in which every training set and test set keeps the same class distribution as the full dataset; sklearn's plain KFold does not satisfy this requirement on its own.
2. The generated cross-validation folds should be saved as CSV files, rather than fed directly into an sklearn classifier.
3. There is one pitfall worth noting from my own coding experience:
Here is the example from the official sklearn documentation:
```python
>>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = ["a", "b", "c", "d"]
>>> kf = KFold(n_splits=2)
>>> for train, test in kf.split(X):
...     print("%s %s" % (train, test))
[2 3] [0 1]
[0 1] [2 3]
```
A mistake I made earlier was to interpret train and test as indices into the sub-datasets produced after splitting. In fact, they are indices into the original dataset itself.
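To make this concrete, here is a minimal sketch showing that the arrays yielded by split() index directly into the original data (the variable names are illustrative):

```python
from sklearn.model_selection import KFold

X = ["a", "b", "c", "d"]
kf = KFold(n_splits=2)
for train, test in kf.split(X):
    # train and test hold positions within X itself,
    # so we can index X directly to recover the actual samples.
    train_samples = [X[i] for i in train]
    test_samples = [X[i] for i in test]
    print(train_samples, test_samples)
# First split:  ['c', 'd'] ['a', 'b']
# Second split: ['a', 'b'] ['c', 'd']
```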
Source code:
```python
# -*- coding:utf-8 -*-
# Generate cross-validation datasets and save them as CSV files.
# The input is a complete dataset containing both benign and malicious labels;
# while reading, the rows are separated into datasetBenign and datasetMalicious.
# KFold is applied to each class separately, and the folds are merged on write.
from sklearn.model_selection import KFold
import csv


def writeInFile(benignKFTrain, benignKFTest, maliciousKFTrain, maliciousKFTest,
                i, datasetBenign, datasetMalicious):
    newTrainFilePath = "E:\\hadoopExperimentResult\\5KFold\\AllDataSetIIR10\\dataset\\ImbalancedAllTraffic-train-%s.csv" % i
    newTestFilePath = "E:\\hadoopExperimentResult\\5KFold\\AllDataSetIIR10\\dataset\\ImbalancedAllTraffic-test-%s.csv" % i
    # newline="" prevents csv from inserting blank lines between rows on Windows
    newTrainFile = open(newTrainFilePath, "w", newline="")
    newTestFile = open(newTestFilePath, "w", newline="")
    writerTrain = csv.writer(newTrainFile)
    writerTest = csv.writer(newTestFile)
    for index in benignKFTrain:
        writerTrain.writerow(datasetBenign[index])
    for index in benignKFTest:
        writerTest.writerow(datasetBenign[index])
    for index in maliciousKFTrain:
        writerTrain.writerow(datasetMalicious[index])
    for index in maliciousKFTest:
        writerTest.writerow(datasetMalicious[index])
    newTrainFile.close()
    newTestFile.close()


def getKFoldDataSet(datasetPath):
    # Read the entire dataset from the CSV file,
    # separating benign and malicious samples as we go.
    datasetFile = open(datasetPath, "r", newline="")
    datasetBenign = []
    datasetMalicious = []
    readerDataset = csv.reader(datasetFile)
    for line in readerDataset:
        if len(line) > 1:
            # The first seven columns are numeric features; the eighth is the label.
            curLine = [float(x) for x in line[:7]]
            curLine.append(line[7])
            if line[7] == "benign":
                datasetBenign.append(curLine)
            else:
                datasetMalicious.append(curLine)
    # Split each class with KFold so every fold keeps the overall class ratio.
    K = 5
    kf = KFold(n_splits=K)
    benignKFTrain = []; benignKFTest = []
    for train, test in kf.split(datasetBenign):
        benignKFTrain.append(train)
        benignKFTest.append(test)
    maliciousKFTrain = []; maliciousKFTest = []
    for train, test in kf.split(datasetMalicious):
        maliciousKFTrain.append(train)
        maliciousKFTest.append(test)
    for i in range(K):
        print("======================== " + str(i) + " ========================")
        print(benignKFTrain[i], benignKFTest[i])
        print(maliciousKFTrain[i], maliciousKFTest[i])
        writeInFile(benignKFTrain[i], benignKFTest[i],
                    maliciousKFTrain[i], maliciousKFTest[i],
                    i, datasetBenign, datasetMalicious)
    datasetFile.close()


if __name__ == "__main__":
    getKFoldDataSet(r"E:\hadoopExperimentResult\5KFold\AllDataSetIIR10\dataset\ImbalancedAllTraffic-10.csv")
```
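For reference, sklearn also ships StratifiedKFold, which preserves the per-class ratio in every fold directly, without splitting the data into per-class lists by hand. A minimal sketch on toy stand-in data (the sample values and labels here are illustrative, not the traffic dataset):

```python
from sklearn.model_selection import StratifiedKFold

# Toy stand-in data: 10 benign and 5 malicious samples (a 2:1 ratio).
samples = [[float(i)] for i in range(15)]
labels = ["benign"] * 10 + ["malicious"] * 5

skf = StratifiedKFold(n_splits=5)
for train, test in skf.split(samples, labels):
    test_labels = [labels[i] for i in test]
    # Every test fold keeps the 2:1 benign/malicious ratio of the full set.
    print(test_labels.count("benign"), test_labels.count("malicious"))
# Each of the 5 folds prints: 2 1
```

The train and test arrays it yields are, as before, indices into the original data, so the same writerow-based CSV export works unchanged.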
That concludes this article on generating cross-validation datasets with sklearn's KFold in Python. I hope it serves as a useful reference.
原文链接:https://blog.csdn.net/Ichimaru_Gin_/article/details/79455578