Zhou Zhihua's "watermelon book" (Machine Learning) and Li Hang's Statistical Learning Methods both explain the ID3 decision-tree algorithm in detail, but how do we actually implement it? The core steps are as follows.
Step 1: Compute the Shannon entropy

For a data set D in which class k appears with proportion p_k, the Shannon entropy is H(D) = -Σ_k p_k log2(p_k); the function below computes exactly that over the label column.
```python
from math import log
import operator

# Compute the Shannon entropy of a data set.
# Each row of `data` is a feature vector whose last element is the class label.
def calculate_entropy(data):
    label_counts = {}
    for feature_data in data:
        label = feature_data[-1]  # the last column is the label
        if label not in label_counts:
            label_counts[label] = 0
        label_counts[label] += 1
    count = len(data)
    entropy = 0.0
    for key in label_counts:
        prob = float(label_counts[key]) / count
        entropy -= prob * log(prob, 2)
    return entropy
```
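As a quick sanity check, here is a minimal sketch on a made-up toy data set (the feature values and labels are hypothetical; the last column is the class label):

```python
# Hypothetical toy data set: each row is [feature_a, feature_b, label]
sample_data = [
    [1, 1, 'yes'],
    [1, 0, 'yes'],
    [0, 1, 'no'],
    [0, 0, 'no'],
]
print(calculate_entropy(sample_data))  # two equally frequent classes -> 1.0
```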
Step 2: Compute the information gain of a feature

The information gain of feature A is g(D, A) = H(D) - H(D|A), where H(D|A) is the weighted sum of the entropies of the subsets that A splits D into.
```python
# Compute the information gain of one feature.
# index:   which column of `data` holds the feature
# entropy: the Shannon entropy of `data` as a whole
def calculate_relative_entropy(data, index, entropy):
    feat_list = [number[index] for number in data]  # all values in that column
    unique_vals = set(feat_list)
    new_entropy = 0.0
    for value in unique_vals:
        sub_data = split_data(data, index, value)
        prob = len(sub_data) / float(len(data))
        new_entropy += prob * calculate_entropy(sub_data)  # weighted sum of subset entropies
    relative_entropy = entropy - new_entropy  # the information gain
    return relative_entropy
```
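Continuing with the toy sample_data from Step 1 (note the call to split_data, the helper defined at the end of this article), a quick check under those assumptions:

```python
base = calculate_entropy(sample_data)
# feature_a alone separates 'yes' from 'no', so both subsets are pure and
# the gain equals the full entropy: 1.0 - 0.0 = 1.0
print(calculate_relative_entropy(sample_data, 0, base))  # -> 1.0
```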
Step 3: Choose the feature with the largest information gain
```python
# Choose the feature with the largest information gain
def choose_max_relative_entropy(data):
    num_feature = len(data[0]) - 1
    base_entropy = calculate_entropy(data)  # Shannon entropy of the whole set
    best_info_gain = 0.0
    best_feature = -1
    for i in range(num_feature):
        info_gain = calculate_relative_entropy(data, i, base_entropy)
        # keep the feature whose information gain is largest so far
        if info_gain > best_info_gain:
            best_info_gain = info_gain
            best_feature = i
    return best_feature
```
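On the same toy data, column 0 has gain 1.0 while column 1 has gain 0.0, so the function should pick index 0:

```python
print(choose_max_relative_entropy(sample_data))  # -> 0
```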
Step 4: Build the decision tree
```python
def create_decision_tree(data, labels):
    class_list = [example[-1] for example in data]
    # all samples belong to one class: stop splitting
    if class_list.count(class_list[0]) == len(class_list):
        return class_list[0]
    # all features have been used up: return the majority class
    if len(data[0]) == 1:
        return most_class(class_list)
    # pick the feature with the highest information gain
    best_feat = choose_max_relative_entropy(data)
    best_feat_label = labels[best_feat]  # the name of that feature
    decision_tree = {best_feat_label: {}}  # the tree is a nested dict
    del labels[best_feat]  # remove the used feature name from the list
    feat_values = [example[best_feat] for example in data]
    unique_values = set(feat_values)
    for value in unique_values:
        sub_labels = labels[:]
        # build the data subset for this value and recurse
        decision_tree[best_feat_label][value] = create_decision_tree(
            split_data(data, best_feat, value), sub_labels)
    return decision_tree
```
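Once the two helper methods below are in scope, the tree can be built end-to-end; here is a minimal sketch on the toy data set, with a made-up feature-name list. Passing a copy (feature_names[:]) protects the caller's list, since create_decision_tree deletes entries from the list it is given:

```python
feature_names = ['feature_a', 'feature_b']
tree = create_decision_tree(sample_data, feature_names[:])
print(tree)  # -> {'feature_a': {0: 'no', 1: 'yes'}}
```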
Two helper methods are used while building the tree:
```python
# When all features have been used up, return the most frequent class
def most_class(class_list):
    class_count = {}
    for vote in class_list:
        if vote not in class_count:
            class_count[vote] = 0
        class_count[vote] += 1
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]

# Helper that takes (data set, feature column, feature value) and returns the
# subset of rows whose feature matches `value`, with that column removed
def split_data(data, axis, value):
    ret_data = []
    for feat_vec in data:
        if feat_vec[axis] == value:
            reduce_feat_vec = feat_vec[:axis]
            reduce_feat_vec.extend(feat_vec[axis + 1:])
            ret_data.append(reduce_feat_vec)
    return ret_data
```
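For example, splitting the toy data on column 0 with value 1 keeps the two 'yes' rows and drops the split column:

```python
print(split_data(sample_data, 0, 1))  # -> [[1, 'yes'], [0, 'yes']]
```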
That's all for this article; I hope it helps with your study of decision trees.
Original article: https://segmentfault.com/a/1190000015083169