Random-Forest-Python

Date: 2022-09-01 14:08:23

1. Near-term goal: implement point cloud classification with a random forest

  1) Learning phase:

[Tips] Sharing experience from Kaggle data mining competitions

Kaggle Machine Learning Competition: Predicting Titanic Survivors

Kaggle Titanic survival prediction -- a painstakingly detailed walkthrough of the whole workflow

Machine learning in practice: the Kaggle_Titanic prediction

https://www.codeproject.com/Articles/1197167/Random-Forest-Python

https://blog.csdn.net/hexingwei/article/details/50740404

  2) Practice phase:

  (1) The raw point cloud fields are (X, Y, Z, density, curvature, Classification); the elevation Z, the density and the curvature are used for training and classification. The classification results are simply poor.

    Which features have a large influence on the classification result, and which point cloud features would work better, is a feature engineering question (one common option, eigenvalue-based neighborhood features, is sketched after the code below).

# -*- coding: utf-8 -*-
"""
Created on Sat Nov 10 10:12:02 2018
@author: yhexie
"""
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Training data: use the Z, Volume and Ncr columns as features
df = pd.read_csv('C:/Users/yhexie/.spyder-py3/pointcloudcls/train_pcloud2.csv', header=0)
x_train = df[['Z', 'Volume', 'Ncr']]
y_train = df.Classification

# Test data with the same feature columns
df2 = pd.read_csv('C:/Users/yhexie/.spyder-py3/pointcloudcls/test_pcloud2.csv', header=0)
x_test = df2[['Z', 'Volume', 'Ncr']]

# Fit a random forest with 10 trees and predict labels for the test points
clf = RandomForestClassifier(n_estimators=10)
clf.fit(x_train, y_train)
clf_y_predict = clf.predict(x_test)

# Write X, Y, Z and the predicted class back out as a CSV file
data_arry = [df2.X, df2.Y, df2.Z, clf_y_predict]
np_data = np.array(data_arry).T
save = pd.DataFrame(np_data, columns=['X', 'Y', 'Z', 'Classification'])
save.to_csv('C:/Users/yhexie/.spyder-py3/pointcloudcls/predict_pcloud2.csv',
            index=False, header=True)  # index=False / header=False would drop the row index / column headers
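On the feature question above: one common family of descriptors for point cloud classification is the eigenvalue-based neighborhood features used later in this post (Linearity, Planarity, Scattering, Omnivariance, Anisotropy, EigenEntropy, eig_sum, changeOfcurvature), with Z taken directly from the coordinates. Below is a minimal sketch of how they could be computed, assuming the cloud is an (N, 3) NumPy array and using scipy's cKDTree for the radius search; the function name compute_eigen_features and the exact formulas are illustrative, not the code actually used here.

# Minimal sketch: eigenvalue-based neighborhood features for a point cloud.
# Assumes `points` is an (N, 3) NumPy array; compute_eigen_features is an illustrative name.
import numpy as np
from scipy.spatial import cKDTree

def compute_eigen_features(points, radius=2.5):
    tree = cKDTree(points)
    feats = np.zeros((len(points), 8))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:
            continue  # not enough neighbors for a covariance matrix
        neigh = points[idx]
        cov = np.cov(neigh.T)
        # eigenvalues of the 3x3 covariance, sorted descending and clipped at 0
        l1, l2, l3 = np.clip(np.sort(np.linalg.eigvalsh(cov))[::-1], 0.0, None)
        eig_sum = l1 + l2 + l3
        if eig_sum <= 0:
            continue
        linearity = (l1 - l2) / l1
        planarity = (l2 - l3) / l1
        scattering = l3 / l1
        omnivariance = (l1 * l2 * l3) ** (1.0 / 3.0)
        anisotropy = (l1 - l3) / l1
        e = np.array([l1, l2, l3]) / eig_sum
        eigen_entropy = -np.sum(e * np.log(e + 1e-12))
        change_of_curvature = l3 / eig_sum
        feats[i] = [linearity, planarity, scattering, omnivariance,
                    anisotropy, eigen_entropy, eig_sum, change_of_curvature]
    return feats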

  (2) Split the training set: train on 75% of the data and use the remaining 25% to check the model's fit accuracy and generalization ability.

    a. Add qualitative (categorical) features and handle them with dummy encoding (see the get_dummies sketch below).

  Currently Z plus 8 eigenvalue-based point cloud features are used for classification, with a neighborhood search radius of 2.5 m.
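For the dummy handling planned in step a., pandas' get_dummies is one straightforward option for turning a qualitative column into 0/1 indicator columns before training. A minimal sketch; the column name 'ReturnType' is purely hypothetical and not a field of the CSV files above.

# Minimal sketch of dummy (one-hot) encoding a qualitative feature with pandas.
# 'ReturnType' is a hypothetical categorical column, not one of the fields above.
import pandas as pd

df = pd.DataFrame({'Z': [2.1, 5.3, 0.4],
                   'ReturnType': ['first', 'last', 'first']})
dummies = pd.get_dummies(df['ReturnType'], prefix='ReturnType')
df = pd.concat([df.drop(columns='ReturnType'), dummies], axis=1)
print(df)  # columns: Z, ReturnType_first, ReturnType_last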

# -*- coding: utf-8 -*-
"""
Created on Wed Nov 28 10:54:48 2018
@author: yhexie
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

feature_cols = ['Z', 'Linearity', 'Planarity', 'Scattering', 'Omnivariance', 'Anisotropy',
                'EigenEntropy', 'eig_sum', 'changeOfcurvature']

# Training data: Z plus 8 eigenvalue-based geometric features
df = pd.read_csv('C:/Users/yhexie/.spyder-py3/pointcloudcls/train_pc.csv', header=0)
x_train = df[feature_cols]
y_train = df.Classification

# Hold out 25% of the labelled data to check fit and generalization
train_data_X, test_data_X, train_data_Y, test_data_Y = train_test_split(
    x_train, y_train, test_size=0.25, random_state=33)

df2 = pd.read_csv('C:/Users/yhexie/.spyder-py3/pointcloudcls/test_pc.csv', header=0)
x_test = df2[feature_cols]

clf = RandomForestClassifier(n_estimators=10)
clf.fit(train_data_X, train_data_Y)
print('Accuracy on training set:{:.3f}'.format(clf.score(train_data_X, train_data_Y)))
print('Accuracy on test set:{:.3f}'.format(clf.score(test_data_X, test_data_Y)))
print('Feature importances:{}'.format(clf.feature_importances_))

# Plot the feature importances as a horizontal bar chart
n_features = len(feature_cols)
plt.barh(range(n_features), clf.feature_importances_, align='center')
plt.yticks(np.arange(n_features), feature_cols)
plt.xlabel('Feature importance')
plt.ylabel('Feature')

# Predict on the separate test point cloud and write X, Y, Z and labels to CSV
clf_y_predict = clf.predict(x_test)
data_arry = [df2.X, df2.Y, df2.Z, clf_y_predict]
np_data = np.array(data_arry).T
save = pd.DataFrame(np_data, columns=['X', 'Y', 'Z', 'Classification'])
save.to_csv('C:/Users/yhexie/.spyder-py3/pointcloudcls/predict_pcloud2.csv',
            index=False, header=True)  # index=False / header=False would drop the row index / column headers

  Results: accuracy on the held-out test split is still rather poor.

Accuracy on training set:0.984
Accuracy on test set:0.776

Feature importances:

[figure: feature importance bar chart]
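The gap between 0.984 on the training split and 0.776 on the held-out split suggests overfitting. One thing worth trying, not done in the script above, is a small cross-validated grid search over the forest's capacity parameters; a minimal sketch, reusing train_data_X / train_data_Y from the script and with an illustrative parameter grid.

# Minimal sketch: cross-validated grid search over random forest hyperparameters.
# Assumes train_data_X / train_data_Y from the script above are already defined.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

param_grid = {
    'n_estimators': [10, 50, 100],
    'max_depth': [None, 10, 20],
    'min_samples_leaf': [1, 5, 10],
}
search = GridSearchCV(RandomForestClassifier(random_state=33),
                      param_grid, cv=5, n_jobs=-1)
search.fit(train_data_X, train_data_Y)
print(search.best_params_)
print('Best CV accuracy: {:.3f}'.format(search.best_score_))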


A new test:

Accuracy on training set:0.994
Accuracy on test set:0.891
Feature importances:[0.02188956 0.02742479 0.10124688 0.01996966 0.1253002 0.02563489
0.03265565 0.100919 0.15808224 0.01937961 0.02727676 0.05498342
0.0211147 0.02387439 0.01900164 0.023478 0.02833916 0.0302441
0.02249598 0.06629199 0.05039737]

[figures: feature importance plot and classification result of the new test]

The importance of Z feels too high. The building class is presumably classified poorly; many of the green points are misclassified.
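To see which classes are actually confused with which, instead of judging from the colored plot, the hold-out split can be scored per class; a minimal sketch assuming clf, test_data_X and test_data_Y from the script above.

# Minimal sketch: per-class evaluation on the 25% hold-out split.
# Assumes clf, test_data_X, test_data_Y from the script above are defined.
from sklearn.metrics import confusion_matrix, classification_report

pred = clf.predict(test_data_X)
print(confusion_matrix(test_data_Y, pred))       # rows: true class, columns: predicted class
print(classification_report(test_data_Y, pred))  # precision / recall / F1 per class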

Question: the classes in the training set do not have equal numbers of samples -- does this affect the training result?
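Unbalanced classes can matter: the forest tends to favor the majority classes, so overall accuracy may look acceptable while the small classes are largely misclassified. Two simple checks, as a sketch reusing the variables from the script above: print the class counts, and retrain with class_weight='balanced' so each class is reweighted inversely to its frequency.

# Minimal sketch: check class balance and reweight classes in the forest.
# Assumes y_train, train_data_X, train_data_Y, test_data_X, test_data_Y from the script above.
from sklearn.ensemble import RandomForestClassifier

print(y_train.value_counts())  # samples per class in the training file

# class_weight='balanced' reweights classes inversely to their frequency
clf_balanced = RandomForestClassifier(n_estimators=100, class_weight='balanced',
                                      random_state=33)
clf_balanced.fit(train_data_X, train_data_Y)
print('Hold-out accuracy: {:.3f}'.format(clf_balanced.score(test_data_X, test_data_Y)))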