Implementing Decision Tree Classification in Python (2)
In the previous article we built the decision tree; now we can use it to classify real data. Classification requires the decision tree itself and the vector of feature names. The program compares a test sample against the values stored in the tree and recurses until it reaches a leaf node.
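Since the tree from the previous article is stored as nested dictionaries, classification is essentially a dictionary walk. Here is a minimal sketch of that idea, using a hypothetical two-feature toy tree rather than the wine data:

```python
# Toy tree in the same nested-dict format the builder below produces:
# the key is the feature tested at a node, the values are its branches.
toy_tree = {'outlook': {'sunny': 'no', 'rain': 'yes'}}
featnames = ['outlook', 'temperature']

def walk(tree, featnames, sample):
    root = list(tree.keys())[0]  # feature tested at this node
    branch = tree[root][sample[featnames.index(root)]]
    # An inner dict is a subtree; anything else is a leaf label.
    return walk(branch, featnames, sample) if isinstance(branch, dict) else branch

print(walk(toy_tree, featnames, ['rain', 'hot']))  # 'yes'
```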
This article applies the decision tree classifier to the red and white wine quality data from the UCI repository. Each record has 12 columns: fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates, alcohol, and quality.
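If you have not worked with these files before, it helps to confirm the layout first. A quick peek (the `e:/a/` path matches the code below and is an assumption about where the UCI winequality CSVs were saved; they are semicolon-separated with a quoted header row):

```python
# Print the header and the first data row to confirm the format.
with open('e:/a/winequality-red.csv') as f:
    print(f.readline().strip())  # "fixed acidity";"volatile acidity";...;"quality"
    print(f.readline().strip())  # e.g. 7.4;0.7;0;1.9;0.076;11;34;0.9978;3.51;0.56;9.4;5
```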
The full implementation follows:
```python
# coding: utf-8
'''
2017.6.26
author: Erin
function: decision tree classification (ID3)
'''
from math import log
import operator
import random


def load_data():
    # The wine quality CSVs are semicolon-separated; [1:] skips the header row.
    red = [line.strip().split(';') for line in open('e:/a/winequality-red.csv')][1:]
    white = [line.strip().split(';') for line in open('e:/a/winequality-white.csv')][1:]
    data = red + white
    random.shuffle(data)  # shuffle the combined data
    x_train = data[:800]
    x_test = data[800:]
    features = ['fixed', 'volatile', 'citric', 'residual', 'chlorides', 'free',
                'total', 'density', 'pH', 'sulphates', 'alcohol', 'quality']
    return x_train, x_test, features


def cal_entropy(dataSet):
    # Shannon entropy of the class labels (last column): H = -sum(p_i * log2(p_i))
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        label = featVec[-1]
        if label not in labelCounts:
            labelCounts[label] = 0
        labelCounts[label] += 1
    entropy = 0.0
    for key in labelCounts:
        p_i = labelCounts[key] / float(numEntries)
        entropy -= p_i * log(p_i, 2)  # log(x, 2) is the base-2 logarithm
    return entropy


def split_data(data, feature_index, value):
    '''
    Split the data set.
    feature_index: column index of the feature to split on, e.g. "age"
    value: the attribute value to keep, e.g. "youth"
    '''
    data_split = []  # the subset after splitting
    for feature in data:
        if feature[feature_index] == value:
            # Keep the row but drop the column that was split on.
            reFeature = feature[:feature_index]
            reFeature.extend(feature[feature_index + 1:])
            data_split.append(reFeature)
    return data_split


def choose_best_to_split(data):
    '''
    Compute the information gain of each feature and return the index
    of the feature whose split yields the largest gain.
    '''
    count_feature = len(data[0]) - 1  # number of candidate features
    entropy = cal_entropy(data)       # entropy of the whole data set
    max_info_gain = 0.0               # largest information gain seen so far
    split_fea_index = -1              # index of the feature with that gain
    for i in range(count_feature):
        feature_list = [fe_index[i] for fe_index in data]  # all values in this column
        unqval = set(feature_list)  # unique values of this feature
        Pro_entropy = 0.0           # conditional entropy of this feature
        for value in unqval:        # iterate over every value of this feature
            sub_data = split_data(data, i, value)
            pro = len(sub_data) / float(len(data))
            Pro_entropy += pro * cal_entropy(sub_data)
        info_gain = entropy - Pro_entropy
        if info_gain > max_info_gain:
            max_info_gain = info_gain
            split_fea_index = i
    return split_fea_index


def most_occur_label(labels):
    # Return the most frequent class label (majority vote).
    label_count = {}
    for label in labels:
        if label not in label_count:
            label_count[label] = 0
        label_count[label] += 1
    sorted_label_count = sorted(label_count.items(),
                                key=operator.itemgetter(1), reverse=True)
    return sorted_label_count[0][0]  # the label with the highest count


def build_decesion_tree(dataSet, featnames):
    '''
    Dictionary keys store node information; branches and leaf nodes store values.
    '''
    featname = featnames[:]
    classlist = [featvec[-1] for featvec in dataSet]  # class labels at this node
    if classlist.count(classlist[0]) == len(classlist):  # all samples in one class
        return classlist[0]
    if len(dataSet[0]) == 1:  # no features left to split on
        return most_occur_label(classlist)  # majority vote
    # Choose the best feature to split on.
    bestFeat = choose_best_to_split(dataSet)
    bestFeatname = featname[bestFeat]
    del(featname[bestFeat])  # keep indices consistent with the reduced data
    DecisionTree = {bestFeatname: {}}
    # Create one branch per value of the chosen feature.
    allvalue = [vec[bestFeat] for vec in dataSet]
    specvalue = sorted(list(set(allvalue)))  # fix the branch order
    for v in specvalue:
        copyfeatname = featname[:]
        DecisionTree[bestFeatname][v] = build_decesion_tree(
            split_data(dataSet, bestFeat, v), copyfeatname)
    return DecisionTree


def classify(Tree, featnames, X):
    classLabel = ''  # stays '' if the sample's value was never seen in training
    root = list(Tree.keys())[0]
    firstDict = Tree[root]
    featindex = featnames.index(root)  # column index of the root attribute
    for key in firstDict.keys():  # follow the branch matching the sample's value
        if X[featindex] == key:
            if type(firstDict[key]) == type({}):
                classLabel = classify(firstDict[key], featnames, X)  # subtree: recurse
            else:
                classLabel = firstDict[key]  # leaf: return its label
    return classLabel


if __name__ == '__main__':
    x_train, x_test, features = load_data()
    newtree = build_decesion_tree(x_train, features)
    # print(newtree)
    # classLabel = classify(newtree, features,
    #     ['7.4', '0.66', '0', '1.8', '0.075', '13', '40', '0.9978', '3.51', '0.56', '9.4', '5'])
    # print(classLabel)
    count = 0
    for test in x_test:
        label = classify(newtree, features, test)
        if label == test[-1]:
            count += 1
    accuracy = count / float(len(x_test))
    print(accuracy)
```
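As a quick sanity check of the entropy and information-gain helpers above, a tiny hand-made data set (a toy example, not from the wine files) behaves as expected:

```python
# Toy data: the last column is the class label, as in the functions above.
toy = [['sunny', 'hot', 'no'],
       ['sunny', 'mild', 'no'],
       ['rain', 'mild', 'yes'],
       ['rain', 'cool', 'yes']]
print(cal_entropy(toy))           # 1.0 -- two classes, evenly split
print(choose_best_to_split(toy))  # 0 -- the first column separates the classes perfectly
```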
The test accuracy comes out at roughly 0.7. This concludes the decision tree classification algorithm.
That is all for this article. I hope it is helpful for your study, and please continue to support 腳本之家.