
A Brief Look at a Python System for Score-Based Admission Analysis

 更新時(shí)間:2019年02月11日 16:02:39   作者:月上賀蘭  
This article walks through a small Python system that analyzes student scores to judge whether a student is likely to pursue further study; readers who need it can follow along.

Case study: the dataset records each student's scores. We analyze it to judge whether a student is likely to pursue graduate study.

數(shù)據(jù)集特征展示

1  GRE score (290 to 340)
2  TOEFL score (92 to 120)
3  University rating (1 to 5)
4  Statement of purpose (SOP) strength (1 to 5)
5  Letter of recommendation (LOR) strength (1 to 5)
6  CGPA (6.8 to 9.92)
7  Research experience (0 or 1)
8  Chance of admission (0.34 to 0.97)
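Before going further, it is worth sanity-checking these documented ranges against the file itself. A minimal sketch, assuming the same CSV path used throughout this article:

import pandas as pd

df = pd.read_csv("D:\\machine-learning\\score\\Admission_Predict.csv", sep=",")
# The min/max of each column should match the ranges listed above
print(df.describe().loc[['min', 'max']].T)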

1.導(dǎo)入包

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os,sys

2.導(dǎo)入并查看數(shù)據(jù)集

df = pd.read_csv("D:\\machine-learning\\score\\Admission_Predict.csv",sep = ",")
print('There are', len(df.columns), 'columns')
for c in df.columns:
    sys.stdout.write(str(c) + ', ')
There are 9 columns
Serial No., GRE Score, TOEFL Score, University Rating, SOP, LOR , CGPA, Research, Chance of Admit , 
There are 9 feature columns in total.
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 400 entries, 0 to 399
Data columns (total 9 columns):
Serial No.           400 non-null int64
GRE Score            400 non-null int64
TOEFL Score          400 non-null int64
University Rating    400 non-null int64
SOP                  400 non-null float64
LOR                  400 non-null float64
CGPA                 400 non-null float64
Research             400 non-null int64
Chance of Admit      400 non-null float64
dtypes: float64(4), int64(5)
memory usage: 28.2 KB

數(shù)據(jù)集信息:
1.數(shù)據(jù)有9個(gè)特征,分別是學(xué)號(hào),GRE分?jǐn)?shù),托福分?jǐn)?shù),學(xué)校等級(jí),SOP,LOR,CGPA,是否參加研習(xí),進(jìn)修的幾率
2.數(shù)據(jù)集中沒(méi)有空值
3.一共有400條數(shù)據(jù)
# Tidy the column name (the original header has a trailing space)
df = df.rename(columns={'Chance of Admit ':'Chance of Admit'})
# Show the first 5 rows
df.head()
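To back up the claim above that the dataset has no missing values, a one-line check can confirm it:

# Should print 0 for every column
print(df.isnull().sum())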

3.查看每個(gè)特征的相關(guān)性

fig,ax = plt.subplots(figsize=(10,10))
sns.heatmap(df.corr(),ax=ax,annot=True,linewidths=0.05,fmt='.2f',cmap='magma')
plt.show()

結(jié)論:1.最有可能影響是否讀碩士的特征是GRE,CGPA,TOEFL成績(jī)

2.影響相對(duì)較小的特征是LOR,SOP,和Research

4.數(shù)據(jù)可視化,雙變量分析

4.1 進(jìn)行Research的人數(shù)

print("Not Having Research:",len(df[df.Research == 0]))
print("Having Research:",len(df[df.Research == 1]))
y = np.array([len(df[df.Research == 0]),len(df[df.Research == 1])])
x = np.arange(2)
plt.bar(x,y)
plt.title("Research Experience")
plt.xlabel("Canditates")
plt.ylabel("Frequency")
plt.xticks(x,('Not having research','Having research'))
plt.show()

  結(jié)論:進(jìn)行research的人數(shù)是219,本科沒(méi)有research人數(shù)是181

  4.2 學(xué)生的托福成績(jī)

y = np.array([df['TOEFL Score'].min(),df['TOEFL Score'].mean(),df['TOEFL Score'].max()])
x = np.arange(3)
plt.bar(x,y)
plt.title('TOEFL Score')
plt.xlabel('Level')
plt.ylabel('TOEFL Score')
plt.xticks(x,('Worst','Average','Best'))
plt.show()

結(jié)論:最低分92分,最高分滿分,進(jìn)修學(xué)生的英語(yǔ)成績(jī)很不錯(cuò)

4.3 GRE scores

df['GRE Score'].plot(kind='hist',bins=200,figsize=(6,6))
plt.title('GRE Score')
plt.xlabel('GRE Score')
plt.ylabel('Frequency')
plt.show()

結(jié)論:310和330的分值的學(xué)生居多

4.4 CGPA vs. university rating

plt.scatter(df['University Rating'],df['CGPA'])
plt.title('CGPA Scores for University ratings')
plt.xlabel('University Rating')
plt.ylabel('CGPA')
plt.show()

結(jié)論:學(xué)校越好,學(xué)生的GPA可能就越高

4.5 GRE成績(jī)和CGPA的關(guān)系

plt.scatter(df['GRE Score'],df['CGPA'])
plt.title('CGPA for GRE Scores')
plt.xlabel('GRE Score')
plt.ylabel('CGPA')
plt.show()

結(jié)論:GPA基點(diǎn)越高,GRE分?jǐn)?shù)越高,2者的相關(guān)性很大

4.6 托福成績(jī)和GRE成績(jī)的關(guān)系

df[df['CGPA']>=8.5].plot(kind='scatter',x='GRE Score',y='TOEFL Score',color='red')
plt.xlabel('GRE Score')
plt.ylabel('TOEFL Score')
plt.title('CGPA >= 8.5')
plt.grid(True)
plt.show()

結(jié)論:多數(shù)情況下GRE和托福成正相關(guān),但是GRE分?jǐn)?shù)高,托福一定高。

4.6 學(xué)校等級(jí)和是否讀碩士的關(guān)系

s = df[df['Chance of Admit'] >= 0.75]['University Rating'].value_counts().head(5)
plt.title('University Ratings of Candidates with a 75% acceptance chance')
s.plot(kind='bar',figsize=(20,10),cmap='Pastel1')
plt.xlabel('University Rating')
plt.ylabel('Candidates')
plt.show()

結(jié)論:排名靠前的學(xué)校的學(xué)生,進(jìn)修的可能性更大

4.8 SOP vs. CGPA

plt.scatter(df['CGPA'],df['SOP'])
plt.xlabel('CGPA')
plt.ylabel('SOP')
plt.title('SOP for CGPA')
plt.show()

結(jié)論: GPA很高的學(xué)生,選擇讀碩士的自我意愿更強(qiáng)烈

4.9 SOP vs. GRE score

plt.scatter(df['GRE Score'],df['SOP'])
plt.xlabel('GRE Score')
plt.ylabel('SOP')
plt.title('SOP for GRE Score')
plt.show()

結(jié)論:讀碩士意愿強(qiáng)的學(xué)生,GRE分?jǐn)?shù)較高

5. Models

5.1 準(zhǔn)備數(shù)據(jù)集

# 讀取數(shù)據(jù)集
df = pd.read_csv('D:\\machine-learning\\score\\Admission_Predict.csv',sep=',')

serialNO = df['Serial No.'].values

df.drop(['Serial No.'],axis=1,inplace=True)
df = df.rename(columns={'Chance of Admit ':'Chance of Admit'})

# 分割數(shù)據(jù)集
y = df['Chance of Admit'].values
x = df.drop(['Chance of Admit'],axis=1)

from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=42)
# 歸一化數(shù)據(jù)
from sklearn.preprocessing import MinMaxScaler
scaleX = MinMaxScaler(feature_range=[0,1])
x_train[x_train.columns] = scaleX.fit_transform(x_train[x_train.columns])
x_test[x_test.columns] = scaleX.fit_transform(x_test[x_test.columns])

5.2 Regression

5.2.1 Linear regression

from sklearn.linear_model import LinearRegression

lr = LinearRegression()
lr.fit(x_train,y_train)
y_head_lr = lr.predict(x_test)

print('Real value of y_test[1]: '+str(y_test[1]) + ' -> predict value: ' + str(lr.predict(x_test.iloc[[1],:])))
print('Real value of y_test[2]: '+str(y_test[2]) + ' -> predict value: ' + str(lr.predict(x_test.iloc[[2],:])))

from sklearn.metrics import r2_score
print('r_square score: ',r2_score(y_test,y_head_lr))
y_head_lr_train = lr.predict(x_train)
print('r_square score(train data):',r2_score(y_train,y_head_lr_train))

5.2.2 隨機(jī)森林回歸

from sklearn.ensemble import RandomForestRegressor

rfr = RandomForestRegressor(n_estimators=100,random_state=42)
rfr.fit(x_train,y_train)
y_head_rfr = rfr.predict(x_test)

print('Real value of y_test[1]: '+str(y_test[1]) + ' -> predict value: ' + str(rfr.predict(x_test.iloc[[1],:])))
print('Real value of y_test[2]: '+str(y_test[2]) + ' -> predict value: ' + str(rfr.predict(x_test.iloc[[2],:])))

from sklearn.metrics import r2_score
print('r_square score: ',r2_score(y_test,y_head_rfr))
y_head_rfr_train = rfr.predict(x_train)
print('r_square score(train data):',r2_score(y_train,y_head_rfr_train))

5.2.3 決策樹(shù)回歸

from sklearn.tree import DecisionTreeRegressor

dt = DecisionTreeRegressor(random_state=42)
dt.fit(x_train,y_train)
y_head_dt = dt.predict(x_test)

print('Real value of y_test[1]: '+str(y_test[1]) + ' -> predict value: ' + str(dt.predict(x_test.iloc[[1],:])))
print('Real value of y_test[2]: '+str(y_test[2]) + ' -> predict value: ' + str(dt.predict(x_test.iloc[[2],:])))

from sklearn.metrics import r2_score
print('r_square score: ',r2_score(y_test,y_head_dt))
y_head_dt_train = dt.predict(x_train)
print('r_square score(train data):',r2_score(y_train,y_head_dt_train))

5.2.4 Comparing the three regression methods

y = np.array([r2_score(y_test,y_head_lr),r2_score(y_test,y_head_rfr),r2_score(y_test,y_head_dt)])
x = np.arange(3)
plt.bar(x,y)
plt.title('Comparison of Regression Algorithms')
plt.xlabel('Regression')
plt.ylabel('r2_score')
plt.xticks(x,("LinearRegression","RandomForestReg.","DecisionTreeReg."))
plt.show()

結(jié)論 : 回歸算法中,線性回歸的性能更優(yōu)

5.2.5 Comparing the three regression methods against actual values

red = plt.scatter(np.arange(0,80,5),y_head_lr[0:80:5],color='red')
blue = plt.scatter(np.arange(0,80,5),y_head_rfr[0:80:5],color='blue')
green = plt.scatter(np.arange(0,80,5),y_head_dt[0:80:5],color='green')
black = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color='black')
plt.title('Comparison of Regression Algorithms')
plt.xlabel('Index of candidate')
plt.ylabel('Chance of admit')
plt.legend([red,blue,green,black],['LR','RFR','DT','REAL'])
plt.show()

結(jié)論:在數(shù)據(jù)集中有70%的候選人有可能讀碩士,從上圖來(lái)看還有些點(diǎn)沒(méi)有很好的得到預(yù)測(cè)

5.3 Classification algorithms

5.3.1 準(zhǔn)備數(shù)據(jù)

df = pd.read_csv('D:\\machine-learning\\score\\Admission_Predict.csv',sep=',')

SerialNO = df['Serial No.'].values
df.drop(['Serial No.'],axis=1,inplace=True)

df = df.rename(columns={'Chance of Admit ':'Chance of Admit'})
y = df['Chance of Admit'].values
x = df.drop(['Chance of Admit'],axis=1)


from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=42)

from sklearn.preprocessing import MinMaxScaler
scaleX = MinMaxScaler(feature_range=(0,1))
x_train[x_train.columns] = scaleX.fit_transform(x_train[x_train.columns])
# transform, not fit_transform: scale the test set with the training-set parameters
x_test[x_test.columns] = scaleX.transform(x_test[x_test.columns])

# If chance > 0.8, set Chance of Admit to 1, otherwise 0
y_train_01 = [1 if each > 0.8 else 0 for each in y_train]
y_test_01 = [1 if each > 0.8 else 0 for each in y_test]

y_train_01 = np.array(y_train_01)
y_test_01 = np.array(y_test_01)
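Thresholding at 0.8 produces an imbalanced labeling, so it is worth checking the class balance before training the classifiers; a quick sketch:

# Class counts after thresholding (index 0 = negatives, index 1 = positives)
print('train:', np.bincount(y_train_01))
print('test: ', np.bincount(y_test_01))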

5.3.2 Logistic regression

from sklearn.linear_model import LogisticRegression

lrc = LogisticRegression()
lrc.fit(x_train,y_train_01)
print('score: ',lrc.score(x_test,y_test_01))
print('Real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(lrc.predict(x_test.iloc[[1],:])))
print('Real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(lrc.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix
cm_lrc = confusion_matrix(y_test_01,lrc.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_lrc,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)
plt.title('Test for Test dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score
print('precision_score is : ',precision_score(y_test_01,lrc.predict(x_test)))
print('recall_score is : ',recall_score(y_test_01,lrc.predict(x_test)))
print('f1_score is : ',f1_score(y_test_01,lrc.predict(x_test)))

# Test for Train Dataset:

cm_lrc_train = confusion_matrix(y_train_01,lrc.predict(x_train))
f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_lrc_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)
plt.title('Test for Train dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

結(jié)論:1.通過(guò)混淆矩陣,邏輯回歸算法在訓(xùn)練集樣本上,有23個(gè)分錯(cuò)的樣本,有72人想進(jìn)一步讀碩士

2.在測(cè)試集上有7個(gè)分錯(cuò)的樣本 

5.3.3 Support vector machine (SVM)

from sklearn.svm import SVC

svm = SVC(random_state=1,kernel='rbf')
svm.fit(x_train,y_train_01)
print('score: ',svm.score(x_test,y_test_01))
print('Real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(svm.predict(x_test.iloc[[1],:])))
print('Real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(svm.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix
cm_svm = confusion_matrix(y_test_01,svm.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_svm,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)
plt.title('Test for Test dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score
print('precision_score is : ',precision_score(y_test_01,svm.predict(x_test)))
print('recall_score is : ',recall_score(y_test_01,svm.predict(x_test)))
print('f1_score is : ',f1_score(y_test_01,svm.predict(x_test)))

# Test for Train Dataset:

cm_svm_train = confusion_matrix(y_train_01,svm.predict(x_train))
f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_svm_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)
plt.title('Test for Train dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

結(jié)論:1.通過(guò)混淆矩陣,SVM算法在訓(xùn)練集樣本上,有22個(gè)分錯(cuò)的樣本,有70人想進(jìn)一步讀碩士

2.在測(cè)試集上有8個(gè)分錯(cuò)的樣本

5.3.4 Naive Bayes

from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(x_train,y_train_01)
print('score: ',nb.score(x_test,y_test_01))
print('Real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(nb.predict(x_test.iloc[[1],:])))
print('Real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(nb.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix
cm_nb = confusion_matrix(y_test_01,nb.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_nb,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)
plt.title('Test for Test dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score
print('precision_score is : ',precision_score(y_test_01,nb.predict(x_test)))
print('recall_score is : ',recall_score(y_test_01,nb.predict(x_test)))
print('f1_score is : ',f1_score(y_test_01,nb.predict(x_test)))

# Test for Train Dataset:

cm_nb_train = confusion_matrix(y_train_01,nb.predict(x_train))
f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_nb_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)
plt.title('Test for Train dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

結(jié)論:1.通過(guò)混淆矩陣,樸素貝葉斯算法在訓(xùn)練集樣本上,有20個(gè)分錯(cuò)的樣本,有78人想進(jìn)一步讀碩士

2.在測(cè)試集上有7個(gè)分錯(cuò)的樣本

5.3.5 隨機(jī)森林分類器

from sklearn.ensemble import RandomForestClassifier

rfc = RandomForestClassifier(n_estimators=100,random_state=1)
rfc.fit(x_train,y_train_01)
print('score: ',rfc.score(x_test,y_test_01))
print('Real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(rfc.predict(x_test.iloc[[1],:])))
print('Real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(rfc.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix
cm_rfc = confusion_matrix(y_test_01,rfc.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_rfc,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)
plt.title('Test for Test dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score
print('precision_score is : ',precision_score(y_test_01,rfc.predict(x_test)))
print('recall_score is : ',recall_score(y_test_01,rfc.predict(x_test)))
print('f1_score is : ',f1_score(y_test_01,rfc.predict(x_test)))

# Test for Train Dataset:

cm_rfc_train = confusion_matrix(y_train_01,rfc.predict(x_train))
f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_rfc_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)
plt.title('Test for Train dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

結(jié)論:1.通過(guò)混淆矩陣,隨機(jī)森林算法在訓(xùn)練集樣本上,有0個(gè)分錯(cuò)的樣本,有88人想進(jìn)一步讀碩士

2.在測(cè)試集上有5個(gè)分錯(cuò)的樣本

5.3.6 決策樹(shù)分類器

from sklearn.tree import DecisionTreeClassifier

dtc = DecisionTreeClassifier(criterion='entropy',max_depth=3)
dtc.fit(x_train,y_train_01)
print('score: ',dtc.score(x_test,y_test_01))
print('Real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(dtc.predict(x_test.iloc[[1],:])))
print('Real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(dtc.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix
cm_dtc = confusion_matrix(y_test_01,dtc.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_dtc,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)
plt.title('Test for Test dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score
print('precision_score is : ',precision_score(y_test_01,dtc.predict(x_test)))
print('recall_score is : ',recall_score(y_test_01,dtc.predict(x_test)))
print('f1_score is : ',f1_score(y_test_01,dtc.predict(x_test)))

# Test for Train Dataset:

cm_dtc_train = confusion_matrix(y_train_01,dtc.predict(x_train))
f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_dtc_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)
plt.title('Test for Train dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

結(jié)論:1.通過(guò)混淆矩陣,決策樹(shù)算法在訓(xùn)練集樣本上,有20個(gè)分錯(cuò)的樣本,有78人想進(jìn)一步讀碩士

2.在測(cè)試集上有7個(gè)分錯(cuò)的樣本

5.3.7 K-nearest neighbors (KNN) classifier

from sklearn.neighbors import KNeighborsClassifier

scores = []
for each in range(1,50):
    knn_n = KNeighborsClassifier(n_neighbors=each)
    knn_n.fit(x_train,y_train_01)
    scores.append(knn_n.score(x_test,y_test_01))

plt.plot(range(1,50),scores)
plt.xlabel('k')
plt.ylabel('Accuracy')
plt.show()
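Rather than reading the best k off the plot, it can be picked programmatically from the recorded scores, which motivates the choice of n_neighbors=7 below; a small sketch:

# range() starts at 1, so add 1 to the argmax index to recover k
best_k = int(np.argmax(scores)) + 1
print('best k:', best_k, 'with accuracy:', max(scores))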


knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(x_train,y_train_01)
print('score 7 : ',knn.score(x_test,y_test_01))
print('Real value of y_test_01[1]: '+str(y_test_01[1]) + ' -> predict value: ' + str(knn.predict(x_test.iloc[[1],:])))
print('Real value of y_test_01[2]: '+str(y_test_01[2]) + ' -> predict value: ' + str(knn.predict(x_test.iloc[[2],:])))

from sklearn.metrics import confusion_matrix
cm_knn = confusion_matrix(y_test_01,knn.predict(x_test))

f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_knn,annot=True,linewidths=0.5,linecolor='red',fmt='.0f',ax=ax)
plt.title('Test for Test dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

from sklearn.metrics import recall_score,precision_score,f1_score
print('precision_score is : ',precision_score(y_test_01,knn.predict(x_test)))
print('recall_score is : ',recall_score(y_test_01,knn.predict(x_test)))
print('f1_score is : ',f1_score(y_test_01,knn.predict(x_test)))

# Test for Train Dataset:

cm_knn_train = confusion_matrix(y_train_01,knn.predict(x_train))
f,ax = plt.subplots(figsize=(5,5))
sns.heatmap(cm_knn_train,annot=True,linewidths=0.5,linecolor='blue',fmt='.0f',ax=ax)
plt.title('Test for Train dataset')
plt.xlabel('predicted y values')
plt.ylabel('real y value')
plt.show()

結(jié)論:1.通過(guò)混淆矩陣,K臨近算法在訓(xùn)練集樣本上,有22個(gè)分錯(cuò)的樣本,有71人想進(jìn)一步讀碩士

2.在測(cè)試集上有7個(gè)分錯(cuò)的樣本

5.3.8 Comparing the classifiers

y = np.array([lrc.score(x_test,y_test_01),svm.score(x_test,y_test_01),nb.score(x_test,y_test_01),
    dtc.score(x_test,y_test_01),rfc.score(x_test,y_test_01),knn.score(x_test,y_test_01)])
x = np.arange(6)
plt.bar(x,y)
plt.title('Comparison of Classification Algorithms')
plt.xlabel('Classification')
plt.ylabel('Score')
plt.xticks(x,("LogisticReg.","SVM","GNB","Dec.Tree","Ran.Forest","KNN"))
plt.show()

結(jié)論:隨機(jī)森林和樸素貝葉斯二者的預(yù)測(cè)值都比較高

5.4 Clustering algorithms

5.4.1 準(zhǔn)備數(shù)據(jù)

df = pd.read_csv('D:\\machine-learning\\score\\Admission_Predict.csv',sep=',')
df = df.rename(columns={'Chance of Admit ':'Chance of Admit'})
serialNo = df['Serial No.']
df.drop(['Serial No.'],axis=1,inplace=True)
df = (df - np.min(df)) / (np.max(df)-np.min(df))
y = df['Chance of Admit']
x = df.drop(['Chance of Admit'],axis=1)

5.4.2 Dimensionality reduction

from sklearn.decomposition import PCA

pca = PCA(n_components=1,whiten=True)
pca.fit(x)
x_pca = pca.transform(x)
x_pca = x_pca.reshape(400)
dictionary = {'x':x_pca,'y':y}
data = pd.DataFrame(dictionary)
print('pca data:',data.head())

print()

print('original data:',df.head())
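Since eight features are collapsed into a single component, it is worth checking how much variance that one component actually retains; PCA exposes this directly:

# Fraction of the total variance captured by the single component
print('explained variance ratio:', pca.explained_variance_ratio_)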

5.4.3 K-means clustering

from sklearn.cluster import KMeans

wcss = []
for k in range(1,15):
    kmeans = KMeans(n_clusters=k)
    kmeans.fit(x)
    wcss.append(kmeans.inertia_)
plt.plot(range(1,15),wcss)
plt.xlabel('k (number of clusters)')
plt.ylabel('WCSS')
plt.show()

df["Serial No."] = serialNo
kmeans = KMeans(n_clusters=3)
clusters_knn = kmeans.fit_predict(x)
df['label_kmeans'] = clusters_knn


plt.scatter(df[df.label_kmeans == 0 ]["Serial No."],df[df.label_kmeans == 0]['Chance of Admit'],color = "red")
plt.scatter(df[df.label_kmeans == 1 ]["Serial No."],df[df.label_kmeans == 1]['Chance of Admit'],color = "blue")
plt.scatter(df[df.label_kmeans == 2 ]["Serial No."],df[df.label_kmeans == 2]['Chance of Admit'],color = "green")
plt.title("K-means Clustering")
plt.xlabel("Candidates")
plt.ylabel("Chance of Admit")
plt.show()

plt.scatter(data.x[df.label_kmeans == 0 ],data[df.label_kmeans == 0].y,color = "red")
plt.scatter(data.x[df.label_kmeans == 1 ],data[df.label_kmeans == 1].y,color = "blue")
plt.scatter(data.x[df.label_kmeans == 2 ],data[df.label_kmeans == 2].y,color = "green")
plt.title("K-means Clustering")
plt.xlabel("X")
plt.ylabel("Chance of Admit")
plt.show()

結(jié)論:數(shù)據(jù)集分成三個(gè)類別,一部分學(xué)生是決定繼續(xù)讀碩士,一部分放棄,還有一部分學(xué)生的比較猶豫,但是深造的可能性較大

5.4.4 Hierarchical clustering

from scipy.cluster.hierarchy import linkage,dendrogram

merg = linkage(x,method='ward')
dendrogram(merg,leaf_rotation=90)
plt.xlabel('data points')
plt.ylabel('euclidean distance')
plt.show()

from sklearn.cluster import AgglomerativeClustering

# Note: in scikit-learn >= 1.2 the 'affinity' argument is deprecated in
# favor of 'metric' (and removed in 1.4)
hierarchical_cluster = AgglomerativeClustering(n_clusters=3,affinity='euclidean',linkage='ward')
clusters_hierarchical = hierarchical_cluster.fit_predict(x)
df['label_hierarchical'] = clusters_hierarchical

plt.scatter(df[df.label_hierarchical == 0]["Serial No."],df[df.label_hierarchical == 0]['Chance of Admit'],color = "red")
plt.scatter(df[df.label_hierarchical == 1]["Serial No."],df[df.label_hierarchical == 1]['Chance of Admit'],color = "blue")
plt.scatter(df[df.label_hierarchical == 2]["Serial No."],df[df.label_hierarchical == 2]['Chance of Admit'],color = "green")
plt.title('Hierarchical Clustering')
plt.xlabel('Candidates')
plt.ylabel('Chance of Admit')
plt.show()

plt.scatter(data[df.label_hierarchical == 0].x,data.y[df.label_hierarchical==0],color='red')
plt.scatter(data[df.label_hierarchical == 1].x,data.y[df.label_hierarchical==1],color='blue')
plt.scatter(data[df.label_hierarchical == 2].x,data.y[df.label_hierarchical==2],color='green')
plt.title('Hierarchical Clustering')
plt.xlabel('X')
plt.ylabel('Chance of Admit')
plt.show()

結(jié)論:從層次聚類的結(jié)果中,可以看出和K均值聚類的結(jié)果一致,只不過(guò)確定了聚類k的取值3

結(jié)論:通過(guò)本詞入門數(shù)據(jù)集的訓(xùn)練,可以掌握

1.一些特征的展示的方法

2.如何調(diào)用sklearn 的API

3.如何取比較不同模型之間的好壞

代碼+數(shù)據(jù)集:https://github.com/Mounment/python-data-analyze/tree/master/kaggle/score

相關(guān)文章

  • 使用pd.merge表連接出現(xiàn)多余行的問(wèn)題解決

    使用pd.merge表連接出現(xiàn)多余行的問(wèn)題解決

    本文主要介紹了使用pd.merge表連接出現(xiàn)多余行的問(wèn)題解決,文中通過(guò)示例代碼介紹的非常詳細(xì),對(duì)大家的學(xué)習(xí)或者工作具有一定的參考學(xué)習(xí)價(jià)值,需要的朋友們下面隨著小編來(lái)一起學(xué)習(xí)學(xué)習(xí)吧
    2022-06-06
  • OpenCV模板匹配matchTemplate的實(shí)現(xiàn)

    OpenCV模板匹配matchTemplate的實(shí)現(xiàn)

    這篇文章主要介紹了OpenCV模板匹配matchTemplate的實(shí)現(xiàn),文中通過(guò)示例代碼介紹的非常詳細(xì),對(duì)大家的學(xué)習(xí)或者工作具有一定的參考學(xué)習(xí)價(jià)值,需要的朋友們下面隨著小編來(lái)一起學(xué)習(xí)學(xué)習(xí)吧
    2019-10-10
  • Python+request+unittest實(shí)現(xiàn)接口測(cè)試框架集成實(shí)例

    Python+request+unittest實(shí)現(xiàn)接口測(cè)試框架集成實(shí)例

    這篇文章主要介紹了Python+request+unittest實(shí)現(xiàn)接口測(cè)試框架集成實(shí)例,小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,也給大家做個(gè)參考。一起跟隨小編過(guò)來(lái)看看吧
    2018-03-03
  • Pytest單元測(cè)試框架生成HTML測(cè)試報(bào)告及優(yōu)化的步驟

    Pytest單元測(cè)試框架生成HTML測(cè)試報(bào)告及優(yōu)化的步驟

    本文主要介紹了Pytest單元測(cè)試框架生成HTML測(cè)試報(bào)告及優(yōu)化的步驟,文中通過(guò)示例代碼介紹的非常詳細(xì),具有一定的參考價(jià)值,感興趣的小伙伴們可以參考一下
    2022-01-01
  • Python中字符串格式化的方法小結(jié)

    Python中字符串格式化的方法小結(jié)

    在Python中,格式化字符串輸出是一項(xiàng)非常常見(jiàn)的任務(wù),Python提供了多種方式來(lái)實(shí)現(xiàn)字符串格式化,每種方式都有其獨(dú)特的優(yōu)勢(shì)和用法,下面我們就來(lái)學(xué)習(xí)一下這些方法的具體操作吧
    2023-11-11
  • Python中類的mro與繼承關(guān)系詳解(二)

    Python中類的mro與繼承關(guān)系詳解(二)

    這篇文章主要介紹了Python中類的mro與繼承關(guān)系詳解,上一篇我們已經(jīng)通過(guò)mro相關(guān)資料介紹了mro繼承關(guān)系的基礎(chǔ)內(nèi)容,這片緊接著上一篇文章展開(kāi)詳細(xì)內(nèi)容,需要的朋友可以參考一下
    2022-07-07
  • Python PyQt5學(xué)習(xí)之樣式設(shè)置詳解

    Python PyQt5學(xué)習(xí)之樣式設(shè)置詳解

    這篇文章主要為大家詳細(xì)介紹了Python PyQt5中樣式設(shè)置的相關(guān)資料,例如為標(biāo)簽添加背景圖片、為按鈕添加背景圖片、設(shè)置窗口透明等,感興趣的可以學(xué)習(xí)一下
    2022-12-12
  • Pygame庫(kù)200行代碼實(shí)現(xiàn)簡(jiǎn)易飛機(jī)大戰(zhàn)

    Pygame庫(kù)200行代碼實(shí)現(xiàn)簡(jiǎn)易飛機(jī)大戰(zhàn)

    本文主要介紹了Pygame庫(kù)200行代碼實(shí)現(xiàn)簡(jiǎn)易飛機(jī)大戰(zhàn),文中通過(guò)示例代碼介紹的非常詳細(xì),具有一定的參考價(jià)值,感興趣的小伙伴們可以參考一下
    2021-12-12
  • python使用numpy生成18種特殊數(shù)組

    python使用numpy生成18種特殊數(shù)組

    這篇文章主要介紹了python使用numpy生成18種特殊數(shù)組的方法,文章通過(guò)代碼示例介紹的非常詳細(xì),對(duì)大家的學(xué)習(xí)或工作有一定的參考價(jià)值,需要的朋友可以參考下
    2023-09-09
  • 關(guān)于ZeroMQ 三種模式python3實(shí)現(xiàn)方式

    關(guān)于ZeroMQ 三種模式python3實(shí)現(xiàn)方式

    今天小編就為大家分享一篇關(guān)于ZeroMQ 三種模式python3實(shí)現(xiàn)方式,具有很好的參考價(jià)值,希望對(duì)大家有所幫助。一起跟隨小編過(guò)來(lái)看看吧
    2019-12-12

最新評(píng)論