k-means clustering implementation in Python
The idea behind k-means is fairly simple; put plainly, it is "birds of a feather flock together". I spent some time implementing k-means in Python. The algorithm has weaknesses of its own. One is the choice of the k initial centroids, for which many people have proposed the k-means++ improvement (a minimal seeding sketch follows the basic kMeans run below). Another is that there is no complete theory for choosing the size of k; the classic tools here are the silhouette coefficient (a short example appears at the end) and bisecting clustering, and an implementation of bisecting k-means is included at the end. The code mainly follows the book Machine Learning in Action:
#encoding:utf-8
'''
Created on 2015-09-21
@author: ZHOUMEIXU204
'''
path=u"D:\\Users\\zhoumeixu204\\Desktop\\python語言機器學(xué)習(xí)\\機器學(xué)習(xí)實戰(zhàn)代碼 python\\機器學(xué)習(xí)實戰(zhàn)代碼\\machinelearninginaction\\Ch10\\"
import numpy as np
def loadDataSet(fileName):  # load a tab-delimited data file into a list of lists
    dataMat = []
    fr = open(fileName)
    for line in fr.readlines():
        curLine = line.strip().split('\t')
        fltLine = list(map(float, curLine))  # list() is required under Python 3, where map is lazy
        dataMat.append(fltLine)
    return dataMat
def distEclud(vecA, vecB):  # Euclidean distance between two row vectors
    return np.sqrt(np.sum(np.power(vecA - vecB, 2)))
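# A quick sanity check (my own example, not from the book's code):
# distEclud(np.mat([0.0, 0.0]), np.mat([3.0, 4.0])) returns 5.0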
def randCent(dataSet, k):  # build k random centroids inside the data's bounding box
    n = np.shape(dataSet)[1]
    centroids = np.mat(np.zeros((k, n)))
    for j in range(n):
        minJ = np.min(dataSet[:, j])
        rangeJ = float(np.max(dataSet[:, j]) - minJ)
        centroids[:, j] = minJ + rangeJ * np.random.rand(k, 1)
    return centroids
dataMat = np.mat(loadDataSet(path + 'testSet.txt'))
print(dataMat[:, 0])
# every number compares greater than -np.inf
# every number compares less than +np.inf
def kMeans(dataSet, k, distMeas=distEclud, createCent=randCent):
    m = np.shape(dataSet)[0]
    clusterAssment = np.mat(np.zeros((m, 2)))  # column 0: cluster index, column 1: squared distance
    centroids = createCent(dataSet, k)
    clusterChanged = True
    while clusterChanged:
        clusterChanged = False
        for i in range(m):
            minDist = np.inf; minIndex = -1  # np.inf is positive infinity
            for j in range(k):
                distJI = distMeas(centroids[j, :], dataSet[i, :])
                if distJI < minDist:
                    minDist = distJI; minIndex = j
            if clusterAssment[i, 0] != minIndex: clusterChanged = True
            clusterAssment[i, :] = minIndex, minDist**2
        print(centroids)
        for cent in range(k):
            # np.nonzero returns a tuple of index arrays, one per axis; [0] keeps the row indices.
            # e.g. np.nonzero([1, 2, 3, 0, 0, 4, 0]) gives the indices 0, 1, 2, 5
            ptsInClust = dataSet[np.nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = np.mean(ptsInClust, axis=0)  # move each centroid to its cluster mean
    return centroids, clusterAssment
myCentroids,clustAssing=kMeans(dataMat,4)
print(myCentroids,clustAssing)
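As noted in the introduction, k-means++ addresses the sensitivity to the initial centroids. Below is a minimal seeding sketch (kppCent is my own helper, not from the book): the first centroid is a random data point, and each subsequent one is drawn with probability proportional to its squared distance from the nearest centroid already chosen. It drops in via the createCent parameter of kMeans above.

def kppCent(dataSet, k):  # hypothetical k-means++ seeding, drop-in replacement for randCent
    m = np.shape(dataSet)[0]
    idx = [np.random.randint(m)]  # first centroid: a uniformly random data point
    for _ in range(1, k):
        # squared distance from each point to its nearest already-chosen centroid
        d2 = np.array([min(distEclud(dataSet[c, :], dataSet[i, :])**2 for c in idx)
                       for i in range(m)])
        idx.append(np.random.choice(m, p=d2 / d2.sum()))  # sample proportional to d^2
    return dataSet[idx, :].copy()

myCentroids2, clustAssing2 = kMeans(dataMat, 4, createCent=kppCent)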
# bisecting k-means
def biKmeans(dataSet, k, distMeas=distEclud):
    m = np.shape(dataSet)[0]
    clusterAssment = np.mat(np.zeros((m, 2)))
    centroid0 = np.mean(dataSet, axis=0).tolist()[0]  # start with a single cluster: the global mean
    centList = [centroid0]
    for j in range(m):
        clusterAssment[j, 1] = distMeas(np.mat(centroid0), dataSet[j, :])**2
    while (len(centList) < k):
        lowestSSE = np.inf
        for i in range(len(centList)):  # try splitting each current cluster in two
            ptsInCurrCluster = dataSet[np.nonzero(clusterAssment[:, 0].A == i)[0], :]
            centroidMat, splitClusAss = kMeans(ptsInCurrCluster, 2, distMeas)
            sseSplit = np.sum(splitClusAss[:, 1])  # SSE of the two new clusters
            sseNotSplit = np.sum(clusterAssment[np.nonzero(clusterAssment[:, 0].A != i)[0], 1])
            print("sseSplit, and notSplit:", sseSplit, sseNotSplit)
            if (sseSplit + sseNotSplit) < lowestSSE:  # keep the split with the lowest total SSE
                bestCenToSplit = i
                bestNewCents = centroidMat
                bestClustAss = splitClusAss.copy()
                lowestSSE = sseSplit + sseNotSplit
        # relabel the winning split: cluster 1 gets a new index, cluster 0 keeps the split index
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 1)[0], 0] = len(centList)
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 0)[0], 0] = bestCenToSplit
        print("the bestCentToSplit is:", bestCenToSplit)
        print('the len of bestClustAss is:', len(bestClustAss))
        centList[bestCenToSplit] = bestNewCents[0, :].tolist()[0]  # replace the split centroid
        centList.append(bestNewCents[1, :].tolist()[0])
        clusterAssment[np.nonzero(clusterAssment[:, 0].A == bestCenToSplit)[0], :] = bestClustAss
    return centList, clusterAssment
print(u"二分聚類分析結(jié)果開始")
dataMat3=np.mat(loadDataSet(path+'testSet2.txt'))
centList,myNewAssments=biKmeans(dataMat3, 3)
print(centList)
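For choosing k, the silhouette coefficient mentioned in the introduction can also be computed directly. This is a sketch under the assumption that scikit-learn is installed (it uses sklearn's own KMeans rather than the code above); the k with the highest average silhouette is preferred.

from sklearn.cluster import KMeans          # assumes scikit-learn is available
from sklearn.metrics import silhouette_score

X = np.asarray(dataMat3)
for kk in range(2, 6):
    labels = KMeans(n_clusters=kk, n_init=10, random_state=0).fit_predict(X)
    print(kk, silhouette_score(X, labels))  # higher average silhouette suggests a better k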
That is all for this article. I hope it helps with your study, and thank you for supporting 腳本之家.