Implementing the kMeans algorithm in Python
Clustering is a form of unsupervised learning that groups similar objects into the same cluster, somewhat like fully automatic classification. The more similar the objects within a cluster, and the larger the differences between clusters, the better the clustering result.
1. The k-means clustering algorithm
k-means clustering partitions the data into k clusters, each described by its centroid, i.e. the center of all the points in that cluster. First, k initial points are chosen at random as centroids, and every data point is assigned to the cluster whose centroid is nearest. Each cluster's centroid is then updated to the mean of the points assigned to it. The assignment and update steps are repeated until the cluster assignments no longer change.
Pseudocode:
Randomly create k cluster centroids
While any point's cluster assignment changes:
    For each data point in the dataset:
        For each centroid:
            Compute the distance from the point to the centroid
        Assign the point to the cluster with the nearest centroid
    For each cluster, compute the mean of all its points and use that mean as the new centroid
Python implementation:
import numpy as np
import matplotlib.pyplot as plt

def loadDataSet(fileName):
    # Read a tab-separated text file into a float32 numpy array.
    dataMat = []
    with open(fileName) as f:
        for line in f.readlines():
            line = line.strip().split('\t')
            dataMat.append(line)
    dataMat = np.array(dataMat).astype(np.float32)
    return dataMat

def distEclud(vecA, vecB):
    # Euclidean distance between two vectors.
    return np.sqrt(np.sum(np.power(vecA - vecB, 2)))

def randCent(dataSet, k):
    # Create k random centroids, each coordinate drawn uniformly
    # from that feature's [min, max] range.
    m = np.shape(dataSet)[1]
    center = np.mat(np.ones((k, m)))
    for i in range(m):
        centmin = min(dataSet[:, i])
        centmax = max(dataSet[:, i])
        center[:, i] = centmin + (centmax - centmin) * np.random.rand(k, 1)
    return center

def kMeans(dataSet, k, distMeans=distEclud, createCent=randCent):
    m = np.shape(dataSet)[0]
    # Column 0: assigned cluster index; column 1: squared distance to that centroid.
    clusterAssment = np.mat(np.zeros((m, 2)))
    centroids = createCent(dataSet, k)
    clusterChanged = True
    while clusterChanged:
        clusterChanged = False
        # Assignment step: put each point in the cluster of its nearest centroid.
        for i in range(m):
            minDist = np.inf
            minIndex = -1
            for j in range(k):
                distJI = distMeans(dataSet[i, :], centroids[j, :])
                if distJI < minDist:
                    minDist = distJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            clusterAssment[i, :] = minIndex, minDist ** 2
        # Update step: move each centroid to the mean of its assigned points.
        for cent in range(k):
            ptsInClust = dataSet[np.nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = np.mean(ptsInClust, axis=0)
    return centroids, clusterAssment

data = loadDataSet('testSet.txt')
muCentroids, clusterAssing = kMeans(data, 4)
fig = plt.figure(0)
ax = fig.add_subplot(111)
# .ravel() flattens the (m,1) column so matplotlib accepts it as a color array.
ax.scatter(data[:, 0], data[:, 1], c=clusterAssing[:, 0].A.ravel())
plt.show()
print(clusterAssing)
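The script expects a tab-separated file testSet.txt, which is not reproduced here. If you just want to exercise kMeans, a quick stand-in is to generate a few synthetic Gaussian blobs; this is a minimal sketch, and the blob centers and seed below are arbitrary choices for illustration, not part of the original example.

# Hypothetical stand-in for testSet.txt: four synthetic Gaussian blobs.
import numpy as np

np.random.seed(42)  # fixed seed so the run is repeatable
centers = [(-3, -3), (-3, 3), (3, -3), (3, 3)]  # arbitrary blob centers
blobs = [(np.random.randn(50, 2) * 0.5 + c).astype(np.float32) for c in centers]
data = np.vstack(blobs)

centroids, assignments = kMeans(data, 4)  # kMeans as defined above
print(centroids)  # each row should land near one of the four blob centers

One caveat: with purely random initial centroids, a cluster can occasionally end up empty, in which case np.mean over zero points yields nan; rerunning with a different seed is the usual quick fix.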
2. The bisecting k-means algorithm
The k-means algorithm may converge to a local minimum rather than the global minimum. One metric for measuring clustering quality is the sum of squared errors (SSE); because the errors are squared, points far from the cluster center are weighted more heavily. To address k-means' tendency to converge to local minima, the bisecting k-means algorithm was proposed.
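Concretely, SSE is the sum of each point's squared distance to its assigned centroid. The kMeans implementation above already stores that squared distance in the second column of the assignment matrix, so computing the metric from the first script's output is a one-line column sum:

# SSE: sum of squared distances from each point to its assigned centroid.
# kMeans stores the squared distance in column 1 of the assignment matrix.
sse = np.sum(clusterAssing[:, 1])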
It starts by treating all the points as a single cluster, then splits one cluster in two with k-means. At each step it splits the cluster whose division yields the largest reduction in SSE, repeating until the specified number of clusters is reached.
Pseudocode:
Treat all the points as one cluster
Compute the SSE
While the number of clusters is less than k:
    For each cluster:
        Compute the total error
        Run k-means (k = 2) on the given cluster
        Compute the total error after splitting that cluster in two
    Choose the cluster whose split gives the lowest total error and split it
Python implementation:
import numpy as np
import matplotlib.pyplot as plt

def loadDataSet(fileName):
    # Read a tab-separated text file into a float32 numpy array.
    dataMat = []
    with open(fileName) as f:
        for line in f.readlines():
            line = line.strip().split('\t')
            dataMat.append(line)
    dataMat = np.array(dataMat).astype(np.float32)
    return dataMat

def distEclud(vecA, vecB):
    # Euclidean distance between two vectors.
    return np.sqrt(np.sum(np.power(vecA - vecB, 2)))

def randCent(dataSet, k):
    # k random centroids drawn uniformly within each feature's range.
    m = np.shape(dataSet)[1]
    center = np.mat(np.ones((k, m)))
    for i in range(m):
        centmin = min(dataSet[:, i])
        centmax = max(dataSet[:, i])
        center[:, i] = centmin + (centmax - centmin) * np.random.rand(k, 1)
    return center

def kMeans(dataSet, k, distMeans=distEclud, createCent=randCent):
    # Plain k-means, identical to the implementation in section 1.
    m = np.shape(dataSet)[0]
    clusterAssment = np.mat(np.zeros((m, 2)))
    centroids = createCent(dataSet, k)
    clusterChanged = True
    while clusterChanged:
        clusterChanged = False
        for i in range(m):
            minDist = np.inf
            minIndex = -1
            for j in range(k):
                distJI = distMeans(dataSet[i, :], centroids[j, :])
                if distJI < minDist:
                    minDist = distJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            clusterAssment[i, :] = minIndex, minDist ** 2
        for cent in range(k):
            ptsInClust = dataSet[np.nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = np.mean(ptsInClust, axis=0)
    return centroids, clusterAssment

def biKmeans(dataSet, k, distMeans=distEclud):
    m = np.shape(dataSet)[0]
    clusterAssment = np.mat(np.zeros((m, 2)))
    # Start with a single cluster whose centroid is the mean of all points.
    centroid0 = np.mean(dataSet, axis=0).tolist()
    centList = [centroid0]
    for j in range(m):
        clusterAssment[j, 1] = distMeans(dataSet[j, :], np.mat(centroid0)) ** 2
    while (len(centList) < k):
        lowestSSE = np.inf
        # Try splitting every cluster; keep the split that lowers total SSE most.
        for i in range(len(centList)):
            ptsInCurrCluster = dataSet[np.nonzero(clusterAssment[:, 0].A == i)[0], :]
            centroidMat, splitClustAss = kMeans(ptsInCurrCluster, 2, distMeans)
            sseSplit = np.sum(splitClustAss[:, 1])
            sseNotSplit = np.sum(clusterAssment[np.nonzero(clusterAssment[:, 0].A != i)[0], 1])
            if (sseSplit + sseNotSplit) < lowestSSE:
                bestCentToSplit = i
                bestNewCents = centroidMat.copy()
                bestClustAss = splitClustAss.copy()
                lowestSSE = sseSplit + sseNotSplit
        print('the best cent to split is ', bestCentToSplit)
        # Relabel the two halves: half 1 gets a brand-new cluster index,
        # half 0 keeps the index of the cluster that was just split.
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 1)[0], 0] = len(centList)
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 0)[0], 0] = bestCentToSplit
        clusterAssment[np.nonzero(clusterAssment[:, 0].A == bestCentToSplit)[0], :] = bestClustAss.copy()
        centList[bestCentToSplit] = bestNewCents[0, :].tolist()[0]
        centList.append(bestNewCents[1, :].tolist()[0])
    return np.mat(centList), clusterAssment

data = loadDataSet('testSet2.txt')
muCentroids, clusterAssing = biKmeans(data, 3)
fig = plt.figure(0)
ax = fig.add_subplot(111)
# .ravel() flattens matrix columns so matplotlib accepts them.
ax.scatter(data[:, 0], data[:, 1], c=clusterAssing[:, 0].A.ravel(), cmap=plt.cm.Paired)
ax.scatter(muCentroids[:, 0].A.ravel(), muCentroids[:, 1].A.ravel())
plt.show()
print(clusterAssing)
print(muCentroids)
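As with the first script, testSet2.txt is not reproduced here. As a quick stand-in you can cluster three synthetic blobs; this is a sketch for illustration, and the centers and seed below are made-up values rather than the original dataset.

# Hypothetical stand-in for testSet2.txt: three synthetic Gaussian blobs.
np.random.seed(7)  # fixed seed for a repeatable run
centers3 = [(0.0, 4.0), (-3.0, -1.0), (3.0, -1.0)]  # arbitrary blob centers
data = np.vstack([(np.random.randn(60, 2) + c).astype(np.float32) for c in centers3])

cents, assign = biKmeans(data, 3)  # biKmeans as defined above
print(cents)  # each row should land near one of the three blob centers

Note that because each split still runs randomly initialized k-means internally, bisecting k-means reduces, but does not fully eliminate, sensitivity to initialization; it tends to give more consistent results than a single run of plain k-means.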
Code and dataset download: K-means
That's all for this article. I hope it helps with your learning, and I hope you'll continue to support 腳本之家.