Kubernetes Storage: A Detailed Guide to GlusterFS Clusters
1. GlusterFS overview
1.1 Introduction to GlusterFS
GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace to provide shared file storage.
1.2 GlusterFS features
- Scales to several petabytes of capacity
- Handles thousands of clients
- POSIX-compatible interface
- Runs on commodity hardware; ordinary servers are sufficient
- Works on top of any file system that supports extended attributes, such as ext4 or XFS
- Supports industry-standard protocols such as NFS and SMB
- Provides many advanced features, such as replication, quotas, geo-replication, snapshots, and bitrot detection
- Can be tuned for different workloads
1.3 GlusterFS volume types
GlusterFS supports several volume types (a few creation commands are sketched after this list):
- Distributed volume (the default): also called DHT. Files are distributed across server nodes by a hash algorithm; each file lives on exactly one node.
- Replicated volume: AFR. Created with "replica x"; each file is replicated to x nodes.
- Striped volume: created with "stripe x"; each file is split into chunks that are spread across x nodes (similar to RAID 0).
- Distributed striped volume: requires at least 4 servers. Created with "stripe 2" across 4 nodes; a combination of DHT and striping.
- Distributed replicated volume: requires at least 4 servers. Created with "replica 2" across 4 nodes; a combination of DHT and AFR.
- Striped replicated volume: requires at least 4 servers. Created with "stripe 2 replica 2" across 4 nodes; a combination of striping and AFR.
- All three combined (distributed striped replicated): requires at least 8 servers. Created with "stripe 2 replica 2"; every 4 nodes form one group.
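For reference, a minimal sketch of creating two of these volume types directly with the gluster CLI. The host names and brick paths below are placeholders, and note that recent GlusterFS releases have deprecated striped volumes:

# Distributed volume (default): files are hashed across the listed bricks
gluster volume create gv-dist server1:/data/brick1 server2:/data/brick1
gluster volume start gv-dist

# Replicated volume: every file is kept on all three bricks
gluster volume create gv-repl replica 3 server1:/data/brick2 server2:/data/brick2 server3:/data/brick2
gluster volume start gv-repl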
2. Heketi overview
Heketi is a framework that exposes a RESTful API for managing GlusterFS volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GlusterFS clusters, and makes day-to-day GlusterFS administration easier. In a Kubernetes cluster, a pod's storage request is sent to Heketi, which then instructs the GlusterFS cluster to create the corresponding volume.
Heketi dynamically selects bricks within the cluster to build the requested volumes, ensuring that replicas are spread across different failure domains.
Heketi also supports any number of GlusterFS clusters, so the consuming cloud servers are not tied to a single GlusterFS cluster.
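As a rough illustration of that REST API (a sketch only: the service address and admin key are whatever your deployment uses, and authenticated endpoints require a signed JWT, which heketi-cli generates for you):

# Unauthenticated liveness endpoint
curl http://<heketi-host>:8080/hello

# Authenticated calls are easier through heketi-cli, which handles the JWT itself
heketi-cli --server http://<heketi-host>:8080 --user admin --secret '<admin key>' cluster list
heketi-cli --server http://<heketi-host>:8080 --user admin --secret '<admin key>' volume create --size=1 --replica=3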
3. Deploying Heketi + GlusterFS
Environment: Kubernetes 1.16.2 installed with kubeadm, consisting of 1 master and 2 nodes, with flannel as the network plugin. By default kubeadm taints the master node; to let the GlusterFS cluster span all three machines, that taint is removed manually first.
The GlusterFS volume type used in this article is the replicated volume.
Also note that GlusterFS must run privileged inside the Kubernetes cluster, which requires the --allow-privileged=true flag on kube-apiserver; the kubeadm version used here enables it by default.
[root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8s-master-01 ~]# kubectl taint node k8s-master-01 node-role.kubernetes.io/master-
node/k8s-master-01 untainted
[root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 |grep Taint
Taints:             <none>
3.1、準(zhǔn)備工作
為了保證pod能夠正常使用gfs作為后端存儲,需要每臺運行pod的節(jié)點上提前安裝gfs的客戶端工具,其他存儲方式也類似。
3.1.1、所有節(jié)點安裝glusterfs客戶端
$ yum install -y glusterfs glusterfs-fuse
3.1.2 Label the nodes
The Kubernetes nodes that should run GlusterFS need a label, because GlusterFS is installed through a DaemonSet.
By default a DaemonSet schedules a pod onto every node; with a node selector configured beforehand, only the nodes carrying the matching label receive one.
The DaemonSet in the installation manifest targets nodes labeled storagenode=glusterfs, so that label has to be applied to the nodes first (a quick verification is sketched after the commands below).
[root@k8s-master-01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   5d      v1.16.2
k8s-node-01     Ready    <none>   4d23h   v1.16.2
k8s-node-02     Ready    <none>   4d23h   v1.16.2
[root@k8s-master-01 ~]# kubectl label node k8s-master-01 storagenode=glusterfs
node/k8s-master-01 labeled
[root@k8s-master-01 ~]# kubectl label node k8s-node-01 storagenode=glusterfs
node/k8s-node-01 labeled
[root@k8s-master-01 ~]# kubectl label node k8s-node-02 storagenode=glusterfs
node/k8s-node-02 labeled
[root@k8s-master-01 ~]# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE     VERSION   LABELS
k8s-master-01   Ready    master   5d      v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-01,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs
k8s-node-01     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-01,kubernetes.io/os=linux,storagenode=glusterfs
k8s-node-02     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-02,kubernetes.io/os=linux,storagenode=glusterfs
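A minimal sanity check of the label and the selector (the DaemonSet name glusterfs matches what the manifest in this article creates; adjust if yours differs):

# Nodes that will receive a glusterfs pod
kubectl get nodes -l storagenode=glusterfs
# After the DaemonSet is applied, confirm it selects on the same label
kubectl get ds glusterfs -o jsonpath='{.spec.template.spec.nodeSelector}'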
3.1.3 Load the required kernel modules on all nodes
$ modprobe dm_snapshot
$ modprobe dm_mirror
$ modprobe dm_thin_pool
Check that they are loaded:
$ lsmod | grep dm_snapshot
$ lsmod | grep dm_mirror
$ lsmod | grep dm_thin_pool
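These modules do not survive a reboot unless they are configured to load at boot; a short sketch for systemd-based distributions such as CentOS (the file name is arbitrary):

cat > /etc/modules-load.d/glusterfs.conf <<EOF
dm_snapshot
dm_mirror
dm_thin_pool
EOF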
3.2 Creating the GlusterFS cluster
The GlusterFS cluster is deployed here in containers; it can just as well be deployed the traditional way. In production, the GlusterFS cluster is best deployed outside the Kubernetes cluster, in which case only the corresponding endpoints need to be created. Here a DaemonSet is used so that every labeled node runs one GlusterFS pod, and each of those nodes has a disk available to provide storage.
3.2.1 Download the installation files
[root@k8s-master-01 glusterfs]# pwd
/root/manifests/glusterfs
[root@k8s-master-01 glusterfs]# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
[root@k8s-master-01 glusterfs]# tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
[root@k8s-master-01 glusterfs]# cd heketi-client/share/heketi/kubernetes/
[root@k8s-master-01 kubernetes]# pwd
/root/manifests/glusterfs/heketi-client/share/heketi/kubernetes
In this cluster, the API version of the DaemonSet controller used below (and of the Deployment controllers used later) has moved to apps/v1, so the downloaded JSON files have to be edited before they can be applied, and the manifests must declare a selector. Otherwise errors like the following appear:
[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: unable to recognize "glusterfs-daemonset.json": no matches for kind "DaemonSet" in version "extensions/v1beta1"
Change the API version from:
"apiVersion": "extensions/v1beta1"
to:
"apiVersion": "apps/v1",
Declare the selector. Without it, applying the manifest fails:
[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: error validating "glusterfs-daemonset.json": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false
The selector has to match the labels of the pod template further down, linked through matchLabels:
"spec": { "selector": { "matchLabels": { "glusterfs-node": "daemonset" } },
3.2.2 Create the cluster
[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
daemonset.apps/glusterfs created
Notes:
- The default mount paths are used here; a different disk can be used as the GlusterFS working directory.
- Everything is created in the default namespace; another namespace can be specified instead.
3.2.3 Check the GlusterFS pods
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
glusterfs-9tttf   1/1     Running   0          1m10s
glusterfs-gnrnr   1/1     Running   0          1m10s
glusterfs-v92j5   1/1     Running   0          1m10s
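A quick health check inside one of the pods, consistent with what Heketi itself runs later (the pod name is from this cluster; substitute your own):

kubectl exec -it glusterfs-9tttf -- systemctl status glusterd
kubectl exec -it glusterfs-9tttf -- gluster --version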
3.3 Creating the Heketi service
3.3.1 Create the Heketi service account
[root@k8s-master-01 kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-service-account.json
serviceaccount/heketi-service-account created
[root@k8s-master-01 kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         71m
heketi-service-account   1         5s
3.3.2 Create the corresponding RBAC binding and secret for Heketi
[root@k8s-master-01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[root@k8s-master-01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created
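To confirm the binding actually grants what Heketi needs (it execs into the glusterfs pods), a small check with kubectl auth; this is just a sketch, and it assumes the service account lives in the default namespace as above:

kubectl auth can-i create pods/exec \
  --as=system:serviceaccount:default:heketi-service-account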
3.3.3 Bootstrap the Heketi deployment
As before, the API version has to be changed and a selector declaration added.
[root@k8s-master-01 kubernetes]# vim heketi-bootstrap.json
...
    "kind": "Deployment",
    "apiVersion": "apps/v1"
...
    "spec": {
      "selector": {
        "matchLabels": {
          "name": "deploy-heketi"
        }
      },
...
[root@k8s-master-01 kubernetes]# kubectl create -f heketi-bootstrap.json
service/deploy-heketi created
deployment.apps/deploy-heketi created
[root@k8s-master-01 kubernetes]# vim heketi-deployment.json
...
    "kind": "Deployment",
    "apiVersion": "apps/v1",
...
    "spec": {
      "selector": {
        "matchLabels": {
          "name": "heketi"
        }
      },
      "replicas": 1,
...
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                             READY   STATUS              RESTARTS   AGE
deploy-heketi-6c687b4b84-p7mcr   1/1     Running             0          72s
heketi-68795ccd8-9726s           0/1     ContainerCreating   0          50s
glusterfs-9tttf                  1/1     Running             0          48m
glusterfs-gnrnr                  1/1     Running             0          48m
glusterfs-v92j5                  1/1     Running             0          48m
3.4 Creating the GlusterFS cluster topology
3.4.1 Copy the binary
Copy heketi-cli into /usr/local/bin:
[root@k8s-master-01 heketi-client]# pwd
/root/manifests/glusterfs/heketi-client
[root@k8s-master-01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
[root@k8s-master-01 heketi-client]# heketi-cli -v
heketi-cli v7.0.0
3.4.2 Configure topology-sample
Edit topology-sample.json: manage is the hostname of the node running the GlusterFS management service, storage is the node's IP address, and device is a block device on the node. The disk used to provide storage should preferably be a raw device without partitions.
A new disk therefore has to be prepared in advance on every GlusterFS node; here a 10 GB /dev/sdb disk has been added to each of the three nodes.
[root@k8s-master-01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
[root@k8s-node-01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
[root@k8s-node-02 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
The resulting topology-sample.json:
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "k8s-master-01" ], "storage": [ "192.168.2.10" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "k8s-node-01" ], "storage": [ "192.168.2.11" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "k8s-node-02" ], "storage": [ "192.168.2.12" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] } ] } ] }
3.4.3 Get the current Heketi ClusterIP
Look up the ClusterIP of the Heketi service and export it as an environment variable:
[root@k8s-master-01 kubernetes]# kubectl get svc|grep heketi
deploy-heketi   ClusterIP   10.1.241.99   <none>   8080/TCP   3m18s
[root@k8s-master-01 kubernetes]# curl http://10.1.241.99:8080/hello
Hello from Heketi
[root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.241.99:8080
[root@k8s-master-01 kubernetes]# echo $HEKETI_CLI_SERVER
http://10.1.241.99:8080
3.4.4 Load the topology with heketi-cli
Running the following command to create the cluster fails with "Invalid JWT token: Token missing iss claim":
[root@k8s-master-01 kubernetes]# heketi-cli topology load --json=topology-sample.json
Error: Unable to get topology information: Invalid JWT token: Token missing iss claim
This is because newer Heketi versions require the username and secret to be supplied when loading the topology; the corresponding values are configured in heketi.json. With the credentials passed, the load succeeds:
[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=topology-sample.json
Creating cluster ... ID: 1c5ffbd86847e5fc1562ef70c033292e
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node k8s-master-01 ... ID: b6100a5af9b47d8c1f19be0b2b4d8276
        Adding device /dev/sdb ... OK
    Creating node k8s-node-01 ... ID: 04740cac8d42f56e354c94bdbb7b8e34
        Adding device /dev/sdb ... OK
    Creating node k8s-node-02 ... ID: 1b33ad0dba20eaf23b5e3a4845e7cdb4
        Adding device /dev/sdb ... OK
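The --user/--secret pair maps to the jwt section of heketi.json, which was packed into heketi-config-secret earlier. A quick way to check which key your deployment uses (the path assumes the directory layout from section 3.2.1, and the stock file may still contain placeholder keys):

grep -A 8 '"jwt"' /root/manifests/glusterfs/heketi-client/share/heketi/kubernetes/heketi.json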
執(zhí)行了heketi-cli topology load之后,Heketi在服務(wù)器做的大致操作如下:
- 進(jìn)入任意glusterfs Pod內(nèi),執(zhí)行g(shù)luster peer status 發(fā)現(xiàn)都已把對端加入到了可信存儲池(TSP)中。
- 在運行了gluster Pod的節(jié)點上,自動創(chuàng)建了一個VG,此VG正是由topology-sample.json 文件中的磁盤裸設(shè)備創(chuàng)建而來。
- 一塊磁盤設(shè)備創(chuàng)建出一個VG,以后創(chuàng)建的PVC,即從此VG里劃分的LV。
- heketi-cli topology info 查看拓?fù)浣Y(jié)構(gòu),顯示出每個磁盤設(shè)備的ID,對應(yīng)VG的ID,總空間、已用空間、空余空間等信息。
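A minimal sketch of those spot checks (the pod name and credentials are the ones from this cluster; substitute your own):

# Peers in the trusted storage pool
kubectl exec -it glusterfs-9tttf -- gluster peer status
# The VG created from /dev/sdb on that node
kubectl exec -it glusterfs-9tttf -- vgs
# Heketi's view of the topology
heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info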
Part of this can also be followed in the Heketi logs:
[root@k8s-master-01 manifests]# kubectl logs -f deploy-heketi-6c687b4b84-l5b6j ... [kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [pvs -o pv_name,pv_uuid,vg_name --reportformat=json /dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ { "report": [ { "pv": [ {"pv_name":"/dev/sdb", "pv_uuid":"1UkSIV-RYt1-QBNw-KyAR-Drm5-T9NG-UmO313", "vg_name":"vg_398329cc70361dfd4baa011d811de94a"} ] } ] } ]: Stderr [ WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds. WARNING: Device /dev/centos/root not initialized in udev database even after waiting 10000000 microseconds. WARNING: Device /dev/sda1 not initialized in udev database even after waiting 10000000 microseconds. WARNING: Device /dev/centos/swap not initialized in udev database even after waiting 10000000 microseconds. WARNING: Device /dev/sda2 not initialized in udev database even after waiting 10000000 microseconds. WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds. ] [kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ ]: Stderr [] [kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [negroni] 2019-10-23T02:17:44Z | 200 | 93.868μs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f [kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ vg_398329cc70361dfd4baa011d811de94a:r/w:772:-1:0:0:0:-1:0:1:1:10350592:4096:2527:0:2527:YCPG9X-b270-1jf2-VwKX-ycpZ-OI9u-7ZidOc ]: Stderr [] [cmdexec] DEBUG 2019/10/23 02:17:44 heketi/executors/cmdexec/device.go:273:cmdexec.(*CmdExecutor).getVgSizeFromNode: /dev/sdb in k8s-node-01 has TotalSize:10350592, FreeSize:10350592, UsedSize:0 [heketi] INFO 2019/10/23 02:17:44 Added device /dev/sdb [asynchttp] INFO 2019/10/23 02:17:44 Completed job 3d0b6edb0faa67e8efd752397f314a6f in 3m2.694238221s [negroni] 2019-10-23T02:17:45Z | 204 | 105.23μs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f [cmdexec] INFO 2019/10/23 02:17:45 Check Glusterd service status in node k8s-node-01 [kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 
selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered [heketi] INFO 2019/10/23 02:17:45 Adding node k8s-node-02 [negroni] 2019-10-23T02:17:45Z | 202 | 146.998544ms | 10.1.241.99:8080 | POST /nodes [asynchttp] INFO 2019/10/23 02:17:45 Started job 8da70b6fd6fec1d61c4ba1cd0fe27fe5 [cmdexec] INFO 2019/10/23 02:17:45 Probing: k8s-node-01 -> 192.168.2.12 [negroni] 2019-10-23T02:17:45Z | 200 | 74.577μs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5 [kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [negroni] 2019-10-23T02:17:46Z | 200 | 79.893μs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5 [kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [peer probe: success. ]: Stderr [] [cmdexec] INFO 2019/10/23 02:17:46 Setting snapshot limit [kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [snapshot config: snap-max-hard-limit for System set successfully ]: Stderr [] [heketi] INFO 2019/10/23 02:17:46 Added node 1b33ad0dba20eaf23b5e3a4845e7cdb4 [asynchttp] INFO 2019/10/23 02:17:46 Completed job 8da70b6fd6fec1d61c4ba1cd0fe27fe5 in 488.404011ms [negroni] 2019-10-23T02:17:46Z | 303 | 80.712μs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5 [negroni] 2019-10-23T02:17:46Z | 200 | 242.595μs | 10.1.241.99:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4 [heketi] INFO 2019/10/23 02:17:46 Adding device /dev/sdb to node 1b33ad0dba20eaf23b5e3a4845e7cdb4 [negroni] 2019-10-23T02:17:46Z | 202 | 696.018μs | 10.1.241.99:8080 | POST /devices [asynchttp] INFO 2019/10/23 02:17:46 Started job 21af2069b74762a5521a46e2b52e7d6a [negroni] 2019-10-23T02:17:46Z | 200 | 82.354μs | 10.1.241.99:8080 | GET /queue/21af2069b74762a5521a46e2b52e7d6a [kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [pvcreate -qq --metadatasize=128M --dataalignment=256K '/dev/sdb'] on [pod:glusterfs-l2lsv 
c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 ...
3.4.5 Persist the Heketi configuration
The Heketi deployed above has no persistent volume configured, so if its pod restarts the existing configuration may be lost. A persistent volume is therefore created for Heketi's data, using the dynamic storage provided by GlusterFS itself; other persistence methods would also work.
Install device-mapper* on all nodes:
yum install -y device-mapper*
Generate the storage manifest and create the persistence resources:
[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' setup-openshift-heketi-storage
Saving heketi-storage.json
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
Delete the bootstrap (intermediate) resources:
[root@k8s-master-01 kubernetes]# kubectl delete all,svc,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-6c687b4b84-l5b6j" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-6c687b4b84" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
創(chuàng)建持久化的heketi
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          41m
glusterfs-l2lsv          1/1     Running   0          41m
glusterfs-lrdz7          1/1     Running   0          41m
heketi-68795ccd8-m8x55   1/1     Running   0          32s
Check the service of the persistent Heketi and re-export the environment variable:
[root@k8s-master-01 kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.1.45.61   <none>        8080/TCP   2m9s
heketi-storage-endpoints   ClusterIP   10.1.26.73   <none>        1/TCP      4m58s
kubernetes                 ClusterIP   10.1.0.1     <none>        443/TCP    14h
[root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.45.61:8080
[root@k8s-master-01 kubernetes]# curl http://10.1.45.61:8080/hello
Hello from Heketi
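Since the ClusterIP changes whenever the service is recreated, it can help to persist the endpoint and credentials for later shells; heketi-cli reads these environment variables, so --user/--secret can then be omitted. A sketch only, with the key taken from heketi.json:

cat >> ~/.bashrc <<'EOF'
export HEKETI_CLI_SERVER=http://10.1.45.61:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY='My Secret'
EOF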
Check the GlusterFS cluster information; see the official documentation for more operations:
[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e File: true Block: true Volumes: Name: heketidbstorage Size: 2 Id: b25f4b627cf66279bfe19e8a01e9e85d Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e Mount: 192.168.2.11:heketidbstorage Mount Options: backup-volfile-servers=192.168.2.12,192.168.2.10 Durability Type: replicate Replica: 3 Snapshot: Disabled Bricks: Id: 3ab6c19b8fe0112575ba04d58573a404 Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick Size (GiB): 2 Node: b6100a5af9b47d8c1f19be0b2b4d8276 Device: 703e3662cbd8ffb24a6401bb3c3c41fa Id: d1fa386f2ec9954f4517431163f67dea Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick Size (GiB): 2 Node: 04740cac8d42f56e354c94bdbb7b8e34 Device: 398329cc70361dfd4baa011d811de94a Id: d2b0ae26fa3f0eafba407b637ca0d06b Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick Size (GiB): 2 Node: 1b33ad0dba20eaf23b5e3a4845e7cdb4 Device: 7c791bbb90f710123ba431a7cdde8d0b Nodes: Node Id: 04740cac8d42f56e354c94bdbb7b8e34 State: online Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e Zone: 1 Management Hostnames: k8s-node-01 Storage Hostnames: 192.168.2.11 Devices: Id:398329cc70361dfd4baa011d811de94a Name:/dev/sdb State:online Size (GiB):9 Used (GiB):2 Free (GiB):7 Bricks: Id:d1fa386f2ec9954f4517431163f67dea Size (GiB):2 Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick Node Id: 1b33ad0dba20eaf23b5e3a4845e7cdb4 State: online Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e Zone: 1 Management Hostnames: k8s-node-02 Storage Hostnames: 192.168.2.12 Devices: Id:7c791bbb90f710123ba431a7cdde8d0b Name:/dev/sdb State:online Size (GiB):9 Used (GiB):2 Free (GiB):7 Bricks: Id:d2b0ae26fa3f0eafba407b637ca0d06b Size (GiB):2 Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick Node Id: b6100a5af9b47d8c1f19be0b2b4d8276 State: online Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e Zone: 1 Management Hostnames: k8s-master-01 Storage Hostnames: 192.168.2.10 Devices: Id:703e3662cbd8ffb24a6401bb3c3c41fa Name:/dev/sdb State:online Size (GiB):9 Used (GiB):2 Free (GiB):7 Bricks: Id:3ab6c19b8fe0112575ba04d58573a404 Size (GiB):2 Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick
4. Create the StorageClass
[root@k8s-master-01 kubernetes]# vim storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://10.1.45.61:8080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "My Secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
allowVolumeExpansion: true
[root@k8s-master-01 kubernetes]# kubectl apply -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created
Parameter notes (a secret-based alternative to restuserkey is sketched after this list):
- reclaimPolicy: Retain — the default is Delete; with Retain, deleting the PVC does not delete the PV or the backing volume and bricks (LVs).
- gidMin and gidMax: the minimum and maximum GIDs that may be assigned to volumes.
- volumetype: the volume type and replica count; a replicated volume is used here, and the count must be greater than 1.
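For reference, the in-tree glusterfs provisioner also accepts secretNamespace/secretName instead of putting the key directly in the StorageClass; a sketch (the secret name is arbitrary, and the secret type must be kubernetes.io/glusterfs with the data stored under the key "key"):

kubectl create secret generic heketi-admin-secret \
  --type="kubernetes.io/glusterfs" \
  --from-literal=key='My Secret' --namespace=default
# In the StorageClass parameters, replace restuserkey with:
#   secretNamespace: "default"
#   secretName: "heketi-admin-secret"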
5. Test dynamic provisioning through GlusterFS
Create a pod that uses a dynamically provisioned PV; storageClassName in the PVC references the StorageClass created above, gluster-heketi:
[root@k8s-master-01 kubernetes]# vim pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi
創(chuàng)建pod并查看創(chuàng)建的pv和pvc
[root@k8s-master-01 kubernetes]# kubectl apply -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[root@k8s-master-01 kubernetes]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS     REASON   AGE
persistentvolume/pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            Retain           Bound    default/pvc-gluster-heketi   gluster-heketi            57s
NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/pvc-gluster-heketi   Bound    pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            gluster-heketi   62s
6. How Kubernetes creates the PV and PVC through Heketi
The PVC requests a volume from the StorageClass, which in turn asks Heketi for it; the details can be followed in the logs of the heketi pod.
First, on receiving the request Heketi starts a job that creates three bricks, creating the corresponding directories on the three GlusterFS nodes:
[heketi] INFO 2019/10/23 03:08:36 Allocating brick set #0 [negroni] 2019-10-23T03:08:36Z | 202 | 56.193603ms | 10.1.45.61:8080 | POST /volumes [asynchttp] INFO 2019/10/23 03:08:36 Started job 3ec932315085609bc54ead6e3f6851e8 [heketi] INFO 2019/10/23 03:08:36 Started async operation: Create Volume [heketi] INFO 2019/10/23 03:08:36 Trying Create Volume (attempt #1/5) [heketi] INFO 2019/10/23 03:08:36 Creating brick 289fe032c1f4f9f211480e24c5d74a44 [heketi] INFO 2019/10/23 03:08:36 Creating brick a3172661ba1b849d67b500c93c3dd652 [heketi] INFO 2019/10/23 03:08:36 Creating brick 917e27a9dbc5395ebf08dff8d3401b43 [negroni] 2019-10-23T03:08:36Z | 200 | 72.083μs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8 [kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 1 [kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
創(chuàng)建lv,添加自動掛載
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2 [kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout [meta-data=/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 isize=512 agcount=8, agsize=32768 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=262144, imaxpct=25 = sunit=64 swidth=64 blks naming =version 2 bsize=8192 ascii-ci=0 ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=64 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 ]: Stderr [] [kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [awk "BEGIN {print \"/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
創(chuàng)建brick,設(shè)置權(quán)限
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2 [kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chown :40000 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr [] [kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2 [negroni] 2019-10-23T03:08:38Z | 200 | 83.159μs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8 [kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43/brick] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout []: Stderr [] [kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout []: Stderr [] [kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr [] [cmdexec] INFO 2019/10/23 03:08:38 Creating volume vol_08e8447256de2598952dcb240e615d0f replica 3
創(chuàng)建對應(yīng)的volume
[asynchttp] INFO 2019/10/23 03:08:41 Completed job 3ec932315085609bc54ead6e3f6851e8 in 5.007631648s [negroni] 2019-10-23T03:08:41Z | 303 | 78.335μs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8 [negroni] 2019-10-23T03:08:41Z | 200 | 5.751689ms | 10.1.45.61:8080 | GET /volumes/08e8447256de2598952dcb240e615d0f [negroni] 2019-10-23T03:08:41Z | 200 | 139.05μs | 10.1.45.61:8080 | GET /clusters/1c5ffbd86847e5fc1562ef70c033292e [negroni] 2019-10-23T03:08:41Z | 200 | 660.249μs | 10.1.45.61:8080 | GET /nodes/04740cac8d42f56e354c94bdbb7b8e34 [negroni] 2019-10-23T03:08:41Z | 200 | 270.334μs | 10.1.45.61:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4 [negroni] 2019-10-23T03:08:41Z | 200 | 345.528μs | 10.1.45.61:8080 | GET /nodes/b6100a5af9b47d8c1f19be0b2b4d8276 [heketi] INFO 2019/10/23 03:09:39 Starting Node Health Status refresh [cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-01 [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered [heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 04740cac8d42f56e354c94bdbb7b8e34 up=true [cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-02 [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout filtered, Stderr filtered [heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 1b33ad0dba20eaf23b5e3a4845e7cdb4 up=true [cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-master-01 [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)] [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0 [kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered [heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node b6100a5af9b47d8c1f19be0b2b4d8276 up=true [heketi] INFO 2019/10/23 03:09:39 Cleaned 0 nodes from health cache
7. Testing the data
Test whether pods using this PV can share data: exec into the pod and create a file.
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          90m
glusterfs-l2lsv          1/1     Running   0          90m
glusterfs-lrdz7          1/1     Running   0          90m
heketi-68795ccd8-m8x55   1/1     Running   0          49m
pod-use-pvc              1/1     Running   0          20m
[root@k8s-master-01 kubernetes]# kubectl exec -it pod-use-pvc /bin/sh
/ # cd /pv-data/
/pv-data # echo "hello world">a.txt
/pv-data # cat a.txt
hello world
List the volumes that have been created:
[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume list
Id:08e8447256de2598952dcb240e615d0f    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:vol_08e8447256de2598952dcb240e615d0f
Id:b25f4b627cf66279bfe19e8a01e9e85d    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:heketidbstorage
Mount the volume on the host and inspect its contents; vol_08e8447256de2598952dcb240e615d0f is the volume name:
[root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_08e8447256de2598952dcb240e615d0f /mnt
[root@k8s-master-01 kubernetes]# ll /mnt/
total 1
-rw-r--r-- 1 root 40000 12 Oct 23 11:29 a.txt
[root@k8s-master-01 kubernetes]# cat /mnt/a.txt
hello world
8. Testing with a Deployment
Test whether a Deployment can use the StorageClass normally by creating an nginx Deployment:
[root@k8s-master-01 kubernetes]# vim nginx-deployment-gluster.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  selector:
    matchLabels:
      name: nginx
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-gfs-html
          mountPath: "/usr/share/nginx/html"
        - name: nginx-gfs-conf
          mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 10Mi
Check the resulting resources:
[root@k8s-master-01 kubernetes]# kubectl get pod,pv,pvc|grep nginx
pod/nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          2m45s
pod/nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          2m45s
persistentvolume/pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi   RWX   Retain   Bound   default/glusterfs-nginx-conf   gluster-heketi   2m34s
persistentvolume/pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi   RWX   Retain   Bound   default/glusterfs-nginx-html   gluster-heketi   2m34s
persistentvolumeclaim/glusterfs-nginx-conf   Bound   pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi   RWX   gluster-heketi   2m45s
persistentvolumeclaim/glusterfs-nginx-html   Bound   pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi   RWX   gluster-heketi   2m45s
Check the mounts inside a pod:
[root@k8s-master-01 kubernetes]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- df -Th
Filesystem                                         Type            Size  Used Avail Use% Mounted on
overlay                                            overlay          44G  3.2G   41G   8% /
tmpfs                                              tmpfs            64M     0   64M   0% /dev
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root                            xfs              44G  3.2G   41G   8% /etc/hosts
shm                                                tmpfs            64M     0   64M   0% /dev/shm
192.168.2.10:vol_adf6fc08c8828fdda27c8aa5ce99b50c  fuse.glusterfs 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9  fuse.glusterfs 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                              tmpfs           2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /proc/acpi
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /proc/scsi
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /sys/firmware
Mount the volume on the host and create a file:
[root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9 /mnt/
[root@k8s-master-01 kubernetes]# cd /mnt/
[root@k8s-master-01 mnt]# echo "hello world">index.html
[root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- cat /usr/share/nginx/html/index.html
hello world
Scale the nginx Deployment and verify that the new replica mounts the volume correctly:
[root@k8s-master-01 mnt]# kubectl scale deployment nginx-gfs --replicas=3
deployment.apps/nginx-gfs scaled
[root@k8s-master-01 mnt]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d              1/1     Running   0          129m
glusterfs-l2lsv              1/1     Running   0          129m
glusterfs-lrdz7              1/1     Running   0          129m
heketi-68795ccd8-m8x55       1/1     Running   0          88m
nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          8m55s
nginx-gfs-7d66cccf76-qzqnv   1/1     Running   0          23s
nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          8m55s
[root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-qzqnv -- cat /usr/share/nginx/html/index.html
hello world
This completes the deployment of Heketi + GlusterFS for dynamic storage in a Kubernetes cluster.
References:
https://github.com/heketi/heketi
https://github.com/gluster/gluster-kubernetes
http://www.dbjr.com.cn/article/244019.htm