Kubernetes (k8s) Dynamic Storage Provisioning Explained
NFS file system
Use the NFS file system to implement dynamic storage provisioning in Kubernetes.
1. Install the server and client
root@hello:~# apt install nfs-kernel-server nfs-common
Here, nfs-kernel-server is the server component and nfs-common is the client component.
2. Configure the NFS shared directory
root@hello:~# mkdir /nfs
root@hello:~# sudo vim /etc/exports
/nfs *(rw,sync,no_root_squash,no_subtree_check)
The fields are explained as follows:
/nfs: the directory to share.
*: specifies which client IPs may access the shared directory; * means all clients, 192.168.3. specifies a subnet, and 192.168.3.29 specifies a single IP (see the example after this list).
rw: read-write. Specify ro instead for read-only access.
sync: writes are committed to memory and disk synchronously.
async: writes are buffered in memory first rather than written straight to disk.
no_root_squash: if the user accessing the share from an NFS client is root, that user keeps root privileges on the shared directory. This option is highly insecure and generally not recommended, but it is required if the client needs to write to the NFS directory as root; convenience and security are a trade-off here.
root_squash: if the user accessing the share from an NFS client is root, their privileges are squashed to those of an anonymous user, typically the UID and GID of the nobody system account.
subtree_check: forces NFS to check parent directory permissions (the default).
no_subtree_check: skips the parent directory permission check.
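As an illustration of the client field, here is a sketch of an /etc/exports using the three ways of specifying clients described above; the addresses and the /data export are example values only:
# export to a whole subnet, read-write
/nfs 192.168.3.0/24(rw,sync,no_subtree_check)
# export to a single host, keeping root privileges on the share
/nfs 192.168.3.29(rw,sync,no_root_squash,no_subtree_check)
# hypothetical second export: everyone, read-only, root squashed
/data *(ro,async,root_squash,no_subtree_check)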
After the configuration is complete, run the following commands to export the shared directory and restart the NFS service:
root@hello:~# exportfs -a
root@hello:~# systemctl restart nfs-kernel-server
root@hello:~# systemctl enable nfs-kernel-server
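Optionally, you can verify the share at this point: exportfs -v lists the active exports and their options on the server, and showmount -e queries the exports visible to clients (using this example's server address 192.168.1.66):
root@hello:~# exportfs -v
root@hello:~# showmount -e 192.168.1.66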
Mount on the client
root@hello:~# apt install nfs-common
root@hello:~# mkdir -p /nfs/
root@hello:~# mount -t nfs 192.168.1.66:/nfs/ /nfs/
root@hello:~# df -hT
Filesystem                        Type      Size  Used Avail Use% Mounted on
udev                              devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs                             tmpfs     1.6G  2.9M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4       97G  9.9G   83G  11% /
tmpfs                             tmpfs     7.9G     0  7.9G   0% /dev/shm
tmpfs                             tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                             tmpfs     7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/loop0                        squashfs   56M   56M     0 100% /snap/core18/2128
/dev/loop1                        squashfs   56M   56M     0 100% /snap/core18/2246
/dev/loop3                        squashfs   33M   33M     0 100% /snap/snapd/12704
/dev/loop2                        squashfs   62M   62M     0 100% /snap/core20/1169
/dev/loop4                        squashfs   33M   33M     0 100% /snap/snapd/13640
/dev/loop6                        squashfs   68M   68M     0 100% /snap/lxd/21835
/dev/loop5                        squashfs   71M   71M     0 100% /snap/lxd/21029
/dev/sda2                         ext4      976M  107M  803M  12% /boot
tmpfs                             tmpfs     1.6G     0  1.6G   0% /run/user/0
192.168.1.66:/nfs                 nfs4       97G  6.4G   86G   7% /nfs
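To make the mount persist across reboots, an /etc/fstab entry can be added; a minimal sketch using the same server and paths as above:
# /etc/fstab entry: mount the NFS export at boot, waiting for the network (_netdev)
192.168.1.66:/nfs/  /nfs/  nfs  defaults,_netdev  0  0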
Create and configure the default storage class
[root@k8s-master-node1 ~/yaml]# vim nfs-storage.yaml
[root@k8s-master-node1 ~/yaml]# cat nfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.66  ## your NFS server address
            - name: NFS_PATH
              value: /nfs/  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.66
            path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Create the resources
[root@k8s-master-node1 ~/yaml]# kubectl apply -f nfs-storage.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
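Before testing, it is worth confirming that the provisioner pod is running; the label selector below matches the app: nfs-client-provisioner label set in the Deployment above:
[root@k8s-master-node1 ~/yaml]# kubectl get pods -l app=nfs-client-provisioner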
Check whether the default storage class was created
[root@k8s-master-node1 ~/yaml]# kubectl get storageclasses.storage.k8s.io
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  100s
Create a PVC to test dynamic provisioning
[root@k8s-master-node1 ~/yaml]# vim pvc.yaml
[root@k8s-master-node1 ~/yaml]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
[root@k8s-master-node1 ~/yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-pvc created
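Because nfs-storage is annotated as the default StorageClass, the claim above does not need a storageClassName. If the cluster has several storage classes, the NFS class can be requested explicitly; a minimal sketch (the claim name nginx-pvc-explicit is hypothetical):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc-explicit        # hypothetical name for illustration
spec:
  storageClassName: nfs-storage   # select the NFS class explicitly instead of relying on the default
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi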
Check the PVC
[root@k8s-master-node1 ~/yaml]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            nfs-storage    4s
Check the PV
[root@k8s-master-node1 ~/yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             103s
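To confirm that a workload can actually consume the dynamically provisioned volume, the claim can be mounted in a pod; a minimal sketch (the pod name and mount path are illustrative, only claimName nginx-pvc comes from the steps above):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pvc-test                     # hypothetical pod name for illustration
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html # serve content from the NFS-backed volume
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc               # the PVC created earlier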
This concludes the detailed walkthrough of Kubernetes (k8s) dynamic storage provisioning. For more material on Kubernetes dynamic storage, see the other related articles on 腳本之家.