A Detailed Guide to Pod Taints and Tolerations
1. System Environment

Server Version | Docker Version | Kubernetes (k8s) Cluster Version | CPU Architecture |
---|---|---|---|
CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | v1.21.9 | x86_64 |
Kubernetes cluster architecture: k8scloude1 serves as the master node; k8scloude2 and k8scloude3 serve as worker nodes.
Server | OS Version | CPU Architecture | Processes | Role |
---|---|---|---|---|
k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico | k8s master node |
k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |
k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |
2. Introduction

This article explains taints and tolerations, which influence how pods are scheduled.

Using taints and tolerations assumes you already have a working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the blog post 《Centos7 安裝部署Kubernetes(k8s)集群》.
3. Taints

3.1 Taint overview

Node affinity is a property of pods that attracts them to a set of nodes, either as a preference or a hard requirement. Taints are the opposite: they allow a node to repel a set of pods.
3.2 Adding a taint to a node

The syntax for adding a taint to a node is shown below. This example adds a taint to node node1 with key key1, value value1, and effect NoSchedule, which means only pods with a toleration matching this taint can be scheduled onto node1.

```
# Taint format: key=value:effect
kubectl taint nodes node1 key1=value1:NoSchedule
# With only a key and no value, the format is key:effect
kubectl taint nodes node1 key1:NoSchedule
```
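To make the key[=value]:effect format concrete, here is a small hypothetical Python helper (an illustration only, not part of kubectl or Kubernetes) that splits a taint specification into its three parts:

```python
def parse_taint(spec: str) -> tuple:
    """Split a 'key[=value]:effect' taint spec into (key, value, effect).

    Mirrors the two forms shown above: 'key1=value1:NoSchedule'
    and the value-less 'key1:NoSchedule'.
    """
    key_value, _, effect = spec.rpartition(":")  # effect is the part after the last ':'
    key, _, value = key_value.partition("=")     # value is "" when '=value' is absent
    return key, value, effect

print(parse_taint("key1=value1:NoSchedule"))  # ('key1', 'value1', 'NoSchedule')
print(parse_taint("key1:NoSchedule"))         # ('key1', '', 'NoSchedule')
```

Both forms yield the same key and effect; the only difference is whether the value is empty.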
The syntax for removing a taint is:

```
kubectl taint nodes node1 key1=value1:NoSchedule-
```
A node's description includes a Taints field, which shows whether the node carries any taints.
```
[root@k8scloude1 deploy]# kubectl get nodes -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8scloude1   Ready    control-plane,master   8d    v1.21.0   192.168.110.130   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude2   Ready    <none>                 8d    v1.21.0   192.168.110.129   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude3   Ready    <none>                 8d    v1.21.0   192.168.110.128   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1
Name:               k8scloude1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8scloude1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.110.130/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.158.64
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jan 2022 16:19:06 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
......
```
Check whether each node has taints. `Taints: node-role.kubernetes.io/master:NoSchedule` shows that the cluster's master node carries a taint. This taint exists by default, and it is the reason application pods do not run on the master node.
```
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             <none>
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1 | grep -i Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude3 | grep -i Taints
Taints:             <none>
```
Create a pod. The nodeSelector `kubernetes.io/hostname: k8scloude1` means the pod should run on the node labeled kubernetes.io/hostname=k8scloude1.

For more on pod scheduling, see the blog post 《pod(八):pod的調度——將 Pod 指派給節點》.
```
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
The node labeled kubernetes.io/hostname=k8scloude1 is the k8scloude1 node:
```
[root@k8scloude1 pod]# kubectl get nodes -l kubernetes.io/hostname=k8scloude1
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   8d    v1.21.0
```
Create the pod. Because k8scloude1 carries a taint, pod1 cannot run on it, so pod1 stays in the Pending state.
```
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
# Because k8scloude1 carries a taint, pod1 cannot run there, so its status is Pending
[root@k8scloude1 pod]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          9s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pod -o wide
No resources found in pod namespace.
```
4. Tolerations

4.1 Toleration overview

Tolerations are applied to pods. A toleration allows the scheduler to place a pod on a node that carries a matching taint. Tolerations permit scheduling but do not guarantee it: the scheduler also evaluates other parameters as part of its decision.

Taints and tolerations work together to keep pods off unsuitable nodes. One or more taints can be applied to a node, and the node will not accept any pod that does not tolerate those taints.
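The matching rule can be sketched in a few lines of Python. This is a simplified model of the semantics, not actual scheduler code: a toleration matches a taint when the effects agree and either the operator is Exists (key match only) or Equal (key and value both match); a pod is kept off a node only if some NoSchedule taint is left untolerated.

```python
def toleration_matches(taint: dict, tol: dict) -> bool:
    """True when a toleration matches a taint (simplified model)."""
    if tol.get("effect") and tol["effect"] != taint["effect"]:
        return False  # an empty toleration effect matches all effects
    if tol.get("operator", "Equal") == "Exists":
        # 'Exists' needs only the key to match; an empty key tolerates everything
        return tol.get("key", "") in ("", taint["key"])
    return tol.get("key") == taint["key"] and tol.get("value", "") == taint.get("value", "")

def schedulable(node_taints: list, tolerations: list) -> bool:
    """A pod can be placed only if every NoSchedule taint is tolerated."""
    return all(
        any(toleration_matches(t, tol) for tol in tolerations)
        for t in node_taints
        if t["effect"] == "NoSchedule"
    )

master_taint = {"key": "node-role.kubernetes.io/master", "value": "", "effect": "NoSchedule"}
tol = {"key": "node-role.kubernetes.io/master", "operator": "Equal",
       "value": "", "effect": "NoSchedule"}
print(schedulable([master_taint], []))     # False: no toleration, pod stays Pending
print(schedulable([master_taint], [tol]))  # True: matching toleration allows scheduling
```

This mirrors the behavior demonstrated in the rest of this section: without a toleration the pod is Pending, and with one it may (but is not guaranteed to) land on the tainted node.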
4.2 Setting tolerations

Only a pod whose tolerations match a node's taints can be scheduled onto that node.

Check the taint on the k8scloude1 node:
```
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
```
You can set tolerations for a pod in its spec. Create a pod whose tolerations field tolerates the taint node-role.kubernetes.io/master:NoSchedule, and whose nodeSelector `kubernetes.io/hostname: k8scloude1` places it on the node labeled kubernetes.io/hostname=k8scloude1.
```
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
```
Check the pod: even though the k8scloude1 node has a taint, the pod runs normally.

The difference between a taint and cordon/drain: when a node has a taint, a pod can still run on it by setting a matching toleration; when a node is cordoned or drained, it cannot be assigned any new pods at all. For details on cordon and drain, see the blog post 《cordon節點,drain驅逐節點,delete 節點》.
```
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          4s    10.244.158.84   k8scloude1   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
```
Note that a toleration can be written in either of two forms; use whichever you prefer:
```
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
```

```
tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoSchedule"
```
Label the k8scloude2 node:
```
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 taint=T
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get node --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
k8scloude1   Ready    control-plane,master   8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux,taint=T
k8scloude3   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux
```
Add a taint to k8scloude2:
```
# Taint format: key=value:effect
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian=true:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             wudian=true:NoSchedule
```
Create a pod. The tolerations field tolerates the taint wudian=true:NoSchedule, and the nodeSelector `taint: T` places the pod on the node labeled taint=T.
```
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pod -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
```
Check the pod: k8scloude2 runs the pod even though it carries a taint.
```
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          8s    10.244.112.177   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
```
The other way to write the toleration uses operator: "Exists" and omits the value:
```
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
```
Again, the pod runs on k8scloude2 despite the taint.
```
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          10s   10.244.112.178   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
```
Add a second taint to the k8scloude2 node:
```
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang=shide:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A1 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
```
Create a pod whose tolerations cover both taints, wudian=true:NoSchedule and zang=shide:NoSchedule, with the nodeSelector `taint: T` placing it on the node labeled taint=T.
```
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "zang"
    operator: "Equal"
    value: "shide"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
```
Check the pod: k8scloude2 can run the pod even with two taints.
```
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          6s    10.244.112.179   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
```
Now create a pod that tolerates only wudian=true:NoSchedule, again with the nodeSelector `taint: T`.
```
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
```
Check the pod: the node carries two taints but the manifest tolerates only one of them, so the pod cannot be scheduled and stays Pending.
```
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
```
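The Pending result above follows directly from the rule that every NoSchedule taint on a node must be tolerated. A tiny Python model of the k8scloude2 example (assumed semantics for illustration, not scheduler code):

```python
# Hypothetical model of the k8scloude2 example: every NoSchedule taint key
# on the node must be matched by some toleration, or the pod stays Pending.
node_taints = [("wudian", "true"), ("zang", "shide")]

def pending(tolerated_keys: set) -> bool:
    """Pod is Pending if any taint key on the node is not tolerated."""
    return any(key not in tolerated_keys for key, _ in node_taints)

print(pending({"wudian"}))          # True: zang=shide:NoSchedule is untolerated
print(pending({"wudian", "zang"}))  # False: both taints tolerated, pod can run
```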
Remove the taints from k8scloude2:
```
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
# Remove the taints
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -A2 Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude3 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
```
Tip: if your hardware is limited and you only have a single machine, you can remove the master node's taint (for example, `kubectl taint node k8scloude1 node-role.kubernetes.io/master-`) so that pods can run on the master.
That concludes this detailed guide to pod taints and tolerations. For more on taints and tolerations, see the other related articles on this site.