
Pod scheduling: assigning Pods to nodes

 Updated: November 7, 2022 16:27:07   Author: 人生的哲理  
This article introduces pod scheduling, that is, assigning Pods to nodes, with detailed examples. Readers who need this topic can use it as a reference; hopefully it helps.

1. System environment

Server OS version: CentOS Linux release 7.4.1708 (Core)
Docker version: 20.10.12
Kubernetes (k8s) cluster version: v1.21.9
CPU architecture: x86_64

Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.

k8scloude1/192.168.110.130: CentOS Linux release 7.4.1708 (Core), x86_64; processes: docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico; role: k8s master node
k8scloude2/192.168.110.129: CentOS Linux release 7.4.1708 (Core), x86_64; processes: docker, kubelet, kube-proxy, calico; role: k8s worker node
k8scloude3/192.168.110.128: CentOS Linux release 7.4.1708 (Core), x86_64; processes: docker, kubelet, kube-proxy, calico; role: k8s worker node

2. Preface

This article covers pod scheduling, i.e. how to make a pod run on a specific node of a Kubernetes cluster.

Scheduling pods presumes an already working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the post 《Centos7 安裝部署Kubernetes(k8s)集群》.

3. Pod scheduling

3.1 Overview of pod scheduling

You can constrain a Pod so that it is restricted to run on particular node(s), or so that it prefers to run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors to make the selection. Often you do not need any such constraints, because the scheduler automatically does a reasonable placement (for example, spreading Pods across nodes rather than placing them on a node with insufficient free resources). In some cases, however, you may want further control over which node a Pod is deployed to, for example to ensure that a Pod lands on a machine with an SSD attached, or to co-locate Pods from two different services that communicate heavily into the same availability zone.

You can use any of the following methods to choose where Kubernetes schedules specific Pods:

  • nodeSelector matched against node labels
  • Affinity and anti-affinity
  • The nodeName field
  • Pod topology spread constraints

3.2 Automatic pod scheduling

If you do not manually specify which node a pod runs on, k8s schedules it automatically. When deciding where a pod runs, the scheduler considers:

  • the list of pods awaiting scheduling
  • the list of available nodes
  • the scheduling algorithm: node filtering, then node scoring
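The filter-then-score loop above can be sketched in Python. This is a simplified illustration only, not real kube-scheduler code; the node fields and the scoring rule are invented for the example:

```python
# Simplified sketch of the scheduler's two phases: filter out nodes that
# cannot host the pod, then score the survivors and pick the best one.
# The dictionary fields and the scoring rule are illustrative only.

def filter_nodes(pod, nodes):
    """Keep only nodes with enough free CPU/memory and a free host port."""
    return [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"]
        and n["free_mem"] >= pod["mem"]
        and pod.get("host_port") not in n["used_ports"]
    ]

def score_node(pod, node):
    """Toy score: prefer the node with the most free resources."""
    return node["free_cpu"] + node["free_mem"]

def schedule(pod, nodes):
    feasible = filter_nodes(pod, nodes)
    if not feasible:
        return None  # no feasible node: the pod stays Pending
    return max(feasible, key=lambda n: score_node(pod, n))["name"]

nodes = [
    {"name": "k8scloude2", "free_cpu": 2, "free_mem": 4, "used_ports": {80}},
    {"name": "k8scloude3", "free_cpu": 4, "free_mem": 8, "used_ports": set()},
]
pod = {"cpu": 1, "mem": 1, "host_port": 80}
print(schedule(pod, nodes))  # k8scloude3: the only node with port 80 free
```

If no node survives filtering, the pod remains Pending, which is exactly what happens to pod2 in the hostPort experiment below.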

3.2.1 Creating three pods with host port 80

Look at the explanation of the hostPort field: hostPort maps a pod port onto the node, i.e. it exposes the Pod's port on the node.

#Host port mapping: hostPort: 80
[root@k8scloude1 pod]# kubectl explain pods.spec.containers.ports.hostPort
KIND:     Pod
VERSION:  v1
FIELD:    hostPort <integer>
DESCRIPTION:
     Number of port to expose on the host. If specified, this must be a valid
     port number, 0 < x < 65536. If HostNetwork is specified, this must match
     ContainerPort. Most containers do not need this.

Create the first pod. hostPort: 80 maps port 80 of the container to port 80 of the node.

[root@k8scloude1 pod]# vim schedulepod.yaml
#kind: Pod sets the resource type to Pod; labels sets the pod labels; name under metadata sets the pod name; everything under containers defines the containers
#image sets the image name; imagePullPolicy sets the image pull policy; name under containers sets the container name
#resources sets container resources (CPU, memory, etc.); env sets environment variables in the container; dnsPolicy sets the DNS policy
#restartPolicy sets the container restart policy; ports sets the container ports; containerPort is the container port; hostPort is the port on the node
[root@k8scloude1 pod]# cat schedulepod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod
  name: pod
  namespace: pod
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod.yaml 
pod/pod created
[root@k8scloude1 pod]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          6s

The pod was created successfully.

Next, create a second pod. hostPort: 80 again maps container port 80 to node port 80; the two pods differ only in name.

[root@k8scloude1 pod]# cp schedulepod.yaml schedulepod1.yaml 
[root@k8scloude1 pod]# vim schedulepod1.yaml 
[root@k8scloude1 pod]# cat schedulepod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod1.yaml 
pod/pod1 created
[root@k8scloude1 pod]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          11m
pod1   1/1     Running   0          5s

The second pod was created successfully; now create the third one.

As described at the beginning, the cluster architecture is: k8scloude1 is the master node, and k8scloude2 and k8scloude3 are worker nodes. The cluster has only two worker nodes, the master node does not run application pods by default, and host port 80 is already taken on both worker nodes, so pod2 cannot run.

[root@k8scloude1 pod]# sed 's/pod1/pod2/' schedulepod1.yaml | kubectl apply -f -
pod/pod2 created
#Host port 80 is already taken on both worker nodes, so pod2 cannot run
[root@k8scloude1 pod]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          16m
pod1   1/1     Running   0          5m28s
pod2   0/1     Pending   0          5s

Check how the pods are distributed across the cluster; the NODE column shows which node each pod runs on:

[root@k8scloude1 pod]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          18m
pod1   1/1     Running   0          7m28s
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod    1/1     Running   0          29m   10.244.251.208   k8scloude3   <none>           <none>
pod1   1/1     Running   0          18m   10.244.112.156   k8scloude2   <none>           <none>

Delete the pods:

[root@k8scloude1 pod]# kubectl delete pod pod2 
pod "pod2" deleted
[root@k8scloude1 pod]# kubectl delete pod pod1 pod
pod "pod1" deleted
pod "pod" deleted

All three pods above were scheduled automatically by k8s. Next, we manually specify which node a pod runs on.

3.3 Using the nodeName field to specify which node a pod runs on

Using the nodeName field is the most direct way to pick a node: nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod, and the kubelet on the named node tries to place the Pod on that node.

nodeName rules take precedence over nodeSelector and affinity/anti-affinity rules.

Using nodeName to select nodes has some limitations:

  • If the named node does not exist, the Pod will not run, and in some cases may be automatically deleted.
  • If the named node does not have the resources to accommodate the Pod, the Pod will fail, and its failure reason will indicate why, for example insufficient memory or CPU.
  • Node names in cloud environments are not always predictable or stable.

Create a pod; nodeName: k8scloude3 means the pod must run on the node named k8scloude3:

[root@k8scloude1 pod]# vim schedulepod2.yaml 
#kind: Pod sets the resource type to Pod; labels sets the pod labels; name under metadata sets the pod name; everything under containers defines the containers
#image sets the image name; imagePullPolicy sets the image pull policy; name under containers sets the container name
#resources sets container resources (CPU, memory, etc.); env sets environment variables in the container; dnsPolicy sets the DNS policy
#restartPolicy sets the container restart policy; ports sets the container ports; containerPort is the container port; hostPort is the port on the node
#nodeName: k8scloude3 makes the pod run on k8scloude3
[root@k8scloude1 pod]# cat schedulepod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeName: k8scloude3
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod2.yaml 
pod/pod1 created

You can see the pod runs on node k8scloude3:

[root@k8scloude1 pod]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          7s    10.244.251.209   k8scloude3   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[root@k8scloude1 pod]# kubectl get pods
No resources found in pod namespace.

Create a pod with nodeName: k8scloude1 so that it runs on the k8scloude1 node:

[root@k8scloude1 pod]# vim schedulepod3.yaml 
#kind: Pod sets the resource type to Pod; labels sets the pod labels; name under metadata sets the pod name; everything under containers defines the containers
#image sets the image name; imagePullPolicy sets the image pull policy; name under containers sets the container name
#resources sets container resources (CPU, memory, etc.); env sets environment variables in the container; dnsPolicy sets the DNS policy
#restartPolicy sets the container restart policy; ports sets the container ports; containerPort is the container port; hostPort is the port on the node
#nodeName: k8scloude1 makes the pod run on the k8scloude1 node
[root@k8scloude1 pod]# cat schedulepod3.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeName: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod3.yaml 
pod/pod1 created

You can see the pod runs on k8scloude1. Note that k8scloude1 is the master node; masters generally do not run application pods, and k8scloude1 carries a taint. As a rule pods do not run on tainted hosts, and a pod scheduled there normally would stay Pending. nodeName, however, bypasses the scheduler and can place a pod on a tainted host where it runs normally, as done here by pinning the pod to the master.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          47s   10.244.158.81   k8scloude1   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted

3.4 Using node labels (nodeSelector) to specify which node a pod runs on

Like many other Kubernetes objects, nodes have labels. You can attach labels manually, and Kubernetes also populates a standard set of labels on all nodes in a cluster.

By adding labels to nodes, you can target Pods for scheduling on specific nodes or groups of nodes. You can use this to ensure that particular Pods only run on nodes with certain isolation, security, or regulatory properties.

nodeSelector is the simplest recommended form of node selection constraint. You add the nodeSelector field to your Pod spec and list the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have every label you specify, which makes nodeSelector the simplest way to constrain Pods to nodes with specific labels.
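The matching rule is plain subset containment: every key/value pair in nodeSelector must be present among the node's labels. A minimal Python sketch of that rule (an illustration, not Kubernetes code):

```python
def node_matches(node_labels: dict, node_selector: dict) -> bool:
    """A node matches when it carries every label listed in nodeSelector."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

labels = {"kubernetes.io/hostname": "k8scloude2", "k8snodename": "k8scloude2"}
print(node_matches(labels, {"k8snodename": "k8scloude2"}))  # True
print(node_matches(labels, {"k8snodename": "k8scloude3"}))  # False
```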

3.4.1 Viewing labels

View the node labels. Labels are key=value pairs of the form xxxx/yyyy.aaaa=456123,xxxx1/yyyy1.aaaa=456123; the --show-labels flag displays them:

[root@k8scloude1 pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 7d     v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux
k8scloude3   Ready    <none>                 7d     v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux

View the namespace labels:

[root@k8scloude1 pod]# kubectl get ns --show-labels
NAME              STATUS   AGE    LABELS
default           Active   7d1h   kubernetes.io/metadata.name=default
kube-node-lease   Active   7d1h   kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   7d1h   kubernetes.io/metadata.name=kube-public
kube-system       Active   7d1h   kubernetes.io/metadata.name=kube-system
ns1               Active   6d5h   kubernetes.io/metadata.name=ns1
ns2               Active   6d5h   kubernetes.io/metadata.name=ns2
pod               Active   4d2h   kubernetes.io/metadata.name=pod

View the pod labels:

[root@k8scloude1 pod]# kubectl get pod -A --show-labels 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE    LABELS
kube-system   calico-kube-controllers-6b9fbfff44-4jzkj   1/1     Running   12         7d     k8s-app=calico-kube-controllers,pod-template-hash=6b9fbfff44
kube-system   calico-node-bdlgm                          1/1     Running   7          7d     controller-revision-hash=6b57d9cd54,k8s-app=calico-node,pod-template-generation=1
kube-system   calico-node-hx8bk                          1/1     Running   7          7d     controller-revision-hash=6b57d9cd54,k8s-app=calico-node,pod-template-generation=1
kube-system   calico-node-nsbfs                          1/1     Running   7          7d     controller-revision-hash=6b57d9cd54,k8s-app=calico-node,pod-template-generation=1
kube-system   coredns-545d6fc579-7wm95                   1/1     Running   7          7d1h   k8s-app=kube-dns,pod-template-hash=545d6fc579
kube-system   coredns-545d6fc579-87q8j                   1/1     Running   7          7d1h   k8s-app=kube-dns,pod-template-hash=545d6fc579
kube-system   etcd-k8scloude1                            1/1     Running   7          7d1h   component=etcd,tier=control-plane
kube-system   kube-apiserver-k8scloude1                  1/1     Running   11         7d1h   component=kube-apiserver,tier=control-plane
kube-system   kube-controller-manager-k8scloude1         1/1     Running   7          7d1h   component=kube-controller-manager,tier=control-plane
kube-system   kube-proxy-599xh                           1/1     Running   7          7d1h   controller-revision-hash=6795549d44,k8s-app=kube-proxy,pod-template-generation=1
kube-system   kube-proxy-lpj8z                           1/1     Running   7          7d1h   controller-revision-hash=6795549d44,k8s-app=kube-proxy,pod-template-generation=1
kube-system   kube-proxy-zxlk9                           1/1     Running   7          7d1h   controller-revision-hash=6795549d44,k8s-app=kube-proxy,pod-template-generation=1
kube-system   kube-scheduler-k8scloude1                  1/1     Running   7          7d1h   component=kube-scheduler,tier=control-plane
kube-system   metrics-server-bcfb98c76-k5dmj             1/1     Running   6          6d5h   k8s-app=metrics-server,pod-template-hash=bcfb98c76

3.4.2 Creating labels

Take the node-role.kubernetes.io/control-plane= label as an example: the key is node-role.kubernetes.io/control-plane and the value is empty.

Syntax for creating a label: kubectl label <object-type> <object-name> <key>=<value>

Set a label on the k8scloude2 node:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,k8snodename=k8scloude2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux
k8scloude3   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux

Remove the label from the k8scloude2 node:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux
k8scloude3   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux

List nodes carrying the label k8snodename=k8scloude2:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
#List nodes carrying the label k8snodename=k8scloude2
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled

Set a label on all nodes:

[root@k8scloude1 pod]# kubectl label nodes --all k8snodename=cloude
node/k8scloude1 labeled
node/k8scloude2 labeled
node/k8scloude3 labeled

List nodes carrying the label k8snodename=cloude:

#List nodes carrying the label k8snodename=cloude
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=cloude
NAME         STATUS   ROLES                  AGE    VERSION
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0
k8scloude2   Ready    <none>                 7d1h   v1.21.0
k8scloude3   Ready    <none>                 7d1h   v1.21.0
#Remove the label
[root@k8scloude1 pod]# kubectl label nodes --all k8snodename-
node/k8scloude1 labeled
node/k8scloude2 labeled
node/k8scloude3 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=cloude
No resources found

The --overwrite flag overwrites an existing label:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled
#Attempting to overwrite the label without --overwrite fails
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude
error: 'k8snodename' already has a value (k8scloude2), and --overwrite is false
#The --overwrite flag overwrites the label
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude --overwrite
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
No resources found
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled

Tips: if you do not want control-plane to appear in k8scloude1's ROLES column, you can remove the corresponding label: kubectl label nodes k8scloude1 node-role.kubernetes.io/control-plane-

[root@k8scloude1 pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux
k8scloude3   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux
[root@k8scloude1 pod]# kubectl label nodes k8scloude1 node-role.kubernetes.io/control-plane-

3.4.3 Controlling which node a pod runs on via labels

Label the k8scloude2 node with k8snodename=k8scloude2:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0
[root@k8scloude1 pod]# kubectl get pods
No resources found in pod namespace.

Create a pod; the nodeSelector entry k8snodename: k8scloude2 makes the pod run on a node labeled k8snodename=k8scloude2:

[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    k8snodename: k8scloude2
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml 
pod/pod1 created

You can see the pod runs on node k8scloude2:

[root@k8scloude1 pod]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          21s   10.244.112.158   k8scloude2   <none>           <none>

Delete the pod and the label:

[root@k8scloude1 pod]# kubectl get pod --show-labels
NAME   READY   STATUS    RESTARTS   AGE   LABELS
pod1   1/1     Running   0          32m   run=pod1
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pod --show-labels
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
No resources found
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude
No resources found

Note: if two hosts carry the same label, both are scored, and the pod runs on the host with the higher score.

Label the cluster's master node:

[root@k8scloude1 pod]# kubectl label nodes k8scloude1 k8snodename=k8scloude1
node/k8scloude1 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude1
NAME         STATUS   ROLES                  AGE    VERSION
k8scloude1   Ready    control-plane,master   7d2h   v1.21.0

Create a pod; the nodeSelector entry k8snodename: k8scloude1 makes the pod run on a node labeled k8snodename=k8scloude1:

[root@k8scloude1 pod]# vim schedulepod5.yaml 
[root@k8scloude1 pod]# cat schedulepod5.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    k8snodename: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod5.yaml 
pod/pod1 created

Because k8scloude1 carries a taint, the pod cannot run there and its status stays Pending:

[root@k8scloude1 pod]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   0/1     Pending   0          9s

Delete the pod and the label:

[root@k8scloude1 pod]# kubectl delete pod pod1 
pod "pod1" deleted
[root@k8scloude1 pod]# kubectl get pod
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl label nodes k8scloude1 k8snodename-
node/k8scloude1 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude1
No resources found

3.5 Scheduling pods with affinity and anti-affinity

nodeSelector is the simplest way to constrain Pods to nodes with specific labels. Affinity and anti-affinity expand the types of constraints you can define. Some of the benefits are:

  • The affinity/anti-affinity language is more expressive. nodeSelector only selects nodes with all the specified labels; affinity/anti-affinity gives you more control over the selection logic.
  • You can indicate that a rule is "soft" or a "preference", so that the scheduler still schedules the Pod even if it cannot find a matching node.
  • You can constrain a Pod using labels on other Pods running on the node (or in another topological domain), instead of just the node's own labels. This lets you define rules for which Pods can be co-located.

The affinity feature consists of two kinds of affinity:

  • Node affinity functions like the nodeSelector field but is more expressive and allows you to specify soft rules.
  • Inter-pod affinity/anti-affinity lets you constrain Pods against the labels of other Pods.

Node affinity is conceptually similar to nodeSelector: it lets you constrain which nodes a Pod can be scheduled onto based on node labels. There are two types of node affinity:

  • requiredDuringSchedulingIgnoredDuringExecution: the scheduler cannot schedule the Pod unless the rule is met. This functions like nodeSelector, but with a more expressive syntax.
  • preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to find a node that meets the rule. If no matching node is available, the scheduler still schedules the Pod.

In both types, IgnoredDuringExecution means that if the node labels change after Kubernetes has scheduled the Pod, the Pod keeps running.

You set node affinity via the .spec.affinity.nodeAffinity field in your Pod spec.

Look at the explanation of the nodeAffinity field:

[root@k8scloude1 pod]# kubectl explain pods.spec.affinity.nodeAffinity 
KIND:     Pod
VERSION:  v1
RESOURCE: nodeAffinity <Object>
DESCRIPTION:
     Describes node affinity scheduling rules for the pod.
     Node affinity is a group of node affinity scheduling rules.
FIELDS:
#Soft rule (preference)
   preferredDuringSchedulingIgnoredDuringExecution	<[]Object>
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node that
     violates one or more of the expressions. The node that is most preferred is
     the one with the greatest sum of weights, i.e. for each node that meets all
     of the scheduling requirements (resource request, requiredDuringScheduling
     affinity expressions, etc.), compute a sum by iterating through the
     elements of this field and adding "weight" to the sum if the node matches
     the corresponding matchExpressions; the node(s) with the highest sum are
     the most preferred.
#Hard rule (requirement)
   requiredDuringSchedulingIgnoredDuringExecution	<Object>
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to an update), the system may or may not try
     to eventually evict the pod from its node.

3.5.1 Using the hard rule requiredDuringSchedulingIgnoredDuringExecution

Create a pod. The requiredDuringSchedulingIgnoredDuringExecution rule below means: the node must have a label with key kubernetes.io/hostname whose value is either k8scloude2 or k8scloude3.

You use the operator field to set the logical operator Kubernetes applies when interpreting the rule. The available operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. NotIn and DoesNotExist implement node anti-affinity behavior. You can also use node taints to evict Pods from specific nodes.

Note:

  • If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a candidate node.
  • If you specify multiple nodeSelectorTerms associated with a nodeAffinity type, the Pod can be scheduled onto a node as long as one of the nodeSelectorTerms is satisfied.
  • If you specify multiple matchExpressions associated with a single nodeSelectorTerms, the Pod can only be scheduled onto a node when all the matchExpressions are satisfied.
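The OR-across-terms and AND-within-a-term semantics in the notes above can be sketched in Python. This is an illustration only, not scheduler code, and only the In and Gt operators are implemented:

```python
# nodeSelectorTerms are OR'ed together; the matchExpressions inside one
# term are AND'ed. Only the In and Gt operators are sketched here.

def expr_matches(labels: dict, expr: dict) -> bool:
    value = labels.get(expr["key"])
    if expr["operator"] == "In":
        return value in expr["values"]
    if expr["operator"] == "Gt":
        return value is not None and int(value) > int(expr["values"][0])
    raise NotImplementedError(expr["operator"])

def node_affinity_matches(labels: dict, terms: list) -> bool:
    return any(
        all(expr_matches(labels, e) for e in term["matchExpressions"])
        for term in terms
    )

terms = [{"matchExpressions": [{
    "key": "kubernetes.io/hostname",
    "operator": "In",
    "values": ["k8scloude2", "k8scloude3"],
}]}]
print(node_affinity_matches({"kubernetes.io/hostname": "k8scloude3"}, terms))  # True
print(node_affinity_matches({"kubernetes.io/hostname": "k8scloude1"}, terms))  # False
```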
[root@k8scloude1 pod]# vim requiredDuringSchedule.yaml 
 #Hard rule
[root@k8scloude1 pod]# cat requiredDuringSchedule.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: 
            - k8scloude2
            - k8scloude3
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f requiredDuringSchedule.yaml 
pod/pod1 created

You can see the pod runs on node k8scloude3:

[root@k8scloude1 pod]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          6s
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          10s   10.244.251.212   k8scloude3   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted

Create a pod whose requiredDuringSchedulingIgnoredDuringExecution rule requires a label with key kubernetes.io/hostname whose value is k8scloude4 or k8scloude5:

[root@k8scloude1 pod]# vim requiredDuringSchedule1.yaml 
[root@k8scloude1 pod]# cat requiredDuringSchedule1.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: 
            - k8scloude4
            - k8scloude5
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f requiredDuringSchedule1.yaml 
pod/pod1 created

Because requiredDuringSchedulingIgnoredDuringExecution is a hard rule and no node satisfies the k8scloude4/k8scloude5 condition, the pod cannot be scheduled and stays Pending:

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          7s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted

3.5.2 Using the soft rule preferredDuringSchedulingIgnoredDuringExecution

Label the nodes:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 xx=72
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl label nodes k8scloude3 xx=59
node/k8scloude3 labeled

Create a pod. The preferredDuringSchedulingIgnoredDuringExecution rule below means: the node should preferably have a label with key xx and a value greater than 60.

[root@k8scloude1 pod]# vim preferredDuringSchedule.yaml 
[root@k8scloude1 pod]# cat preferredDuringSchedule.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
          - key: xx
            operator: Gt
            values:
            - "60"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f preferredDuringSchedule.yaml 
pod/pod1 created

You can see the pod runs on k8scloude2, because k8scloude2 is labeled xx=72 and 72 is greater than 60:

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          13s   10.244.112.159   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
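Note that the Gt operator compares label values numerically, not lexically: both sides are parsed as integers before comparison. The following is a minimal sketch of that behavior (my own illustration, not the scheduler's actual code):

```python
# Sketch (not kube-scheduler source) of how a nodeAffinity "Gt"
# matchExpression is evaluated: the node's label value and the
# expression's value are parsed as integers and compared numerically.

def gt_matches(node_labels: dict, key: str, threshold: str) -> bool:
    """Return True if the node has label `key` and its value,
    parsed as an integer, is greater than `threshold`."""
    if key not in node_labels:
        return False  # a missing label never satisfies Gt
    try:
        return int(node_labels[key]) > int(threshold)
    except ValueError:
        return False  # non-numeric values cannot satisfy Gt

# The two nodes labeled above:
print(gt_matches({"xx": "72"}, "xx", "60"))  # True  -> k8scloude2 is preferred
print(gt_matches({"xx": "59"}, "xx", "60"))  # False -> k8scloude3 is not
```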

Create another pod. This time the preferredDuringSchedulingIgnoredDuringExecution rule means: the node should preferably carry a label with key xx whose value is greater than 600.

[root@k8scloude1 pod]# vim preferredDuringSchedule1.yaml 
[root@k8scloude1 pod]# cat preferredDuringSchedule1.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
          - key: xx
            operator: Gt
            values:
            - "600"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f preferredDuringSchedule1.yaml 
pod/pod1 created

Because preferredDuringSchedulingIgnoredDuringExecution is only a soft preference, the pod is still created successfully even though neither k8scloude2 nor k8scloude3 satisfies xx > 600:

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          7s    10.244.251.213   k8scloude3   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted

3.5.3 Node affinity weight

You can set a weight field, with a value between 1 and 100, on each instance of the preferredDuringSchedulingIgnoredDuringExecution affinity type. When the scheduler finds nodes that satisfy all of the Pod's other scheduling requirements, it iterates over every preferred rule that each node satisfies and sums the weight of the matching expressions. That sum is then added to the node's scores from the scheduler's other priority functions, and the node with the highest total score has the highest priority when the scheduler makes its placement decision.
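This scoring can be illustrated with a simplified model (an assumption for illustration only; the real kube-scheduler combines this score with many other priority functions): for each candidate node, sum the weights of the preferred rules it satisfies, then prefer the highest total.

```python
# Simplified model of how preferredDuringSchedulingIgnoredDuringExecution
# weights are summed per node. Each rule is (weight, key, threshold)
# and uses the Gt operator, i.e. int(label value) > int(threshold).

def affinity_score(node_labels: dict, preferred_rules: list) -> int:
    """Sum the weights of all preferred rules the node satisfies."""
    score = 0
    for weight, key, threshold in preferred_rules:
        value = node_labels.get(key)
        if value is not None and int(value) > int(threshold):
            score += weight
    return score

# The two rules from preferredDuringSchedule2.yaml below:
rules = [(2, "xx", "60"), (10, "yy", "60")]

# The labels applied to the worker nodes in this section:
k8scloude2 = {"xx": "72", "yy": "59"}  # satisfies only xx > 60
k8scloude3 = {"xx": "59", "yy": "72"}  # satisfies only yy > 60

print(affinity_score(k8scloude2, rules))  # 2
print(affinity_score(k8scloude3, rules))  # 10 -> pod lands on k8scloude3
```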

Label the nodes:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 yy=59
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl label nodes k8scloude3 yy=72
node/k8scloude3 labeled

Create a pod. preferredDuringSchedulingIgnoredDuringExecution now specifies two soft preferences with different weights: weight: 2 and weight: 10.

[root@k8scloude1 pod]# vim preferredDuringSchedule2.yaml 
[root@k8scloude1 pod]# cat preferredDuringSchedule2.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
          - key: xx
            operator: Gt
            values:
            - "60"
      - weight: 10
        preference:
          matchExpressions:
          - key: yy
            operator: Gt
            values:
            - "60"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f preferredDuringSchedule2.yaml 
pod/pod1 created

Both nodes are candidates, but because the yy > 60 rule carries the larger weight (10 versus 2), the pod is scheduled onto k8scloude3:

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          10s   10.244.251.214   k8scloude3   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted

3.6 Pod topology spread constraints

You can use topology spread constraints (Topology Spread Constraints) to control how Pods are spread across failure domains in your cluster, such as regions, zones, nodes, and other user-defined topology domains. This helps improve performance, achieve high availability, and raise resource utilization.
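A minimal manifest sketch of such a constraint (for illustration; the pod name and labels mirror the examples above, and kubernetes.io/hostname is a standard node label): spread the pods labeled run=pod1 evenly across nodes, tolerating at most one pod of imbalance.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    run: pod1
spec:
  topologySpreadConstraints:
  - maxSkew: 1                           # tolerate at most 1 pod of imbalance
    topologyKey: kubernetes.io/hostname  # each node is its own topology domain
    whenUnsatisfiable: DoNotSchedule     # hard constraint, like a required rule
    labelSelector:
      matchLabels:
        run: pod1                        # count pods with this label per domain
  containers:
  - name: pod1
    image: nginx
    imagePullPolicy: IfNotPresent
```

With whenUnsatisfiable: ScheduleAnyway instead, the constraint becomes a soft preference, analogous to preferredDuringSchedulingIgnoredDuringExecution above.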

This concludes the detailed walkthrough of pod scheduling and assigning Pods to nodes.
