Kubernetes (K8S) Container Cluster Management Environment: Complete Deployment Tutorial - Part 2
This series:
Kubernetes (K8S) Container Cluster Management Environment: Complete Deployment Tutorial - Part 1
This part continues the deployment where Part 1 left off:
8. Deploying the master nodes
The master components kube-apiserver, kube-scheduler, and kube-controller-manager all run as multiple instances: kube-scheduler and kube-controller-manager automatically elect one leader instance while the others stay in a blocked (standby) state; when the leader dies, a new leader is elected, which keeps the service available. kube-apiserver is stateless and is accessed through the kube-nginx proxy, which likewise keeps the service available.
All deployment commands below are executed on the k8s-master01 node, which then distributes files to and runs commands on the other nodes remotely.
Download the latest binaries
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# wget https://dl.k8s.io/v1.14.2/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# cd kubernetes
[root@k8s-master01 kubernetes]# tar -xzvf kubernetes-src.tar.gz
Copy the binaries to all master nodes:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_master_ip}:/opt/k8s/bin/
ssh root@${node_master_ip} "chmod +x /opt/k8s/bin/*"
done
8.1 - Deploying a highly available kube-apiserver cluster
This deploys a three-instance kube-apiserver cluster. The instances are accessed through an nginx layer-4 (TCP) proxy that exposes a single VIP, which keeps the service available. All commands below are executed on k8s-master01 and then distributed/run remotely. A sketch of such a proxy follows.
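For reference, the layer-4 proxy deployed in Part 1 is conceptually an nginx stream block along these lines (a minimal sketch only: the backend IPs and port follow this tutorial's environment, while the VIP 172.16.60.250 is assumed to be held by the HA layer from Part 1; the kube-nginx configuration actually deployed there is authoritative):
stream {
    upstream apiserver {
        hash $remote_addr consistent;
        server 172.16.60.241:6443 max_fails=3 fail_timeout=30s;
        server 172.16.60.242:6443 max_fails=3 fail_timeout=30s;
        server 172.16.60.243:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8443;
        proxy_connect_timeout 1s;
        proxy_pass apiserver;
    }
}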
1) Create the kubernetes certificate and private key
Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"172.16.60.250",
"172.16.60.241",
"172.16.60.242",
"172.16.60.243",
"${CLUSTER_KUBERNETES_SVC_IP}",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
Notes:
? The hosts field lists the IPs and domain names authorized to use this certificate; here it includes the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
? The last character of a domain name must not be a dot (e.g. kubernetes.default.svc.cluster.local. is invalid); otherwise parsing fails with:
x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
? If you use a domain other than cluster.local, e.g. opsnull.com, change the last two entries in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com;
? The kubernetes service IP is created automatically by the apiserver; it is normally the first IP of the CIDR given by --service-cluster-ip-range (e.g. 10.254.0.1 for 10.254.0.0/16). It can later be retrieved with:
[root@k8s-master01 work]# kubectl get svc kubernetes
The connection to the server 172.16.60.250:8443 was refused - did you specify the right host or port?
The error above appears because kube-apiserver is not running yet; once the apiserver service has been started, this command will succeed.
Generate the certificate and private key:
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@k8s-master01 work]# ls kubernetes*pem
kubernetes-key.pem kubernetes.pem
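To double-check the SANs embedded in the certificate, inspect it (shown here with openssl; cfssl certinfo -cert kubernetes.pem works as well):
[root@k8s-master01 work]# openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'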
Copy the generated certificate and key to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p /etc/kubernetes/cert"
scp kubernetes*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done
2) Create the encryption config file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
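Note: for the aescbc provider, ${ENCRYPTION_KEY} must be a base64-encoded 32-byte key. It is assumed here to have been generated in environment.sh (Part 1) with something like:
[root@k8s-master01 work]# head -c 32 /dev/urandom | base64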
Copy the encryption config file to the /etc/kubernetes directory on the master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp encryption-config.yaml root@${node_master_ip}:/etc/kubernetes/
done
3) Create the audit policy file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# The following requests were manually identified as high-volume and low-risk, so drop them.
- level: None
resources:
- group: ""
resources:
- endpoints
- services
- services/status
users:
- 'system:kube-proxy'
verbs:
- watch
- level: None
resources:
- group: ""
resources:
- nodes
- nodes/status
userGroups:
- 'system:nodes'
verbs:
- get
- level: None
namespaces:
- kube-system
resources:
- group: ""
resources:
- endpoints
users:
- 'system:kube-controller-manager'
- 'system:kube-scheduler'
- 'system:serviceaccount:kube-system:endpoint-controller'
verbs:
- get
- update
- level: None
resources:
- group: ""
resources:
- namespaces
- namespaces/status
- namespaces/finalize
users:
- 'system:apiserver'
verbs:
- get
# Don't log HPA fetching metrics.
- level: None
resources:
- group: metrics.k8s.io
users:
- 'system:kube-controller-manager'
verbs:
- get
- list
# Don't log these read-only URLs.
- level: None
nonResourceURLs:
- '/healthz*'
- /version
- '/swagger*'
# Don't log events requests.
- level: None
resources:
- group: ""
resources:
- events
# node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
- level: Request
omitStages:
- RequestReceived
resources:
- group: ""
resources:
- nodes/status
- pods/status
users:
- kubelet
- 'system:node-problem-detector'
- 'system:serviceaccount:kube-system:node-problem-detector'
verbs:
- update
- patch
- level: Request
omitStages:
- RequestReceived
resources:
- group: ""
resources:
- nodes/status
- pods/status
userGroups:
- 'system:nodes'
verbs:
- update
- patch
# deletecollection calls can be large, don't log responses for expected namespace deletions
- level: Request
omitStages:
- RequestReceived
users:
- 'system:serviceaccount:kube-system:namespace-controller'
verbs:
- deletecollection
# Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
# so only log at the Metadata level.
- level: Metadata
omitStages:
- RequestReceived
resources:
- group: ""
resources:
- secrets
- configmaps
- group: authentication.k8s.io
resources:
- tokenreviews
# Get responses can be large; skip them.
- level: Request
omitStages:
- RequestReceived
resources:
- group: ""
- group: admissionregistration.k8s.io
- group: apiextensions.k8s.io
- group: apiregistration.k8s.io
- group: apps
- group: authentication.k8s.io
- group: authorization.k8s.io
- group: autoscaling
- group: batch
- group: certificates.k8s.io
- group: extensions
- group: metrics.k8s.io
- group: networking.k8s.io
- group: policy
- group: rbac.authorization.k8s.io
- group: scheduling.k8s.io
- group: settings.k8s.io
- group: storage.k8s.io
verbs:
- get
- list
- watch
# Default level for known APIs
- level: RequestResponse
omitStages:
- RequestReceived
resources:
- group: ""
- group: admissionregistration.k8s.io
- group: apiextensions.k8s.io
- group: apiregistration.k8s.io
- group: apps
- group: authentication.k8s.io
- group: authorization.k8s.io
- group: autoscaling
- group: batch
- group: certificates.k8s.io
- group: extensions
- group: metrics.k8s.io
- group: networking.k8s.io
- group: policy
- group: rbac.authorization.k8s.io
- group: scheduling.k8s.io
- group: settings.k8s.io
- group: storage.k8s.io
# Default level for all other requests.
- level: Metadata
omitStages:
- RequestReceived
EOF
Distribute the audit policy file:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp audit-policy.yaml root@${node_master_ip}:/etc/kubernetes/audit-policy.yaml
done
4) Create the certificate used later to access metrics-server
Create the certificate signing request:
[root@k8s-master01 work]# cat > proxy-client-csr.json <<EOF
{
"CN": "aggregator",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
The CN is aggregator; it must match the --requestheader-allowed-names setting of metrics-server, otherwise access will be rejected by metrics-server.
Generate the certificate and private key:
[root@k8s-master01 work]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
[root@k8s-master01 work]# ls proxy-client*.pem
proxy-client-key.pem proxy-client.pem
Copy the generated certificate and key to all master nodes:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp proxy-client*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done
5) Create the kube-apiserver systemd unit template
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
--advertise-address=##NODE_MASTER_IP## \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--feature-gates=DynamicAuditing=true \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
--etcd-cafile=/etc/kubernetes/cert/ca.pem \\
--etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
--etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
--etcd-servers=${ETCD_ENDPOINTS} \\
--bind-address=##NODE_MASTER_IP## \\
--secure-port=6443 \\
--tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
--insecure-port=0 \\
--audit-dynamic-configuration \\
--audit-log-maxage=15 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-mode=batch \\
--audit-log-truncate-enabled \\
--audit-log-batch-buffer-size=20000 \\
--audit-log-batch-max-size=2 \\
--audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
--profiling \\
--anonymous-auth=false \\
--client-ca-file=/etc/kubernetes/cert/ca.pem \\
--enable-bootstrap-token-auth \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--service-account-key-file=/etc/kubernetes/cert/ca.pem \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-admission-plugins=NodeRestriction \\
--allow-privileged=true \\
--apiserver-count=3 \\
--event-ttl=168h \\
--kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
--kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
--kubelet-https=true \\
--kubelet-timeout=10s \\
--proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--service-node-port-range=${NODE_PORT_RANGE} \\
--logtostderr=true \\
--enable-aggregator-routing=true \\
--v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Notes:
--advertise-address: the IP the apiserver advertises (the backend endpoint IP of the kubernetes service);
--default-*-toleration-seconds: thresholds for node-failure tolerations;
--max-*-requests-inflight: maximum in-flight request limits;
--etcd-*: certificates for accessing etcd and the etcd server addresses;
--encryption-provider-config: the config used to encrypt secrets stored in etcd (this flag was previously named --experimental-encryption-provider-config);
--bind-address: the IP the https endpoint listens on; it must not be 127.0.0.1, otherwise the secure port 6443 would be unreachable from outside;
--secure-port: the https listening port;
--insecure-port=0: disables the insecure http port (8080);
--tls-*-file: the certificate, private key, and CA file used by the apiserver;
--audit-*: audit policy and audit log parameters;
--client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
--requestheader-*: parameters for the kube-apiserver aggregation layer, required by proxy-client and HPA;
--requestheader-client-ca-file: the CA that signed the certificates given by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
If --requestheader-allowed-names is non-empty, the CN of the --proxy-client-cert-file certificate must appear in allowed-names (aggregator by default);
--service-account-key-file: the public key used to verify ServiceAccount tokens; it pairs with the private key given by kube-controller-manager's --service-account-private-key-file;
--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
--authorization-mode=Node,RBAC and --anonymous-auth=false: enable Node and RBAC authorization modes and reject unauthorized requests;
--enable-admission-plugins: enables admission plugins that are disabled by default;
--allow-privileged: allows running privileged containers;
--apiserver-count=3: the number of apiserver instances;
--event-ttl: retention time for events;
--kubelet-*: if set, the kubelet APIs are accessed over https; an RBAC rule must be defined for the user of the certificate (the kubernetes*.pem certificate above uses the user kubernetes), otherwise kubelet API access is rejected as unauthorized;
--proxy-client-*: the certificate the apiserver uses to access metrics-server;
--service-cluster-ip-range: the Service cluster IP range;
--service-node-port-range: the NodePort port range;
Note:
If the kube-apiserver machines do not run kube-proxy, the --enable-aggregator-routing=true flag must be added (here the master nodes are not used as node nodes, so kube-proxy is not running and this flag is required).
The CA certificate specified by requestheader-client-ca-file must support both client auth and server auth!
Create and distribute the kube-apiserver systemd unit file for each node
Substitute the variables in the template to generate a systemd unit file per node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_MASTER_NAME##/${NODE_MASTER_NAMES[i]}/" -e "s/##NODE_MASTER_IP##/${NODE_MASTER_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_MASTER_IPS[i]}.service
done
Here NODE_MASTER_NAMES and NODE_MASTER_IPS are bash arrays of equal length, holding the master node names and their corresponding IPs;
[root@k8s-master01 work]# ll kube-apiserver*.service
-rw-r--r-- 1 root root 2718 Jun 18 10:38 kube-apiserver-172.16.60.241.service
-rw-r--r-- 1 root root 2718 Jun 18 10:38 kube-apiserver-172.16.60.242.service
-rw-r--r-- 1 root root 2718 Jun 18 10:38 kube-apiserver-172.16.60.243.service
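You can spot-check that the per-node placeholders were substituted, for example:
[root@k8s-master01 work]# grep -E 'advertise-address|bind-address' kube-apiserver-172.16.60.241.service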
Distribute the generated systemd unit files, renaming each to kube-apiserver.service:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-apiserver-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-apiserver.service
done
6) Start the kube-apiserver service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
done
Note: the working directory must be created before starting the service;
Check the kube-apiserver running state
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "systemctl status kube-apiserver |grep 'Active:'"
done
Expected output:
>>> 172.16.60.241
Active: active (running) since Tue 2019-06-18 10:42:42 CST; 1min 6s ago
>>> 172.16.60.242
Active: active (running) since Tue 2019-06-18 10:42:47 CST; 1min 2s ago
>>> 172.16.60.243
Active: active (running) since Tue 2019-06-18 10:42:51 CST; 58s ago
Make sure the state is active (running); otherwise check the logs to find the cause (journalctl -u kube-apiserver).
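As an extra sanity check (a hedged one: --audit-log-mode=batch means events are flushed with some delay), you can confirm that audit events are being written to the path set by --audit-log-path:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# ssh root@172.16.60.241 "tail -n 2 ${K8S_DIR}/kube-apiserver/audit.log"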
7) Print the data kube-apiserver has written to etcd
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# ETCDCTL_API=3 etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--cacert=/opt/k8s/work/ca.pem \
--cert=/opt/k8s/work/etcd.pem \
--key=/opt/k8s/work/etcd-key.pem \
get /registry/ --prefix --keys-only
This is expected to print a large number of keys that have been written to etcd.
8) Check cluster information
[root@k8s-master01 work]# kubectl cluster-info
Kubernetes master is running at https://172.16.60.250:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master01 work]# kubectl get all --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 8m25s
View the cluster component status
[root@k8s-master01 work]# kubectl get componentstatuses   # or run "kubectl get cs"
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
controller-manager and scheduler show Unhealthy because those two components have not been deployed yet; check again after they are deployed.
Note:
-> If kubectl prints the following error, the ~/.kube/config file in use is wrong; switch to the correct account and run the command again:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
-> When running kubectl get componentstatuses, the apiserver sends requests to 127.0.0.1 by default. When controller-manager and scheduler run in cluster mode, they may not be on the same machine as kube-apiserver, in which case they are reported as Unhealthy even though they are actually working fine.
9) Check the ports kube-apiserver listens on
[root@k8s-master01 work]# netstat -lnpt|grep kube
tcp 0 0 172.16.60.241:6443 0.0.0.0:* LISTEN 15516/kube-apiserve
Note:
6443: the secure port for https requests; all requests are authenticated and authorized;
Since the insecure port is disabled, nothing listens on 8080.
10) Grant kube-apiserver access to the kubelet API
When commands such as kubectl exec, run, and logs are executed, the apiserver forwards the request to the kubelet's https port.
Here we define an RBAC rule that grants the user of the certificate used by the apiserver (kubernetes.pem, CN: kubernetes) access to the kubelet API:
[root@k8s-master01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
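Verify that the binding exists:
[root@k8s-master01 work]# kubectl describe clusterrolebinding kube-apiserver:kubelet-apis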
11) View the metrics exposed by kube-apiserver
The CA (root) certificate is required.
Fetch metrics through the nginx proxy port:
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/metrics|head
# HELP APIServiceOpenAPIAggregationControllerQueue1_adds (Deprecated) Total number of adds handled by workqueue: APIServiceOpenAPIAggregationControllerQueue1
# TYPE APIServiceOpenAPIAggregationControllerQueue1_adds counter
APIServiceOpenAPIAggregationControllerQueue1_adds 12194
# HELP APIServiceOpenAPIAggregationControllerQueue1_depth (Deprecated) Current depth of workqueue: APIServiceOpenAPIAggregationControllerQueue1
# TYPE APIServiceOpenAPIAggregationControllerQueue1_depth gauge
APIServiceOpenAPIAggregationControllerQueue1_depth 0
# HELP APIServiceOpenAPIAggregationControllerQueue1_longest_running_processor_microseconds (Deprecated) How many microseconds has the longest running processor for APIServiceOpenAPIAggregationControllerQueue1 been running.
# TYPE APIServiceOpenAPIAggregationControllerQueue1_longest_running_processor_microseconds gauge
APIServiceOpenAPIAggregationControllerQueue1_longest_running_processor_microseconds 0
# HELP APIServiceOpenAPIAggregationControllerQueue1_queue_latency (Deprecated) How long an item stays in workqueueAPIServiceOpenAPIAggregationControllerQueue1 before being requested.
Fetch metrics directly from each kube-apiserver node port:
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.241:6443/metrics|head
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.242:6443/metrics|head
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.243:6443/metrics|head
8.2 - Deploying a highly available kube-controller-manager cluster
This cluster has 3 nodes. After startup a leader is elected through competition and the other nodes block; when the leader becomes unavailable, the blocked nodes elect a new leader, keeping the service available. To secure communication, this document first generates an x509 certificate and private key. kube-controller-manager uses the certificate in two situations: when talking to the secure port of kube-apiserver, and when serving prometheus-format metrics on its secure port (https, 10252). All commands below are executed on k8s-master01 and then distributed/run remotely.
1) Create the kube-controller-manager certificate and private key
Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"172.16.60.241",
"172.16.60.242",
"172.16.60.243"
],
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-controller-manager",
"OU": "4Paradigm"
}
]
}
EOF
? The hosts list contains all kube-controller-manager node IPs;
? CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager
grants kube-controller-manager the permissions it needs to do its job.
Generate the certificate and private key:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@k8s-master01 work]# ll kube-controller-manager*pem
-rw------- 1 root root 1679 Jun 18 11:43 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1517 Jun 18 11:43 kube-controller-manager.pem
Distribute the generated certificate and key to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-controller-manager*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done
2) Create and distribute the kubeconfig file
kube-controller-manager uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate, and the kube-controller-manager certificate:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/work/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-controller-manager.kubeconfig
[root@k8s-master01 work]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
[root@k8s-master01 work]# kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
[root@k8s-master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
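You can sanity-check the resulting kubeconfig (with --embed-certs=true the certificate data is embedded in the file and masked in the output):
[root@k8s-master01 work]# kubectl config view --kubeconfig=kube-controller-manager.kubeconfig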
Distribute the kubeconfig to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-controller-manager.kubeconfig root@${node_master_ip}:/etc/kubernetes/
done
3) Create and distribute the kube-controller-manager systemd unit file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
--profiling \\
--cluster-name=kubernetes \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kube-api-qps=1000 \\
--kube-api-burst=2000 \\
--leader-elect \\
--use-service-account-credentials=true \\
--concurrent-service-syncs=2 \\
--bind-address=0.0.0.0 \\
--tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
--authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
--experimental-cluster-signing-duration=8760h \\
--horizontal-pod-autoscaler-sync-period=10s \\
--concurrent-deployment-syncs=10 \\
--concurrent-gc-syncs=30 \\
--node-cidr-mask-size=24 \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--pod-eviction-timeout=6m \\
--terminated-pod-gc-threshold=10000 \\
--root-ca-file=/etc/kubernetes/cert/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Notes:
The following two lines were deliberately left out of the unit file above; if present, "kubectl get cs" would report the controller-manager status as "Unhealthy":
--port=0: disables the insecure http port; --address then has no effect, while --bind-address does;
--secure-port=10252
--bind-address=0.0.0.0: listen on all network interfaces, so the /metrics endpoint on port 10252 is reachable from other hosts;
--kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate with kube-apiserver;
--authentication-kubeconfig and --authorization-kubeconfig: used by kube-controller-manager to connect to the apiserver and authenticate/authorize client requests. kube-controller-manager no longer uses --tls-ca-file to verify the client certificates of requests to the https metrics endpoint. If these two kubeconfig parameters are not set, client connections to the kube-controller-manager https port are rejected (with an insufficient-permission error).
--cluster-signing-*-file: signs the certificates created by TLS Bootstrap;
--experimental-cluster-signing-duration: validity period of TLS Bootstrap certificates;
--root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
--service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given by kube-apiserver's --service-account-key-file;
--service-cluster-ip-range: the Service cluster IP range; must match the same parameter on kube-apiserver;
--leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;
--controllers=*,bootstrapsigner,tokencleaner: the controllers to enable; tokencleaner automatically cleans up expired Bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics parameters, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller inside kube-controller-manager uses a ServiceAccount to access kube-apiserver;
Create and distribute the kube-controller-manager systemd unit file for each node
Substitute the variables in the template to create a systemd unit file per node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_MASTER_NAME##/${NODE_MASTER_NAMES[i]}/" -e "s/##NODE_MASTER_IP##/${NODE_MASTER_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${NODE_MASTER_IPS[i]}.service
done
Note: NODE_MASTER_NAMES and NODE_MASTER_IPS are bash arrays of equal length, holding the master node names and their corresponding IPs;
[root@k8s-master01 work]# ll kube-controller-manager*.service
-rw-r--r-- 1 root root 1878 Jun 18 12:45 kube-controller-manager-172.16.60.241.service
-rw-r--r-- 1 root root 1878 Jun 18 12:45 kube-controller-manager-172.16.60.242.service
-rw-r--r-- 1 root root 1878 Jun 18 12:45 kube-controller-manager-172.16.60.243.service
Distribute to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-controller-manager-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-controller-manager.service
done
Note: the file is renamed to kube-controller-manager.service;
Start the kube-controller-manager service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done
Note: the working directory must be created before starting the service;
Check the service running state
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "systemctl status kube-controller-manager|grep Active"
done
Expected output:
>>> 172.16.60.241
Active: active (running) since Tue 2019-06-18 12:49:11 CST; 1min 7s ago
>>> 172.16.60.242
Active: active (running) since Tue 2019-06-18 12:49:11 CST; 1min 7s ago
>>> 172.16.60.243
Active: active (running) since Tue 2019-06-18 12:49:12 CST; 1min 7s ago
Make sure the state is active (running); otherwise check the logs to find the cause (journalctl -u kube-controller-manager).
kube-controller-manager listens on port 10252:
[root@k8s-master01 work]# netstat -lnpt|grep kube-controll
tcp 0 0 172.16.60.241:10252 0.0.0.0:* LISTEN 25709/kube-controll
Check the cluster status; controller-manager now reports "ok".
Note: if the controller-manager service dies on one or two of the kube-controller-manager nodes, then as long as the controller-manager service on one node is still alive,
the cluster's controller-manager status remains "ok" and service continues!
[root@k8s-master01 work]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
4) View the exposed metrics
Note: run the following commands on the three kube-controller-manager nodes.
Because the "--port=0" and "--secure-port=10252" parameters were left out of the kube-controller-manager startup file, its
metrics can only be fetched over http. kube-controller-manager is normally not accessed directly; it is only queried when monitoring scrapes its metrics.
[root@k8s-master01 work]# curl -s http://172.16.60.241:10252/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://172.16.60.241:10252/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://127.0.0.1:10252/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@k8s-master01 ~]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem http://172.16.60.241:10252/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
5) kube-controller-manager permissions
The ClusterRole system:kube-controller-manager has very limited permissions; it can only create resources such as secrets and serviceaccounts. Each controller's permissions are split out into a ClusterRole system:controller:XXX:
[root@k8s-master01 work]# kubectl describe clusterrole system:kube-controller-manager
Name: system:kube-controller-manager
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
secrets [] [] [create delete get update]
endpoints [] [] [create get update]
serviceaccounts [] [] [create get update]
events [] [] [create patch update]
tokenreviews.authentication.k8s.io [] [] [create]
subjectaccessreviews.authorization.k8s.io [] [] [create]
configmaps [] [] [get]
namespaces [] [] [get]
*.* [] [] [list watch]
The --use-service-account-credentials=true flag must be added to the kube-controller-manager startup parameters, so that the main controller creates a ServiceAccount XXX-controller for each controller.
The built-in ClusterRoleBinding system:controller:XXX then grants each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX permissions.
[root@k8s-master01 work]# kubectl get clusterrole|grep controller
system:controller:attachdetach-controller 141m
system:controller:certificate-controller 141m
system:controller:clusterrole-aggregation-controller 141m
system:controller:cronjob-controller 141m
system:controller:daemon-set-controller 141m
system:controller:deployment-controller 141m
system:controller:disruption-controller 141m
system:controller:endpoint-controller 141m
system:controller:expand-controller 141m
system:controller:generic-garbage-collector 141m
system:controller:horizontal-pod-autoscaler 141m
system:controller:job-controller 141m
system:controller:namespace-controller 141m
system:controller:node-controller 141m
system:controller:persistent-volume-binder 141m
system:controller:pod-garbage-collector 141m
system:controller:pv-protection-controller 141m
system:controller:pvc-protection-controller 141m
system:controller:replicaset-controller 141m
system:controller:replication-controller 141m
system:controller:resourcequota-controller 141m
system:controller:route-controller 141m
system:controller:service-account-controller 141m
system:controller:service-controller 141m
system:controller:statefulset-controller 141m
system:controller:ttl-controller 141m
system:kube-controller-manager 141m
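With --use-service-account-credentials=true in effect, the per-controller ServiceAccounts (named XXX-controller) can be listed as well:
[root@k8s-master01 work]# kubectl get serviceaccounts --namespace=kube-system | grep controller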
Take the deployment controller as an example:
[root@k8s-master01 work]# kubectl describe clusterrole system:controller:deployment-controller
Name: system:controller:deployment-controller
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
replicasets.apps [] [] [create delete get list patch update watch]
replicasets.extensions [] [] [create delete get list patch update watch]
events [] [] [create patch update]
pods [] [] [get list update watch]
deployments.apps [] [] [get list update watch]
deployments.extensions [] [] [get list update watch]
deployments.apps/finalizers [] [] [update]
deployments.apps/status [] [] [update]
deployments.extensions/finalizers [] [] [update]
deployments.extensions/status [] [] [update]
6) View the current leader of the kube-controller-manager cluster
[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master02_4e449819-9185-11e9-82b6-005056ac42a4","leaseDurationSeconds":15,"acquireTime":"2019-06-18T04:55:49Z","renewTime":"2019-06-18T05:04:54Z","leaderTransitions":3}'
creationTimestamp: "2019-06-18T04:03:07Z"
name: kube-controller-manager
namespace: kube-system
resourceVersion: "4604"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: fa824018-917d-11e9-90d4-005056ac7c81
As shown, the current leader is the k8s-master02 node.
Test the high availability of the kube-controller-manager cluster
Stop the kube-controller-manager service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires leadership.
For example, stop the kube-controller-manager service on the k8s-master02 node:
[root@k8s-master02 ~]# systemctl stop kube-controller-manager
[root@k8s-master02 ~]# ps -ef|grep kube-controller-manager
root 25677 11006 0 13:06 pts/0 00:00:00 grep --color=auto kube-controller-manager
Then check the current leader of the kube-controller-manager cluster again:
[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master03_4e4c28b5-9185-11e9-b98a-005056ac7136","leaseDurationSeconds":15,"acquireTime":"2019-06-18T05:06:32Z","renewTime":"2019-06-18T05:06:57Z","leaderTransitions":4}'
creationTimestamp: "2019-06-18T04:03:07Z"
name: kube-controller-manager
namespace: kube-system
resourceVersion: "4695"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: fa824018-917d-11e9-90d4-005056ac7c81
The current leader has now moved to the k8s-master03 node!
8.3 - Deploying a highly available kube-scheduler cluster
This cluster has 3 nodes. After startup a leader is elected through competition and the other nodes block; when the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available. To secure communication, this document first generates an x509 certificate and private key.
kube-scheduler uses the certificate in two situations:
when talking to the secure port of kube-apiserver, and when serving prometheus-format metrics on its secure port (https, 10259);
All commands below are executed on k8s-master01 and then distributed/run remotely.
1) Create the kube-scheduler certificate and private key
Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"172.16.60.241",
"172.16.60.242",
"172.16.60.243"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-scheduler",
"OU": "4Paradigm"
}
]
}
EOF
Notes:
The hosts list contains all kube-scheduler node IPs;
CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to do its job;
Generate the certificate and private key:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@k8s-master01 work]# ls kube-scheduler*pem
kube-scheduler-key.pem kube-scheduler.pem
Distribute the generated certificate and key to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done
2) Create and distribute the kubeconfig file
kube-scheduler uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate, and the kube-scheduler certificate:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/work/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-scheduler.kubeconfig
[root@k8s-master01 work]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
[root@k8s-master01 work]# kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
[root@k8s-master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Distribute the kubeconfig to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler.kubeconfig root@${node_master_ip}:/etc/kubernetes/
done
3) Create the kube-scheduler configuration file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat >kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
burst: 200
kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10251
leaderElection:
leaderElect: true
metricsBindAddress: 0.0.0.0:10251
EOF
Note: it is best to use 0.0.0.0 for the addresses here; otherwise "kubectl get cs" will report the scheduler status as "Unhealthy".
--kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate with kube-apiserver;
--leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;
Substitute the variables in the template:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_MASTER_NAME##/${NODE_MASTER_NAMES[i]}/" -e "s/##NODE_MASTER_IP##/${NODE_MASTER_IPS[i]}/" kube-scheduler.yaml.template > kube-scheduler-${NODE_MASTER_IPS[i]}.yaml
done
Note: NODE_MASTER_NAMES and NODE_MASTER_IPS are bash arrays of equal length, holding the master node names and their corresponding IPs;
[root@k8s-master01 work]# ll kube-scheduler*.yaml
-rw-r--r-- 1 root root 399 Jun 18 14:57 kube-scheduler-172.16.60.241.yaml
-rw-r--r-- 1 root root 399 Jun 18 14:57 kube-scheduler-172.16.60.242.yaml
-rw-r--r-- 1 root root 399 Jun 18 14:57 kube-scheduler-172.16.60.243.yaml
Distribute the kube-scheduler configuration file to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler-${node_master_ip}.yaml root@${node_master_ip}:/etc/kubernetes/kube-scheduler.yaml
done
Note: the file is renamed to kube-scheduler.yaml;
4) Create the kube-scheduler systemd unit template
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
--config=/etc/kubernetes/kube-scheduler.yaml \\
--bind-address=0.0.0.0 \\
--tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
--authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
Create and distribute the kube-scheduler systemd unit file for each node
Substitute the variables in the template to create a systemd unit file per node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_MASTER_NAME##/${NODE_MASTER_NAMES[i]}/" -e "s/##NODE_MASTER_IP##/${NODE_MASTER_IPS[i]}/" kube-scheduler.service.template > kube-scheduler-${NODE_MASTER_IPS[i]}.service
done
Here NODE_MASTER_NAMES and NODE_MASTER_IPS are bash arrays of equal length, holding the master node names and their corresponding IPs;
[root@k8s-master01 work]# ll kube-scheduler*.service
-rw-r--r-- 1 root root 981 Jun 18 15:30 kube-scheduler-172.16.60.241.service
-rw-r--r-- 1 root root 981 Jun 18 15:30 kube-scheduler-172.16.60.242.service
-rw-r--r-- 1 root root 981 Jun 18 15:30 kube-scheduler-172.16.60.243.service
Distribute the systemd unit files to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-scheduler.service
done
5) Start the kube-scheduler service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done
Note: the working directory must be created before starting the service;
Check the service running state
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "systemctl status kube-scheduler|grep Active"
done
Expected output:
>>> 172.16.60.241
Active: active (running) since Tue 2019-06-18 15:33:29 CST; 1min 12s ago
>>> 172.16.60.242
Active: active (running) since Tue 2019-06-18 15:33:30 CST; 1min 11s ago
>>> 172.16.60.243
Active: active (running) since Tue 2019-06-18 15:33:30 CST; 1min 11s ago
Make sure the state is active (running); otherwise check the logs to find the cause (journalctl -u kube-scheduler).
Check the cluster status; everything now reports "ok":
[root@k8s-master01 work]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
6) View the exposed metrics
Note: run the following commands on the kube-scheduler cluster nodes.
kube-scheduler listens on ports 10251 and 10259:
10251: accepts http requests; the insecure port, no authentication or authorization required;
10259: accepts https requests; the secure port, authentication and authorization required;
Both endpoints expose /metrics and /healthz.
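For example, the insecure healthz endpoint should simply return ok:
[root@k8s-master01 work]# curl -s http://127.0.0.1:10251/healthz
ok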
[root@k8s-master01 work]# netstat -lnpt |grep kube-schedule
tcp6 0 0 :::10251 :::* LISTEN 6075/kube-scheduler
tcp6 0 0 :::10259 :::* LISTEN 6075/kube-scheduler
[root@k8s-master01 work]# lsof -i:10251
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-sche 6075 root 3u IPv6 628571 0t0 TCP *:10251 (LISTEN)
[root@k8s-master01 work]# lsof -i:10259
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-sche 6075 root 5u IPv6 628574 0t0 TCP *:10259 (LISTEN)
Each of the following fetches kube-scheduler metrics (over http on port 10251 and https on port 10259):
[root@k8s-master01 work]# curl -s http://172.16.60.241:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@k8s-master01 work]# curl -s http://127.0.0.1:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://172.16.60.241:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://127.0.0.1:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.241:10259/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
7) View the current leader
[root@k8s-master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master01_5eac29d7-919b-11e9-b242-005056ac7c81","leaseDurationSeconds":15,"acquireTime":"2019-06-18T07:33:31Z","renewTime":"2019-06-18T07:41:13Z","leaderTransitions":0}'
creationTimestamp: "2019-06-18T07:33:31Z"
name: kube-scheduler
namespace: kube-system
resourceVersion: "12218"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: 5f466875-919b-11e9-90d4-005056ac7c81
As shown, the current leader is the k8s-master01 node.
Test the high availability of the kube-scheduler cluster
Pick one or two master nodes, stop the kube-scheduler service, and check whether another node acquires leadership.
For example, stop the kube-scheduler service on the k8s-master01 node and watch the leader move:
[root@k8s-master01 work]# systemctl stop kube-scheduler
[root@k8s-master01 work]# ps -ef|grep kube-scheduler
root 6871 2379 0 15:42 pts/2 00:00:00 grep --color=auto kube-scheduler
Check the current leader again; leadership has moved to the k8s-master02 node:
[root@k8s-master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master02_5efade79-919b-11e9-bbe2-005056ac42a4","leaseDurationSeconds":15,"acquireTime":"2019-06-18T07:43:03Z","renewTime":"2019-06-18T07:43:12Z","leaderTransitions":1}'
creationTimestamp: "2019-06-18T07:33:31Z"
name: kube-scheduler
namespace: kube-system
resourceVersion: "12363"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: 5f466875-919b-11e9-90d4-005056ac7c81
9. Deploying the worker (node) nodes
The components running on kubernetes node nodes are docker, kubelet, kube-proxy, and flanneld.
All deployment commands below are executed on k8s-master01 and then distributed/run remotely.
Install dependency packages
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "yum install -y epel-release"
ssh root@${node_node_ip} "yum install -y conntrack ipvsadm ntp ntpdate ipset jq iptables curl sysstat libseccomp && modprobe ip_vs "
done
9.1 - Deploying the docker component
docker runs and manages containers; kubelet interacts with it through the Container Runtime Interface (CRI).
All operations below are performed on k8s-master01 and then distributed/run remotely.
1) Download and distribute the docker binaries
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
[root@k8s-master01 work]# tar -xvf docker-18.09.6.tgz
Distribute the binaries to all node nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
scp docker/* root@${node_node_ip}:/opt/k8s/bin/
ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
done
2) Create and distribute the systemd unit file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
WorkingDirectory=##DOCKER_DIR##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Notes:
-> EOF is quoted ("EOF") so that bash does not substitute variables inside the document, such as $DOCKER_NETWORK_OPTIONS (systemd is responsible for substituting these environment variables);
-> at runtime dockerd invokes other docker commands, such as docker-proxy, so the directory containing the docker commands must be added to the PATH environment variable;
-> when flanneld starts, it writes the network configuration to the /run/flannel/docker file; before dockerd starts, it reads the DOCKER_NETWORK_OPTIONS environment variable from that file and uses it to set the docker0 bridge subnet;
-> if multiple EnvironmentFile options are specified, /run/flannel/docker must be listed last (to ensure docker0 uses the bip parameter generated by flanneld);
-> docker must run as root;
-> starting with docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which causes pings to Pod IPs on other Nodes to fail; when that happens, manually set the policy to ACCEPT:
# iptables -P FORWARD ACCEPT
and also write the following command into /etc/rc.local to keep the FORWARD chain default policy from reverting to DROP after a node reboot (see the note after this list):
# /sbin/iptables -P FORWARD ACCEPT
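Two quick follow-ups to the notes above (a hedged aside; the exact values depend on your flannel network): on CentOS 7, /etc/rc.d/rc.local is only executed at boot if it is executable, and you can inspect the environment file flanneld generates for dockerd:
# chmod +x /etc/rc.d/rc.local
# cat /run/flannel/docker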
Distribute the systemd unit file to all node machines:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
scp docker.service root@${node_node_ip}:/etc/systemd/system/
done
3) Create and distribute the docker configuration file
Use domestic registry mirrors to speed up image pulls, and raise the download concurrency (restarting dockerd is required for the changes to take effect):
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > docker-daemon.json <<EOF
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
"insecure-registries": ["docker02:35000"],
"max-concurrent-downloads": 20,
"live-restore": true,
"max-concurrent-uploads": 10,
"debug": true,
"data-root": "${DOCKER_DIR}/data",
"exec-root": "${DOCKER_DIR}/exec",
"log-opts": {
"max-size": "100m",
"max-file": "5"
}
}
EOF
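Because ${DOCKER_DIR} is expanded by the here-document, it is worth validating that the rendered file is well-formed JSON before distributing it (an optional sketch; jq works equally well if installed):
# python -m json.tool docker-daemon.json >/dev/null && echo "docker-daemon.json OK"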
Distribute the docker configuration file to all nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "mkdir -p /etc/docker/ ${DOCKER_DIR}/{data,exec}"
scp docker-daemon.json root@${node_node_ip}:/etc/docker/daemon.json
done
4) Start the docker service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
done
Check the service status
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "systemctl status docker|grep Active"
done
Expected output:
>>> 172.16.60.244
Active: active (running) since Tue 2019-06-18 16:28:32 CST; 42s ago
>>> 172.16.60.245
Active: active (running) since Tue 2019-06-18 16:28:31 CST; 42s ago
>>> 172.16.60.246
Active: active (running) since Tue 2019-06-18 16:28:32 CST; 42s ago
Make sure the state is active (running); otherwise inspect the logs to find the cause (journalctl -u docker).
5) Check the docker0 bridge
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
done
Expected output:
>>> 172.16.60.244
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether c6:c2:d1:5a:9a:8a brd ff:ff:ff:ff:ff:ff
inet 172.30.88.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:27:3c:5e:5f brd ff:ff:ff:ff:ff:ff
inet 172.30.88.1/21 brd 172.30.95.255 scope global docker0
valid_lft forever preferred_lft forever
>>> 172.16.60.245
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 02:36:1d:ab:c4:86 brd ff:ff:ff:ff:ff:ff
inet 172.30.56.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:6f:36:7d:fb brd ff:ff:ff:ff:ff:ff
inet 172.30.56.1/21 brd 172.30.63.255 scope global docker0
valid_lft forever preferred_lft forever
>>> 172.16.60.246
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 4e:73:d1:0e:27:c0 brd ff:ff:ff:ff:ff:ff
inet 172.30.72.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:21:39:f4:9e brd ff:ff:ff:ff:ff:ff
inet 172.30.72.1/21 brd 172.30.79.255 scope global docker0
valid_lft forever preferred_lft forever
Confirm that on each node the docker0 bridge and the flannel.1 interface are in the same subnet (here, 172.30.88.0/32 falls inside 172.30.88.1/21).
On any node, inspect docker's status:
[root@k8s-node01 ~]# ps -elfH|grep docker
0 S root 21573 18744 0 80 0 - 28180 pipe_w 16:32 pts/2 00:00:00 grep --color=auto docker
4 S root 21147 1 0 80 0 - 173769 futex_ 16:28 ? 00:00:00 /opt/k8s/bin/dockerd --bip=172.30.88.1/21 --ip-masq=false --mtu=1450
4 S root 21175 21147 0 80 0 - 120415 futex_ 16:28 ? 00:00:00 containerd --config /data/k8s/docker/exec/containerd/containerd.toml --log-level debug
[root@k8s-node01 ~]# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.4.181-1.el7.elrepo.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.859GiB
Name: k8s-node01
ID: R24D:75E5:2OWS:SNU5:NPSE:SBKH:WKLZ:2ZH7:6ITY:3BE2:YHRG:6WRU
Docker Root Dir: /data/k8s/docker/data
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 22
Goroutines: 43
System Time: 2019-06-18T16:32:44.260301822+08:00
EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
docker02:35000
127.0.0.0/8
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
https://hub-mirror.c.163.com/
Live Restore Enabled: true
Product License: Community Engine
9.2 - Deploy the kubelet component
kubelet runs on every node. It receives requests from kube-apiserver to manage Pod containers and executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cadvisor collects and reports the node's resource usage. For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes requests, rejecting unauthorized access (such as requests from apiserver or heapster).
All of the deployment commands below are executed on the k8s-master01 node, which then distributes files and runs commands on the remote nodes.
1) Download and distribute the kubelet binary
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
scp kubernetes/server/bin/kubelet root@${node_node_ip}:/opt/k8s/bin/
ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
done
2) Create the kubelet bootstrap kubeconfig files
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
# create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:${node_node_name} \
--kubeconfig ~/.kube/config)
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
done
Note: what is written into the kubeconfig is a token; after bootstrapping finishes, kube-controller-manager creates the client and server certificates for kubelet.
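To spot-check one of the generated files, kubectl can print it with the embedded certificate and token redacted (an optional sketch using the k8s-node01 file as an example):
# kubectl config view --kubeconfig=kubelet-bootstrap-k8s-node01.kubeconfig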
View the tokens kubeadm created for each node:
[root@k8s-master01 work]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
0zqowl.aye8f834jtq9vm9t 23h 2019-06-19T16:50:43+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-node03
b46tq2.muab337gxwl0dsqn 23h 2019-06-19T16:50:43+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-node02
heh41x.foguhh1qa5crpzlq 23h 2019-06-19T16:50:42+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-node01
Notes:
-> a token is valid for 1 day; once expired it can no longer be used to bootstrap a kubelet and is cleaned up by kube-controller-manager's tokencleaner;
-> when kube-apiserver receives a kubelet bootstrap token, it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers; a ClusterRoleBinding will be created for this group later;
View the Secret associated with each token:
[root@k8s-master01 work]# kubectl get secrets -n kube-system|grep bootstrap-token
bootstrap-token-0zqowl bootstrap.kubernetes.io/token 7 88s
bootstrap-token-b46tq2 bootstrap.kubernetes.io/token 7 88s
bootstrap-token-heh41x bootstrap.kubernetes.io/token 7 89s
3) Distribute the bootstrap kubeconfig files to all nodes
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
scp kubelet-bootstrap-${node_node_name}.kubeconfig root@${node_node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done
4) Create and distribute the kubelet configuration file
Since v1.10, some kubelet parameters must be set in a configuration file; kubelet --help warns:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
Create the kubelet configuration file template:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
- "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
memory.available: "100Mi"
nodefs.available: "10%"
nodefs.inodesFree: "5%"
imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
Notes:
-> address: the address the kubelet secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call kubelet's API;
-> readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
-> authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;
-> authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
-> authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
-> requests that pass neither x509 certificate nor webhook authentication (whether from kube-apiserver or other clients) are rejected with Unauthorized;
-> authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user or group has permission to operate on a resource (RBAC);
-> featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration parameter;
-> kubelet must run as the root user;
Create and distribute the kubelet configuration file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
sed -e "s/##NODE_NODE_IP##/${node_node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_node_ip}.yaml.template
scp kubelet-config-${node_node_ip}.yaml.template root@${node_node_ip}:/etc/kubernetes/kubelet-config.yaml
done
5) Create and distribute the kubelet systemd unit file
Create the kubelet systemd unit file template:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
--allow-privileged=true \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--cert-dir=/etc/kubernetes/cert \\
--cni-conf-dir=/etc/cni/net.d \\
--container-runtime=docker \\
--container-runtime-endpoint=unix:///var/run/dockershim.sock \\
--root-dir=${K8S_DIR}/kubelet \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-config.yaml \\
--hostname-override=##NODE_NODE_NAME## \\
--pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
--image-pull-progress-deadline=15m \\
--volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
--logtostderr=true \\
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
Notes:
-> if --hostname-override is set, kube-proxy must set the same value, otherwise the Node will not be found;
-> --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in that file to send a TLS Bootstrapping request to kube-apiserver;
-> after K8S approves kubelet's CSR, it writes the certificate and private key to the --cert-dir directory and then generates the --kubeconfig file;
-> --pod-infra-container-image: do not use Red Hat's pod-infrastructure:latest image, as it cannot reap zombie containers;
Create and distribute the kubelet systemd unit file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
sed -e "s/##NODE_NODE_NAME##/${node_node_name}/" kubelet.service.template > kubelet-${node_node_name}.service
scp kubelet-${node_node_name}.service root@${node_node_name}:/etc/systemd/system/kubelet.service
done
6) Bootstrap Token Auth and granting permissions
-> When kubelet starts, it checks whether the file referenced by the --kubeconfig parameter exists; if not, it uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
-> When kube-apiserver receives the CSR, it authenticates the token in it; on success it sets the request's user to system:bootstrap:<Token ID> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.
-> By default this user and group have no permission to create CSRs, so kubelet fails to start with errors like:
# journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 9 22:48:41 k8s-master01 kubelet[128468]: I0526 22:48:41.798230 128468 certificate_manager.go:366] Rotating certificates
May 9 22:48:41 k8s-master01 kubelet[128468]: E0526 22:48:41.801997 128468 certificate_manager.go:385] Failed while requesting a signed certificate from the master: cannot cre
ate certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:82jfrm" cannot create resource "certificatesigningrequests" i
n API group "certificates.k8s.io" at the cluster scope
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.044828 128468 kubelet.go:2244] node "k8s-master01" not found
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.078658 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthor
ized
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.079873 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorize
d
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.082683 128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unau
thorized
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.084473 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unau
thorized
May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.088466 128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: U
nauthorized
The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
7) Start the kubelet service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
ssh root@${node_node_ip} "/usr/sbin/swapoff -a"
ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
Notes:
-> the working directory must be created before starting the service;
-> swap must be turned off, otherwise kubelet fails to start (check the error log with "journalctl -u kubelet |tail"); a sketch for disabling swap persistently follows this list.
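swapoff -a only disables swap until the next reboot. A hedged sketch for also disabling it persistently by commenting out the swap entry in /etc/fstab (verify the sed expression matches your fstab layout before running it):
# source /opt/k8s/bin/environment.sh
# for node_node_ip in ${NODE_NODE_IPS[@]}; do ssh root@${node_node_ip} "sed -i '/ swap / s/^/#/' /etc/fstab"; done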
After starting, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver;
once the CSR is approved, kube-controller-manager creates the TLS client certificate, the private key, and the --kubeconfig file for kubelet.
Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file parameters, otherwise it will not create certificates and private keys for TLS Bootstrap.
[root@k8s-master01 work]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-4wk6q 108s system:bootstrap:0zqowl Pending
csr-mjtl5 110s system:bootstrap:heh41x Pending
csr-rfz27 109s system:bootstrap:b46tq2 Pending
[root@k8s-master01 work]# kubectl get nodes
No resources found.
At this point the CSRs from all three nodes are in the Pending state.
8) Automatically approve CSR requests
Create three ClusterRoleBindings to automatically approve client certificates, renew client certificates, and renew server certificates, respectively:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: auto-approve-csrs-for-group
subjects:
- kind: Group
name: system:bootstrappers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-client-cert-renewal
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
resources: ["certificatesigningrequests/selfnodeserver"]
verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-server-cert-renewal
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: approve-node-server-renewal-csr
apiGroup: rbac.authorization.k8s.io
EOF
Notes:
-> auto-approve-csrs-for-group: automatically approves a node's first CSR; note that for the first CSR the requesting group is system:bootstrappers;
-> node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the auto-generated certificates carry the group system:nodes;
-> node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the auto-generated certificates carry the group system:nodes;
Apply it:
[root@k8s-master01 work]# kubectl apply -f csr-crb.yaml
Check kubelet's status
Wait patiently for a while (1-10 minutes) until the CSRs of all three nodes have been automatically approved (in this test it took quite a long time):
[root@k8s-master01 work]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-4m4hc 37s system:node:k8s-node01 Pending
csr-4wk6q 7m29s system:bootstrap:0zqowl Approved,Issued
csr-h8hq6 36s system:node:k8s-node02 Pending
csr-mjtl5 7m31s system:bootstrap:heh41x Approved,Issued
csr-rfz27 7m30s system:bootstrap:b46tq2 Approved,Issued
csr-t9p6n 36s system:node:k8s-node03 Pending
Note:
the Pending CSRs are for creating the kubelet server certificates and must be approved manually; this is covered below.
All nodes now show a status of "Ready":
[root@k8s-master01 work]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node01 Ready <none> 3m v1.14.2
k8s-node02 Ready <none> 3m v1.14.2
k8s-node03 Ready <none> 2m59s v1.14.2
kube-controller-manager has generated a kubeconfig file and a key pair for each node (run the following on a node):
[root@k8s-node01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2310 Jun 18 17:09 /etc/kubernetes/kubelet.kubeconfig
[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet
-rw------- 1 root root 1273 Jun 18 17:16 kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root 59 Jun 18 17:16 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
Note: at this point the kubelet server certificates have not yet been generated automatically.
9) Manually approve the server cert CSRs
For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; they must be approved manually:
[root@k8s-master01 work]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-4m4hc 6m4s system:node:k8s-node01 Pending
csr-4wk6q 12m system:bootstrap:0zqowl Approved,Issued
csr-h8hq6 6m3s system:node:k8s-node02 Pending
csr-mjtl5 12m system:bootstrap:heh41x Approved,Issued
csr-rfz27 12m system:bootstrap:b46tq2 Approved,Issued
csr-t9p6n 6m3s system:node:k8s-node03 Pending
Note the NAME of each CSR listed above as "Pending", then approve those CSRs manually (a batch variant is sketched after these commands):
[root@k8s-master01 work]# kubectl certificate approve csr-4m4hc
certificatesigningrequest.certificates.k8s.io/csr-4m4hc approved
[root@k8s-master01 work]# kubectl certificate approve csr-h8hq6
certificatesigningrequest.certificates.k8s.io/csr-h8hq6 approved
[root@k8s-master01 work]# kubectl certificate approve csr-t9p6n
certificatesigningrequest.certificates.k8s.io/csr-t9p6n approved
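With many nodes, approving each CSR by name gets tedious. A sketch of a batch variant that approves everything currently Pending (use with care: it approves all pending requests indiscriminately):
# kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve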
Check the CSRs again; all of them are now approved:
[root@k8s-master01 work]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-4m4hc 7m46s system:node:k8s-node01 Approved,Issued
csr-4wk6q 14m system:bootstrap:0zqowl Approved,Issued
csr-h8hq6 7m45s system:node:k8s-node02 Approved,Issued
csr-mjtl5 14m system:bootstrap:heh41x Approved,Issued
csr-rfz27 14m system:bootstrap:b46tq2 Approved,Issued
csr-t9p6n 7m45s system:node:k8s-node03 Approved,Issued
Check on a node again; the kubelet server certificate has now been generated automatically:
[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1273 Jun 18 17:16 /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root 59 Jun 18 17:16 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
-rw------- 1 root root 1317 Jun 18 17:23 /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem
lrwxrwxrwx 1 root root 59 Jun 18 17:23 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem
10) API endpoints exposed by kubelet
After starting, kubelet listens on several ports to receive requests from kube-apiserver and other clients.
Run the following command on a node:
[root@k8s-node01 ~]# netstat -lnpt|grep kubelet
tcp 0 0 127.0.0.1:40831 0.0.0.0:* LISTEN 24468/kubelet
tcp 0 0 172.16.60.244:10248 0.0.0.0:* LISTEN 24468/kubelet
tcp 0 0 172.16.60.244:10250 0.0.0.0:* LISTEN 24468/kubelet
Notes:
-> 10248: the healthz HTTP port, i.e. the health-check service port (a quick probe is sketched after this list);
-> 10250: the HTTPS port the kubelet service listens on; the apiserver probes it to check liveness. Accessing this port requires authentication and authorization (even for /healthz);
-> 10255: the read-only port, accessible without authentication or authorization. "readOnlyPort: 0" here means read-only port 10255 is disabled; "readOnlyPort: 10255" would open it;
-> since K8S v1.10 the --cadvisor-port parameter (default port 4194) has been removed and the cAdvisor UI & API are no longer served.
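The healthz port needs no credentials and gives a quick liveness probe (a sketch against k8s-node01; it should simply return ok):
# curl -s http://172.16.60.244:10248/healthz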
For example, when you run "kubectl exec -it nginx-ds-5aedg -- sh", kube-apiserver sends kubelet a request like:
POST /exec/default/nginx-ds-5aedg/my-nginx?command=sh&input=1&output=1&tty=1
kubelet serves HTTPS requests on port 10250 and exposes the following resources:
-> /pods、/runningpods
-> /metrics、/metrics/cadvisor、/metrics/probes
-> /spec
-> /stats、/stats/container
-> /logs
-> /run/、/exec/, /attach/, /portForward/, /containerLogs/
Because anonymous authentication is disabled and webhook authorization is enabled, every request to the HTTPS API on port 10250 must be authenticated and authorized.
The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs (the kubernetes certificate User used by kube-apiserver has been granted this role):
[root@k8s-master01 work]# kubectl describe clusterrole system:kubelet-api-admin
Name: system:kubelet-api-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
nodes/log [] [] [*]
nodes/metrics [] [] [*]
nodes/proxy [] [] [*]
nodes/spec [] [] [*]
nodes/stats [] [] [*]
nodes [] [] [get list watch proxy]
11) kubelet API authentication and authorization
kubelet is configured with the following authentication parameters:
-> authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;
-> authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
-> authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
and with the following authorization parameter:
-> authorization.mode=Webhook: enables RBAC authorization;
When kubelet receives a request, it authenticates the certificate signature against clientCAFile, or checks whether the bearer token is valid. If both fail, the request is rejected with Unauthorized:
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.16.60.244:10250/metrics
Unauthorized
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.16.60.244:10250/metrics
Unauthorized
After authentication succeeds, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user or group behind the certificate or token has permission to operate on the resource (RBAC).
Certificate authentication and authorization:
# a certificate with insufficient permissions;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.16.60.244:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
# the highest-privilege admin certificate created when deploying the kubectl command-line tool;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
Note: the values of --cacert, --cert, and --key must be file paths, otherwise a 401 Unauthorized is returned.
Bearer token authentication and authorization
Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:
[root@k8s-master01 work]# kubectl create sa kubelet-api-test
[root@k8s-master01 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
[root@k8s-master01 work]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master01 work]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-master01 work]# echo ${TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tanRyMnEiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImRjYjljZTE0LTkxYWMtMTFlOS05MGQ0LTAwNTA1NmFjN2M4MSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.i_uVqjOUMLdG4lDURfhxFDOtM2addxgEquQTcpOLP_5g6UI-MjvE5jHem_Q8OtMwFs5tqlCvKJHN2IdfsRiKk_mBe_ysLQsNEoHDclZwHRVN6X84Y62q49y-ArT12YlSpfWWenw-2GawsTmORbz7AYYaU5-kgqMk95mMx57ic8uwvJYlilw4JCnkMON5ESOmgAOg30uVvsBiQVkkYTwGtAG5Tah9wADujQttBjjDOlGntpGHxj-HmZO2GivDgdrbs_UNvhzGt2maDlpP13qYv8zKiBGpSbiWOAk_olsFKQ5-dIrn04NCbh9Kkyyh9JccMSuvePaj-lgTWj5zdUfRHw
Now repeat the kubelet request, this time with the token:
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.16.60.244:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
12) cadvisor and metrics
cadvisor is embedded in the kubelet binary; it is the service that collects the resource usage (CPU, memory, disk, network) of each container on the node.
Visiting https://172.16.60.244:10250/metrics and https://172.16.60.244:10250/metrics/cadvisor in a browser returns the kubelet and cadvisor metrics respectively.
Note:
-> kubelet-config.yaml sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on port 10250 is not allowed;
-> refer to "9.3 - Accessing kube-apiserver and other secure ports from a browser" below to create and import the relevant certificates; after that, kube-apiserver and kubelet's port 10250 can be accessed successfully from a browser.
Access kubelet's port 10250 with a certificate:
[root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics
[root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics/cadvisor
13) Fetch kubelet's configuration
Fetch each node's kubelet configuration from kube-apiserver.
If the jq command (a JSON processing tool) is missing, it can be installed directly with yum:
[root@k8s-master01 ~]# yum install -y jq
Use the highest-privilege admin certificate created when deploying the kubectl command-line tool:
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem ${KUBE_APISERVER}/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
{
"syncFrequency": "1m0s",
"fileCheckFrequency": "20s",
"httpCheckFrequency": "20s",
"address": "172.16.60.244",
"port": 10250,
"rotateCertificates": true,
"serverTLSBootstrap": true,
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/cert/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"registryPullQPS": 0,
"registryBurst": 20,
"eventRecordQPS": 0,
"eventBurst": 20,
"enableDebuggingHandlers": true,
"enableContentionProfiling": true,
"healthzPort": 10248,
"healthzBindAddress": "172.16.60.244",
"oomScoreAdj": -999,
"clusterDomain": "cluster.local",
"clusterDNS": [
"10.254.0.2"
],
"streamingConnectionIdleTimeout": "4h0m0s",
"nodeStatusUpdateFrequency": "10s",
"nodeStatusReportFrequency": "1m0s",
"nodeLeaseDurationSeconds": 40,
"imageMinimumGCAge": "2m0s",
"imageGCHighThresholdPercent": 85,
"imageGCLowThresholdPercent": 80,
"volumeStatsAggPeriod": "1m0s",
"cgroupsPerQOS": true,
"cgroupDriver": "cgroupfs",
"cpuManagerPolicy": "none",
"cpuManagerReconcilePeriod": "10s",
"runtimeRequestTimeout": "10m0s",
"hairpinMode": "promiscuous-bridge",
"maxPods": 220,
"podCIDR": "172.30.0.0/16",
"podPidsLimit": -1,
"resolvConf": "/etc/resolv.conf",
"cpuCFSQuota": true,
"cpuCFSQuotaPeriod": "100ms",
"maxOpenFiles": 1000000,
"contentType": "application/vnd.kubernetes.protobuf",
"kubeAPIQPS": 1000,
"kubeAPIBurst": 2000,
"serializeImagePulls": false,
"evictionHard": {
"memory.available": "100Mi"
},
"evictionPressureTransitionPeriod": "5m0s",
"enableControllerAttachDetach": true,
"makeIPTablesUtilChains": true,
"iptablesMasqueradeBit": 14,
"iptablesDropBit": 15,
"failSwapOn": true,
"containerLogMaxSize": "20Mi",
"containerLogMaxFiles": 10,
"configMapAndSecretChangeDetectionStrategy": "Watch",
"enforceNodeAllocatable": [
"pods"
],
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1"
}
Or run the following directly (https://172.16.60.250:8443 is the value of the ${KUBE_APISERVER} variable):
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node02/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node03/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
9.3 - Accessing kube-apiserver and other secure ports from a browser: creating and importing certificates
When a browser accesses kube-apiserver's secure port 6443 (the proxy port is 8443), it warns that the certificate is not trusted.
This is because kube-apiserver's server certificate is signed by the root certificate ca.pem that we created; ca.pem must be imported into the operating system and set to be permanently trusted.
Here is how to import the certificate on a macOS client:
1) On the Mac, open "Keychain Access" -> "System" -> "Certificates" -> "kubernetes" (double-click it and change "Trust" to "Always Trust").
Clear the browser cache and visit again; the certificate is now trusted (the red warning icon is gone).
2) Generate a client certificate for the browser, to be used when accessing apiserver's HTTPS port 6443.
Here, use the admin certificate and private key created when deploying the kubectl command-line tool, together with the ca certificate above, to create a certificate in the PKCS#12/PFX format that browsers can use:
[root@k8s-master01 ~]# cd /opt/k8s/work/
[root@k8s-master01 work]# openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
Enter Export Password:               # enter any password you choose, e.g. "123456"
Verifying - Enter Export Password:   # confirm the password: 123456
[root@k8s-master01 work]# ll admin.pfx
-rw-r--r-- 1 root root 3613 Jun 23 23:56 admin.pfx
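Optionally, the bundle can be inspected before copying it over; this prompts for the export password set above (a sketch):
# openssl pkcs12 -in admin.pfx -info -noout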
Copy the client certificate admin.pfx generated on k8s-master01 to the Mac and import it under "Keychain Access" -> "System" -> "Certificates" (the import prompts for the admin.pfx password, i.e. "123456").
Clear the browser history and be sure to restart the browser, then visit the apiserver address again. The browser prompts you to choose a certificate; select the "admin.pfx" imported above and visit apiserver once more, and the metrics data is displayed successfully. (If this fails, delete the certificate, regenerate and re-import it, clear the browser cache, restart the browser, select the imported certificate, and visit again.)
Likewise, with the client certificate used for apiserver access imported into the local browser, visiting kubelet's metrics on port 10250 also prompts for the imported "admin.pfx" certificate, after which the corresponding metrics data is displayed normally. (HTTPS certificate access to the metrics of the other K8S components works the same way.)
9.4 - Deploy the kube-proxy component
kube-proxy runs on all node machines. It watches apiserver for changes to services and endpoints and creates routing rules to provide service IPs and load balancing. All of the deployment commands below are executed on the k8s-master01 node, which then distributes files and runs commands on the remote nodes.
1) Download and distribute the kube-proxy binary
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
scp kubernetes/server/bin/kube-proxy root@${node_node_ip}:/opt/k8s/bin/
ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
done
2) Create the kube-proxy certificate
Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
Notes:
CN: sets the certificate's User to system:kube-proxy;
the predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs;
this certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.
Generate the certificate and private key:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master01 work]# ll kube-proxy*
-rw-r--r-- 1 root root 1013 Jun 24 20:21 kube-proxy.csr
-rw-r--r-- 1 root root 218 Jun 24 20:21 kube-proxy-csr.json
-rw------- 1 root root 1679 Jun 24 20:21 kube-proxy-key.pem
-rw-r--r-- 1 root root 1411 Jun 24 20:21 kube-proxy.pem
3) Create and distribute the kubeconfig file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/work/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master01 work]# kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master01 work]# kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master01 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Note: --embed-certs=true embeds the contents of the CA and client certificates into the generated kube-proxy.kubeconfig file (without it, only the certificate file paths are written).
Distribute the kubeconfig file:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
scp kube-proxy.kubeconfig root@${node_node_name}:/etc/kubernetes/
done
4) Create the kube-proxy configuration file
Since v1.10, some kube-proxy parameters can be set in a configuration file, which can be generated with the --write-config-to option.
Create the kube-proxy config file template:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
burst: 200
kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
qps: 100
bindAddress: ##NODE_NODE_IP##
healthzBindAddress: ##NODE_NODE_IP##:10256
metricsBindAddress: ##NODE_NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NODE_NAME##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
masqueradeAll: false
kubeProxyIPVSConfiguration:
scheduler: rr
excludeCIDRs: []
EOF
Notes:
bindAddress: the listen address;
clientConnection.kubeconfig: the kubeconfig file used to connect to apiserver;
clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; kube-proxy only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is specified;
hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find the Node after starting and will not create any ipvs rules;
mode: use ipvs mode (a sketch for preloading the ipvs kernel modules follows this list);
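ipvs mode depends on the ipvs kernel modules. The startup step below only loads ip_vs_rr; a broader hedged sketch that preloads the commonly needed modules on every node (module names can vary with kernel version, e.g. nf_conntrack instead of nf_conntrack_ipv4 on newer kernels):
# source /opt/k8s/bin/environment.sh
# for node_node_ip in ${NODE_NODE_IPS[@]}; do ssh root@${node_node_ip} "modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"; done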
Create and distribute the kube-proxy configuration file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
echo ">>> ${NODE_NODE_NAMES[i]}"
sed -e "s/##NODE_NODE_NAME##/${NODE_NODE_NAMES[i]}/" -e "s/##NODE_NODE_IP##/${NODE_NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NODE_NAMES[i]}.yaml.template
scp kube-proxy-config-${NODE_NODE_NAMES[i]}.yaml.template root@${NODE_NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
done
[root@k8s-master01 work]# ll kube-proxy-config-k8s-node0*
-rw-r--r-- 1 root root 500 Jun 24 20:27 kube-proxy-config-k8s-node01.yaml.template
-rw-r--r-- 1 root root 500 Jun 24 20:27 kube-proxy-config-k8s-node02.yaml.template
-rw-r--r-- 1 root root 500 Jun 24 20:27 kube-proxy-config-k8s-node03.yaml.template
5) Create and distribute the kube-proxy systemd unit file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy-config.yaml \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Distribute the kube-proxy systemd unit file:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
do
echo ">>> ${node_node_name}"
scp kube-proxy.service root@${node_node_name}:/etc/systemd/system/
done
6) Start the kube-proxy service
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
ssh root@${node_node_ip} "modprobe ip_vs_rr"
ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
Note: the working directory must be created before starting the service.
Check the results:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "systemctl status kube-proxy|grep Active"
done
Expected output:
>>> 172.16.60.244
Active: active (running) since Mon 2019-06-24 20:35:31 CST; 2min 0s ago
>>> 172.16.60.245
Active: active (running) since Mon 2019-06-24 20:35:30 CST; 2min 0s ago
>>> 172.16.60.246
Active: active (running) since Mon 2019-06-24 20:35:32 CST; 1min 59s ago
Make sure the state is active (running); otherwise inspect the logs to find the cause (journalctl -u kube-proxy).
7) Check the listening ports (on any node)
[root@k8s-node01 ~]# netstat -lnpt|grep kube-prox
tcp 0 0 172.16.60.244:10249 0.0.0.0:* LISTEN 3830/kube-proxy
tcp 0 0 172.16.60.244:10256 0.0.0.0:* LISTEN 3830/kube-proxy
Notes:
10249: the HTTP Prometheus metrics port;
10256: the HTTP healthz port (a quick probe of both is sketched after this list).
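Both ports serve plain HTTP and can be probed directly (a quick sketch against k8s-node01; 10256 serves /healthz and 10249 serves /metrics):
# curl -s http://172.16.60.244:10256/healthz
# curl -s http://172.16.60.244:10249/metrics | head -3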
8) Check the ipvs routing rules
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "/usr/sbin/ipvsadm -ln"
done
Expected output:
>>> 172.16.60.244
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 172.16.60.241:6443 Masq 1 0 0
-> 172.16.60.242:6443 Masq 1 0 0
-> 172.16.60.243:6443 Masq 1 0 0
>>> 172.16.60.245
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 172.16.60.241:6443 Masq 1 0 0
-> 172.16.60.242:6443 Masq 1 0 0
-> 172.16.60.243:6443 Masq 1 0 0
>>> 172.16.60.246
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 172.16.60.241:6443 Masq 1 0 0
-> 172.16.60.242:6443 Masq 1 0 0
-> 172.16.60.243:6443 Masq 1 0 0
As shown above, all HTTPS requests to the K8S service kubernetes are forwarded to port 6443 on the kube-apiserver nodes.
10. Verify Kubernetes cluster functionality
Use a daemonset to verify that the master and worker nodes are working properly.
1) Check node status
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node01 Ready <none> 6d3h v1.14.2
k8s-node02 Ready <none> 6d3h v1.14.2
k8s-node03 Ready <none> 6d3h v1.14.2
Everything is normal when all nodes show a Ready status.
2) Create a test file
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
name: nginx-ds
labels:
app: nginx-ds
spec:
type: NodePort
selector:
app: nginx-ds
ports:
- name: http
port: 80
targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ds
labels:
addonmanager.kubernetes.io/mode: Reconcile
spec:
template:
metadata:
labels:
app: nginx-ds
spec:
containers:
- name: my-nginx
image: nginx:1.7.9
ports:
- containerPort: 80
EOF
Run the test:
[root@k8s-master01 work]# kubectl create -f nginx-ds.yml
3) Check Pod IP connectivity across nodes
Wait a moment, or re-run the command below a few times, before the Pod IP information shows up:
[root@k8s-master01 work]# kubectl get pods -o wide|grep nginx-ds
nginx-ds-4lf8z 1/1 Running 0 46s 172.30.56.2 k8s-node02 <none> <none>
nginx-ds-6kfsw 1/1 Running 0 46s 172.30.72.2 k8s-node03 <none> <none>
nginx-ds-xqdgw 1/1 Running 0 46s 172.30.88.2 k8s-node01 <none> <none>
The nginx-ds Pod IPs are 172.30.56.2, 172.30.72.2, and 172.30.88.2. Ping these three IPs from every node to check connectivity:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh ${node_node_ip} "ping -c 1 172.30.56.2"
ssh ${node_node_ip} "ping -c 1 172.30.72.2"
ssh ${node_node_ip} "ping -c 1 172.30.88.2"
done
Expected output:
>>> 172.16.60.244
PING 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=63 time=0.542 ms
--- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms
PING 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=63 time=0.654 ms
--- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms
PING 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=64 time=0.103 ms
--- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
>>> 172.16.60.245
PING 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=64 time=0.106 ms
--- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
PING 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=63 time=0.408 ms
--- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms
PING 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=63 time=0.345 ms
--- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
>>> 172.16.60.246
PING 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=63 time=0.350 ms
--- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms
PING 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=64 time=0.105 ms
--- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms
PING 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=63 time=0.584 ms
--- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms
4) Check service IP and port reachability
[root@k8s-master01 work]# kubectl get svc |grep nginx-ds
nginx-ds NodePort 10.254.41.83 <none> 80:30876/TCP 4m24s
This shows:
Service Cluster IP: 10.254.41.83
Service port: 80
NodePort: 30876
curl the Service IP from every node:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh ${node_node_ip} "curl -s 10.254.41.83"
done
Expected output: the nginx welcome page.
5) Check the service's NodePort reachability
Run on every node:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
do
echo ">>> ${node_node_ip}"
ssh ${node_node_ip} "curl -s ${node_node_ip}:30876"
done
Expected output: the nginx welcome page.
Click through to continue: Kubernetes(K8S)容器集群管理環(huán)境完整部署詳細(xì)教程-下篇