k8s High-Availability Cluster Installation Tutorial
I. Install the Load Balancer
k8s load balancing: official guide
1. Prepare three machines
| Node | IP |
| --- | --- |
| master-1 | 192.168.1.11 |
| master-2 | 192.168.1.12 |
| master-3 | 192.168.1.13 |
2. Install haproxy and keepalived on each of the three machines to serve as the load balancer
```shell
# Install haproxy
sudo dnf install haproxy -y
# Install keepalived
sudo yum install epel-release -y
sudo yum install keepalived -y
# Confirm both packages installed successfully
sudo dnf info haproxy
sudo dnf info keepalived
```
3. Load balancer configuration files
Official guide. Substitute your own machine IPs and ports as needed. 192.168.1.9 is the virtual IP served by keepalived; any IP on the subnet works as long as it is not already in use. On the BACKUP nodes, change `state MASTER` to `state BACKUP` and `priority 101` to `100`, so that the MASTER keeps the highest priority.
3.1 /etc/keepalived/keepalived.conf
```
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        192.168.1.9
    }
    track_script {
        check_apiserver
    }
}
```
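On the BACKUP nodes the same file is reused with only the state and priority changed; a minimal sed sketch (shown here on a two-line stand-in; in practice feed the real /etc/keepalived/keepalived.conf):

```shell
# A minimal sketch: derive a BACKUP node's settings from the MASTER copy by
# flipping the VRRP state and lowering the priority.
printf 'state MASTER\npriority 101\n' |
  sed -e 's/state MASTER/state BACKUP/' -e 's/priority 101/priority 100/'
# prints:
#   state BACKUP
#   priority 100
```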
3.2 /etc/keepalived/check_apiserver.sh
```shell
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl -sfk --max-time 2 https://localhost:6553/healthz -o /dev/null \
    || errorExit "Error GET https://localhost:6553/healthz"
```
3.3 Make the script executable
```shell
chmod +x /etc/keepalived/check_apiserver.sh
```
3.4 /etc/haproxy/haproxy.cfg
```
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log stdout format raw local0
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          35s
    timeout server          35s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxys to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:6553
    mode tcp
    option tcplog
    default_backend apiserverbackend

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserverbackend
    option httpchk
    http-check connect ssl
    http-check send meth GET uri /healthz
    http-check expect status 200
    mode tcp
    balance roundrobin
    server master-1 192.168.1.11:6443 check verify none
    server master-2 192.168.1.12:6443 check verify none
    server master-3 192.168.1.13:6443 check verify none
    # [...]
```
3.5 Check haproxy.cfg for syntax errors, then restart both services
```shell
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy
systemctl restart keepalived
```
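With both services up, the official HA guide suggests probing the VIP on the frontend port from another machine. A quick sketch: since no apiserver is running yet, an immediate connect-then-close (or connection refused) is fine, while a timeout means keepalived or haproxy is misconfigured.

```shell
# Probe the keepalived VIP on the haproxy frontend port.
nc -v 192.168.1.9 6553
```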
II. Install the k8s Cluster
For the base node configuration, follow my previous single-control-plane article.
1. Stacked etcd topology
Just run the initialization directly.
Pros: simple to set up, with fewer nodes required.
Cons: a stacked cluster risks coupled failures: if a node goes down, both its etcd member and its control-plane instance are lost, and redundancy is reduced.
Note that the service CIDR must not overlap the pod network CIDR; the kubeadm default 10.96.0.0/12 is used here.

```shell
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --apiserver-advertise-address=192.168.1.11 \
  --control-plane-endpoint 192.168.1.9:6553 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr 10.96.0.0/12 \
  --kubernetes-version=v1.23.8 \
  --upload-certs \
  --v=6
```
2. External etcd topology
優(yōu)點(diǎn):拓?fù)浣Y(jié)構(gòu)解耦了控制平面和 etcd 成員。因此它提供了一種 HA 設(shè)置, 其中失去控制平面實(shí)例或者 etcd 成員的影響較小,并且不會(huì)像堆疊的 HA 拓?fù)淠菢佑绊懠喝哂?br />缺點(diǎn):拓?fù)湫枰獌杀队诙询B HA 拓?fù)涞闹鳈C(jī)數(shù)量。 具有此拓?fù)涞?HA 集群至少需要三個(gè)用于控制平面節(jié)點(diǎn)的主機(jī)和三個(gè)用于 etcd 節(jié)點(diǎn)的主機(jī) 官方指南
優(yōu)點(diǎn):拓?fù)浣Y(jié)構(gòu)解耦了控制平面和 etcd 成員。因此它提供了一種 HA 設(shè)置, 其中失去控制平面實(shí)例或者 etcd 成員的影響較小,并且不會(huì)像堆疊的 HA 拓?fù)淠菢佑绊懠喝哂?br />缺點(diǎn):拓?fù)湫枰獌杀队诙询B HA 拓?fù)涞闹鳈C(jī)數(shù)量。 具有此拓?fù)涞?HA 集群至少需要三個(gè)用于控制平面節(jié)點(diǎn)的主機(jī)和三個(gè)用于 etcd 節(jié)點(diǎn)的主機(jī) 官方指南
2.1 Prepare three machines
| Node | IP |
| --- | --- |
| etcd-1 | 192.168.1.3 |
| etcd-2 | 192.168.1.4 |
| etcd-3 | 192.168.1.5 |
2.2 On each etcd node, create the file /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
```
[Service]
ExecStart=
# Replace "systemd" below with the cgroup driver used by your container runtime.
# The kubelet default is "cgroupfs".
# If needed, replace the value of "--container-runtime-endpoint" with that of a
# different container runtime.
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
Restart=always
```
2.3 Start the kubelet
```shell
systemctl daemon-reload
systemctl restart kubelet
# Check the kubelet status; it should now be running
systemctl status kubelet
```
2.4 Generate a kubeadm config file for each etcd member with the following script, substituting your own IPs and hostnames
```shell
# Replace HOST0, HOST1 and HOST2 with the IPs of your hosts; run this on etcd-1:
export HOST0=192.168.1.3
export HOST1=192.168.1.4
export HOST2=192.168.1.5

# Replace NAME0, NAME1 and NAME2 with your hostnames
export NAME0="etcd-1"
export NAME1="etcd-2"
export NAME2="etcd-3"

# Create temp directories to store files that will be shipped to the other hosts
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

HOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=(${NAME0} ${NAME1} ${NAME2})

for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
    name: ${NAME}
localAPIEndpoint:
    advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
```
2.5 Generate the certificate authority on etcd-1 (the node holding the /tmp configs from the previous step)
```shell
kubeadm init phase certs etcd-ca
# This creates two files:
#   /etc/kubernetes/pki/etcd/ca.crt
#   /etc/kubernetes/pki/etcd/ca.key
```
2.6 Create certificates for each member
```shell
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# Clean up certs that must not be reused
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs: they are for HOST0

# Clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
```
2.7 The certificates have now been generated and must be moved to their respective hosts: copy each node's pki directory from /tmp/<HOST>/ to /etc/kubernetes/ on that node
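The official guide ships the files with scp and moves them into place as root; a sketch of that approach (the `ubuntu` user is a placeholder for your own ssh user, and the sequence is repeated with `HOST=${HOST2}`):

```shell
# On etcd-1: ship one host's files, then finish on the target host.
USER=ubuntu
HOST=${HOST1}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
# Now on the remote host:
sudo -Es
chown -R root:root pki
mv pki /etc/kubernetes/
```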
2.8 Run the following on each etcd node, keeping only the `kubeadm init phase etcd local` line that matches that node's IP
```shell
# Pull the required images from a mirror and retag them
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0

sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Use the config matching this node's IP:
kubeadm init phase etcd local --config=/tmp/192.168.1.3/kubeadmcfg.yaml
#kubeadm init phase etcd local --config=/tmp/192.168.1.4/kubeadmcfg.yaml
#kubeadm init phase etcd local --config=/tmp/192.168.1.5/kubeadmcfg.yaml
```
2.9 Verify the etcd cluster
```shell
# Check cluster health (use the same etcd image version pulled above)
docker run --rm -it \
  --net host \
  -v /etc/kubernetes:/etc/kubernetes \
  registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 etcdctl \
  --cert /etc/kubernetes/pki/etcd/peer.crt \
  --key /etc/kubernetes/pki/etcd/peer.key \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --endpoints https://192.168.1.3:2379 endpoint health --cluster
```
3. With the etcd cluster configured, create the cluster init config file kubeadm-config.yaml on the first control-plane node
The serviceSubnet must not overlap the podSubnet, so the kubeadm default 10.96.0.0/12 is used; certificate upload is requested via the `--upload-certs` flag at init time rather than in this file.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: "registry.cn-hangzhou.aliyuncs.com/google_containers"
kubernetesVersion: v1.23.8
controlPlaneEndpoint: "192.168.1.9:6553"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
etcd:
  external:
    endpoints:
      - https://192.168.1.3:2379
      - https://192.168.1.4:2379
      - https://192.168.1.5:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```
4. From any etcd node, copy the files under /etc/kubernetes/pki to the k8s node that will initialize the cluster, then run:
```shell
kubeadm init --config kubeadm-config.yaml --upload-certs --v=6
```
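This step is not spelled out above, but after a successful init, kubeadm's own output instructs you to configure kubectl for your user:

```shell
# Standard post-init kubectl setup, as printed by kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```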
```shell
# Join additional control-plane nodes
kubeadm join 192.168.1.9:6553 --token a26srm.c7sssutz83mz94lq \
    --discovery-token-ca-cert-hash sha256:560139f5ea4b8d3a279de53d9d5d503d41c29394c3ba46a4f312f361708b8b71 \
    --control-plane --certificate-key b6e4df72059c9893d2be4d0e5b7fa2e7c466e0400fe39bd244d0fbf7f3e9c04c
```
```shell
# Join worker nodes
kubeadm join 192.168.1.9:6553 --token a26srm.c7sssutz83mz94lq \
    --discovery-token-ca-cert-hash sha256:560139f5ea4b8d3a279de53d9d5d503d41c29394c3ba46a4f312f361708b8b71
```
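The token and hash above are specific to this cluster, and bootstrap tokens expire after 24 hours by default. Both can be regenerated on an existing control-plane node with the standard kubeadm/openssl recipe:

```shell
# Print a fresh, complete join command (new token included):
kubeadm token create --print-join-command

# Or recompute only the CA hash used by --discovery-token-ca-cert-hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```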
Install the flannel network plugin. Save the following manifest as kube-flannel.yml:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: registry.cn-hangzhou.aliyuncs.com/1668334351/flannel:v0.26.4
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: registry.cn-hangzhou.aliyuncs.com/1668334351/flannel-cni-plugin:v1.6.2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: registry.cn-hangzhou.aliyuncs.com/1668334351/flannel:v0.26.4
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
```
```shell
kubectl apply -f kube-flannel.yml
```
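Once applied, the flannel DaemonSet should reach Running on every node, after which the nodes report Ready. A quick verification sketch:

```shell
# One kube-flannel pod per node, all Running:
kubectl -n kube-flannel get pods -o wide
# All control-plane and worker nodes Ready:
kubectl get nodes
```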
That concludes this tutorial on installing a highly available k8s cluster.