Deploying a Kubernetes 1.28.2 Cluster on CentOS 7: Complete Steps
1. Preparation (run on all nodes)
1.1 Preparing the virtual machines
This is a local deployment, for reference only.
Three nodes, named k8s-node1, k8s-node2, and k8s-master.
Set the system hostname on each machine and update the hosts file:
cat << EOF | sudo tee -a /etc/hosts
192.168.255.141 k8s-node1
192.168.255.142 k8s-node2
192.168.255.140 k8s-master
EOF
# Run the matching command on each node
sudo hostnamectl set-hostname k8s-node1
sudo hostnamectl set-hostname k8s-node2
sudo hostnamectl set-hostname k8s-master
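As a quick sanity check, the entries can be staged in a temporary file first and each node name confirmed to appear exactly once before appending to the real /etc/hosts (a sketch; the IPs are the example addresses used throughout this guide):

```shell
# Stage the example entries in a temp file instead of /etc/hosts
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.255.141 k8s-node1
192.168.255.142 k8s-node2
192.168.255.140 k8s-master
EOF
# Each node name should appear exactly once
for h in k8s-node1 k8s-node2 k8s-master; do
  [ "$(grep -cw "$h" "$hosts_file")" -eq 1 ] && echo "$h: OK"
done
```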
1.2 Updating yum
# This update can take quite a while
sudo yum update -y
# Set up the Docker repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
1.3 System settings
1.3.1 Disable the iptables and firewalld services
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables
1.3.2 Disable SELinux
# Permanently disable (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Disable for the current session
setenforce 0
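Note that the `sed` pattern above replaces "enforcing" anywhere in the file, including comment lines. Anchoring on `SELINUX=` is safer; here is the same edit demoed on a throwaway copy (the file contents below mimic a typical /etc/selinux/config, they are not read from the real file):

```shell
# Demo on a throwaway copy of /etc/selinux/config
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
# Anchoring on ^SELINUX= leaves comment lines mentioning "enforcing" untouched
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```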
1.3.3 Disable the swap partition
# Temporarily disable
swapoff -a
# Permanently disable: edit /etc/fstab
vim /etc/fstab
# and comment out the line "/dev/mapper/xxx swap xxx"
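The manual edit can also be done non-interactively with `sed`, shown here on a throwaway fstab copy (the device names below are made up for the demo):

```shell
# Demo on a throwaway fstab copy
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
# Comment out any line containing a whitespace-delimited "swap" field
sed -i '/\sswap\s/ s/^/#/' "$fstab"
grep swap "$fstab"
```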
1.3.4 Adjust kernel parameters for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Then run the following in order (the bridge settings only
# apply once br_netfilter is loaded)
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
lsmod | grep br_netfilter
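Before applying, the fragment can be validated on a copy: every key must be set to 1, or kubeadm's preflight checks will fail later (a sketch using a temp file, not the real /etc/sysctl.d/k8s.conf):

```shell
# Validate a copy of the sysctl fragment: every key must equal 1
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
awk -F' *= *' '$2 != 1 { bad = 1 } END { exit bad }' "$conf" && echo "k8s.conf OK"
```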
Expected output: a line for br_netfilter (plus the bridge module it pulls in).
1.3.5 Configure IPVS support
# Install ipset and ipvsadm
yum install -y ipset ipvsadm
# Write the modules to load into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run it
/bin/bash /etc/sysconfig/modules/ipvs.modules
# Verify that the modules loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Expected output: lines for ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and nf_conntrack_ipv4.
Reboot:
reboot
2. Installing Docker and cri-dockerd (run on all nodes)
2.1 Installing Docker
2.1.1 Remove old Docker versions (skip on a freshly installed machine)
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
2.1.2 Install Docker and its dependencies
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
2.1.3 Start Docker and enable it at boot
# Start docker
sudo systemctl start docker
# Enable docker at boot
sudo systemctl enable docker
# Verify
sudo systemctl status docker
2.2 Installing cri-dockerd
k8s 1.24版本后需要使用cri-dockerd和docker通信
2.2.1 Download cri-dockerd
# Install wget if it is missing
sudo yum install -y wget
# Download
sudo wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el7.x86_64.rpm
# Install
sudo rpm -ivh cri-dockerd-0.3.4-3.el7.x86_64.rpm
# Reload the systemd daemon
sudo systemctl daemon-reload
2.2.2 Configure a registry mirror
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://c12xt3od.mirror.aliyuncs.com"]
}
EOF
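A malformed daemon.json prevents Docker from starting, so it is worth validating the JSON before restarting the daemon. A sketch using a temp-file copy, assuming python3 is available (install it with yum if not):

```shell
# Sanity-check daemon.json syntax on a copy before restarting Docker
dj=$(mktemp)
cat > "$dj" <<'EOF'
{
  "registry-mirrors": ["https://c12xt3od.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$dj" > /dev/null && echo "daemon.json: valid JSON"
```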
2.2.3 Edit the cri-docker service file
vi /usr/lib/systemd/system/cri-docker.service
# On line 10, change the ExecStart= line to:
# ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
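The edit can also be scripted with `sed` instead of vi. Here it is demoed on a throwaway copy; the ExecStart line below only mimics the shipped unit file, the real one may differ slightly:

```shell
# Non-interactive alternative to editing the unit file in vi (demo copy)
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
EOF
sed -i 's|^ExecStart=.*|ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7|' "$unit"
grep '^ExecStart=' "$unit"
```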
2.2.4 Enable and restart the Docker components
# Reload the systemd daemon
sudo systemctl daemon-reload
# Enable cri-dockerd at boot
sudo systemctl enable cri-docker.socket cri-docker
# Start cri-dockerd
sudo systemctl start cri-docker.socket cri-docker
# Check the status of the Docker components
sudo systemctl status docker cri-docker.socket cri-docker
Expected output: all three units report active (running).
3. Installing Kubernetes
3.1 Installing kubectl (run on all nodes)
# This fetches the latest stable release (v1.28.2 at the time of writing)
# Download
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Verify the checksum
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# Install
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Test
kubectl version --client
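The checksum line is easy to get wrong: `sha256sum --check` requires exactly two spaces between the hash and the filename. The pattern can be tried out on a dummy file:

```shell
# Demo of the "echo <hash>  <file> | sha256sum --check" pattern on a dummy file
f=$(mktemp)
echo 'pretend this is the kubectl binary' > "$f"
# Compute and store the hash, as the downloaded kubectl.sha256 would contain
sha256sum "$f" | awk '{print $1}' > "$f.sha256"
# Note the two spaces between hash and filename
echo "$(cat "$f.sha256")  $f" | sha256sum --check
```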
3.2 Installing kubeadm (run on all nodes)
# Switch to a mirror inside China
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# Install
sudo yum install -y kubeadm-1.28.2-0 kubelet-1.28.2-0 kubectl-1.28.2-0 --disableexcludes=kubernetes
# Enable kubelet at boot and start it
sudo systemctl enable --now kubelet
3.3 Installing runc (run on all nodes)
# Download runc.amd64
sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
# Install
sudo install -m 755 runc.amd64 /usr/local/bin/runc
# Verify
runc -v
3.4 Deploying the cluster
3.4.1 Initialize the cluster (run on the master node)
# Run kubeadm init
kubeadm init --node-name=k8s-master \
  --image-repository=registry.aliyuncs.com/google_containers \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --apiserver-advertise-address=192.168.255.140 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12
# Parameter you need to change:
# --apiserver-advertise-address  the address the API server advertises; set it to your master node's IP
# After initialization succeeds, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# On the master node, copy the config to the node machines
# (so kubectl can be used on the nodes too)
scp /etc/kubernetes/admin.conf 192.168.255.141:/etc/kubernetes/
scp /etc/kubernetes/admin.conf 192.168.255.142:/etc/kubernetes/
Expected output: "Your Kubernetes control-plane has initialized successfully!" followed by a ready-to-copy kubeadm join command.
3.4.2 Join the worker nodes (run on the node machines)
# On each node, check that admin.conf arrived
ls /etc/kubernetes/
admin.conf  manifests
# Add admin.conf to the environment permanently
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# Reload
source ~/.bash_profile
# ---------------------------------Joining the cluster-------------------------------------
# 1. After kubeadm init succeeds on the master, it prints a "kubeadm join xxx xxx" command;
#    copy it to the node machines and run it as-is.
# 2. If you lost that command, or want to add a new node to the cluster later,
#    first run the following on the master to get the token and discovery-token-ca-cert-hash.
# Get the token
kubeadm token list     # list existing tokens
kubeadm token create   # create a new token if none exist
# Get the discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# 3. On the node, substitute your token and discovery-token-ca-cert-hash, then run:
kubeadm join 192.168.255.140:6443 --token y8v2nc.ie2ovh1kxqtgppbo --discovery-token-ca-cert-hash sha256:1fa593d1bc58653afaafc9ca492bde5b8e40e9adef055e8e939d4eb34fb436bf --cri-socket unix:///var/run/cri-dockerd.sock
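To see what the openssl pipeline actually computes, it can be run against a throwaway self-signed CA certificate; in the real cluster the input is /etc/kubernetes/pki/ca.crt instead:

```shell
# Demo: derive a discovery hash from a throwaway self-signed CA
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/ca.crt" -subj "/CN=demo-ca" -days 1 2>/dev/null
# Same pipeline as used for the real ca.crt: SHA-256 of the DER-encoded public key
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The result is always 64 hex characters, which is the value passed to `--discovery-token-ca-cert-hash` with the `sha256:` prefix.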
3.4.3 Rejoin the cluster (run on the node machines)
# First reset the node
kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
# Then fetch the token and discovery-token-ca-cert-hash as above, and run:
kubeadm join 192.168.255.140:6443 --token y8v2nc.ie2ovh1kxqtgppbo --discovery-token-ca-cert-hash sha256:1fa593d1bc58653afaafc9ca492bde5b8e40e9adef055e8e939d4eb34fb436bf --cri-socket unix:///var/run/cri-dockerd.sock
3.4.4 Install the network plugin (download, then apply)
# Download; if the network is flaky, copy the kube-flannel.yml below instead
sudo wget https://github.com/flannel-io/flannel/releases/download/v0.22.3/kube-flannel.yml
# Apply
kubectl apply -f kube-flannel.yml
Or:
vi kube-flannel.yml
# kube-flannel.yml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.22.3
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.22.3
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
3.5 Testing the Kubernetes cluster
# Usually run on the master node; if the node machines have kubectl
# configured, these also work there
kubectl get nodes
kubectl get pod -A
Expected output: all three nodes Ready, and all system pods (including the kube-flannel pods) Running.
3.5.1 Test with nginx
vi nginx-deployment.yaml
kubectl apply -f nginx-deployment.yaml
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
Check:
[root@k8s-master k8s]# kubectl get pod,svc | grep nginx
pod/nginx-deployment-7c79c4bf97-4xzc9   1/1     Running   0          83s
pod/nginx-deployment-7c79c4bf97-lp4fn   1/1     Running   0          83s
pod/nginx-deployment-7c79c4bf97-vt8wh   1/1     Running   0          83s
service/nginx-service   NodePort   10.97.154.241   <none>   80:30080/TCP   83s
Visit http://192.168.255.140:30080/; if the nginx welcome page appears, the deployment is complete.
Summary
到此這篇關于centos7部署k8s集群1.28.2版本的文章就介紹到這了,更多相關centos7部署k8s集群內容請搜索腳本之家以前的文章或繼續(xù)瀏覽下面的相關文章希望大家以后多多支持腳本之家!