
Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7

Updated: 2022-11-04 16:11:35   Author: 人生的哲理
This article walks through installing and deploying a Kubernetes (k8s) cluster on CentOS 7. Readers setting up their own cluster can use it as a reference.

1. System Environment

Server OS version: CentOS Linux release 7.4.1708 (Core)
Docker version: Docker version 20.10.12
CPU architecture: x86_64

2. Preface

Software deployment has moved through three eras: the traditional deployment era, the virtualized deployment era, and the container deployment era.

The traditional deployment era

In the early days, organizations ran applications on physical servers. There was no way to limit the resources an application could use on a physical server, which caused resource-allocation problems. For example, if multiple applications ran on the same physical server, one application could take up most of the resources and degrade the performance of the others. One solution was to run each application on its own physical server, but spare capacity on an under-utilized server could not be given to other applications, and maintaining many physical servers was expensive.

The virtualized deployment era

So virtualization was introduced. Virtualization lets you run multiple virtual machines (VMs) on a single physical server's CPU. It isolates applications from one another inside separate VMs and provides a degree of security, since one application's data cannot be freely accessed by another application.

Virtualization makes better use of a physical server's resources. Because applications can be added or updated easily, it also brings better scalability, lower hardware costs, and other benefits. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a complete computer running all of its components, including its own operating system, on top of virtualized hardware.

The container deployment era

Containers are similar to VMs, but their relaxed isolation lets them share the operating system (OS), so they are considered lighter-weight than VMs. Like a VM, each container has its own filesystem, share of CPU, memory, process space, and so on. Because containers are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide many benefits, for example:

  • Agile application creation and deployment: container images are easier and more efficient to create than VM images.
  • Continuous development, integration, and deployment: reliable, frequent container image builds and deployments with quick, easy rollbacks (thanks to image immutability).
  • Dev/ops separation of concerns: application container images are created at build/release time rather than at deployment time, decoupling applications from infrastructure.
  • Observability: surfaces not only OS-level information and metrics but also application health and other signals.
  • Environmental consistency across development, testing, and production: runs the same on a laptop as it does in the cloud.
  • Portability across clouds and OS distributions: runs on Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine, and anywhere else.
  • Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
  • Loosely coupled, distributed, elastic, liberated microservices: applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than running monolithically on one large machine.
  • Resource isolation: predictable application performance.
  • Resource utilization: high efficiency and density.

3. Kubernetes

3.1 Overview

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large and rapidly growing ecosystem; Kubernetes services, support, and tools are widely available.

The name Kubernetes comes from Greek, meaning "helmsman" or "pilot". The abbreviation k8s comes from the eight letters between the k and the s. Google open-sourced the Kubernetes project in 2014. Kubernetes builds on more than a decade of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.

Kubernetes provides you with the following (a short rollout sketch follows this list):

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.
  • Automated rollouts and rollbacks: you describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
  • Automatic bin packing: you provide Kubernetes with a cluster of nodes that it can use to run containerized tasks, and tell it how much CPU and memory (RAM) each container needs. Kubernetes fits containers onto your nodes to make the best use of your resources.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to your user-defined health checks, and does not advertise them to clients until they are ready to serve.
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and ssh keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.
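
As a small taste of the automated rollout and rollback behavior described above, here is a minimal sketch of the typical kubectl workflow (the deployment name web and the nginx image tags are illustrative assumptions, not part of this article's cluster):

#create a deployment, then roll it to a new image at a controlled rate
kubectl create deployment web --image=nginx:1.20
kubectl set image deployment/web nginx=nginx:1.21
#watch the actual state converge to the desired state
kubectl rollout status deployment/web
#roll back to the previous revision if the new image misbehaves
kubectl rollout undo deployment/web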

3.2 Kubernetes Components

Kubernetes has two node types: master nodes and worker nodes. A master node is also called the control plane. The control plane consists of several components that make global decisions for the cluster, such as resource scheduling, and that detect and respond to cluster events, for example starting a new pod when a Deployment's replicas field is not satisfied.

Control plane components can run on any node in the cluster. For simplicity, however, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine.

3.2.1 Control Plane Components

The control plane components are as follows (a quick inspection sketch follows the list):

  • kube-apiserver: the API server is the component of the Kubernetes control plane that exposes the Kubernetes API and handles incoming requests. It is the front end of the control plane.
    The main implementation of the Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, by deploying more instances. You can run several instances of kube-apiserver and balance traffic between them.
  • etcd: a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data. Your cluster's etcd database usually needs a backup plan.
  • kube-scheduler: the control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on. Scheduling decisions take into account the resource requirements of individual Pods and Pod sets, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • kube-controller-manager: the control plane component that runs controller processes. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
    These controllers include:
    Node controller: notices and responds when nodes go down
    Job controller: watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
    Endpoints controller: populates Endpoints objects (that is, joins Services and Pods)
    Service account & token controllers: create default accounts and API access tokens for new namespaces
  • cloud-controller-manager: a Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster to your cloud provider's API, and separates the components that interact with that cloud platform from the components that only interact with your cluster. It runs only controllers that are specific to your cloud provider, so if you run Kubernetes in your own environment, or in a learning environment on your local machine, the cluster has no cloud controller manager.
    As with kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale it horizontally (run more than one copy) to improve performance or fault tolerance.
    The following controllers all depend on a cloud provider driver:
    Node controller: checks the cloud provider to determine whether a node has been deleted in the cloud after it stops responding
    Route controller: sets up routes in the underlying cloud infrastructure
    Service controller: creates, updates, and deletes cloud provider load balancers
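
A quick way to see these components on the control plane node of a kubeadm-built cluster such as the one below (a hedged sketch; pod names vary and carry the node's hostname as a suffix):

#control plane components run as static pods in the kube-system namespace
kubectl -n kube-system get pods -o wide
#kubeadm keeps their static pod manifests here
ls /etc/kubernetes/manifests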

3.2.2 Node Components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

The node components are as follows (a quick check follows the list):

  • kubelet: the kubelet runs on every node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs, provided through various mechanisms, and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
  • kube-proxy: kube-proxy is a network proxy that runs on every node in the cluster, implementing part of the Kubernetes Service concept. It maintains network rules on the node that allow network sessions from inside or outside the cluster to reach Pods. kube-proxy uses the operating system's packet filtering layer if one is available; otherwise it forwards the traffic itself.
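
Both node components can be checked directly on a node; a small sketch (the iptables grep assumes kube-proxy is running in its default iptables mode):

#kubelet runs as a systemd service on every node
systemctl status kubelet
#kube-proxy programs the node's packet filter; its chains are prefixed with KUBE-
iptables-save | grep KUBE- | head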

4. Installing and Deploying the Kubernetes Cluster

4.1 Environment Overview

Cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are the worker nodes

  • k8scloude1 / 192.168.110.130: CentOS Linux release 7.4.1708 (Core), x86_64; runs docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico; role: k8s master node
  • k8scloude2 / 192.168.110.129: CentOS Linux release 7.4.1708 (Core), x86_64; runs docker, kubelet, kube-proxy, calico; role: k8s worker node
  • k8scloude3 / 192.168.110.128: CentOS Linux release 7.4.1708 (Core), x86_64; runs docker, kubelet, kube-proxy, calico; role: k8s worker node

4.2 Basic Node Configuration

First configure the basic environment of the nodes. All three nodes need the same configuration; k8scloude1 is shown as the example

Set the hostname first

[root@localhost ~]# vim /etc/hostname 
[root@localhost ~]# cat /etc/hostname 
k8scloude1
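
Equivalently, hostnamectl sets the hostname in one step and writes /etc/hostname for you (run the matching command on each node; k8scloude1 shown here):

#set the hostname without editing the file by hand
hostnamectl set-hostname k8scloude1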

Configure the node IP address (optional)

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
[root@k8scloude1 ~]# cat  /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
DNS1=114.114.114.114
IPADDR=192.168.110.130
NETMASK=255.255.255.0
GATEWAY=192.168.110.2
ZONE=trusted

Restart the network

[root@localhost ~]# service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@localhost ~]# systemctl restart NetworkManager

After a reboot the hostname becomes k8scloude1. Test that the machine can reach the network

[root@k8scloude1 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=25.9 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=26.7 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=128 time=26.4 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 25.960/26.393/26.724/0.320 ms

Map IPs to hostnames in /etc/hosts

[root@k8scloude1 ~]# vim /etc/hosts
[root@k8scloude1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.110.130 k8scloude1
192.168.110.129 k8scloude2
192.168.110.128 k8scloude3
#copy /etc/hosts to the other two nodes
[root@k8scloude1 ~]# scp /etc/hosts 192.168.110.129:/etc/hosts
[root@k8scloude1 ~]# scp /etc/hosts 192.168.110.128:/etc/hosts
#the mapping works if the other two nodes can be pinged
[root@k8scloude1 ~]# ping k8scloude1
PING k8scloude1 (192.168.110.130) 56(84) bytes of data.
64 bytes from k8scloude1 (192.168.110.130): icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from k8scloude1 (192.168.110.130): icmp_seq=2 ttl=64 time=0.053 ms
^C
--- k8scloude1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.044/0.048/0.053/0.008 ms
[root@k8scloude1 ~]# ping k8scloude2
PING k8scloude2 (192.168.110.129) 56(84) bytes of data.
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=1 ttl=64 time=0.297 ms
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=3 ttl=64 time=0.254 ms
^C
--- k8scloude2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.254/0.536/1.057/0.368 ms
[root@k8scloude1 ~]# ping k8scloude3
PING k8scloude3 (192.168.110.128) 56(84) bytes of data.
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=1 ttl=64 time=0.285 ms
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=2 ttl=64 time=0.513 ms
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=3 ttl=64 time=0.390 ms
^C
--- k8scloude3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.285/0.396/0.513/0.093 ms

Turn off console screen blanking (optional)

[root@k8scloude1 ~]# setterm -blank 0

Download new yum repo files

[root@k8scloude1 ~]# rm -rf /etc/yum.repos.d/* ;wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
--2022-01-07 17:07:28--  ftp://ftp.rhce.cc/k8s/*
           => "/etc/yum.repos.d/.listing"
Resolving ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41
Connecting to ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.   ==> PWD ... done.
......
100%[=======================================================================================================================================================================>] 276         --.-K/s  in 0s      
2022-01-07 17:07:29 (81.9 MB/s) - "/etc/yum.repos.d/k8s.repo" saved [276]
#the resulting repo files:
[root@k8scloude1 ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  docker-ce.repo  epel.repo  k8s.repo

Disable selinux by setting SELINUX=disabled

[root@k8scloude1 ~]# cat /etc/selinux/config 
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 
[root@k8scloude1 ~]# getenforce
Disabled
[root@k8scloude1 ~]# setenforce 0
setenforce: SELinux is disabled

Configure the firewall to let all packets through

[root@k8scloude1 ~]# firewall-cmd --set-default-zone=trusted
Warning: ZONE_ALREADY_SET: trusted
success
[root@k8scloude1 ~]# firewall-cmd --get-default-zone
trusted

The Linux swapoff command disables the system swap area.

Note: if swap is not disabled, kubeadm init will fail with: "[ERROR Swap]: running with swap on is not supported. Please disable swap"

[root@k8scloude1 ~]# swapoff -a ;sed -i '/swap/d' /etc/fstab
[root@k8scloude1 ~]# cat /etc/fstab
# /etc/fstab
# Created by anaconda on Thu Oct 18 23:09:54 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=9875fa5e-2eea-4fcc-a83e-5528c7d0f6a5 /                       xfs     defaults        0 0
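
To confirm that no swap is left active after the swapoff, a quick check (the Swap row of free should be all zeros, and swapon -s should print nothing):

#verify that swap is fully disabled
free -m
swapon -s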

4.3 Install Docker on the Nodes and Configure It

k8s is a container orchestration tool and needs a container engine underneath, so install docker on all three nodes; k8scloude1 is again the example.

Install docker

[root@k8scloude1 ~]# yum -y install docker-ce
Loaded plugins: fastestmirror
base                                                                                                                                           | 3.6 kB  00:00:00     
......
Installed:
  docker-ce.x86_64 3:20.10.12-3.el7                                                                                                                                   
......
Complete!

Enable docker at boot and start it now

[root@k8scloude1 ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-01-08 22:10:38 CST; 18s ago
     Docs: https://docs.docker.com
 Main PID: 1377 (dockerd)
   Memory: 30.8M
   CGroup: /system.slice/docker.service
           └─1377 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Check the docker version

[root@k8scloude1 ~]# docker --version
Docker version 20.10.12, build e91ed57

Configure a docker registry mirror

[root@k8scloude1 ~]# cat > /etc/docker/daemon.json <<EOF
> {
> "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"] 
> }
> EOF
[root@k8scloude1 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"] 
}
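
kubeadm will later warn that docker's cgroup driver is cgroupfs while systemd is recommended. Optionally (an extra step assumed here, not one this walkthrough performs), the driver can be switched in the same daemon.json before kubeadm init, since kubeadm detects docker's driver at init time:

#optional: use the systemd cgroup driver alongside the registry mirror
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
#then restart docker as shown below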

Restart docker

[root@k8scloude1 ~]# systemctl restart docker
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-01-08 22:17:45 CST; 8s ago
     Docs: https://docs.docker.com
 Main PID: 1529 (dockerd)
   Memory: 32.4M
   CGroup: /system.slice/docker.service
           └─1529 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Make iptables process bridged traffic and enable IP forwarding

[root@k8scloude1 ~]# cat <<EOF> /etc/sysctl.d/k8s.conf 
> net.bridge.bridge-nf-call-ip6tables = 1 
> net.bridge.bridge-nf-call-iptables = 1 
> net.ipv4.ip_forward = 1 
> EOF
#apply the settings
[root@k8scloude1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
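
On a freshly booted CentOS 7 machine the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; if sysctl -p complains that they are missing, load the module and make it persistent (an assumed extra step, not shown in the original run):

#load the bridge netfilter module now, and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf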

4.4 Install kubelet, kubeadm, and kubectl

Install kubelet, kubeadm, and kubectl on all three nodes:

  • Kubelet is the agent component that runs on every worker node in a kubernetes cluster
  • Kubeadm is a tool for quickly bootstrapping a kubernetes (k8s) cluster; it provides the kubeadm init and kubeadm join commands to create a cluster fast, performing the steps needed to get a minimal viable cluster up and running
  • kubectl is the command-line tool for a Kubernetes cluster; with kubectl you can manage the cluster itself and install and deploy containerized applications on it
#--disableexcludes=kubernetes: ignore the exclude directives defined for the kubernetes repo
[root@k8scloude1 ~]# yum -y install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be installed
......
Installed:
  kubeadm.x86_64 0:1.21.0-0                              kubectl.x86_64 0:1.21.0-0                              kubelet.x86_64 0:1.21.0-0                             
......
Complete!

Enable kubelet at boot and start it now

[root@k8scloude1 ~]# systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
#kubelet cannot start yet at this point
[root@k8scloude1 ~]# systemctl status kubelet 
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sat 2022-01-08 22:35:33 CST; 3s ago
     Docs: https://kubernetes.io/docs/
  Process: 1722 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 1722 (code=exited, status=1/FAILURE)
Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jan 08 22:35:33 k8scloude1 systemd[1]: Unit kubelet.service entered failed state.
Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service failed.

4.5 kubeadm Initialization

Check which kubeadm versions are available

[root@k8scloude2 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
kubeadm.x86_64                                                                  1.21.0-0                                                                   @kubernetes
Available Packages
kubeadm.x86_64                                                                  1.6.0-0                                                                    kubernetes 
kubeadm.x86_64                                                                  1.6.1-0                                                                    kubernetes 
kubeadm.x86_64                                                                  1.6.2-0                                                                    kubernetes 
......                                                          
kubeadm.x86_64                                                                  1.23.0-0                                                                   kubernetes 
kubeadm.x86_64                                                                  1.23.1-0                                 

kubeadm init: initialize the Kubernetes control plane on the master node k8scloude1

#run the kubeadm initialization
#--image-repository registry.aliyuncs.com/google_containers: use the Aliyun image registry, otherwise some images cannot be pulled
#--kubernetes-version=v1.21.0: pin the k8s version
#--pod-network-cidr=10.244.0.0/16: set the pod network CIDR
#the error below: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 cannot be pulled because the image was renamed to coredns/coredns; pull coredns manually instead
#coredns is an open-source DNS server written in Go
[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
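
As the preflight output suggests, the control plane images can also be pre-pulled with kubeadm itself, using the same flags as the init command. Note that the coredns pull will still fail for the renaming reason above until the image is tagged manually, as shown next:

#pre-pull the images before running kubeadm init
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0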

Pull the coredns image manually

[root@k8scloude1 ~]# docker pull coredns/coredns:1.8.0
1.8.0: Pulling from coredns/coredns
c6568d217a00: Pull complete 
5984b6d55edf: Pull complete 
Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Status: Downloaded newer image for coredns/coredns:1.8.0
docker.io/coredns/coredns:1.8.0

Retag the coredns image, otherwise kubeadm will not recognize it

[root@k8scloude1 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
#remove the coredns/coredns:1.8.0 tag
[root@k8scloude1 ~]# docker rmi coredns/coredns:1.8.0

k8scloude1 now holds the 7 required images; kubeadm init cannot succeed if even one is missing

[root@k8scloude1 ~]# docker images 
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.0    4d217480042e   9 months ago    126MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.21.0    38ddd85fe90e   9 months ago    122MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.0    09708983cc37   9 months ago    120MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.0    62ad3129eca8   9 months ago    50.6MB
registry.aliyuncs.com/google_containers/pause                     3.4.1      0f8457a4c2ec   12 months ago   683kB
registry.aliyuncs.com/google_containers/coredns/coredns           v1.8.0     296a6d5035e2   14 months ago   42.5MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   16 months ago   253MB

Run kubeadm init again

[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 65.002757 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nta3x4.3e54l2dqtmj9tlry
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry \
        --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8 

Create the directory and config file as the output instructs

[root@k8scloude1 ~]# mkdir -p $HOME/.kube
[root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

The master node is now visible

[root@k8scloude1 ~]# kubectl get node
NAME         STATUS     ROLES                  AGE     VERSION
k8scloude1   NotReady   control-plane,master   5m54s   v1.21.0

4.6 Join the Worker Nodes to the Cluster

Next, join the other two worker nodes to the k8s cluster.

kubeadm init printed this line:

kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8

Running this command on the two worker nodes joins them to the k8s cluster.

If the join token is lost, generate a fresh join command with:

[root@k8scloude1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8 

Run the join command on the other two nodes

[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8scloude3 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the node status on k8scloude1: both worker nodes have joined the k8s cluster

[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8scloude1   NotReady   control-plane,master   8m43s   v1.21.0
k8scloude2   NotReady   <none>                 28s     v1.21.0
k8scloude3   NotReady   <none>                 25s     v1.21.0

After joining the cluster, each worker node has pulled two additional images

[root@k8scloude2 ~]# docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.21.0   38ddd85fe90e   9 months ago    122MB
registry.aliyuncs.com/google_containers/pause        3.4.1     0f8457a4c2ec   12 months ago   683kB
[root@k8scloude3 ~]# docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.21.0   38ddd85fe90e   9 months ago    122MB
registry.aliyuncs.com/google_containers/pause        3.4.1     0f8457a4c2ec   12 months ago   683kB

4.7 Deploy the Calico CNI Plugin

Although the k8s cluster now has 1 master node and 2 worker nodes, all three nodes are still NotReady. The reason is that no CNI network plugin is installed, and inter-node pod communication requires one. The common CNI plugins are calico and flannel; the difference is that flannel does not support complex network policies, while calico does. Since k8s network policies (networkpolicy) will be configured later, this article uses calico as the CNI plugin.

Now download the calico.yaml file from the official site (https://projectcalico.docs.tigera.io/about/about-calico); searching the site for calico.yaml turns up the download command:

[root@k8scloude1 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  212k  100  212k    0     0  44222      0  0:00:04  0:00:04 --:--:-- 55704
[root@k8scloude1 ~]# ls
calico.yaml  

Check which calico images are needed; these four images must be pulled on every node, with k8scloude1 as the example

[root@k8scloude1 ~]# grep image calico.yaml
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/pod2daemon-flexvol:v3.21.2
          image: docker.io/calico/node:v3.21.2
          image: docker.io/calico/kube-controllers:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/cni:v3.21.2
v3.21.2: Pulling from calico/cni
Digest: sha256:ce618d26e7976c40958ea92d40666946d5c997cd2f084b6a794916dc9e28061b
Status: Image is up to date for calico/cni:v3.21.2
docker.io/calico/cni:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/pod2daemon-flexvol:v3.21.2
v3.21.2: Pulling from calico/pod2daemon-flexvol
Digest: sha256:b034c7c886e697735a5f24e52940d6d19e5f0cb5bf7caafd92ddbc7745cfd01e
Status: Image is up to date for calico/pod2daemon-flexvol:v3.21.2
docker.io/calico/pod2daemon-flexvol:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/node:v3.21.2
v3.21.2: Pulling from calico/node
Digest: sha256:6912fe45eb85f166de65e2c56937ffb58c935187a84e794fe21e06de6322a4d0
Status: Image is up to date for calico/node:v3.21.2
docker.io/calico/node:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/kube-controllers:v3.21.2
v3.21.2: Pulling from calico/kube-controllers
d6a693444ed1: Pull complete 
a5399680e995: Pull complete 
8f0eb4c2bcba: Pull complete 
52fe18e41b06: Pull complete 
2f8d3f9f1a40: Pull complete 
bc94a7e3e934: Pull complete 
55bf7cf53020: Pull complete 
Digest: sha256:1f4fcdcd9d295342775977b574c3124530a4b8adf4782f3603a46272125f01bf
Status: Downloaded newer image for calico/kube-controllers:v3.21.2
docker.io/calico/kube-controllers:v3.21.2
#the 4 images that matter:
[root@k8scloude1 ~]# docker images 
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
calico/node                                                       v3.21.2    f1bca4d4ced2   4 weeks ago     214MB
calico/pod2daemon-flexvol                                         v3.21.2    7778dd57e506   5 weeks ago     21.3MB
calico/cni                                                        v3.21.2    4c5c32530391   5 weeks ago     239MB
calico/kube-controllers                                           v3.21.2    b20652406028   5 weeks ago     132MB

Edit calico.yaml: the CALICO_IPV4POOL_CIDR value must match the pod CIDR passed to kubeadm init, and the YAML indentation must stay aligned or the apply will fail

[root@k8scloude1 ~]# vim calico.yaml 
[root@k8scloude1 ~]# egrep "CALICO_IPV4POOL_CIDR|10.244" calico.yaml
             - name: CALICO_IPV4POOL_CIDR
               value: "10.244.0.0/16"
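
The stock calico.yaml ships this block commented out with a 192.168.0.0/16 default, so a pair of sed one-liners (a sketch assuming the stock comment format) can uncomment it and set the pod CIDR; verify the result with the egrep above, since the indentation must stay aligned:

#uncomment CALICO_IPV4POOL_CIDR and point it at the kubeadm pod CIDR
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml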

Apply the calico.yaml file

[root@k8scloude1 ~]# kubectl apply -f calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

All three nodes are now Ready

[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   53m   v1.21.0
k8scloude2   Ready    <none>                 45m   v1.21.0
k8scloude3   Ready    <none>                 45m   v1.21.0

4.8 Configure kubectl Tab Completion

Find the kubectl completion command

[root@k8scloude1 ~]# kubectl --help | grep bash
  completion    Output shell completion code for the specified shell (bash or zsh)

Add source <(kubectl completion bash) to /etc/profile and reload the file

[root@k8scloude1 ~]# cat /etc/profile | head -2
# /etc/profile
source <(kubectl completion bash)
[root@k8scloude1 ~]# source /etc/profile

kubectl commands can now be tab-completed

[root@k8scloude1 ~]# kubectl get nodes 
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   59m   v1.21.0
k8scloude2   Ready    <none>                 51m   v1.21.0
k8scloude3   Ready    <none>                 51m   v1.21.0
#note: the bash-completion-2.1-6.el7.noarch package is required, otherwise commands will not auto-complete
[root@k8scloude1 ~]# rpm -qa | grep bash
bash-completion-2.1-6.el7.noarch
bash-4.2.46-30.el7.x86_64
bash-doc-4.2.46-30.el7.x86_64
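
If editing /etc/profile system-wide is undesirable, the per-user variant from the kubectl documentation works the same way (an alternative, not what this walkthrough used):

#persist completion for the current user only
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc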

With that, the Kubernetes (k8s) cluster deployment is complete!
