Deploying Kubernetes 1.25.6 from Binaries
1. Base Environment
| Hostname | IP address |
| --- | --- |
| master1 | 10.66.6.2 |
| node1 | 10.66.6.4 |
| node2 | 10.66.6.5 |
Notes:
In a full HA setup there are two master nodes behind an nginx proxy (only master1 appears in this walkthrough); a minimal example nginx config follows these notes.
The OS is Ubuntu 20.04.
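Since the nginx proxy itself is never shown, here is a minimal sketch of an L4 (stream) load balancer in front of the apiservers. The second master IP 10.66.6.3 and the proxy host are assumptions, not values from this deployment (10.66.6.3 does match the spare SANs reserved in the certificates below):
# /etc/nginx/nginx.conf fragment on the proxy host (assumed, not part of the original article)
stream {
    upstream kube_apiserver {
        least_conn;
        server 10.66.6.2:6443 max_fails=3 fail_timeout=30s;  # master1
        server 10.66.6.3:6443 max_fails=3 fail_timeout=30s;  # assumed second master
    }
    server {
        listen 6443;               # point kubeconfigs at this proxy address
        proxy_pass kube_apiserver;
        proxy_connect_timeout 1s;
        proxy_timeout 10m;
    }
}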
2. Base Environment Configuration
Set up passwordless SSH from master1 to all nodes using a public key, as sketched below.
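A minimal sketch (run on master1 as root; assumes password SSH works once so ssh-copy-id can install the key, and uses the hostnames from the table above):
ssh-keygen -t rsa -b 2048 -N '' -f /root/.ssh/id_rsa
for host in master1 node1 node2; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host   # prompts for the root password once per host
done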
2.1 Configure /etc/hosts on all nodes
cat >> /etc/hosts <<EOF
10.66.6.2 master1
10.66.6.4 node1
10.66.6.5 node2
EOF
2.2 Disable the firewall, SELinux, dnsmasq, and swap
# Disable the firewall (on stock Ubuntu these units may not exist; use "ufw disable" instead and skip the SELinux steps)
systemctl disable --now firewalld
# Disable dnsmasq
systemctl disable --now dnsmasq
# Disable postfix
systemctl disable --now postfix
# Disable NetworkManager
systemctl disable --now NetworkManager
# Disable SELinux (only if /etc/selinux/config exists)
sed -ri 's/(^SELINUX=).*/\1disabled/' /etc/selinux/config
setenforce 0
# Disable swap
sed -ri 's@(^.*swap *swap.*0 0$)@#\1@' /etc/fstab
swapoff -a
2.3 Configure time synchronization
# Install ntpdate
apt-get install ntpdate -y
# Run a sync; use your own NTP server if you have one
ntpdate ntp1.aliyun.com
# Add a cron job (via crontab -e):
0 */1 * * * ntpdate ntp1.aliyun.com
2.4 Raise resource limits on all nodes
cat > /etc/security/limits.conf <<EOF
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
EOF
2.5 Install base packages
apt-get install ipvsadm ipset conntrack sysstat libseccomp psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl -y
2.6 Upgrade the system kernel
# Check the running kernel
uname -r
# List kernels available in the package repos
sudo apt list 2>/dev/null | grep linux-generic
# Install the HWE kernel
apt-get install linux-generic-hwe-20.04-edge/focal-updates
# Or use the mainline-kernel helper script: download it
wget https://raw.githubusercontent.com/pimlie/ubuntu-mainline-kernel.sh/master/ubuntu-mainline-kernel.sh
# Put the script on the executable path
install ubuntu-mainline-kernel.sh /usr/local/bin/
# Check the latest available kernel version
ubuntu-mainline-kernel.sh -c
# Once you have confirmed this is the version you want, install it
ubuntu-mainline-kernel.sh -i
# Reboot, then verify
reboot
uname -rs
2.7 Tune kernel parameters
cat >/etc/sysctl.conf<<EOF
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
net.ipv4.neigh.default.gc_stale_time=120
# rp_filter defaults to 1; strict reverse-path validation can drop legitimate packets
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.ip_local_port_range=45001 65000
net.ipv4.ip_forward=1
net.ipv4.tcp_max_tw_buckets=6000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_synack_retries=2
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.netfilter.nf_conntrack_max=2310720
net.ipv6.neigh.default.gc_thresh1=8192
net.ipv6.neigh.default.gc_thresh2=32768
net.ipv6.neigh.default.gc_thresh3=65536
# per-CPU backlog queue length for network devices
net.core.netdev_max_backlog=16384
# max socket read/write buffer sizes for all protocol types
net.core.rmem_max=16777216
net.core.wmem_max=16777216
# SYN (first) backlog queue length
net.ipv4.tcp_max_syn_backlog=8096
# accept (second) backlog queue length
net.core.somaxconn=32768
# max inotify instances per real user ID (default 128)
fs.inotify.max_user_instances=8192
# max watches a single user may add (default 8192)
fs.inotify.max_user_watches=524288
fs.file-max=52706963
fs.nr_open=52706963
kernel.pid_max=4194303
net.bridge.bridge-nf-call-arptables=1
# avoid swap; only use it when the system would otherwise OOM
vm.swappiness=0
# do not check whether physical memory is sufficient
vm.overcommit_memory=1
# let the OOM killer run instead of panicking
vm.panic_on_oom=0
vm.max_map_count=262144
EOF
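The settings do not take effect until loaded. As a hedged follow-up, load the modules the keys depend on first, then apply:
modprobe br_netfilter    # provides the net.bridge.* keys
modprobe nf_conntrack    # provides net.netfilter.nf_conntrack_max (also loaded by section 2.8)
sysctl -p                # or: sysctl --system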
2.8 Load the ipvs modules
cat >/etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
EOF
systemctl enable --now systemd-modules-load.service
# Reboot, then verify
lsmod | grep -e ip_vs -e nf_conntrack
3. Prepare the Packages
Download links (GitHub unless noted):
- kubernetes 1.25.6
https://dl.k8s.io/v1.25.6/kubernetes-server-linux-amd64.tar.gz
- etcd
https://github.com/etcd-io/etcd/releases/download/v3.5.7/etcd-v3.5.7-linux-amd64.tar.gz
- docker-ce (static binaries)
https://download.docker.com/linux/static/stable/x86_64/
- cri-dockerd
https://github.com/Mirantis/cri-dockerd/releases
- containerd
https://github.com/containerd/containerd/releases
- cfssl
https://github.com/cloudflare/cfssl/releases
4. Install docker and cri-dockerd
4.1 Install docker-ce
tar xf docker-23.0.1.tgz
cp docker/* /usr/bin
containerd unit file
cat > /usr/lib/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
docker unit file
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
EOF
docker socket file
cat > /usr/lib/systemd/system/docker.socket << EOF
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
Create the docker daemon config
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable and start the services at boot
groupadd docker
systemctl enable --now containerd.service
systemctl enable --now docker.socket
systemctl enable --now docker.service
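A quick hedged sanity check that both daemons are up and docker uses the systemd cgroup driver:
systemctl is-active containerd docker
docker info --format '{{.CgroupDriver}}'   # expect: systemd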
4.2 Install cri-dockerd
tar xf cri-dockerd-0.3.1.amd64.tgz
cp cri-dockerd/* /usr/bin
Create the unit file
cat > /usr/lib/systemd/system/cri-docker.service << EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=kubernetes/pause:latest
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Create the cri-docker socket file
cat > /usr/lib/systemd/system/cri-docker.socket << EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
Enable and start at boot
systemctl enable --now cri-docker.socket
systemctl enable --now cri-docker
4.3 Install containerd (the tarball's bin/ directory lands in /bin, which is merged with /usr/bin on Ubuntu 20.04)
tar xf containerd-1.6.19-linux-amd64.tar.gz -C /
cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service
Write the default config and restart (the binary is on PATH at /usr/bin after the extraction above)
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
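One caveat: the kubelet below runs with cgroupDriver: systemd, while containerd config default emits SystemdCgroup = false. Assuming the stock containerd 1.6 config layout, flip it and restart:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd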
4.4 Install the crictl client
# Unpack
tar xf crictl-v1.22.0-linux-amd64.tar.gz -C /usr/bin/
# Generate the config file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
# Test
crictl info
4.5 Install the cfssl tools
# Run on the master node
tar xf cfssl-1.6.3.tar.gz -C /usr/bin
mkdir /opt/pki/{etcd,kubernetes} -p
5. Generate the Kubernetes Cluster Certificates
Work on the master node.
5.1 Generate the etcd CA
mkdir /opt/pki/etcd/ -p
cd /opt/pki/etcd/
# Create a directory for the etcd CA
mkdir ca
# Generate the etcd CA config and CSR files
cd ca/
Generate the CA config file
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"etcd": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
# Generate the CSR file
cat > ca-csr.json <<EOF
{
"CA":{"expiry":"87600h"},
"CN": "etcd-cluster",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "etcd-cluster",
"OU": "System"
}
]
}
EOF
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Generate the etcd server certificate
cd /opt/pki/etcd/
cat > etcd-server-csr.json << EOF
{
"CN": "etcd-server",
"hosts": [
"10.66.6.2",
"10.66.6.3",
"10.66.6.4",
"10.66.6.5",
"10.66.6.6",
"127.0.0.1"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "etcd-server",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert \
  -ca=ca/ca.pem \
  -ca-key=ca/ca-key.pem \
  -config=ca/ca-config.json \
  -profile=etcd \
  etcd-server-csr.json | cfssljson -bare etcd-server
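A quick hedged check of the issued certificate (openssl assumed available; the SANs should list the IPs from the CSR):
openssl x509 -in etcd-server.pem -noout -subject -dates
openssl x509 -in etcd-server.pem -noout -ext subjectAltName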
Generate the etcd client certificate
# Generate the CSR file
cd /opt/pki/etcd/
cat > etcd-client-csr.json << EOF
{
"CN": "etcd-client",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "etcd-client",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert \
  -ca=ca/ca.pem \
  -ca-key=ca/ca-key.pem \
  -config=ca/ca-config.json \
  -profile=etcd \
  etcd-client-csr.json | cfssljson -bare etcd-client
Copy the certificates to the master and node hosts
# The $master and $node host lists are used throughout; define them once (values taken from the table above)
master="master1"
node="node1 node2"
for i in $master;do
  ssh $i "mkdir /etc/etcd/ssl -p"
  scp /opt/pki/etcd/ca/ca.pem /opt/pki/etcd/{etcd-server.pem,etcd-server-key.pem,etcd-client.pem,etcd-client-key.pem} $i:/etc/etcd/ssl/
done
5.2 Create the Certificates for the Kubernetes Components
5.2.1 Create the Kubernetes CA
mkdir /opt/pki/kubernetes/ -p
cd /opt/pki/kubernetes/
mkdir ca
cd ca
Create the CA config file
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
Generate the CA CSR file
cat > ca-csr.json <<EOF
{
"CA":{"expiry":"87600h"},
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "kubernetes",
"OU": "System"
}
]
}
EOF
Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
5.3 Create the kube-apiserver certificate
mkdir /opt/pki/kubernetes/kube-apiserver -p
cd /opt/pki/kubernetes/kube-apiserver
Generate the CSR file
cat > kube-apiserver-csr.json <<EOF
{
"CN": "kube-apiserver",
"hosts": [
"127.0.0.1",
"10.66.6.2",
"10.66.6.3",
"10.66.6.4",
"10.66.6.5",
"10.66.6.6",
"10.66.6.7",
"10.200.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "kube-apiserver",
"OU": "System"
}
]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-apiserver-csr.json | cfssljson -bare kube-apiserver
# Copy the certificates to the master nodes
for i in $master;do
  ssh $i "mkdir /etc/kubernetes/pki -p"
  scp /opt/pki/kubernetes/ca/{ca.pem,ca-key.pem} /opt/pki/kubernetes/kube-apiserver/{kube-apiserver-key.pem,kube-apiserver.pem} $i:/etc/kubernetes/pki
done
# Copy the CA to the node hosts
for i in $node;do
  ssh $i "mkdir /etc/kubernetes/pki -p"
  scp /opt/pki/kubernetes/ca/ca.pem $i:/etc/kubernetes/pki
done
5.4 Create the proxy-client CA and certificate
mkdir /opt/pki/proxy-client
cd /opt/pki/proxy-client
Generate the CA CSR file
cat > front-proxy-ca-csr.json <<EOF
{
"CA":{"expiry":"87600h"},
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
Generate the CA
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
Generate the client CSR file
cat > front-proxy-client-csr.json <<EOF
{
"CN": "front-proxy-client",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
Generate the certificate
cfssl gencert \
  -ca=front-proxy-ca.pem \
  -ca-key=front-proxy-ca-key.pem \
  -config=../kubernetes/ca/ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare front-proxy-client
Copy the certificates to the hosts
for i in $master;do
  scp /opt/pki/proxy-client/{front-proxy-ca.pem,front-proxy-client.pem,front-proxy-client-key.pem} $i:/etc/kubernetes/pki
done
for i in $node;do
  scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki
done
5.5 Create the kube-controller-manager certificate and kubeconfig
mkdir /opt/pki/kubernetes/kube-controller-manager
cd /opt/pki/kubernetes/kube-controller-manager
Generate the CSR file
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "System"
}
]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Generate the kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=../ca/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
Copy the kubeconfig to the master nodes
for i in $master;do
  scp /opt/pki/kubernetes/kube-controller-manager/kube-controller-manager.kubeconfig $i:/etc/kubernetes
done
5.6 Generate the kube-scheduler certificate and kubeconfig
mkdir /opt/pki/kubernetes/kube-scheduler
cd /opt/pki/kubernetes/kube-scheduler
Generate the CSR file
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "System"
}
]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Generate the kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=../ca/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
Copy the kubeconfig to the master nodes
for i in $master;do
  scp /opt/pki/kubernetes/kube-scheduler/kube-scheduler.kubeconfig $i:/etc/kubernetes
done
5.7 Generate the cluster admin certificate
mkdir /opt/pki/kubernetes/admin
cd /opt/pki/kubernetes/admin
Generate the CSR file
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"TS": "Beijing",
"L": "Beijing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
Generate the kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=../ca/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
6. Deploy etcd
6.1 Install etcd
tar xf etcd-v3.5.7-linux-amd64.tar.gz
cp etcd-v3.5.7-linux-amd64/etcd* /usr/bin/
rm -rf etcd-v3.5.7-linux-amd64
Create the config file
cat > /etc/etcd/etcd.config.yml <<EOF
name: 'etcd-1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.66.6.2:2380'
listen-client-urls: 'https://10.66.6.2:2379,https://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.66.6.2:2380'
advertise-client-urls: 'https://10.66.6.2:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-1=https://10.66.6.2:2380'  # list every etcd member here for a multi-node cluster
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
Create the unit file
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
systemctl enable --now etcd
6.2 Configure the etcdctl client
# Set global environment variables
cat > /etc/profile.d/etcdctl.sh <<EOF
#!/bin/bash
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/etcd/ssl/etcd-client.pem
export ETCDCTL_KEY=/etc/etcd/ssl/etcd-client-key.pem
EOF
# Load it
source /etc/profile
# Verify cluster membership
etcdctl member list
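Two further health checks using the same client settings:
etcdctl endpoint health --write-out=table   # per-endpoint health and round-trip time
etcdctl endpoint status --write-out=table   # leader, raft term, DB size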
7. Deploy Kubernetes
Distribute the binaries
tar xf kubernetes-server-linux-amd64.tar.gz
# Distribute the master components
for i in $master;do
scp kubernetes/server/bin/{kubeadm,kube-apiserver,kube-controller-manager,kube-scheduler,kube-proxy,kubelet,kubectl} $i:/usr/bin
done
# Distribute the node components
for i in $node;do
scp kubernetes/server/bin/{kube-proxy,kubelet} $i:/usr/bin
done
7.1 Install kube-apiserver
# Create the ServiceAccount signing key pair
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
# Distribute the keys to the master nodes
for i in $master;do
scp /etc/kubernetes/pki/{sa.pub,sa.key} $i:/etc/kubernetes/pki/
done
Create the service unit file
a=`ifconfig eth0 | awk 'NR==2{print $2}'`
cat > /etc/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-apiserver \\
--v=2 \\
--logtostderr=true \\
--allow-privileged=true \\
--bind-address=$a \\
--secure-port=6443 \\
--advertise-address=$a \\
--service-cluster-ip-range=10.200.0.0/16 \\
--service-node-port-range=30000-42767 \\
--etcd-servers=https://10.66.6.2:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd-client.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-client-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=front-proxy-client \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
Start the service
systemctl enable --now kube-apiserver.service
7.2 Install kube-controller-manager
# Create the service unit file
cat > /etc/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=10.100.0.0/16 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
# Start the service
systemctl enable --now kube-controller-manager.service
7.3 Install kube-scheduler
# Create the service unit file
cat > /etc/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
# Start the service
systemctl enable --now kube-scheduler.service
7.4 Deploy kubectl on the master node
mkdir /root/.kube/ -p
cp /opt/pki/kubernetes/admin/admin.kubeconfig /root/.kube/config
Verify:
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0               Healthy   {"health":"true"}
7.5 Deploy the kubelet
7.5.1 Authenticate kubelets automatically with TLS Bootstrapping
Create the TLS Bootstrapping token
mkdir /opt/pki/kubernetes/kubelet -p
cd /opt/pki/kubernetes/kubelet
# Generate a random token id ($a) and secret ($b)
a=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c6`
b=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c16`
Generate the RBAC binding manifest
cat > bootstrap.secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
name: bootstrap-token-$a
namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
description: "The default bootstrap token for kubelet TLS bootstrapping."
token-id: $a
token-secret: $b
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-certificate-rotation
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kube-apiserver
EOF
Generate the bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=../ca/ca.pem \
  --embed-certs=true \
  --server=https://10.66.6.2:6443 \
  --kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
  --token=$a.$b \
  --kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes \
  --user=tls-bootstrap-token-user \
  --kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=bootstrap-kubelet.kubeconfig
# Apply the RBAC bindings
kubectl apply -f bootstrap.secret.yaml
Distribute the bootstrap kubeconfig
for i in $node;do
  ssh $i "mkdir /etc/kubernetes -p"
  scp /opt/pki/kubernetes/kubelet/bootstrap-kubelet.kubeconfig $i:/etc/kubernetes
done
7.5.2 Deploy the kubelet service
Option A: run pods with docker (cri-dockerd) as the runtime
mkdir /etc/systemd/system/kubelet.service.d/ -p
mkdir /etc/kubernetes/manifests/ -p
Create the unit file
cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
Create the drop-in config file
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf << EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--hostname-override=10.66.6.2"
Environment="KUBELET_RUNTIME=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node=''"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RUNTIME
EOF
Option B: run pods with containerd as the runtime
a=`ifconfig eth0 | awk 'NR==2{print $2}'`
mkdir /etc/systemd/system/kubelet.service.d/ -p
mkdir /etc/kubernetes/manifests/ -p
# Create the unit file
cat > /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Create the drop-in config file
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--hostname-override=$a"
Environment="KUBELET_RINTIME=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RINTIME
EOF
Generate the kubelet config file
a=`ifconfig eth0 | awk 'NR==2{print $2}'`
# Generate the config
cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: $a
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.200.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
Start the service
systemctl enable --now kubelet.service
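With TLS bootstrapping in place the kubelet should request a certificate and self-register; a quick check from the master:
kubectl get csr            # bootstrap CSRs should show Approved,Issued
kubectl get nodes -o wide  # nodes stay NotReady until the CNI plugin is installed in section 8.1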
7.6 Deploy kube-proxy
mkdir /opt/pki/kubernetes/kube-proxy/ -p
cd /opt/pki/kubernetes/kube-proxy/
Generate the kubeconfig
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
cat >kube-proxy-secret.yml<<EOF
apiVersion: v1
kind: Secret
metadata:
name: kube-proxy
namespace: kube-system
annotations:
kubernetes.io/service-account.name: "kube-proxy"
type: kubernetes.io/service-account-token
EOF
kubectl apply -f kube-proxy-secret.yml
JWT_TOKEN=$(kubectl -n kube-system get secret/kube-proxy \
--output=jsonpath='{.data.token}' | base64 -d)
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.66.6.2:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kubernetes \
--token=${JWT_TOKEN} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=kubernetes \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context kubernetes \
--kubeconfig=kube-proxy.kubeconfig
Copy the kubeconfig to the nodes
for i in $node;do
  scp /opt/pki/kubernetes/kube-proxy/kube-proxy.kubeconfig $i:/etc/kubernetes
done
Create the service unit file
cat > /etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
Generate the config file
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.66.6.2   # set to each node's own IP
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "10.66.6.2"   # set to each node's own IP
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
Start the service
systemctl enable --now kube-proxy.service
Verify the proxy mode
curl 127.0.0.1:10249/proxyMode
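The endpoint should answer ipvs. You can also inspect the IPVS tables directly (ipvsadm was installed back in section 2.5):
ipvsadm -Ln   # list virtual servers and their backends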
8. Install Add-ons
8.1 Install the calico network plugin
Download the manifest:
https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico-typha.yaml
Edit the pod CIDR to match the cluster:
- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16"
Then apply and verify:
kubectl apply -f calico-typha.yaml
kubectl get node
8.2 Install the calicoctl client
mkdir /etc/calico -p
cat >/etc/calico/calicoctl.cfg <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
EOF
# Verify
calicoctl node status
8.3 Install the dashboard
Manifest:
https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Edit the Service in the yaml file:
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort # added
ports:
- port: 443
targetPort: 8443
nodePort: 30001 # added
selector:
k8s-app: kubernetes-dashboard
# Apply it
kubectl apply -f dashboard.yaml
Create an admin user
cat >admin.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
name: admin-user
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
EOF
Apply the manifest and fetch the login token
kubectl apply -f admin.yaml
# Get the user's token
kubectl describe secrets -n kubernetes-dashboard admin-user
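On Kubernetes 1.24+ you can alternatively mint a short-lived token without creating the Secret at all:
kubectl -n kubernetes-dashboard create token admin-user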
8.4 Install metrics-server
Download:
https://github.com/kubernetes-sigs/metrics-server/
Copy the front-proxy CA to the nodes
for i in $node;do
  scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki/
done
Edit the metrics-server container args and volumes in components.yaml:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-extra-headers-prefix=X-Remote-Extra-
volumeMounts:
- mountPath: /tmp
name: tmp-dir
- mountPath: /etc/kubernetes/pki
name: ca-ssl
volumes:
- emptyDir: {}
name: tmp-dir
- name: ca-ssl
hostPath:
path: /etc/kubernetes/pki
kubectl apply -f components.yaml
# Verify
kubectl top node
Summary
The above is based on personal experience; I hope it provides a useful reference.