
Deploying a Kubernetes 1.26.0 high-availability cluster on CentOS 7 with keepalived + nginx

Updated: 2025-01-11 16:46:55   Author: 南北二斗
Kubernetes is an open-source container orchestration platform for automatically deploying, scaling, and managing containerized applications. In production, the cluster needs multiple master nodes to provide redundancy and failover, which is what this guide sets up.

K8s cluster role    IP address         Hostname
master              192.168.209.116    k8s-master1
master              192.168.209.117    k8s-master2
master              192.168.209.118    k8s-master3
node                192.168.209.119    k8s-node1

echo "設置主機名"

echo "在 192.168.209.116 上執(zhí)行如下:"
hostnamectl set-hostname k8s-master1 && bash

echo "在 192.168.209.117 上執(zhí)行如下:"
hostnamectl set-hostname k8s-master2 && bash

echo "在 192.168.209.118 上執(zhí)行如下:"
hostnamectl set-hostname k8s-master3 && bash

echo "在 192.168.209.119 上執(zhí)行如下:"
hostnamectl set-hostname k8s-node1 && bash
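The certificate copy commands later in this guide address the other masters by hostname, so every node should be able to resolve the names above. A minimal sketch, assuming no internal DNS, to run on all nodes (IP/hostname pairs taken from the table above):

cat >> /etc/hosts <<'EOF'
192.168.209.116 k8s-master1
192.168.209.117 k8s-master2
192.168.209.118 k8s-master3
192.168.209.119 k8s-node1
EOF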

1. Initialization (run on all nodes)

echo "配置阿里云yum源"
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache


echo "更新系統(tǒng)并安裝必要工具..."
yum update -y
yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion


echo "禁用 SELinux 和防火墻..."
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld

echo "禁用 swap..."
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

echo "# 優(yōu)化系統(tǒng)配置,開啟 IP 轉發(fā)、關閉 swap 等"
echo "優(yōu)化系統(tǒng)配置..."
cat <<EOF | tee /etc/sysctl.d/k8s.conf
vm.swappiness = 0
vm.panic_on_oom = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
EOF

echo "Loading the br_netfilter module (it must be loaded before the bridge sysctls can be applied)..."
modprobe br_netfilter
lsmod | grep br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf
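The modprobe above does not persist across reboots. A small sketch using the standard systemd modules-load directory (the file name k8s.conf is an arbitrary choice):

cat > /etc/modules-load.d/k8s.conf <<'EOF'
br_netfilter
EOF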


echo "安裝 ipset 和 ipvsadm..."
yum -y install ipset ipvsadm


echo "配置 ipvsadm 模塊加載方式..."
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

2. Install containerd (run on all nodes)

echo "安裝 Containerd..."
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd.io
containerd config default > /etc/containerd/config.toml

# Edit /etc/containerd/config.toml as follows:
# 1. Change SystemdCgroup = false to SystemdCgroup = true

# 2. Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"

# 3. Add the following four lines under [plugins."io.containerd.grpc.v1.cri".registry.mirrors]:
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."swr.cn-north-4.myhuaweicloud.com"]
          endpoint = ["https://swr.cn-north-4.myhuaweicloud.com"]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io"]
systemctl enable --now containerd
systemctl start containerd
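The first two config.toml edits can also be applied non-interactively. A minimal sketch using sed with the exact strings quoted above (the registry mirror block from step 3 is indentation-sensitive and is easier to paste in by hand); restart containerd afterwards so it picks up the changes:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
systemctl restart containerd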

3. Install docker-ce (run on all nodes)

echo "停止舊版本docker"
sudo systemctl stop docker

echo "卸載舊版本docker"
# yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

sudo rm -rf /var/lib/docker
sudo rm -rf /run/docker
sudo rm -rf /var/run/docker
sudo rm -rf /etc/docker

echo "安裝docker-ce"

yum install -y yum-utils device-mapper-persistent-data lvm2 git
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce -y
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
    "https://2a6bf1988cb6428c877f723ec7530dbc.mirror.swr.myhuaweicloud.com",
    "https://docker.m.daocloud.io",
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com",
    "https://your_preferred_mirror",
    "https://dockerhub.icu",
    "https://docker.registry.cyou",
    "https://docker-cf.registry.cyou",
    "https://dockercf.jsdelivr.fyi",
    "https://docker.jsdelivr.fyi",
    "https://dockertest.jsdelivr.fyi",
    "https://mirror.aliyuncs.com",
    "https://dockerproxy.com",
    "https://docker.nju.edu.cn",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.iscas.ac.cn",
    "https://docker.rainbond.cc"
    ]
}
EOF
systemctl enable --now docker
systemctl restart docker
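A quick sanity check that the daemon picked up the systemd cgroup driver and the registry mirrors configured above:

docker info | grep -i "cgroup driver"     # should print: Cgroup Driver: systemd
docker info | grep -A 3 "Registry Mirrors"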

4. Install kubelet, kubeadm and kubectl (run on all nodes)

echo "安裝 Kubernetes 工具..."
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0

systemctl enable kubelet
systemctl restart kubelet
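Before continuing, confirm that all three components are at 1.26.0 (kubelet will keep restarting until kubeadm init runs; that is expected at this stage):

kubeadm version -o short
kubelet --version
kubectl version --client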

5. Install keepalived + nginx (master nodes only)

sudo yum install -y epel-release
sudo yum install -y nginx keepalived

echo "配置nginx"
vim /etc/nginx/nginx.conf

#在http塊的上方加上stream塊
...
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
            server 192.168.209.118:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 192.168.209.117:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 192.168.209.116:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
       listen 16443; # nginx runs on the same hosts as kube-apiserver, so this listen port must not be 6443 or the two would conflict
       proxy_pass k8s-apiserver;
    }
}

http {
    ...    # the rest of the default http block is left unchanged
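Note: the EPEL build of nginx ships the stream module as a dynamic module. If nginx -t complains that the stream directive is unknown, the sketch below installs it (package name nginx-mod-stream as found in EPEL 7; verify against your mirror) and re-checks the configuration:

yum install -y nginx-mod-stream
nginx -t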


echo "配置keepalived"
echo "--------k8s-master1配置--------------"


cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 100    # priority; the backup servers use 90 and 80
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP (VIP)
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}


echo "配置keepalived"
echo "--------k8s-master2配置--------------"


cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 90    # priority; 90 on this backup
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP (VIP)
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}


echo "配置keepalived"
echo "--------k8s-master3配置--------------"

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance
    priority 80    # priority; 80 on this backup
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP (VIP)
    virtual_ipaddress {
        192.168.209.111/24
    }
    track_script {
        check_nginx
    }
}


cat /etc/keepalived/check_nginx.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ $counter -eq 0 ]; then
    # 2. If it is not, try to start it
    service nginx start
    sleep 2
    # 3. Wait 2 seconds and check the nginx status again
    counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
    # 4. If nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service keepalived stop
    fi
fi
echo "啟動nginx和keepalived服務"

chmod +x /etc/keepalived/check_nginx.sh
systemctl daemon-reload && systemctl restart nginx
systemctl restart keepalived && systemctl enable nginx keepalived
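Once keepalived is running on all three masters, the VIP should be bound on whichever node currently holds the highest priority. A quick check, using the NIC and VIP configured above:

ip addr show ens33 | grep 192.168.209.111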

6. Initialize the Kubernetes cluster with kubeadm (run on k8s-master1 only)

Generate the default configuration with the command below, then edit the places called out in the comments so they look like this:

kubeadm config print init-defaults > kubeadm.yaml

cat kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
#localAPIEndpoint:            # comment out
#  advertiseAddress: 1.2.3.4  # comment out
#  bindPort: 6443             # comment out
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock   # point kubeadm at the containerd socket
  imagePullPolicy: IfNotPresent
#  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers     # change to the Aliyun image mirror
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
controlPlaneEndpoint: 192.168.209.111:16443                              # change to VIP + nginx listen port
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                                    # pod network CIDR (must match Calico below)
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd


kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

At this point the first master node is fully installed.

-----------------------------

Note: create the destination directory on k8s-master2 and k8s-master3 ahead of time:

mkdir -p /etc/kubernetes/pki/etcd/

echo "將k8s-master1中的證書scp到k8s-master2和k8s-master3節(jié)點"

cd  /etc/kubernetes/pki/
scp ca.* k8s-master2:/etc/kubernetes/pki/
scp sa.* k8s-master2:/etc/kubernetes/pki/
scp front-proxy-ca.*  k8s-master2:/etc/kubernetes/pki/
scp etcd/ca.*  k8s-master2:/etc/kubernetes/pki/etcd/

scp ca.* k8s-master3:/etc/kubernetes/pki/
scp sa.* k8s-master3:/etc/kubernetes/pki/
scp front-proxy-ca.*  k8s-master3:/etc/kubernetes/pki/
scp etcd/ca.*  k8s-master3:/etc/kubernetes/pki/etcd/

7. Join the remaining masters to the cluster (run on k8s-master2 and k8s-master3)

Append --control-plane --ignore-preflight-errors=SystemVerification to the join command that kubeadm init printed:

kubeadm join 192.168.209.111:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ec363a06d7681d941e7969fb6e994f4a4c1c4ef0d154c7290131c1e830b4bec5 \
        --control-plane --ignore-preflight-errors=SystemVerification
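After the control-plane join succeeds, kubectl can be set up on these masters the same way as on k8s-master1; a sketch repeating the commands from step 6:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes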


八、將node節(jié)點join到k8s集群(只在k8s-node1節(jié)點執(zhí)行)

kubeadm join 192.168.209.111:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ec363a06d7681d941e7969fb6e994f4a4c1c4ef0d154c7290131c1e830b4bec5   --ignore-preflight-errors=SystemVerification
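Back on k8s-master1, all four nodes should now be listed; they will stay NotReady until the Calico network plugin from the next step is installed:

kubectl get nodes -o wide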

9. Deploy the Calico network plugin (run on k8s-master1 only)

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O   # use the v3.25.0 manifest so the image tags match the sed rules below

sed -i 's|docker.io/calico/cni:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/cni:v3.25.0|g' calico.yaml
sed -i 's|docker.io/calico/node:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/node:v3.25.0|g' calico.yaml
sed -i 's|docker.io/calico/kube-controllers:v3.25.0|swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/kube-controllers:v3.25.0|g' calico.yaml
cat calico.yaml
# In the env list of the calico-node container, add these two lines
# (ens33 is the NIC used in this setup):
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"

# In the same env list, add these two lines (must match podSubnet in kubeadm.yaml):
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"


kubectl apply -f calico.yaml

Once the calico-related pods are all Running, the network plugin is working.
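A quick way to watch for that state from k8s-master1:

kubectl get pods -n kube-system -o wide | grep calico
kubectl get nodes    # every node should now report Ready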


Test that DNS resolution and the pod network work:

kubectl run busybox --image swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
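Inside the busybox shell, typical checks look like this (kubernetes.default always resolves through the cluster DNS; the external host is only an example and depends on outbound access):

nslookup kubernetes.default.svc.cluster.local
ping -c 3 www.baidu.com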


10. Configure etcd for high availability (edit the static manifest on each master)

vim /etc/kubernetes/manifests/etcd.yaml

# Change the line
#   - --initial-cluster=k8s-master1=https://192.168.209.116:2380
# to:
- --initial-cluster=k8s-master1=https://192.168.209.116:2380,k8s-master2=https://192.168.209.117:2380,k8s-master3=https://192.168.209.118:2380
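kubelet watches the static-manifest directory and recreates the etcd pod shortly after the file is saved. A quick check that all three etcd pods come back up (label as set by kubeadm):

kubectl get pods -n kube-system -l component=etcd -o wide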


Verify that the etcd cluster is configured correctly:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes  registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list


docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://192.168.209.116:2379,https://192.168.209.117:2379,https://192.168.209.118:2379 endpoint health --cluster

All endpoints reporting healthy means the cluster is working (the author had powered off k8s-master3 because of limited machine resources, so only two members were up during this run).


docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://192.168.209.116:2379,https://192.168.209.117:2379,https://192.168.209.118:2379  endpoint status --cluster

Seeing all three endpoints in the status table means everything is normal (again, k8s-master3 was powered off in the author's run).


11. Summary

This concludes the article on deploying a Kubernetes 1.26.0 high-availability cluster on CentOS 7 with keepalived + nginx. For more on installing and deploying highly available Kubernetes (k8s) clusters on CentOS 7, please search 脚本之家's earlier articles, and we hope you will continue to support 脚本之家!
