Detailed steps for cross-host Docker container communication with an overlay network
I. Overview of overlay networks
An overlay network encapsulates Layer 2 frames inside IP packets according to an agreed tunneling protocol, creating a new packet format on top of the existing network infrastructure without changing it. Docker's overlay driver uses VXLAN (Virtual Extensible LAN) to build a virtual network that connects containers on different hosts into the same logical network. Containers can then communicate as if they were on the same host, without having to care about the underlying network details.
II. Advantages of overlay networks
Cross-host communication: an overlay network lets containers on different hosts talk to each other, removing the isolation between hosts.
Scalability: an overlay network can support large numbers of containers and hosts, which suits large-scale containerized deployments.
Isolation: an overlay network gives different groups of containers independent network environments, avoiding address conflicts and interference.
Flexibility: containers can be added to or removed from an overlay network dynamically, without reconfiguring the network.
III. Steps to implement an overlay network
1. Prepare the environment
docker01  192.168.73.128  ens33  CentOS 7
docker02  192.168.73.129  ens33  CentOS 7
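Before initializing the swarm, the two hosts must be able to reach each other on the ports Docker uses for cluster management and overlay traffic: TCP 2377 (cluster management), TCP and UDP 7946 (node-to-node communication), and UDP 4789 (VXLAN data plane). A minimal sketch, assuming firewalld is running on both CentOS 7 hosts (skip it if the firewall is disabled):
# Run on both docker01 and docker02
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload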
2. Initialize a swarm cluster
# Initialize the swarm cluster
[root@docker01 ~]# docker swarm init --advertise-addr 192.168.73.128
Swarm initialized: current node (9fn9iyxhkxvjey06lwhgv7zhb) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1rqzq4jhf4wm77e8yn6s347sd189u84mr0u042kwxxv05n8bbx-1rv4ae95p4e4y6aztlhkvqtfx 192.168.73.128:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# Check the cluster nodes
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
9fn9iyxhkxvjey06lwhgv7zhb * docker01 Ready Active Leader 26.1.4
# List the networks
[root@docker01 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
6ad75bb1b055 bridge bridge local
f12f6a0f8bbf docker_gwbridge bridge local
b1c70e3e1ded host host local
116fnum2qea1 ingress overlay swarm
e9eddadcf473 none null local
3. Create the overlay network
# Create the overlay network with a custom subnet and gateway; if they are omitted, Docker generates them automatically
[root@docker01 ~]# docker network create -d overlay --subnet=192.168.100.0/24 --gateway=192.168.100.1 --attachable my-overlay
ue8rewtwd72difwr86gi5wwsl
# --attachable: allows standalone containers, as well as containers belonging to swarm services, to connect to this network. Swarm was originally designed around services (groups of containers), so overlay networks created through swarm did not initially allow standalone containers to join. Since Docker 1.13, the --attachable flag declares that standalone containers may attach directly to the overlay network being created.
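# For comparison, a swarm service could also be attached to this network, which is what overlay networks were originally designed for.
# A hypothetical example (not run in this walkthrough): docker service create --name web --replicas 2 --network my-overlay nginx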
[root@docker01 ~]# docker network inspect my-overlay
[
{
"Name": "my-overlay",
"Id": "ue8rewtwd72difwr86gi5wwsl",
"Created": "2024-10-12T06:04:04.784641012Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.100.0/24", # 自定義子網(wǎng)
"Gateway": "192.168.100.1" # 自定義網(wǎng)關
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098"
},
"Labels": null
}
]
4. Add docker02 to the swarm cluster
# Get the join command
[root@docker01 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1rqzq4jhf4wm77e8yn6s347sd189u84mr0u042kwxxv05n8bbx-1rv4ae95p4e4y6aztlhkvqtfx 192.168.73.128:2377
# Run the join command on docker02
[root@docker02 ~]# docker swarm join --token SWMTKN-1-1rqzq4jhf4wm77e8yn6s347sd189u84mr0u042kwxxv05n8bbx-1rv4ae95p4e4y6aztlhkvqtfx 192.168.73.128:2377
This node joined a swarm as a worker.
# On docker01, verify that the node joined successfully
[root@docker01 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
9fn9iyxhkxvjey06lwhgv7zhb * docker01 Ready Active Leader 26.1.4
4m1xin9edttdt1ljxtzy3q2ab docker02 Ready Active 26.1.4
5. Create a container on docker01
[root@docker01 ~]# docker run -d --name=busybox1 --network=my-overlay harbor.linux.com/k8s/busybox:latest /bin/sleep 3600
d8b2749823f570d6c08eccf2df369774323a72628e18e09bc9dfb09735bd00f0
# Check the container's IP addresses
[root@docker01 ~]# docker exec -it busybox1 /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:c0:a8:64:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.2/24 brd 192.168.100.255 scope global eth0
valid_lft forever preferred_lft forever
16: eth1@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
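Note the mtu 1450 on eth0: VXLAN encapsulation adds about 50 bytes of overhead per frame (outer Ethernet 14 + IP 20 + UDP 8 + VXLAN header 8 bytes), so assuming the standard 1500-byte MTU on ens33, the overlay interface is left with 1500 - 50 = 1450 bytes. eth1 (172.18.0.3/16) is attached to the local docker_gwbridge and keeps the full 1500-byte MTU; it carries traffic that leaves the overlay, such as the external pings later in this walkthrough.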
# Inspect the network again; it now shows the container's IP and the load-balancer (lb) endpoint IP
[root@docker01 ~]# docker network inspect my-overlay
[
{
"Name": "my-overlay",
"Id": "ue8rewtwd72difwr86gi5wwsl",
"Created": "2024-10-12T14:15:21.812458427+08:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.100.0/24",
"Gateway": "192.168.100.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"d8b2749823f570d6c08eccf2df369774323a72628e18e09bc9dfb09735bd00f0": {
"Name": "busybox1",
"EndpointID": "225db615a5d654cd813171ed4d07c7f485b362551507dd18125a4405c45b18fb",
"MacAddress": "02:42:c0:a8:64:02",
"IPv4Address": "192.168.100.2/24",
"IPv6Address": ""
},
"lb-my-overlay": {
"Name": "my-overlay-endpoint",
"EndpointID": "cce907ae37301e39dbd55abcba03e50e562f589f7d08e6c57ace8891d4785747",
"MacAddress": "02:42:c0:a8:64:03",
"IPv4Address": "192.168.100.3/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098"
},
"Labels": {},
"Peers": [
{
"Name": "9fc55aae1502",
"IP": "192.168.73.128"
}
]
}
]
6. Create a container on docker02
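Before this container starts, my-overlay may not yet appear in docker network ls on docker02: Docker only extends an overlay network to a worker node once a container or service task on that node attaches to it. A quick check (sketch):
[root@docker02 ~]# docker network ls | grep my-overlay    # typically empty until busybox2 is started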
[root@docker02 ~]# docker run -d --name=busybox2 --network=my-overlay harbor.linux.com/k8s/busybox:latest /bin/sleep 3600
3dd7b088c68f0247d312070bc63e15a697d5e8aa198f7bf53450ea96843cc41b
# Check the container's IP addresses
[root@docker02 ~]# docker exec -it busybox2 /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:c0:a8:64:04 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.4/24 brd 192.168.100.255 scope global eth0
valid_lft forever preferred_lft forever
15: eth1@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
# Inspect the network again; it now shows the container's IP and the lb endpoint IP
[root@docker02 ~]# docker network inspect my-overlay
[
{
"Name": "my-overlay",
"Id": "ue8rewtwd72difwr86gi5wwsl",
"Created": "2024-10-12T14:21:11.23116209+08:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.100.0/24",
"Gateway": "192.168.100.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3dd7b088c68f0247d312070bc63e15a697d5e8aa198f7bf53450ea96843cc41b": {
"Name": "busybox2",
"EndpointID": "e0b40de6b74e610d1e13318f8420fcaa5b7e8b9bcd9c68f6faaa9931ea72f992",
"MacAddress": "02:42:c0:a8:64:04",
"IPv4Address": "192.168.100.4/24",
"IPv6Address": ""
},
"lb-my-overlay": {
"Name": "my-overlay-endpoint",
"EndpointID": "4fe0523b017be80ce24bddd8b034633f998e36630b100d3a0468ce02ba4e533d",
"MacAddress": "02:42:c0:a8:64:05",
"IPv4Address": "192.168.100.5/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098"
},
"Labels": {},
"Peers": [
{
"Name": "9fc55aae1502",
"IP": "192.168.73.128"
},
{
"Name": "473174a88eb9",
"IP": "192.168.73.129"
}
]
}
]
7. Ping each container from inside the other
# Ping internal and external addresses from docker01
[root@docker01 ~]# docker exec -it busybox1 /bin/sh
/ # ping 192.168.100.4
PING 192.168.100.4 (192.168.100.4): 56 data bytes
64 bytes from 192.168.100.4: seq=0 ttl=64 time=1.033 ms
64 bytes from 192.168.100.4: seq=1 ttl=64 time=1.702 ms
64 bytes from 192.168.100.4: seq=2 ttl=64 time=2.209 ms
^C
--- 192.168.100.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.033/1.648/2.209 ms
/ # ping 192.168.100.5
PING 192.168.100.5 (192.168.100.5): 56 data bytes
64 bytes from 192.168.100.5: seq=0 ttl=64 time=0.897 ms
64 bytes from 192.168.100.5: seq=1 ttl=64 time=1.988 ms
64 bytes from 192.168.100.5: seq=2 ttl=64 time=0.893 ms
^C
--- 192.168.100.5 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.893/1.259/1.988 ms
/ # ping www.baidu.com
PING www.baidu.com (182.61.200.7): 56 data bytes
64 bytes from 182.61.200.7: seq=0 ttl=127 time=21.864 ms
64 bytes from 182.61.200.7: seq=1 ttl=127 time=23.486 ms
64 bytes from 182.61.200.7: seq=2 ttl=127 time=23.225 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 21.864/22.858/23.486 ms
# Ping internal and external addresses from docker02
[root@docker02 ~]# docker exec -it busybox2 /bin/sh
/ # ping 192.168.100.2
PING 192.168.100.2 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=64 time=1.001 ms
64 bytes from 192.168.100.2: seq=1 ttl=64 time=1.374 ms
64 bytes from 192.168.100.2: seq=2 ttl=64 time=1.990 ms
^C
--- 192.168.100.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.001/1.455/1.990 ms
/ # ping 192.168.100.3
PING 192.168.100.3 (192.168.100.3): 56 data bytes
64 bytes from 192.168.100.3: seq=0 ttl=64 time=0.572 ms
64 bytes from 192.168.100.3: seq=1 ttl=64 time=0.999 ms
64 bytes from 192.168.100.3: seq=2 ttl=64 time=1.199 ms
^C
--- 192.168.100.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.572/0.923/1.199 ms
/ # ping www.baidu.com
PING www.baidu.com (182.61.200.6): 56 data bytes
64 bytes from 182.61.200.6: seq=0 ttl=127 time=22.134 ms
64 bytes from 182.61.200.6: seq=1 ttl=127 time=22.375 ms
64 bytes from 182.61.200.6: seq=2 ttl=127 time=22.372 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 22.134/22.293/22.375 ms
# Why the overlay network can reach both internal and external addresses
# Internal communication:
1. In an overlay network, containers communicate through virtual tunnels. Each container has a unique IP address that is valid inside the overlay network.
2. When one container sends data to another, the data is encapsulated in an overlay packet and carried through the tunnel to the target container.
3. Because the overlay network is built on top of the physical network, communication between containers is not affected by the details of the physical network.
# External communication:
1. The overlay network is usually connected to the physical network so that containers can also reach external networks.
2. In that case, the traffic a container sends out is forwarded to the external network through the host's gateway or router.
3. The external network de-encapsulates the received data and routes and forwards it according to the destination IP address.
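Because both containers are attached to the same user-defined overlay network, Docker's embedded DNS (at 127.0.0.11 inside each container) can normally resolve them by container name as well, so the IP addresses above do not have to be hard-coded. A quick sketch from busybox1, assuming the container names used in this walkthrough:
/ # ping -c 2 busybox2    # should resolve to 192.168.100.4 and succeed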

VXLAN (Virtual eXtensible Local Area Network) is a network virtualization technology that uses a tunneling protocol to encapsulate Layer 2 Ethernet frames inside Layer 3 IP packets, providing Layer 2 connectivity across physical networks. An interface named like vxlan or vethXXX here is a virtual Ethernet endpoint connected through the VXLAN tunnel; it encapsulates container traffic and carries it through the host to other containers or to external networks. VXLAN is the concrete mechanism, while overlay is only the network model; in one sentence, VXLAN is one implementation of an overlay network.
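To confirm that cross-host traffic really is VXLAN-encapsulated, capture on the physical interface of either host while the containers ping each other; the overlay packets show up as UDP on port 4789. A sketch, assuming tcpdump is installed on docker01:
[root@docker01 ~]# tcpdump -nn -i ens33 udp port 4789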
This concludes the walkthrough of cross-host Docker container communication with an overlay network.