How Kubernetes pods use SR-IOV
This article explains how to use Multus to give pods direct access to SR-IOV virtual functions.
1. SR-IOV overview
SR-IOV (Single-Root I/O Virtualization) was introduced by Intel around 2010. It is a hardware-based virtualization solution that lets multiple virtual machines efficiently share a PCIe device while achieving I/O performance close to that of the physical device, improving both performance and scalability.
In traditional virtualization, a VM's NIC is usually attached through a bridge (Linux bridge or OVS), because that approach is the simplest and most convenient, but its biggest drawback is performance. With the rise of container technology, Intel also released open-source components for using SR-IOV with containers, such as sriov-cni and sriov-device-plugin, so SR-IOV is now widely used in the container world as well.
SR-IOV works by exposing virtualized channels to users. These channels come in two kinds:
- PF (Physical Function): the physical-level channel of the PCIe device. A PF can be regarded as a complete PCIe device that contains the SR-IOV capability structure and can manage and configure VFs.
- VF (Virtual Function): the virtual-level channel of the PCIe device, containing only the I/O function. VFs share the device's physical resources. A VF is a lightweight PCIe function that can only configure its own resources; a virtual machine cannot manage the SR-IOV NIC through a VF. All VFs are derived from a PF, and some SR-IOV NIC models can create up to 256 VFs.
Packet distribution in an SR-IOV device
Logically, a NIC with SR-IOV enabled can be thought of as containing a built-in switch that connects all PF and VF ports and distributes packets based on the MAC addresses and VLAN IDs of the VFs and the PF.
- Ingress (packets entering the NIC from outside): if a packet's destination MAC address and VLAN ID match a VF, the packet is delivered to that VF; otherwise it goes to the PF. If the destination MAC address is the broadcast address, the packet is broadcast within its VLAN, and every VF with the same VLAN ID receives it.
- Egress (packets sent from the PF or a VF): if a packet's destination MAC address does not match any port (VF or PF) in the same VLAN, the packet is forwarded out of the NIC; otherwise it is switched internally to the matching port. If the destination MAC address is the broadcast address, the packet is broadcast both within the VLAN and out of the NIC. Note: all VFs and the PF that have no VLAN ID configured can be considered to be in the same LAN, and untagged packets are handled in that LAN according to the rules above. In addition, a VF with a VLAN configured automatically tags outgoing packets, and on receive you can choose whether the hardware strips the VLAN header.
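The MAC and VLAN assignments that drive this switching can be configured per VF from the host with ip link. A minimal sketch, assuming the PF is named ens19f0 as in this environment (the MAC and VLAN values below are illustrative, not from the source):

```shell
# Assign a fixed MAC address to VF 0 of the PF ens19f0
ip link set ens19f0 vf 0 mac aa:bb:cc:dd:ee:01
# Put VF 1 into VLAN 100; the hardware tags egress traffic for this VF
ip link set ens19f0 vf 1 vlan 100
# Verify the per-VF settings
ip link show ens19f0
```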
2. SR-IOV devices and container networking
Intel released the SR-IOV CNI plugin, which lets a Kubernetes pod attach directly to an SR-IOV virtual function (VF) in either of two modes:
- The first mode uses the standard SR-IOV VF driver in the host kernel.
- The second mode supports DPDK VNFs that run the VF driver and network protocol stack in user space.
This article covers the first mode, attaching the pod directly to a VF device, as shown in the figure below:
[Figure: SR-IOV components on a node with Multus]
The figure above shows the components used on a node: kubelet, sriov-device-plugin, sriov-cni, and multus-cni.
The VF devices on the node must be created in advance; sriov-device-plugin then advertises them to the Kubernetes cluster.
When a pod is created, kubelet invokes multus-cni, which in turn calls the default CNI plugin and the sriov-cni plugin to build the pod's network environment.
sriov-cni moves a VF device from the host into the container's network namespace and configures its IP address.
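Conceptually, this is close to what one could do by hand with ip commands. A rough sketch, assuming a VF at PCI address 0000:41:00.2 whose netdev is ens19f0v0 and a hypothetical namespace mypod-ns (a real CNI plugin also handles renaming, routes, and cleanup):

```shell
# Find the netdev name behind a VF's PCI address
ls /sys/bus/pci/devices/0000:41:00.2/net/
# Move the VF netdev into the target network namespace
ip link set ens19f0v0 netns mypod-ns
# Rename it, assign an address, and bring it up inside the namespace
ip netns exec mypod-ns ip link set ens19f0v0 name net1
ip netns exec mypod-ns ip addr add 172.25.36.90/26 dev net1
ip netns exec mypod-ns ip link set net1 up
```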
3. Environment preparation
- Kubernetes environment
[root@node1 ~]# kubectl get node
NAME    STATUS   ROLES                  AGE   VERSION
node1   Ready    control-plane,master   47d   v1.23.17
node2   Ready    control-plane,master   47d   v1.23.17
node3   Ready    control-plane,master   47d   v1.23.17
- Hardware environment
[root@node1 ~]# lspci -nn | grep -i eth
23:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
23:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
41:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
41:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
42:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
42:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
63:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
63:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
a1:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
a1:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
[root@node1 ~]#
This environment uses the Mellanox Technologies MT27710 NICs for the experiments.
######## Confirm that the NIC supports SR-IOV
[root@node1 ~]# lspci -v -s 41:00.0
41:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
        Subsystem: Mellanox Technologies Stand-up ConnectX-4 Lx EN, 25GbE dual-port SFP28, PCIe3.0 x8, MCX4121A-ACAT
        Physical Slot: 19
        Flags: bus master, fast devsel, latency 0, IRQ 195, IOMMU group 56
        Memory at 2bf48000000 (64-bit, prefetchable) [size=32M]
        Expansion ROM at c6f00000 [disabled] [size=1M]
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [48] Vital Product Data
        Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
        Capabilities: [c0] Vendor Specific Information: Len=18 <?>
        Capabilities: [40] Power Management version 3
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [180] Single Root I/O Virtualization (SR-IOV)   ## SR-IOV is supported
        Capabilities: [1c0] Secondary PCI Express
        Capabilities: [230] Access Control Services
        Kernel driver in use: mlx5_core
        Kernel modules: mlx5_core   #### kernel driver used by the NIC
- Enable VFs
[root@node1 ~]# echo 8 > /sys/class/net/ens19f0/device/sriov_numvfs
#### Check the enabled VFs on the host
[root@node1 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: ens19f0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether e8:eb:d3:33:be:ea brd ff:ff:ff:ff:ff:ff
vf 0 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
vf 1 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
vf 2 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
vf 3 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
vf 4 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
vf 5 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
vf 6 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
vf 7 link/ether 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
### Confirm the VFs have been created
[root@node1 ~]# lspci -nn | grep -i ether
23:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
23:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
41:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
41:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
41:00.2 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
41:00.3 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
41:00.4 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
41:00.5 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
41:00.6 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
41:00.7 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
41:01.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
41:01.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016]
42:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
42:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
63:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
63:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
a1:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
a1:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015]
##### ip a shows the VF interfaces recognized by the system
[root@node1 ~]# ip a | grep ens19f0v
18: ens19f0v0: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
19: ens19f0v1: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
20: ens19f0v2: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
21: ens19f0v3: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
22: ens19f0v4: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
23: ens19f0v5: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
24: ens19f0v6: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
25: ens19f0v7: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
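Note that the value written to sriov_numvfs does not survive a reboot. One way to recreate the VFs at boot is a small oneshot systemd unit; the unit below is a sketch for this environment's ens19f0, not an official file shipped with any component:

```
# /etc/systemd/system/sriov-vfs.service  (example unit)
[Unit]
Description=Create SR-IOV VFs on ens19f0

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 8 > /sys/class/net/ens19f0/device/sriov_numvfs'

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable sriov-vfs.service` so the VFs exist before the device plugin scans the node.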
4. Installing the SR-IOV components
- Install sriov-device-plugin
[root@node1 ~]# git clone https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin.git
[root@node1 ~]# cd sriov-network-device-plugin/
[root@node1 ~]# make image   ### build the image
[root@node1 ~]#
Or pull the image directly:
[root@node1 ~]# docker pull ghcr.io/k8snetworkplumbingwg/sriov-network-device-plugin:latest-amd
##############################
The PF and VF resources of SR-IOV devices must be advertised to the Kubernetes cluster before pods can use them, which is what the device plugin is for. The device-plugin pods are deployed as a DaemonSet, so one runs on every node. The kubelet on each node calls the plugin's ListAndWatch interface over gRPC to obtain all the SR-IOV device information on that node, and the plugin registers its service with kubelet through the Register method. When kubelet needs to assign an SR-IOV device to a pod, it calls the plugin's Allocate method, passing in the device ID, and receives the device details.
############## Edit the ConfigMap. It selects the SR-IOV VF devices on the node and registers them with the cluster.
[root@node1 ~]# vim sriov-network-device-plugin/deployments/configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [{
        "resourcePrefix": "Mellanox.com",
        "resourceName": "Mellanox_sriov_switchdev_MT27710_ens19f0_vf",
        "selectors": {
          "drivers": ["mlx5_core"],
          "pfNames": ["ens19f0#0-7"]
        }
      }]
    }
### pfNames uses the interface name recognized by the system; the selectors can also match on the device vendor (vendors) and other fields — several configuration styles are possible.
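For reference, the selectors can also match on PCI vendor and device IDs instead of pfNames. A hedged variant of the config.json above, using the IDs from the lspci output (15b3 is the Mellanox vendor ID, 1016 the ConnectX-4 Lx Virtual Function device ID):

```json
{
  "resourceList": [{
    "resourcePrefix": "Mellanox.com",
    "resourceName": "Mellanox_sriov_switchdev_MT27710_ens19f0_vf",
    "selectors": {
      "vendors": ["15b3"],
      "devices": ["1016"],
      "drivers": ["mlx5_core"]
    }
  }]
}
```

Matching on the VF device ID selects every VF of that NIC family on the node, so pfNames is the tighter filter when a node has several PFs.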
####### Deploy sriov-device-plugin
[root@node1 ~]# kubectl create -f deployments/configMap.yaml
[root@node1 ~]# kubectl create -f deployments/sriovdp-daemonset.yaml
###### Check that the plugin pods are running
[root@node1 ~]# kubectl get po -A -o wide | grep sriov
kube-system kube-sriov-device-plugin-amd64-d7ctb 1/1 Running 0 6d5h 172.28.30.165 node3 <none> <none>
kube-system kube-sriov-device-plugin-amd64-h86dl 1/1 Running 0 6d5h 172.28.30.164 node2 <none> <none>
kube-system kube-sriov-device-plugin-amd64-rlpwb 1/1 Running 0 6d5h 172.28.30.163 node1 <none> <none>
[root@node1 ~]#
##### kubectl describe node shows the VFs registered on the node
[root@node1 ~]# kubectl describe node node1
---------
Capacity:
cpu: 128
devices.kubevirt.io/kvm: 1k
devices.kubevirt.io/tun: 1k
devices.kubevirt.io/vhost-net: 1k
ephemeral-storage: 256374468Ki
hugepages-1Gi: 120Gi
Mellanox.com/Mellanox_sriov_switchdev_MT27710_ens19f0_vf: 8 ## registered
memory: 527839304Ki
pods: 110
Allocatable:
cpu: 112
devices.kubevirt.io/kvm: 1k
devices.kubevirt.io/tun: 1k
devices.kubevirt.io/vhost-net: 1k
ephemeral-storage: 236274709318
hugepages-1Gi: 120Gi
Mellanox.com/Mellanox_sriov_switchdev_MT27710_ens19f0_vf: 8 ## allocatable count
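A quick way to confirm the registration from the command line (resource name as configured above; the grep pattern is just a convenience):

```shell
# Check that the VF resource appears in the node's capacity/allocatable lists
kubectl describe node node1 | grep -i mellanox_sriov
```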
- Install sriov-cni
[root@node1 ~]# git clone https://github.com/k8snetworkplumbingwg/sriov-cni.git
[root@node1 ~]# cd sriov-cni
[root@node1 ~]# make                           ### build the sriov CNI binary
[root@node1 ~]# cp build/sriov /opt/cni/bin/   # copy the binary to every SR-IOV node and run the chmod below on each
[root@node1 ~]# chmod 777 /opt/cni/bin/sriov
What sriov-cni does:
After deployment, an executable named sriov sits in /opt/cni/bin.
When a pod is created, kubelet calls the multus-cni plugin, which invokes each plugin in its delegates array. The delegates array contains the SR-IOV configuration, so multus executes /opt/cni/bin/sriov to build the container's network environment. Building that environment means:
locating the device by the SR-IOV device ID that kubelet allocated, moving it into the container's network namespace, and assigning it an IP address.
- Install multus
For the installation steps, refer to http://www.dbjr.com.cn/server/325044x0h.htm
5. Using SR-IOV in a pod
- Create the net-attach-def
[root@node1 ~]# vim sriov-attach.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-attach
  annotations:
    k8s.v1.cni.cncf.io/resourceName: Mellanox.com/Mellanox_sriov_switchdev_MT27710_ens19f0_vf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "sriov-attach",
    "type": "sriov",
    "ipam": {
      "type": "calico-ipam",
      "range": "222.0.0.0/8"
    }
  }'
[root@node1 ~]# kubectl apply -f sriov-attach.yaml
networkattachmentdefinition.k8s.cni.cncf.io/sriov-attach created
[root@node1 ~]# kubectl get net-attach-def
NAME AGE
sriov-attach 12s
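The sriov CNI config also accepts per-attachment tuning keys such as a VLAN ID and spoof checking. A hedged variant of the config string above (the key names follow the sriov-cni README; the values are illustrative assumptions, not from this environment):

```json
{
  "cniVersion": "0.3.1",
  "name": "sriov-attach",
  "type": "sriov",
  "vlan": 100,
  "spoofchk": "off",
  "ipam": {
    "type": "calico-ipam",
    "range": "222.0.0.0/8"
  }
}
```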
- Define the pod YAML
[root@node1 ~]# cat sriov-attach.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sriov
  labels:
    app: sriov-attach
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sriov-attach
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: sriov-attach
      labels:
        app: sriov-attach
    spec:
      containers:
      - name: sriov-attach
        image: docker.io/library/nginx:latest
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 1
            memory: 1Gi
            Mellanox.com/Mellanox_sriov_switchdev_MT27710_ens19f0_vf: '1'
          limits:
            cpu: 1
            memory: 1Gi
            Mellanox.com/Mellanox_sriov_switchdev_MT27710_ens19f0_vf: '1'
[root@node1 ~]#
##### Start the pod and test
[root@node1 ~]# kubectl apply -f sriov-attach.yaml
deployment.apps/sriov created
[root@node1 ~]# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
sriov-65c8f754f9-jlcd5   1/1     Running   0          6s    172.25.36.87   node1   <none>           <none>
- Inspect the pod
######### 1: kubectl describe pod shows the resource allocation
[root@node1 wzb]# kubectl describe po sriov-65c8f754f9-jlcd5
Name: sriov-65c8f754f9-jlcd5
Namespace: default
Priority: 0
Node: node1/172.28.30.163
Start Time: Wed, 28 Feb 2024 20:56:24 +0800
Labels: app=sriov-attach
pod-template-hash=65c8f754f9
Annotations: cni.projectcalico.org/containerID: 21ec82394a00c893e5304577b59984441bd3adac82929b5f9b5538f988245bf5
cni.projectcalico.org/podIP: 172.25.36.87/32
cni.projectcalico.org/podIPs: 172.25.36.87/32
k8s.v1.cni.cncf.io/network-status:
[{
"name": "k8s-pod-network",
"ips": [
"172.25.36.87"
],
"default": true,
"dns": {}
},{
"name": "default/sriov-attach",
"interface": "net1",
"ips": [
"172.25.36.90"
],
"mac": "f6:c2:e5:d1:7b:fa",
"dns": {},
"device-info": {
"type": "pci",
"version": "1.1.0",
"pci": {
"pci-address": "0000:41:00.6"
}
}
}]
k8s.v1.cni.cncf.io/networks: sriov-attach
Status: Running
IP: 172.25.36.87
IPs:
IP: 172.25.36.87
Controlled By: ReplicaSet/sriov-65c8f754f9
Containers:
nginx:
Container ID: containerd://6d5246c3e36a125ba60bad6af63f8bffe4710d78c2e14e6afb0d466c3f0f5d6e
Image: docker.io/library/nginx:latest
Image ID: sha256:12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 28 Feb 2024 20:56:28 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 1
Mellanox.com/Mellanox_sriov_switchdev_MT27710_ens19f0_vf: 1
memory: 1Gi
Requests:
cpu: 1
Mellanox.com/Mellanox_sriov_switchdev_MT27710_ens19f0_vf: 1 ## the SR-IOV resource has been allocated to the pod
memory: 1Gi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnl9d (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-pnl9d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 105s default-scheduler Successfully assigned default/sriov-65c8f754f9-jlcd5 to node1
Normal AddedInterface 103s multus Add eth0 [172.25.36.87/32] from k8s-pod-network
Normal AddedInterface 102s multus Add net1 [172.25.36.90/26] from default/sriov-attach #### the SR-IOV interface was added successfully
Normal Pulled 102s kubelet Container image "docker.io/library/nginx:latest" already present on machine
Normal Created 102s kubelet Created container nginx
Normal Started 102s kubelet Started container nginx
[root@node1 wzb]#
#######################
Enter the pod's network namespace and confirm that interface net1 has been assigned an address:
[root@node1 ~]# crictl ps | grep sriov-attach
6d5246c3e36a1 12766a6745eea 4 minutes ago Running nginx 0 21ec82394a00c
[root@node1 ~]# crictl inspect 6d5246c3e36a1 | grep -i pid
"pid": 2775224,
"pid": 1
"type": "pid"
[root@node1 ~]# nsenter -t 2775224 -n bash
[root@node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if30729: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
link/ether ae:f9:85:03:13:2f brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.25.36.87/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::acf9:85ff:fe03:132f/64 scope link
valid_lft forever preferred_lft forever
21: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether f6:c2:e5:d1:7b:fa brd ff:ff:ff:ff:ff:ff
inet 172.25.36.90/26 brd 172.25.36.127 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::f4c2:e5ff:fed1:7bfa/64 scope link
valid_lft forever preferred_lft forever
[root@node1 ~]#
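As a final check, reachability over the VF can be tested, for example from a host that has a route into the attachment network (172.25.36.90 is the pod's net1 address shown above; whether the source host can reach that subnet depends on the fabric configuration):

```shell
# From a machine on the VF network, verify the pod's SR-IOV interface answers
ping -c 3 172.25.36.90
```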
Summary
The above is based on personal experience; hopefully it serves as a useful reference.