Installing and Deploying Neutron for OpenStack Liberty
I recently deployed the Liberty release of OpenStack on two physical machines. Every module ran into problems of one kind or another, but Neutron's were by far the most painful. Although OpenStack keeps working to simplify Neutron deployment, it remains a serious challenge for people without a networking background, fundamentally because networking itself is complex. A networking foundation certainly helps, but its absence does not make a working Neutron deployment impossible. This article summarizes the Neutron installation steps and describes the problems encountered in detail, in the hope of offering some troubleshooting ideas, or at least some inspiration.
According to the official deployment guide, Neutron lets you create interface devices managed by other OpenStack services and attach them to networks. Plug-ins and agents can be implemented for different networking equipment and software, giving the OpenStack architecture considerable flexibility. Neutron consists mainly of the following components:
- neutron-server: accepts API requests and forwards them to the appropriate Neutron plug-in (such as the ML2 plug-in; the actual work is then carried out by agents such as linuxbridge-agent).
- Neutron plug-ins and agents: plug and unplug ports, create networks and subnets, and provide IP addressing. The plug-ins and agents differ depending on the vendor and technologies used in the cloud environment; Neutron provides plug-ins or agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch (OVS), Linux bridging, and the VMware NSX product.
- Message queue: Neutron uses a message queue to route messages between neutron-server and the various agents, and to store the state of particular plug-ins. The main message queue options are RabbitMQ, Qpid, and ZeroMQ.
Neutron abstracts networks, subnets, and routers as objects, each carrying the functionality of its physical counterpart: a network contains subnets, and a router routes traffic between different subnets and networks. Each router has one gateway connected to a network and many virtual interfaces connected to subnets; subnets attached to the same router can reach each other. This matches what a router does in a real physical environment.
However Neutron is deployed, at least one external network must be created. Unlike other networks, the external network is not merely a virtually defined network: it represents a view into the physical network outside the OpenStack installation. IP addresses on the external network are reachable from the outside physical network, and because the external network only represents a view of that network, DHCP is disabled on it. Besides the external network there must be one or more internal networks; VMs connect directly to these software-defined networks. VMs on the same internal network can reach each other, as can VMs on different subnets attached to the same router: if host A sits on subnet N1, host B on subnet N2, and N1 and N2 attach to the same router, then A and B are mutually reachable. Traffic between the outside world and VMs is handled by routers: the router's gateway connects to the external network, internal networks connect to router interfaces, and the result resembles a physical network topology. Ports on an internal network (a port being the attachment point of a connection to a subnet) can be allocated IP addresses from the external network; associating an external-network IP with a VM's port is what makes the VM reachable from outside. Neutron also supports security groups, which let administrators define firewall rules in groups. A VM can belong to multiple security groups, and Neutron applies the groups' rules or policies to block or allow ports and the types of traffic permitted to VMs.
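The reachability rule described above (VMs whose subnets attach to a common router can talk to each other) can be sketched as a toy model. The router and subnet names here are hypothetical illustrations, not Neutron API calls:

```python
# Toy model of Neutron L3 reachability (hypothetical names, not the Neutron API):
# two subnets are mutually reachable when they attach to a common router.
routers = {"router1": {"N1", "N2"}}  # router name -> set of attached subnets

def reachable(subnet_a: str, subnet_b: str, routers: dict) -> bool:
    """True if the subnets are the same, or share at least one router."""
    if subnet_a == subnet_b:
        return True
    return any({subnet_a, subnet_b} <= attached for attached in routers.values())

# Host A on subnet N1 and host B on subnet N2, both attached to router1:
print(reachable("N1", "N2", routers))  # True
```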
This deployment places Neutron on two physical machines, controller and compute: controller acts as the control node (and network node), compute as the compute node. MySQL, RabbitMQ, and keystone are already installed and configured on controller. First, create the neutron database in MySQL:
MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.04 sec)
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'neutron';
Query OK, 0 rows affected (0.24 sec)
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by 'neutron';
Query OK, 0 rows affected (0.00 sec)
Next, create the neutron user, service, and endpoints in keystone:
[opst@controller ~]$ source admin-openrc.sh
[root@controller opst]# openstack user create --domain default --password neutron neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 24a20abcf4324cfca09484959244aaf7 |
| name      | neutron                          |
+-----------+----------------------------------+
[root@controller opst]# openstack role add --project service --user neutron admin
[root@controller opst]# openstack service create --name neutron --description 'The Networking Service' network
[root@controller opst]# openstack endpoint create --region RegionOne neutron public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a7aae0431e2948ce8070ddf0a14bbdf8 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ec4391191490440787799e973b54c816 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller opst]# openstack endpoint create --region RegionOne neutron internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8599ab44484b4c3dbc12f8c945490cef |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ec4391191490440787799e973b54c816 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller opst]# openstack endpoint create --region RegionOne neutron admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 66ef114548e84911873e83949ef76307 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ec4391191490440787799e973b54c816 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
With these supporting services configured, the next decision is the Neutron deployment architecture. Two options exist: provider networks and self-service networks. Provider networks are probably the simplest Neutron architecture: they only support attaching VMs to public (provider) networks, with no self-service networks, routers, or floating IPs, and only admin or other privileged users can manage them. Self-service networks add layer-3 services, letting VMs attach to private networks; the demo user or other unprivileged users can manage their own networks, including the routers that connect self-service networks to provider networks. In addition, floating IPs make VMs reachable from external networks such as the Internet. Self-service deployments also still support attaching VMs to public (provider) networks.
This deployment uses self-service networking. Install the self-service networking packages on the controller node; the last two packages can be omitted if the operating system already has ebtables or ipset installed:
[root@controller opst]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
Edit /etc/neutron/neutron.conf:
[DEFAULT]
verbose = True
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2

[database]
connection = mysql://neutron:neutron@controller/neutron

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = opst

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/tmp/neutron
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = public:eno1

[vxlan]
enable_vxlan = True
local_ip = 192.168.81.66
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Edit /etc/neutron/l3_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True
Edit /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
Create /etc/neutron/dnsmasq-neutron.conf and add the following line, which enables the DHCP MTU option (option 26) and sets its value to 1450 bytes:
dhcp-option-force=26,1450
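The 1450-byte value is not arbitrary: it is the standard 1500-byte Ethernet MTU minus the VXLAN encapsulation overhead. A quick sanity check of the arithmetic:

```python
# VXLAN encapsulation overhead over IPv4:
# outer IP header (20) + UDP header (8) + VXLAN header (8)
# + inner Ethernet frame header (14) = 50 bytes.
PHYSICAL_MTU = 1500
OVERHEAD = 20 + 8 + 8 + 14
instance_mtu = PHYSICAL_MTU - OVERHEAD
print(instance_mtu)  # 1450, the value pushed to instances via DHCP option 26
```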
Edit /etc/neutron/metadata_agent.ini:
[DEFAULT]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
nova_metadata_ip = controller
metadata_proxy_shared_secret = Test  (must be identical to [neutron] metadata_proxy_shared_secret in nova.conf)
verbose = True
Edit /etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Neutron's initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing at /etc/neutron/plugins/ml2/ml2_conf.ini:
[root@controller opst]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the MySQL database:
[root@controller opst]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service:
[root@controller opst]# systemctl restart openstack-nova-api.service
Configure the Neutron services to start at boot (for both provider and self-service networks):
[root@controller opst]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Start the Neutron services (for both provider and self-service networks):
[root@controller opst]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
For self-service networks, the neutron-l3-agent.service must also be enabled and started:
[root@controller opst]# systemctl enable neutron-l3-agent.service
[root@controller opst]# systemctl start neutron-l3-agent.service
Next, install the self-service networking packages on the compute node; as before, the last two packages can be omitted if the operating system already has ebtables or ipset installed:
[root@compute opst]# yum install openstack-neutron-linuxbridge ebtables ipset
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = public:eno1

[vxlan]
enable_vxlan = True
local_ip = 192.168.81.65  (the compute node's IP address)
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Edit /etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the nova-compute service:
[root@compute opst]# systemctl restart openstack-nova-compute.service
Enable and start the Linux bridge agent:
[root@compute opst]# systemctl enable neutron-linuxbridge-agent.service
[root@compute opst]# systemctl start neutron-linuxbridge-agent.service
Verify the Neutron installation on the controller node:
[opst@controller ~]$ neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| dns-integration       | DNS Integration                               |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| agent                 | agent                                         |
| subnet_allocation     | Subnet Allocation                             |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| external-net          | Neutron external network                      |
| flavors               | Neutron Service Flavors                       |
| net-mtu               | Network MTU                                   |
| quotas                | Quota management support                      |
| l3-ha                 | HA Router extension                           |
| provider              | Provider Network                              |
| multi-provider        | Multi Provider Network                        |
| extraroute            | Neutron Extra Route                           |
| router                | Neutron L3 Router                             |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| security-group        | security-group                                |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| rbac-policies         | RBAC Policies                                 |
| port-security         | Port Security                                 |
| allowed-address-pairs | Allowed Address Pairs                         |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+
Then run the following command, also on the controller node:
[opst@controller ~]$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 111e36b2-a1d6-4c86-90c3-e4e4725fa560 | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
| 7aaa5eca-0436-423f-af2f-fbae0ebc6fa1 | Linux bridge agent | compute    | :-)   | True           | neutron-linuxbridge-agent |
| 7e394c14-ce7e-45e0-9ac6-3f9250c04984 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| a6ec9a47-e8f1-4948-9d9b-b172fd6757d6 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| f99f491b-79e1-4d5d-8c3f-9ecdcf11452c | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
This output serves two purposes: it shows whether all the expected services were deployed, and whether they are running normally, which can be read from the fourth column: ':-)' means the agent is alive, while 'xxx' indicates a problem. Once the Neutron deployment is verified, the next step is creating the virtual networks. The commands differ depending on the chosen architecture (provider or self-service). Since this deployment is self-service, both a public provider network and a private project network are needed: the public network attaches to the physical network at layer 2, while the private network reaches the physical network through layer 3 and NAT, and runs DHCP to hand out IPs to VMs. VMs can then reach the Internet; for external networks to reach a VM, a floating IP must be associated with it.
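Checking the alive column by eye works, but it can also be scripted. The sketch below assumes the plain ASCII table format shown above and simply flags any row whose alive column is not ':-)':

```python
def dead_agents(agent_list_output: str) -> list:
    """Return (binary, host) pairs for agents whose alive column is not ':-)'."""
    dead = []
    for line in agent_list_output.splitlines():
        if line.startswith("+"):
            continue  # skip table border lines
        cols = [c.strip() for c in line.split("|")]
        # A data row splits into 8 fields (empty strings at both ends).
        if len(cols) == 8 and cols[1] not in ("", "id"):
            host, alive, binary = cols[3], cols[4], cols[6]
            if alive != ":-)":
                dead.append((binary, host))
    return dead

sample = """\
| 111e36b2 | L3 agent | controller | :-) | True | neutron-l3-agent |
| 7aaa5eca | Linux bridge agent | compute | xxx | True | neutron-linuxbridge-agent |"""
print(dead_agents(sample))  # [('neutron-linuxbridge-agent', 'compute')]
```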
Even after completing the steps above and resolving whatever problems arose, such as configuration mistakes, creating the virtual networks still carries plenty of risk. OpenStack tries hard to simplify network creation, but for a newcomer it remains difficult, especially when the deployment environment differs from the one the official documentation recommends. According to the official documentation, in a self-service deployment the public network must be created before the private network. In practice I found that creating the private network first did not affect whether network creation succeeded, but to stay consistent with the documentation the public network is created first here.
Run the following command to create the public network; the public in --provider:physical_network public must match the value of flat_networks = public:
[opst@controller ~]$ neutron net-create public --shared --router:external --provider:physical_network public --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | a4b483a8-8331-4fd4-bfec-431e4df8b7ff |
| mtu                       | 0                                    |
| name                      | public                               |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | public                               |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | a242aca931bb49e08a49e5e942c8129d     |
+---------------------------+--------------------------------------+
Next create the subnet. The result of this command directly determines whether the external network will be reachable; it took countless failed attempts before Neutron could reach the outside world. The mistake was setting 192.168.81.0/24 and the start and end values incorrectly: these values must be the actual physical network's addresses or range. For example, my physical machines each have a single NIC and receive DHCP addresses of the form 192.168.81.xxx, so PUBLIC_NETWORK_CIDR must be 192.168.81.0/24, and start and end must be unused 192.168.81.xxx addresses.
[opst@controller ~]$ neutron subnet-create public 192.168.81.0/24 --name public --allocation-pool start=192.168.81.100,end=192.168.81.200 --dns-nameserver 192.168.85.253 --gateway 192.168.81.254
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.81.100", "end": "192.168.81.200"} |
| cidr              | 192.168.81.0/24                                      |
| dns_nameservers   | 192.168.85.253                                       |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.81.254                                       |
| host_routes       |                                                      |
| id                | da0ff1d1-35e5-4adb-90d6-9c45bc7864c8                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | public                                               |
| network_id        | a4b483a8-8331-4fd4-bfec-431e4df8b7ff                 |
| subnetpool_id     |                                                      |
| tenant_id         | a242aca931bb49e08a49e5e942c8129d                     |
+-------------------+------------------------------------------------------+
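The constraint described above (the public CIDR and allocation pool must match the real physical segment, and the pool must not swallow the gateway) can be checked with Python's ipaddress module before running subnet-create:

```python
import ipaddress

cidr = ipaddress.ip_network("192.168.81.0/24")    # must equal the physical segment
gateway = ipaddress.ip_address("192.168.81.254")  # the physical router
start = ipaddress.ip_address("192.168.81.100")
end = ipaddress.ip_address("192.168.81.200")

# The allocation pool must lie inside the CIDR and must not contain the gateway.
assert start in cidr and end in cidr
assert not (start <= gateway <= end)
print("allocation pool is consistent with the physical network")
```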
With the public network created, the next step is the private network, which assigns IP addresses to VMs so that they can reach one another.
[root@controller opst]# neutron net-create private
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| id                    | 23ec30de-c5dc-49b2-923c-3b7b83e1d9d1 |
| mtu                   | 0                                    |
| name                  | private                              |
| port_security_enabled | True                                 |
| router:external       | False                                |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | 25b21a1cd9aa474e95a8cfd8e175714c     |
+-----------------------+--------------------------------------+
Create a subnet on the private network with the command below, where 172.16.1.0/24 is the subnet's CIDR; the recommended ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
[root@controller opst]# neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 192.168.85.253 --gateway 172.16.1.1
Created a new subnet:
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "172.16.1.2", "end": "172.16.1.254"} |
| cidr              | 172.16.1.0/24                                  |
| dns_nameservers   | 192.168.85.253                                 |
| enable_dhcp       | True                                           |
| gateway_ip        | 172.16.1.1                                     |
| host_routes       |                                                |
| id                | dd9ee06a-fd00-41f5-93ba-82f3c0c1e052           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | private                                        |
| network_id        | 23ec30de-c5dc-49b2-923c-3b7b83e1d9d1           |
| subnetpool_id     |                                                |
| tenant_id         | 25b21a1cd9aa474e95a8cfd8e175714c               |
+-------------------+------------------------------------------------+
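The recommended ranges mentioned above are the RFC 1918 private blocks; the ipaddress module can confirm that a chosen project CIDR falls inside one of them:

```python
import ipaddress

# The three RFC 1918 private address blocks.
rfc1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(cidr: str) -> bool:
    """True if the CIDR sits entirely inside an RFC 1918 block."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in rfc1918)

print(is_rfc1918("172.16.1.0/24"))  # True: inside 172.16.0.0/12
```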
After creating the networks, a router is needed to connect the private and public networks. The command and its output:
[opst@controller ~]$ neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 27512806-cac3-47f8-acff-b7bd847e2866 |
| name                  | router                               |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 25b21a1cd9aa474e95a8cfd8e175714c     |
+-----------------------+--------------------------------------+
After creating the router, add an interface to it for the internal network, where private is the name or ID of the internal network's subnet:
[opst@controller ~]$ neutron router-interface-add router private
Added interface f8979659-e714-4298-90cb-57d2b156166c to router router
Finally, set the router's gateway, where public is the name or ID of the external network:
[opst@controller ~]$ neutron router-gateway-set router public
Set gateway for router router
In theory the Neutron deployment is now complete, but without actually testing connectivity you cannot be fully certain the network works. The simplest test is ping: if ping reports no errors, the network can broadly be considered reachable. As for which addresses to ping, I suggest testing the virtual router's IPs on both the external and internal networks, which can be obtained with:
[opst@controller ~]$ neutron router-port-list router
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                             |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| 653e03b9-8e4a-41c7-97c9-f0a1fa4f086f |      | fa:16:3e:34:b6:5c | {"subnet_id": "da0ff1d1-35e5-4adb-90d6-9c45bc7864c8", "ip_address": "192.168.81.101"} |
| f8979659-e714-4298-90cb-57d2b156166c |      | fa:16:3e:9f:fb:83 | {"subnet_id": "dd9ee06a-fd00-41f5-93ba-82f3c0c1e052", "ip_address": "172.16.1.1"}     |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
Pinging 192.168.81.101 confirms whether the external network is reachable. To confirm the internal network, ping 172.16.1.1 from inside the router's network namespace, which can be found with:
[opst@controller ~]$ ip netns
qrouter-27512806-cac3-47f8-acff-b7bd847e2866 (id: 2)
qdhcp-a4b483a8-8331-4fd4-bfec-431e4df8b7ff (id: 1)
qdhcp-23ec30de-c5dc-49b2-923c-3b7b83e1d9d1 (id: 0)
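Rather than copying the namespace name by hand, the qrouter namespace can be picked out of the ip netns output programmatically; the sketch below assumes the output format shown above:

```python
def find_qrouter(ip_netns_output: str) -> str:
    """Return the first qrouter-* namespace name from `ip netns` output."""
    for line in ip_netns_output.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("qrouter-"):
            return fields[0]
    raise LookupError("no qrouter namespace found")

sample = """\
qrouter-27512806-cac3-47f8-acff-b7bd847e2866 (id: 2)
qdhcp-a4b483a8-8331-4fd4-bfec-431e4df8b7ff (id: 1)"""
print(find_qrouter(sample))  # qrouter-27512806-cac3-47f8-acff-b7bd847e2866
```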
Then run the following command as root:
ip netns exec qrouter-27512806-cac3-47f8-acff-b7bd847e2866 ping 172.16.1.1
If that command reports no errors, both the external and internal networks can be considered reachable and you can proceed to creating VMs; otherwise, even if VMs can be created, their networking cannot be guaranteed to work.
That essentially completes the Neutron deployment and virtual network creation, but the problems you may run into, and the details worth watching for, go far beyond what this article can cover. A few possible problems are briefly summarized below.
If the internal network type is vxlan, that is, the tenant_network_types parameter is set to vxlan, the host operating system needs kernel 3.13 or later. Check the version with uname -r; the kernel can be upgraded as follows:
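Whether the running kernel meets the 3.13 requirement can be decided by parsing the uname -r string, as in this sketch:

```python
def supports_vxlan(kernel_release: str, minimum=(3, 13)) -> bool:
    """Compare a `uname -r` string such as '3.10.0-327.10.1.el7.x86_64'
    against the minimum kernel version required for VXLAN support."""
    version = kernel_release.split("-")[0]          # '3.10.0'
    parts = tuple(int(p) for p in version.split(".")[:2])  # (3, 10)
    return parts >= minimum

print(supports_vxlan("3.10.0-327.10.1.el7.x86_64"))  # False: needs upgrading
print(supports_vxlan("4.5.0-1.el7.elrepo.x86_64"))   # True
```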
[root@controller opst]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@controller opst]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@controller opst]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
[root@controller opst]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.5.0-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)
[root@controller opst]# grub2-set-default 0
[root@controller opst]# reboot
Other problems cannot be detected at the command line at all: a command's output is not enough to tell whether it truly succeeded, and the failure only surfaces when testing connectivity or creating VMs. The best defense is to watch the Neutron and Nova logs while creating virtual networks and VMs, so that each command can be confirmed as it runs; this helps localize problems and avoids wasted time and effort. For example, the following error is common in the Neutron logs:
2016-04-06 17:12:51.605 3136 ERROR neutron.plugins.ml2.managers [req-0dc4b947-ebbe-47a9-823c-a4f9ead7df74 - - - - -] Failed to bind port 24191168-b2c0-47d8-aa81-c46ad9bcf7b6 on host controller
2016-04-06 17:12:51.605 3136 ERROR neutron.plugins.ml2.managers [req-0dc4b947-ebbe-47a9-823c-a4f9ead7df74 - - - - -] Failed to bind port 24191168-b2c0-47d8-aa81-c46ad9bcf7b6 on host controller
2016-04-06 17:12:51.623 3136 INFO neutron.plugins.ml2.plugin [req-0dc4b947-ebbe-47a9-823c-a4f9ead7df74 - - - - -] Attempt 2 to bind port 24191168-b2c0-47d8-aa81-c46ad9bcf7b6
2016-04-06 17:12:52.218 3136 WARNING neutron.plugins.ml2.rpc [req-8c5de5e4-288f-4f1a-a377-3b348c3ee13b - - - - -] Device tap24191168-b2 requested by agent lb0010c6b0ae66 on network 521f09b9-791e-482f-9403-5ddac2d047b4 not bound, vif_type: binding_failed
This can happen when the kernel is older than 3.13 and therefore lacks vxlan support; another possible cause is that the public subnet's CIDR does not match the physical segment it belongs to.
Of these two problems, the latter in particular was the biggest hurdle in this Neutron deployment; other problems, such as typos in configuration files, are easy to track down. Neutron's functionality and role make it inherently complex, touching not only networking but also operating systems and virtualization, so a deeper mastery of Neutron means consulting the relevant material and studying it whenever a new problem appears; otherwise all you have accomplished is a deployment.
That is all for this article; I hope it proves helpful for your own learning and deployment.