Building a Highly Available Load Balancer with LVS + Keepalived (Testing)
Updated: 2013-06-13 17:45:01
This article walks through how to test a highly available load-balancing setup built with LVS and Keepalived; use it as a reference if you need it.
1. Starting the LVS high-availability cluster services
First, start the service on each real server node:
[root@localhost ~]# /etc/init.d/lvsrs start
start LVS of REALServer
Then start the Keepalived service on both the master and backup Director Servers:
[root@DR1 ~]# /etc/init.d/keepalived start
[root@DR1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP bogon:http rr
-> real-server1:http Route 1 1 0
-> real-server2:http Route 1 1 0
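As a quick sanity check, the listing above can also be inspected programmatically. The sketch below is hypothetical (not part of the original article): it parses `ipvsadm -L` text, using the exact sample output shown above, and collects the real servers registered behind each virtual service. In practice you would feed it the live output of `ipvsadm -L`.

```python
# Sketch: parse `ipvsadm -L` output and list the real servers behind
# each virtual service. SAMPLE is copied from the listing above.
SAMPLE = """\
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP bogon:http rr
  -> real-server1:http Route 1 1 0
  -> real-server2:http Route 1 1 0
"""

def parse_ipvsadm(text):
    services = {}
    current = None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("TCP", "UDP")):
            # e.g. "TCP bogon:http rr" -> protocol, VIP:port, scheduler
            proto, addr, sched = stripped.split()[:3]
            current = (proto, addr, sched)
            services[current] = []
        elif stripped.startswith("->") and current is not None:
            # e.g. "-> real-server1:http Route 1 1 0"
            services[current].append(stripped.split()[1])
    return services

services = parse_ipvsadm(SAMPLE)
print(services)
```

The header line that also begins with `->` is skipped because it appears before the first `TCP` line, when `current` is still `None`.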
At this point, the Keepalived entries in the system log look like this:
[root@localhost ~]# tail -f /var/log/messages
Feb 28 10:01:56 localhost Keepalived: Starting Keepalived v1.1.19 (02/27,2011)
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Configuration is using : 12063 Bytes
Feb 28 10:01:56 localhost Keepalived: Starting Healthcheck child process, pid=4623
Feb 28 10:01:56 localhost Keepalived_vrrp: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived: Starting VRRP child process, pid=4624
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.246:80]
Feb 28 10:01:56 localhost Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.237:80]
Feb 28 10:01:57 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:01:58 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:01:58 localhost avahi-daemon[2778]: Registering new address record for 192.168.12.135 on eth0.
2. Testing high availability
High availability is provided by the two LVS Director Servers. To simulate a failure, stop the Keepalived service on the master Director Server, then watch the Keepalived log on the backup Director Server:
Feb 28 10:08:52 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup avahi-daemon[3349]: Registering new address record for 192.168.12.135 on eth0.
Feb 28 10:08:59 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
The logs show that as soon as the master fails, the backup detects it, transitions to the MASTER role, takes over the master's virtual IP, and binds the VIP to the eth0 device.
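The MASTER/BACKUP transitions in these logs follow the standard VRRP rule: among the routers that are still sending advertisements, the one with the highest priority holds the VIP. A minimal, hypothetical model of that election (the priorities 100 and 90 are illustrative, not taken from this article's configuration):

```python
# Toy model of VRRP master election: among routers that are still
# advertising, the one with the highest priority holds the virtual IP.
def elect_master(routers):
    """routers: dict of name -> {"priority": int, "alive": bool}"""
    alive = {name: r for name, r in routers.items() if r["alive"]}
    if not alive:
        return None
    return max(alive, key=lambda name: alive[name]["priority"])

routers = {
    "DR1": {"priority": 100, "alive": True},  # master Director Server
    "DR2": {"priority": 90,  "alive": True},  # backup Director Server
}
print(elect_master(routers))   # normal operation: DR1 holds the VIP

routers["DR1"]["alive"] = False              # Keepalived stopped on DR1
print(elect_master(routers))   # DR2 enters MASTER state, takes the VIP

routers["DR1"]["alive"] = True               # DR1 comes back
print(elect_master(routers))   # higher-prio advert: DR2 returns to BACKUP
```

The third step corresponds to the "Received higher prio advert" log entry shown further below: when the old master resumes advertising with a higher priority, the backup yields the VIP.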
Next, restart the Keepalived service on the master Director Server and keep watching the log on the backup Director Server:
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup avahi-daemon[3349]: Withdrawing address record for 192.168.12.135 on eth0.
As the logs show, once the backup detects that the master has recovered, it returns to the BACKUP role and releases the virtual IP.
3. Load-balancing test
Assume that on both real server nodes the document root of the www service is the /webdata/www directory, and run the following on each node:
On real server 1:
echo "This is real server1" > /webdata/www/index.html
On real server 2:
echo "This is real server2" > /webdata/www/index.html
Then open a browser, visit http://192.168.12.135, and refresh the page repeatedly. If the responses alternate between "This is real server1" and "This is real server2", LVS is balancing the load.
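The alternation you see in the browser comes from the rr (round-robin) scheduler shown in the `ipvsadm -L` listing: with equal weights, LVS hands successive connections to each real server in turn. A small illustrative model of that behavior:

```python
from itertools import cycle

# With the rr scheduler and equal weights, LVS assigns successive
# connections to the real servers in turn.
real_servers = ["real-server1", "real-server2"]
scheduler = cycle(real_servers)

# Six "page refreshes" alternate between the two backends.
responses = [f"This is {next(scheduler)}" for _ in range(6)]
print(responses)
```

Note that real browsers may reuse keep-alive connections, so several refreshes can land on the same backend; forcing a fresh connection per request (or using curl) shows the alternation more clearly.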
4. Failover test
The failover test checks whether, when a node fails, the Keepalived health-check module detects it in time, takes the failed node out of rotation, and shifts its traffic to the healthy node.
Here we stop the service on the real server 1 node to simulate a failure, then check the logs on the master and backup machines; the relevant entries are:
Feb 28 10:14:12 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] failed !!!
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Removing service [192.168.12.246:80] from VS [192.168.12.135:80]
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:14:12 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
The logs show that after the Keepalived health-check module detected the failure of host 192.168.12.246, it removed that node from the cluster.
If you now visit http://192.168.12.135, you should only ever see "This is real server2", because node 1 has failed and the Keepalived health-check module has removed it from the cluster.
Next, restart the service on the real server 1 node; the Keepalived log then shows:
Feb 28 10:15:48 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] success.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Adding service [192.168.12.246:80] to VS [192.168.12.135:80]
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
The logs show that once the health-check module detected that host 192.168.12.246 had recovered, it added the node back into the cluster.
Visiting http://192.168.12.135 again and refreshing repeatedly, you should once more see both "This is real server1" and "This is real server2", confirming that Keepalived put real server 1 back into rotation after it recovered.
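The remove-on-failure / re-add-on-recovery behavior in these logs is Keepalived's TCP health check in action. Below is a simplified, hypothetical model of that loop; the real TCP_CHECK also involves connect timeouts and retry counts, which are omitted here:

```python
# Simplified model of Keepalived's TCP health check: a failed check
# removes the real server from the virtual service, a successful one
# adds it back.
def apply_health_checks(pool, check_results):
    """pool: set of healthy backends; check_results: dict backend -> bool."""
    for backend, ok in check_results.items():
        if ok:
            pool.add(backend)       # "Adding service ... to VS ..."
        else:
            pool.discard(backend)   # "Removing service ... from VS ..."
    return pool

pool = {"192.168.12.246:80", "192.168.12.237:80"}

# Real server 1 goes down: only real server 2 keeps serving.
pool = apply_health_checks(pool, {"192.168.12.246:80": False})
print(sorted(pool))

# Real server 1 recovers: it is added back to the virtual service.
pool = apply_health_checks(pool, {"192.168.12.246:80": True})
print(sorted(pool))
```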
This article originally appeared on the "技術成就夢想" blog.