LVS+Keepalived High-Availability Deployment Notes (Active-Active and Active-Standby Modes)

I. LVS+Keepalived Active-Standby (Master/Backup) High-Availability Deployment

1) Environment


LVS_Keepalived_Master      182.148.15.237

LVS_Keepalived_Backup      182.148.15.236

Real_Server1               182.148.15.233

Real_Server2               182.148.15.238

VIP                        182.148.15.239

 

All systems run CentOS 6.8.

Important note:

The Director Server and the Real Servers must each have a NIC on the same physical network segment; otherwise LVS forwarding will fail!

A remote `telnet vip port` will then report:

"telnet: connect to address *.*.*.*: No route to host"

The basic network topology is as follows (diagram from the original post not reproduced here):

2) Installing and configuring LVS and Keepalived on both LVS_Keepalived_Master and LVS_Keepalived_Backup:



1) Disable SELinux and configure the firewall (on both LVS_Keepalived_Master and LVS_Keepalived_Backup)

[root@LVS_Keepalived_Master ~]# vim /etc/sysconfig/selinux
#SELINUX=enforcing                # comment out
#SELINUXTYPE=targeted             # comment out
SELINUX=disabled                  # add

[root@LVS_Keepalived_Master ~]# setenforce 0      # disable SELinux immediately; the file change above takes permanent effect after a reboot

Note: below, 182.148.15.0/24 is the servers' public network segment and 192.168.1.0/24 their private segment.
Important: only with the multicast rules below will the VIP fail over correctly when MASTER or BACKUP goes down, and fail back once the failed node recovers.
[root@LVS_Keepalived_Master ~]# vim /etc/sysconfig/iptables
.......
-A INPUT -s 182.148.15.0/24 -d 224.0.0.18 -j ACCEPT      # allow multicast (VRRP advertises to 224.0.0.18)
-A INPUT -s 192.168.1.0/24 -d 224.0.0.18 -j ACCEPT
-A INPUT -s 182.148.15.0/24 -p vrrp -j ACCEPT            # allow VRRP (Virtual Router Redundancy Protocol) traffic
-A INPUT -s 192.168.1.0/24 -p vrrp -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

[root@LVS_Keepalived_Master ~]# /etc/init.d/iptables restart
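With the rules in place, you can confirm on either director that VRRP advertisements are actually flowing; a quick check (assuming the VRRP instance runs on eth0, as configured later) is:

```shell
# VRRP advertisements go to multicast 224.0.0.18 as IP protocol 112.
# On a healthy pair, the MASTER should advertise roughly once per advert_int.
tcpdump -nn -i eth0 host 224.0.0.18
```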

  

----------------------------------------------------------------------------------------------------------------------

2) Install LVS (on both LVS_Keepalived_Master and LVS_Keepalived_Backup)
The following packages are needed:
[root@LVS_Keepalived_Master ~]# yum install -y libnl* popt*

Check that the ipvs kernel modules are available:
[root@LVS_Keepalived_Master src]# modprobe -l |grep ipvs

Download and install ipvsadm (the LVS management tool):
[root@LVS_Keepalived_Master ~]# cd /usr/local/src/
[root@LVS_Keepalived_Master src]# wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
Unpack and build:
[root@LVS_Keepalived_Master src]# ln -s /usr/src/kernels/2.6.32-431.5.1.el6.x86_64/ /usr/src/linux
[root@LVS_Keepalived_Master src]# tar -zxvf ipvsadm-1.26.tar.gz
[root@LVS_Keepalived_Master src]# cd ipvsadm-1.26
[root@LVS_Keepalived_Master ipvsadm-1.26]# make && make install

LVS is now installed; the LVS table is still empty:
[root@LVS_Keepalived_Master ipvsadm-1.26]# ipvsadm -L -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
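Keepalived will populate this table later from its virtual_server configuration. Purely for illustration (not a step to run, since keepalived manages these entries itself), the equivalent manual ipvsadm commands would look roughly like this:

```shell
ipvsadm -A -t 182.148.15.239:80 -s wrr -p 50                  # add virtual service: wrr scheduling, 50s persistence
ipvsadm -a -t 182.148.15.239:80 -r 182.148.15.233:80 -g -w 3  # -g = DR (gatewaying) mode, weight 3
ipvsadm -a -t 182.148.15.239:80 -r 182.148.15.238:80 -g -w 3
```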

  

----------------------------------------------------------------------------------------------------------------------

3) Write the LVS init script /etc/init.d/realserver (on both Real_Server1 and Real_Server2; the script content is identical)
[root@Real_Server1 ~]# vim /etc/init.d/realserver

#!/bin/sh
# LVS-DR real server control script: suppress local ARP replies for the
# VIP and bind it to the local loopback interface.
VIP=182.148.15.239
. /etc/rc.d/init.d/functions

case "$1" in
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up     # bind the VIP on the loopback with a host mask, so this server accepts traffic for the VIP without routing for the whole subnet
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0
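The four arp_ignore/arp_announce writes are the heart of DR mode on a real server: arp_ignore=1 stops it answering ARP requests for the VIP held on lo, and arp_announce=2 stops it advertising the VIP as an ARP source address. If you prefer these settings to survive a reboot independently of the script, the equivalent /etc/sysctl.conf entries would be:

```shell
# /etc/sysctl.conf fragment -- persistent form of the runtime writes above
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```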

  

Add the script to start at boot:
[root@Real_Server1 ~]# chmod +x /etc/init.d/realserver
[root@Real_Server1 ~]# echo "/etc/init.d/realserver start" >> /etc/rc.d/rc.local

Start the script (note: if either real server is rebooted, make sure `service realserver start` runs again so the VIP is bound on lo:0; otherwise LVS forwarding fails!):
[root@Real_Server1 ~]# service realserver start
LVS-DR real server started successfully.

  

Checking Real_Server1, the VIP is now bound to the local loopback alias lo:0:
[root@Real_Server1 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 52:54:00:D1:27:75

          inet addr:182.148.15.233  Bcast:182.148.15.255  Mask:255.255.255.224

          inet6 addr: fe80::5054:ff:fed1:2775/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:309741 errors:0 dropped:0 overruns:0 frame:0

          TX packets:27993954 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:37897512 (36.1 MiB)  TX bytes:23438654329 (21.8 GiB)

  

lo        Link encap:Local Loopback

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

  

lo:0      Link encap:Local Loopback

          inet addr:182.148.15.239  Mask:255.255.255.255

          UP LOOPBACK RUNNING  MTU:65536  Metric:1
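Note the mask difference: eth0 carries a /27 (255.255.255.224) while the VIP on lo:0 uses a host mask, /32 (255.255.255.255). A small helper to convert prefix lengths to dotted masks, for sanity-checking such output (illustrative only, not part of the original setup):

```shell
#!/bin/sh
# Convert a CIDR prefix length (0-32) to a dotted-quad netmask.
prefix_to_mask() {
    p=$1 mask='' i=0
    while [ "$i" -lt 4 ]; do
        if [ "$p" -ge 8 ]; then
            o=255; p=$((p - 8))        # full octet
        else
            o=$((256 - (256 >> p)))    # partial octet: p=3 -> 224, p=0 -> 0
            p=0
        fi
        mask="$mask${mask:+.}$o"
        i=$((i + 1))
    done
    printf '%s\n' "$mask"
}

prefix_to_mask 27   # 255.255.255.224 (the eth0 mask above)
prefix_to_mask 32   # 255.255.255.255 (the VIP's host mask on lo:0)
```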

  

----------------------------------------------------------------------------------------------------------------------

4) Install Keepalived (on both LVS_Keepalived_Master and LVS_Keepalived_Backup)
[root@LVS_Keepalived_Master ~]# yum install -y openssl-devel
[root@LVS_Keepalived_Master ~]# cd /usr/local/src/
[root@LVS_Keepalived_Master src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@LVS_Keepalived_Master src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@LVS_Keepalived_Master src]# cd keepalived-1.3.5
[root@LVS_Keepalived_Master keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@LVS_Keepalived_Master keepalived-1.3.5]# make && make install

[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@LVS_Keepalived_Master keepalived-1.3.5]# mkdir /etc/keepalived/
[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@LVS_Keepalived_Master keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@LVS_Keepalived_Master keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local

[root@LVS_Keepalived_Master keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived      # add execute permission
[root@LVS_Keepalived_Master keepalived-1.3.5]# chkconfig keepalived on                   # enable at boot
[root@LVS_Keepalived_Master keepalived-1.3.5]# service keepalived start                  # start
[root@LVS_Keepalived_Master keepalived-1.3.5]# service keepalived stop                   # stop
[root@LVS_Keepalived_Master keepalived-1.3.5]# service keepalived restart                # restart

  

----------------------------------------------------------------------------------------------------------------------

5) Configure LVS+Keepalived
First enable ip_forward on both LVS_Keepalived_Master and LVS_Keepalived_Backup:
[root@LVS_Keepalived_Master ~]# echo "1" > /proc/sys/net/ipv4/ip_forward

keepalived.conf on LVS_Keepalived_Master:
[root@LVS_Keepalived_Master ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

   

global_defs {

   router_id LVS_Master

}

   

vrrp_instance VI_1 {
    state MASTER               # initial state of this instance; the actual role is decided by priority. Differs on the backup node
    interface eth0             # interface carrying the virtual IP
    virtual_router_id 51       # VRID; nodes with the same VRID form one group, and it determines the multicast MAC address
    priority 100               # priority; set to 90 on the other machine. Differs on the backup node
    advert_int 1               # advertisement interval
    authentication {
        auth_type PASS         # authentication type, PASS or AH
        auth_pass 1111         # authentication password
    }
    virtual_ipaddress {
        182.148.15.239         # VIP
    }
}

virtual_server 182.148.15.239 80 {
    delay_loop 6               # health-check polling interval
    lb_algo wrr                # weighted round robin; LVS scheduling algorithms: rr|wrr|lc|wlc|lblc|sh|dh
    lb_kind DR                 # LVS mode NAT|DR|TUN; DR requires the director to have a NIC on the same physical segment as the real servers
    #nat_mask 255.255.255.0
    persistence_timeout 50     # session persistence time (seconds)
    protocol TCP               # health-check protocol

    ## Real Server entries; 80 is the service port
    real_server 182.148.15.233 80 {
        weight 3               ## weight

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

    real_server 182.148.15.238 80 {

        weight 3

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

}
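The TCP_CHECK block above means: attempt a TCP connect to the real server's port 80 with a 3-second timeout, retry up to 3 times, and evict the server from the pool on failure. A rough stand-alone analogue of one probe, using bash's /dev/tcp (a sketch of the idea, not keepalived's actual implementation):

```shell
#!/bin/bash
# One TCP health probe: succeed only if a connect to host:port completes
# within the timeout, mirroring keepalived's connect_timeout behavior.
tcp_check() {
    local host=$1 port=$2 tmo=${3:-3}
    timeout "$tmo" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Example against a hypothetical real server:
# tcp_check 182.148.15.233 80 || echo "would remove 182.148.15.233:80 from the pool"
```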

  

Start keepalived:
[root@LVS_Keepalived_Master ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]

[root@LVS_Keepalived_Master ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff

    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0

    inet 182.148.15.239/32 scope global eth0

    inet6 fe80::5054:ff:fe68:dcb6/64 scope link

       valid_lft forever preferred_lft forever

  

Note the change on eth0: the VIP has now been brought up on the director itself.
Now inspect the LVS pool: it lists the two Real Servers with the scheduling algorithm, weights, and so on. ActiveConn is each Real Server's current count of active connections.
[root@LVS_Keepalived_Master ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  182.148.15.239:80 wrr persistent 50
  -> 182.148.15.233:80            Route   3      0          0
  -> 182.148.15.238:80            Route   3      0          0

  

-------------------------------------------------------------------------

keepalived.conf on LVS_Keepalived_Backup:
[root@LVS_Keepalived_Backup ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

   

global_defs {

   router_id LVS_Backup

}

   

vrrp_instance VI_1 {

    state BACKUP          

    interface eth0         

    virtual_router_id 51   

    priority 90           

    advert_int 1          

    authentication {

        auth_type PASS     

        auth_pass 1111     

    }

    virtual_ipaddress {

        182.148.15.239     

    }

}

   

virtual_server 182.148.15.239 80 {

    delay_loop 6          

    lb_algo wrr           

    lb_kind DR             

   

    persistence_timeout 50 

    protocol TCP         

   

    real_server 182.148.15.233 80 {

        weight 3

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

    real_server 182.148.15.238 80 {

        weight 3

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

}

  

[root@LVS_Keepalived_Backup ~]# /etc/init.d/keepalived start

Starting keepalived:                                       [  OK  ]

  

Checking on LVS_Keepalived_Backup: the VIP sits on LVS_Keepalived_Master by default, and only moves here when LVS_Keepalived_Master fails. After Master recovers, the VIP moves back again.
[root@LVS_Keepalived_Backup ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:7c:b8:f0 brd ff:ff:ff:ff:ff:ff

    inet 182.148.15.236/27 brd 182.148.15.255 scope global eth0

    inet 182.148.15.239/27 brd 182.148.15.255 scope global secondary eth0:0

    inet6 fe80::5054:ff:fe7c:b8f0/64 scope link

       valid_lft forever preferred_lft forever

[root@LVS_Keepalived_Backup ~]# ipvsadm -L -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  182.148.15.239:80 wrr persistent 50
  -> 182.148.15.233:80            Route   3      0          0
  -> 182.148.15.238:80            Route   3      0          0

3) Configuration on the two back-end Real Servers



Nginx is installed and configured on both Real Servers (installation steps omitted).
The two domains www.wangshibo.com and www.guohuihui.com are configured on the Real Servers.
Both domains must be reachable from LVS_Keepalived_Master and LVS_Keepalived_Backup:
[root@LVS_Keepalived_Master ~]# curl http://www.wangshibo.com
this is page of Real_Server2:182.148.15.238 www.wangshibo.com
[root@LVS_Keepalived_Master ~]# curl http://www.guohuihui.com
this is page of Real_Server2:182.148.15.238 www.guohuihui.com
[root@LVS_Keepalived_Backup ~]# curl http://www.wangshibo.com
this is page of Real_Server2:182.148.15.238 www.wangshibo.com
[root@LVS_Keepalived_Backup ~]# curl http://www.guohuihui.com
this is page of Real_Server2:182.148.15.238 www.guohuihui.com
Stop nginx on 182.148.15.238 (i.e. Real_Server2); requests for these domains then go to Real_Server1:
[root@Real_Server2 ~]# /usr/local/nginx/sbin/nginx -s stop
[root@Real_Server2 ~]# lsof -i:80
[root@Real_Server2 ~]#
Fetching the two domains again from LVS_Keepalived_Master and LVS_Keepalived_Backup shows the load has moved to Real_Server1:
[root@LVS_Keepalived_Master ~]# curl http://www.wangshibo.com
this is page of Real_Server1:182.148.15.233 www.wangshibo.com
[root@LVS_Keepalived_Master ~]# curl http://www.guohuihui.com
this is page of Real_Server1:182.148.15.233 www.guohuihui.com
[root@LVS_Keepalived_Backup ~]# curl http://www.wangshibo.com
this is page of Real_Server1:182.148.15.233 www.wangshibo.com
[root@LVS_Keepalived_Backup ~]# curl http://www.guohuihui.com
this is page of Real_Server1:182.148.15.233 www.guohuihui.com
Additionally, set iptables on the two Real Servers so that port 80 is open only to the VIP resource in front:
[root@Real_Server1 ~]# vim /etc/sysconfig/iptables
......
-A INPUT -s 182.148.15.239 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
[root@Real_Server1 ~]# /etc/init.d/iptables restart

4) Testing
Resolve the test domains www.wangshibo.com and www.guohuihui.com to the VIP 182.148.15.239; they are then reachable normally in a browser.



1) Test LVS (the keepalived LVS configuration above includes health checks: a failed back-end server is removed from the LVS pool automatically, and re-added automatically once it recovers)
First inspect the current LVS pool; port 80 on both Real Servers is healthy:
[root@LVS_Keepalived_Master ~]# ipvsadm -L -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  182.148.15.239:80 wrr persistent 50

  -> 182.148.15.233:80            Route   3      0          0       

  -> 182.148.15.238:80            Route   3      0          0

 

Now stop one Real Server, e.g. Real_Server2:
[root@Real_Server2 ~]# /usr/local/nginx/sbin/nginx -s stop

 

A moment later the LVS pool shows that Real_Server2 has been evicted:
[root@LVS_Keepalived_Master ~]# ipvsadm -L -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  182.148.15.239:80 wrr persistent 50

  -> 182.148.15.233:80            Route   3      0          0

 

Finally bring Real_Server2's port 80 back up; LVS adds it back into the pool:
[root@Real_Server2 ~]# /usr/local/nginx/sbin/nginx

[root@LVS_Keepalived_Master ~]# ipvsadm -L -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  182.148.15.239:80 wrr persistent 50

  -> 182.148.15.233:80            Route   3      0          0       

  -> 182.148.15.238:80            Route   3      0          0 

 

Throughout these tests, access to http://www.wangshibo.com and http://www.guohuihui.com was unaffected.

 

 

2) Test Keepalived heartbeat failover
By default the VIP resource sits on LVS_Keepalived_Master:
[root@LVS_Keepalived_Master ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff

    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0

    inet 182.148.15.239/32 scope global eth0

    inet 182.148.15.239/27 brd 182.148.15.255 scope global secondary eth0:0

    inet6 fe80::5054:ff:fe68:dcb6/64 scope link

       valid_lft forever preferred_lft forever

 

Now stop keepalived on LVS_Keepalived_Master; the VIP moves to LVS_Keepalived_Backup.
[root@LVS_Keepalived_Master ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@LVS_Keepalived_Master ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff

    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0

    inet 182.148.15.239/27 brd 182.148.15.255 scope global secondary eth0:0

    inet6 fe80::5054:ff:fe68:dcb6/64 scope link

       valid_lft forever preferred_lft forever

 

The system log on LVS_Keepalived_Master records the state change:
[root@LVS_Keepalived_Master ~]# tail -f /var/log/messages

.............

May  8 10:19:36 Haproxy_Keepalived_Master Keepalived_healthcheckers[20875]: TCP connection to [182.148.15.233]:80 failed.

May  8 10:19:39 Haproxy_Keepalived_Master Keepalived_healthcheckers[20875]: TCP connection to [182.148.15.233]:80 failed.

May  8 10:19:39 Haproxy_Keepalived_Master Keepalived_healthcheckers[20875]: Check on service [182.148.15.233]:80 failed after 1 retry.

May  8 10:19:39 Haproxy_Keepalived_Master Keepalived_healthcheckers[20875]: Removing service [182.148.15.233]:80 from VS [182.148.15.239]:80

 

[root@LVS_Keepalived_Backup ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:7c:b8:f0 brd ff:ff:ff:ff:ff:ff

    inet 182.148.15.236/27 brd 182.148.15.255 scope global eth0

    inet 182.148.15.239/32 scope global eth0

    inet 182.148.15.239/27 brd 182.148.15.255 scope global secondary eth0:0

    inet6 fe80::5054:ff:fe7c:b8f0/64 scope link

       valid_lft forever preferred_lft forever

 

 

Then start keepalived on LVS_Keepalived_Master again; the VIP moves back.
[root@LVS_Keepalived_Master ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@LVS_Keepalived_Master ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:68:dc:b6 brd ff:ff:ff:ff:ff:ff

    inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0

    inet 182.148.15.239/32 scope global eth0

    inet 182.148.15.239/27 brd 182.148.15.255 scope global secondary eth0:0

    inet6 fe80::5054:ff:fe68:dcb6/64 scope link

       valid_lft forever preferred_lft forever

 

 

The system log shows the VIP returning to LVS_Keepalived_Master (gratuitous ARPs announcing the VIP):
[root@LVS_Keepalived_Master ~]# tail -f /var/log/messages

.............

May  8 10:23:12 Haproxy_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239

May  8 10:23:12 Haproxy_Keepalived_Master Keepalived_vrrp[5863]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.148.15.239

May  8 10:23:12 Haproxy_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239

May  8 10:23:12 Haproxy_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239

May  8 10:23:12 Haproxy_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239

May  8 10:23:12 Haproxy_Keepalived_Master Keepalived_vrrp[5863]: Sending gratuitous ARP on eth0 for 182.148.15.239
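The behavior seen in both tests follows VRRP's basic rule: among the routers currently alive in a VRID group, the one with the highest priority owns the VIP, which is why the master (priority 100) reclaims it from the backup (priority 90) as soon as it returns. In miniature (an illustrative sketch, not keepalived code):

```shell
#!/bin/sh
# Pick the VIP owner from alternating NAME PRIORITY pairs of live routers.
# Simplified: real VRRP also involves preemption and advertisement timers.
elect_master() {
    best_name='' best_prio=-1
    while [ $# -ge 2 ]; do
        if [ "$2" -gt "$best_prio" ]; then
            best_name=$1 best_prio=$2
        fi
        shift 2
    done
    echo "$best_name"
}

elect_master LVS_Master 100 LVS_Backup 90   # both alive -> LVS_Master holds the VIP
elect_master LVS_Backup 90                  # master down -> LVS_Backup takes over
```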

II. LVS+Keepalived Active-Active High-Availability Deployment



Compared with the active-standby setup, active-active differs only in that:
1) The LVS load-balancing layer needs two VIPs, e.g. 182.148.15.239 and 182.148.15.235.
2) The back-end real servers must bind both VIPs on the local loopback.
3) keepalived.conf differs from the active-standby configuration above.

The active-active configuration in detail:
1) Write the LVS init scripts (on both Real_Server1 and Real_Server2; the script content is identical on both)

Since the back-end real servers must bind two VIPs on the loopback (on lo:0 and lo:1 respectively), two init scripts are needed:
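The two scripts differ only in the VIP and the loopback alias. As an aside, the per-VIP binding commands could also be generated from a list instead of duplicating the whole script; a minimal sketch (hypothetical helper, emitting the commands rather than running them so the plan can be reviewed first):

```shell
#!/bin/sh
# Print the ifconfig/route commands that bind each VIP to successive
# loopback aliases lo:0, lo:1, ... (pipe the output into sh to apply).
gen_vip_cmds() {
    i=0
    for vip in "$@"; do
        echo "/sbin/ifconfig lo:$i $vip netmask 255.255.255.255 up"
        echo "/sbin/route add -host $vip dev lo:$i"
        i=$((i + 1))
    done
}

gen_vip_cmds 182.148.15.239 182.148.15.235
```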

[root@Real_Server1 ~]# vim /etc/init.d/realserver1
#!/bin/sh
# Bind the first VIP on lo:0 and suppress ARP for it.
VIP=182.148.15.239
. /etc/rc.d/init.d/functions

case "$1" in
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

 

 

[root@Real_Server1 ~]# vim /etc/init.d/realserver2
#!/bin/sh
# Bind the second VIP on lo:1 and suppress ARP for it.
VIP=182.148.15.235
. /etc/rc.d/init.d/functions

case "$1" in
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:1 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:1
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:1 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:1 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

 

Add the scripts to start at boot:
[root@Real_Server1 ~]# chmod +x /etc/init.d/realserver1
[root@Real_Server1 ~]# chmod +x /etc/init.d/realserver2
[root@Real_Server1 ~]# echo "/etc/init.d/realserver1 start" >> /etc/rc.d/rc.local
[root@Real_Server1 ~]# echo "/etc/init.d/realserver2 start" >> /etc/rc.d/rc.local

Start the scripts:
[root@Real_Server1 ~]# service realserver1 start
LVS-DR real server started successfully.

[root@Real_Server1 ~]# service realserver2 start
LVS-DR real server started successfully.

 

Checking Real_Server1, both VIPs are now bound on the local loopback (lo:0 and lo:1):
[root@Real_Server1 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 52:54:00:D1:27:75

          inet addr:182.148.15.233  Bcast:182.148.15.255  Mask:255.255.255.224

          inet6 addr: fe80::5054:ff:fed1:2775/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:309741 errors:0 dropped:0 overruns:0 frame:0

          TX packets:27993954 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:37897512 (36.1 MiB)  TX bytes:23438654329 (21.8 GiB)

  

lo        Link encap:Local Loopback

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

  

lo:0      Link encap:Local Loopback

          inet addr:182.148.15.239  Mask:255.255.255.255

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

 

lo:1      Link encap:Local Loopback

          inet addr:182.148.15.235  Mask:255.255.255.255

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

 

 

2) keepalived.conf configuration
keepalived.conf on LVS_Keepalived_Master.
First enable ip_forward routing:
[root@LVS_Keepalived_Master ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
[root@LVS_Keepalived_Master ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

   

global_defs {

   router_id LVS_Master

}

   

vrrp_instance VI_1 {

    state MASTER             

    interface eth0           

    virtual_router_id 51     

    priority 100             

    advert_int 1             

    authentication {

        auth_type PASS       

        auth_pass 1111       

    }

    virtual_ipaddress {

        182.148.15.239       

    }

}

   

vrrp_instance VI_2 {

    state BACKUP          

    interface eth0         

    virtual_router_id 52  

    priority 90           

    advert_int 1          

    authentication {

        auth_type PASS     

        auth_pass 1111     

    }

    virtual_ipaddress {

        182.148.15.235   

    }

}

 

virtual_server 182.148.15.239 80 {

    delay_loop 6             

    lb_algo wrr              

    lb_kind DR               

    #nat_mask 255.255.255.0

    persistence_timeout 50   

    protocol TCP            

   

 

    real_server 182.148.15.233 80 {

        weight 3

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

    real_server 182.148.15.238 80 {

        weight 3

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

}

 

 

virtual_server 182.148.15.235 80 {

    delay_loop 6             

    lb_algo wrr              

    lb_kind DR               

    #nat_mask 255.255.255.0

    persistence_timeout 50   

    protocol TCP            

   

 

    real_server 182.148.15.233 80 {

        weight 3

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

    real_server 182.148.15.238 80 {

        weight 3

        TCP_CHECK {

            connect_timeout 3

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

}
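The TCP_CHECK blocks above tell keepalived to probe each real server's port 80 and, after the configured retries, remove a dead node from the IPVS pool. keepalived performs this internally; the following is only an illustrative sketch of the same logic using bash's /dev/tcp pseudo-device, with parameters mirroring connect_timeout/nb_get_retry/delay_before_retry:

```shell
# Illustrative only: mimics TCP_CHECK semantics, not keepalived's actual code.
tcp_check() {
    local host=$1 port=$2 retries=${3:-3} delay=${4:-3} conn_to=${5:-3}
    local attempt
    for ((attempt = 0; attempt <= retries; attempt++)); do
        # /dev/tcp/<host>/<port> is a bash builtin pseudo-device for TCP connects
        if timeout "$conn_to" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            return 0    # port answered: the real server stays in the pool
        fi
        sleep "$delay"
    done
    return 1            # every attempt failed: keepalived would evict the RS
}
```

For example, `tcp_check 182.148.15.233 80` approximates what the health checker does for Real_Server1 with the timeouts configured above.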


keepalived.conf on the LVS_Keepalived_Backup machine

[root@LVS_Keepalived_Backup ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id LVS_Backup        # identifies this director in syslog
}

vrrp_instance VI_1 {
    state BACKUP               # state keywords must be uppercase: BACKUP/MASTER
    interface eth0
    virtual_router_id 51       # same VRID as VI_1 on the master
    priority 90                # lower than the peer's 100 for VIP 182.148.15.239
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.239
    }
}

vrrp_instance VI_2 {
    state MASTER               # this node is the initial master for VIP 182.148.15.235
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.235
    }
}

virtual_server 182.148.15.239 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 182.148.15.233 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

virtual_server 182.148.15.235 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 182.148.15.233 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 182.148.15.238 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
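After both directors load their configs, each should hold exactly one VIP (.239 on the Master, .235 on the Backup) in normal operation. A quick way to confirm this is to look for the VIP in `ip addr` output; keepalived normally adds virtual_ipaddress entries as /32 addresses. The check is factored into a function here and demonstrated on canned sample output so the parsing itself is visible:

```shell
# Returns success if the given VIP appears in the supplied `ip addr` output.
has_vip() {
    local addr_output=$1 vip=$2
    grep -q "inet ${vip}/32" <<< "$addr_output"
}

# Illustrative sample of `ip addr show eth0` on LVS_Keepalived_Master
# while it owns VIP .239 (addresses from this article; output is a mock-up):
sample='    inet 182.148.15.237/24 brd 182.148.15.255 scope global eth0
    inet 182.148.15.239/32 scope global eth0'

has_vip "$sample" 182.148.15.239 && echo "VIP .239 is held on this director"
has_vip "$sample" 182.148.15.235 || echo "VIP .235 is on the other director"
```

On a live director, run it as `has_vip "$(ip addr show eth0)" 182.148.15.239`.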

 

 

 

The remaining verification steps are the same as for the master-slave mode above.

Original article: https://www.cnblogs.com/jians/p/12120713.html (posted 01-02)
