
Implementing NIC bonding on CentOS 7



1. NIC bonding on CentOS 7

1.1 Introduction

Bonding multiple NICs to a single IP address that serves clients provides high availability or load balancing. Simply assigning the same IP address to two NICs does not work. Bonding presents one virtual NIC for outside connections, and the underlying physical NICs are all set to the same MAC address.

1.2 Bonding modes

Mode 0 (balance-rr)
Round-robin policy: packets are transmitted sequentially over each slave interface, from the first to the last. This mode provides load balancing and fault tolerance.
Mode 1 (active-backup)
Active-backup policy: only one slave is active; another slave becomes active if, and only if, the active slave fails. To avoid confusing the switch, the bond's MAC address is externally visible on only one port.
Mode 3 (broadcast)
Broadcast policy: every packet is transmitted on all slave interfaces. This mode provides fault tolerance.

The active-backup, balance-tlb, and balance-alb modes require no special switch configuration. The other bonding modes require the switch to be configured to aggregate the links: for example, a Cisco switch needs EtherChannel for modes 0, 2, and 3, but LACP and EtherChannel for mode 4.
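Before committing to a mode, you can check which modes the local bonding driver supports and, once a bond exists, which mode it is actually running. A quick check via modinfo and sysfs (bond0 is the device name used throughout this article):

# Show the driver's own description of the supported modes
modinfo bonding | grep -w mode

# Once a bond exists, read the mode it is running (prints name and number)
cat /sys/class/net/bond0/bonding/mode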

1.3 Configuring bonding

1.3.1 The bonding module is not loaded by default and must be loaded first

[root@c2 ~]# lsmod |grep bonding
[root@c2 ~]# modprobe bonding
[root@c2 ~]# lsmod |grep bonding
bonding               152656  0 
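modprobe only loads the module for the running system; after a reboot it is gone unless something loads it again. A minimal way to make it persistent, using systemd's standard modules-load.d mechanism (the file name bonding.conf is arbitrary):

# Have systemd load the bonding module at every boot
echo "bonding" > /etc/modules-load.d/bonding.conf

In practice, activating an ifcfg file with TYPE=Bond usually pulls the module in automatically as well, so this is a belt-and-braces step.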

1.3.2 Create a configuration file for the bond interface

[root@c2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0 
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=10.0.1.243
PREFIX=24
GATEWAY=10.0.1.254
DNS1=202.96.128.166
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=0 miimon=100"
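BONDING_OPTS is handed straight to the bonding driver, so the mode can also be written by name, and further driver options can be appended. As an illustration only, an active-backup variant of the line above (primary= marks the preferred slave and applies to mode 1, not mode 0):

# Hypothetical active-backup variant of the BONDING_OPTS line
BONDING_OPTS="mode=active-backup miimon=100 primary=eth0"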

1.3.3 Configuration files for eth0 and eth1

[root@c2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Generated by dracut initrd
NAME="eth0"
DEVICE="eth0"
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
[root@c2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Generated by dracut initrd
NAME="eth1"
DEVICE="eth1"
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
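For reference, the same master and slave definitions can also be created with nmcli instead of hand-written files; NetworkManager then generates equivalent ifcfg files itself. A sketch, assuming a reasonably recent NetworkManager build (the con-name values bond0, bond0-eth0, and bond0-eth1 are arbitrary; the IP values mirror the files above):

# Create the bond master with mode 0 and a 100 ms MII monitor interval
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=balance-rr,miimon=100" \
    ipv4.method manual ipv4.addresses 10.0.1.243/24 \
    ipv4.gateway 10.0.1.254 ipv4.dns 202.96.128.166

# Enslave the two physical NICs
nmcli connection add type bond-slave con-name bond0-eth0 ifname eth0 master bond0
nmcli connection add type bond-slave con-name bond0-eth1 ifname eth1 master bond0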

1.4 Reload the network configuration

[root@c2 ~]# nmcli connection reload        
[root@c2 ~]# systemctl restart network.service
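Before moving on, it is worth confirming that the bond actually came up and both slaves attached (a quick verification, not part of the original article):

# eth0 and eth1 should show as connected slaves, bond0 as an active master
nmcli device status

# The bond should carry the IP address; the slaves should carry none
ip addr show bond0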

1.5 Check the bonding status

[root@c2 ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ba:03:9e
Slave queue ID: 0
[root@c2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.243/24 brd 10.0.1.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feba:394/64 scope link 
       valid_lft forever preferred_lft forever

1.6 Testing

1.6.1 Start a continuous ping from another server

[root@c1 ~]# ping -c 1000 c2
PING c2 (10.0.1.243) 56(84) bytes of data.
64 bytes from c2 (10.0.1.243): icmp_seq=1 ttl=64 time=0.232 ms
64 bytes from c2 (10.0.1.243): icmp_seq=2 ttl=64 time=0.215 ms
64 bytes from c2 (10.0.1.243): icmp_seq=3 ttl=64 time=0.255 ms
64 bytes from c2 (10.0.1.243): icmp_seq=4 ttl=64 time=0.255 ms
64 bytes from c2 (10.0.1.243): icmp_seq=5 ttl=64 time=0.249 ms
64 bytes from c2 (10.0.1.243): icmp_seq=6 ttl=64 time=0.236 ms
........
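While the ping runs, it helps to watch the bond status live in a second terminal on c2, so the slave failover is visible the moment a link drops (a helper command, not part of the original capture):

# Refresh the bonding status once per second during the test
watch -n 1 cat /proc/net/bonding/bond0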

1.6.2 Take the eth1 interface down

[root@c2 ~]# nmcli connection down eth1
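nmcli connection down deactivates the whole connection profile. To simulate something closer to a pulled cable, the link itself can be downed instead, which is what the MII monitor actually watches (an alternative, not what the original test used):

# Administratively drop the link; miimon=100 should detect it within ~100 ms
ip link set eth1 down

# Restore the link after the test
ip link set eth1 up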

1.6.3 Result

From c1 (10.0.1.242) icmp_seq=255 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=256 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=257 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=258 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=259 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=260 Destination Host Unreachable
64 bytes from c2 (10.0.1.243): icmp_seq=302 ttl=64 time=0.243 ms
64 bytes from c2 (10.0.1.243): icmp_seq=303 ttl=64 time=0.234 ms
64 bytes from c2 (10.0.1.243): icmp_seq=304 ttl=64 time=0.210 ms
64 bytes from c2 (10.0.1.243): icmp_seq=305 ttl=64 time=0.213 ms
64 bytes from c2 (10.0.1.243): icmp_seq=306 ttl=64 time=0.236 ms
64 bytes from c2 (10.0.1.243): icmp_seq=307 ttl=64 time=0.291 ms

Note: with mode 0 (load balancing), failover after a NIC failure is relatively slow; with mode 1 (active-backup), failover is much faster. To compare, change BONDING_OPTS="mode=1 miimon=100" in ifcfg-bond0 and repeat the test, as sketched below.
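A minimal sketch of that mode-1 test (the sed edit is just one way to change the line; verify the running mode afterwards):

# Switch the bond from balance-rr to active-backup in the ifcfg file
sed -i 's/mode=0/mode=1/' /etc/sysconfig/network-scripts/ifcfg-bond0

# Apply the change and confirm the running mode
systemctl restart network.service
grep "Bonding Mode" /proc/net/bonding/bond0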



Original article: https://blog.51cto.com/rickzhu/2496550
