
DRBD + NFS + Heartbeat High Availability



DRBD is a software-based, network-based block-level replication storage solution.

DRBD replicates data over an IP network, so using DRBD as the shared storage in a cluster requires no extra hardware investment and can save a lot of cost.

 

 

NFS1

         IP1: 10.10.10.166     NIC used for heartbeat and data replication; do not configure a gateway, just add a host route

                            Add the route with route add -host <peer IP> dev eth0 and put it in rc.local (see the sketch after this list)

         VIP: 10.10.10.50     (the floating IP shared by both nodes and managed by heartbeat; the ifcfg-eth0:0 and haresources files below both use this address)

         DRBD needs at least two partitions carved out separately for it

NFS2

         IP1: 10.10.10.167

         VIP: 10.10.10.50
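A minimal sketch of that host route on nfs1, assuming the peer's heartbeat address is 10.10.10.167 on eth0 (swap the addresses on nfs2):

route add -host 10.10.10.167 dev eth0
echo "route add -host 10.10.10.167 dev eth0" >> /etc/rc.local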

Configure the hostname, IP addresses, and /etc/hosts file on both nodes; disable SELinux and iptables.
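A hedged sketch of that preparation on nfs1 (CentOS 6 style commands; the hostnames and IPs are the ones used in this article):

hostname nfs1
sed -i 's/^HOSTNAME=.*/HOSTNAME=nfs1/' /etc/sysconfig/network
cat >> /etc/hosts <<EOF
10.10.10.166 nfs1
10.10.10.167 nfs2
EOF
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
service iptables stop
chkconfig iptables off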

 

 

 

Install the DRBD repository (ELRepo)

CentOS 7

yum install glibc* -y

# rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

 

CentOS 6

rpm -Uvh elrepo-release-6-6.el6.elrepo.noarch.rpm

or

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

Configure the Aliyun yum mirror

         mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

        

         CentOS 6

 

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo

 

CentOS 7

 

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

 

yum makecache

 

Configure DRBD

         Partition the disk on each of the two hosts

parted -s /dev/sdb mklabel gpt                                             # convert the partition table to GPT
parted -s /dev/sdb mkpart primary 0% 80%
parted -s /dev/sdb mkpart primary 81% 100%

Print the partition layout

[root@mfsmaster ~]# parted /dev/sdb p

Model: VMware, VMware Virtual S (scsi)

Disk /dev/sdb: 21.5GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

 

Number  Start   End     Size    File system  Name     Flags

 1      1049kB  17.2GB  17.2GB               primary

 2      17.4GB  21.5GB  4079MB               primary

        

 

 

Install DRBD

 

Create two partitions: sdb1 for data synchronization and sdb2 for logs.

fdisk /dev/sdb

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1): (press Enter)
Last cylinder, +cylinders or +size{K,M,G} (1305-2610, default 2610): +10G
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p

Repeat the remaining prompts once more to create sdb2, then enter w to write the partition table.

 

Upgrade the kernel: yum install kernel kernel-devel kernel-headers -y

        

         yum install kmod-drbd83 drbd83 -y

        

         Load the DRBD module into the kernel

                   modprobe drbd

                   If loading fails, run depmod and then reboot

                   echo "modprobe drbd > /dev/null 2>&1" > /etc/sysconfig/modules/drbd.modules

         Verify the module is loaded

                   lsmod | grep -i drbd

         Check where drbd.ko was installed

                   modprobe -l | grep -i drbd

         After installation the DRBD tools (drbdadm, drbdsetup) are located under /sbin/
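Note that on CentOS 6 the scripts under /etc/sysconfig/modules/ are only executed at boot if they are executable, so you will most likely also need:

chmod +x /etc/sysconfig/modules/drbd.modules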

 

Configure drbd

vim /etc/drbd.conf

#include "drbd.d/global_common.conf";

#include "drbd.d/*.res";

global {

        usage-count no;

}

common {

        syncer { rate 200M; }

}

resource nfsha {

        protocol C;

                   startup {

                 wfc-timeout 120;

                 degr-wfc-timeout 120;

        }

 

        net {

                cram-hmac-alg "sha1";

                shared-secret "nfs-ha";

        }

        

         disk {

                on-io-error detach;

                fencing resource-only;

        }

        device /dev/drbd0;

        on nfs1 {

                disk /dev/sdb1;

                address 10.10.10.166:7788;

                meta-disk internal;

        }

        on nfs2 {

                disk /dev/sdb1;

                address 10.10.10.167:7788;

                meta-disk internal;

        }

 

}

        

 

Explanation of the configuration:

global {

        usage-count no;  # whether to allow DRBD usage statistics to be reported upstream

}

common {

        syncer { rate 200M; }                  // maximum synchronization rate between the primary and secondary nodes (200M = 200 MB/s)

                  

}

resource r0 {                                                   // resource name

        protocol C;                                             // replication protocol (C = fully synchronous)

 

        handlers {

                # These are EXAMPLE handlers only.

                # They may have severe implications,

                # like hard resetting the node under certain circumstances.

                # Be careful when chosing your poison.

 

                 pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                 pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                 local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";

                 split-brain "/usr/lib/drbd/notify-split-brain.sh root";

                 out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";

                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";

                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;

        }

 

        startup {

                 wfc-timeout 120;

                 degr-wfc-timeout 120;

        }

 

        net {

                cram-hmac-alg "sha1";   // authentication method and shared secret used between the DRBD peers during sync

                shared-secret "nfsha";

        }

        

         disk {                                // use fencing to make sure no failover happens while the data is not in sync

                on-io-error detach;

                fencing resource-only;

        }

        device /dev/drbd0;

        on master-drbd {                                                                       // each host section starts with "on" followed by the hostname (must match uname -n)

                disk /dev/sdb1;                                                                 // the disk partition backing drbd0 is /dev/sdb1

                address 192.168.100.10:7788;                          // this node's IP address and DRBD listening port

                meta-disk internal;                                                // how DRBD stores its metadata (internal = on the same device)

        }

        on slave-drbd {

                disk /dev/sdb1;

                address 192.168.100.20:7788;

                meta-disk internal;

        }

 

}

 

 

Copy the configuration to the other host
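For example (using the nfs2 hostname from this setup):

scp /etc/drbd.conf nfs2:/etc/drbd.conf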

 

Start DRBD (on both nodes at the same time)

         First create the metadata block that DRBD uses to record its state

                   On each host run drbdadm create-md nfsha (or drbdadm create-md all)

                   Start DRBD: service drbd start

         Check the node status

                   cat /proc/drbd

                   version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37

 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5242684

        

         ro is the role; on first start both nodes are Secondary (standby)

         ds is the disk state; Inconsistent means the data on the two disks is not yet consistent

         ns is the amount of data sent over the network

         dw is the amount of data written to disk

         dr is the amount of data read from disk

 

Promote the primary node

         drbdsetup /dev/drbd0 primary -o

         drbdadm primary nfsha (or drbdadm primary all)

Demote to secondary: drbdadm secondary nfsha

Check the status again after promotion

         version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37

 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----

    ns:4812800 nr:0 dw:0 dr:4813472 al:0 bm:293 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:429884

         [=================>..] sync'ed: 91.9% (416/5116)M

         finish: 0:00:10 speed: 40,304 (38,812) K/sec

        

sync'ed: synchronization progress

Wait a while and check again (the initial sync can take a long time)

         version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37

 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:5242684 nr:0 dw:0 dr:5243356 al:0 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

 

When ds shows UpToDate/UpToDate, the synchronization has completed successfully.

 

Mount the DRBD device

         mount can only be used on the primary device, so only the primary can be formatted and mounted.

         To mount on the secondary, first unmount on the primary, demote it, promote the secondary to primary, and then mount.

 

Format the device on the primary

         mkfs.ext4 /dev/drbd0

         tune2fs -c -1 /dev/drbd0        # disable the periodic mount-count fsck

Mount it

mkdir /data

         mount /dev/drbd0 /data

         Disable DRBD from starting automatically at boot
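With SysV init on CentOS 6 this is typically:

chkconfig drbd off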

Test

          dd if=/dev/zero of=/data/test.file bs=100M count=2

Check whether the standby node has the data

         On the primary:

          umount /data

          drbdadm secondary all

         Promote the standby node to primary:

          drbdadm primary nfsha

          mount /dev/drbd0 /data

          After mounting, check whether test.file is there
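For example, to confirm the file replicated:

ls -lh /data/test.file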

          

          

 

 

Install Heartbeat

Configure the VIP

cd /etc/sysconfig/network-scripts/

cp ifcfg-eth0 ifcfg-eth0:0

 

DEVICE=eth0:0

ONBOOT=yes

IPADDR=10.10.10.50

NETMASK=255.255.255.255

 

yum install pam-devel -y

yum install python-devel -y

yum install gcc-c++ -y

yum install glib* -y

yum install libxslt* -y

yum install tkinter* -y

yum install elfutils* -y

yum install lm_sensors* -y

yum install perl-Compress* perl-libwww* perl-HTML* perl-XML* perl-Net* perl-IO* perl-Digest* -y

yum install bzip2* -y

yum install ncurses* -y

yum install imake* -y

yum install autoconf* -y

yum install flex -y

yum install beecrypt* -y

yum install net-snmp* -y

yum install perl-LDAP -y

yum install perl-Parse-* perl-Mail-DKIM* -y

yum install libnet* -y

yum install openssl openssl-devel -y

 

tar xf libnet-1.1.2.1.tar.gz

cd libnet

./configure

make && make install

cd ..

tar xf heartbeat-2.0.7.tar.gz

cd heartbeat-2.0.7

./ConfigureMe configure --enable-fatal-warnings=no --disable-swig --disable-snmp-subagent

./ConfigureMe make --enable-fatal-warnings=no || gmake

make install
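If make install complains about a missing hacluster user or haclient group (a common requirement when building heartbeat 2.x from source; treat this as an assumption and verify against your build output), create them and re-run make install:

groupadd haclient
useradd -g haclient hacluster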

 

 

cd /usr/share/doc/heartbeat-2.0.7/

cp ha.cf haresources authkeys /etc/ha.d/

cd /etc/ha.d/

 

 

Edit the heartbeat configuration files

(nfs1)

Edit ha.cf and add the following:

 

 

# vi /etc/ha.d/ha.cf

logfile         /var/log/ha-log

logfacility     local0

keepalive       2

deadtime        5

ucast           eth0 10.10.10.167    # unicast heartbeat to the peer's NIC and IP

auto_failback   off

node           nfs1 nfs2

 

(nfs2)

Edit ha.cf and add the following:

 

# vi /etc/ha.d/ha.cf

logfile         /var/log/ha-log

logfacility     local0

keepalive       2

deadtime        5

ucast           eth0 10.10.10.166

auto_failback   off

node            nfs1 nfs2

 

Edit the two-node authentication file authkeys and add the following (nfs1, nfs2):

 

 

# vi /etc/ha.d/authkeys

auth 1

1 crc

Give the auth file 600 permissions:

 

 

# chmod 600 /etc/ha.d/authkeys

Edit the cluster resource file (nfs1, nfs2):

 

 

# vi /etc/ha.d/haresources

nfs1 IPaddr::10.10.10.50/24/eth0 drbddisk::nfsha Filesystem::/dev/drbd0::/data::ext4 killnfsd

The IP here is the virtual IP; note the NIC (eth0), the resource name (nfsha), and the mount point (/data).

Note: the IPaddr, Filesystem and similar scripts referenced in this file live in /etc/ha.d/resource.d/. You can also place service start scripts there (for example mysql or www) and add the script name to /etc/ha.d/haresources so the script is started along with heartbeat (see the hypothetical sketch below).

 

IPaddr::10.10.10.50/24/eth0: use the IPaddr script to configure the floating virtual IP that serves clients

drbddisk::nfsha: use the drbddisk script to promote/demote the DRBD resource on the active and standby nodes

Filesystem::/dev/drbd0::/data::ext4: use the Filesystem script to mount and unmount the disk
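As a purely hypothetical illustration of the note above, an extra resource script (the name mywww and the httpd service are placeholders, not part of this setup) could look like this; place it in /etc/ha.d/resource.d/, chmod 755 it, and append mywww to the haresources line:

#!/bin/bash
# /etc/ha.d/resource.d/mywww -- hypothetical example resource script for heartbeat haresources mode
case "$1" in
    start)  /etc/init.d/httpd start ;;
    stop)   /etc/init.d/httpd stop ;;
    status) /etc/init.d/httpd status ;;
esac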

 

Edit the killnfsd script, used to restart the NFS service (nfs1, nfs2):

 

# vi /etc/ha.d/resource.d/killnfsd

killall -9 nfsd; /etc/init.d/nfs restart;exit 0

Grant execute permission (755):

 

 

chmod 755 /etc/ha.d/resource.d/killnfsd

 

Create the DRBD resource script drbddisk (nfs1, nfs2):

Edit drbddisk and add the following script:

 

 

# vi /etc/ha.d/resource.d/drbddisk

 

#!/bin/bash

#

# This script is intended to be used as resource script by heartbeat

#

# Copyright 2003-2008 LINBIT Information Technologies

# Philipp Reisner, Lars Ellenberg

#

###

 

DEFAULTFILE="/etc/default/drbd"

DRBDADM="/sbin/drbdadm"

 

if [ -f $DEFAULTFILE ]; then

 . $DEFAULTFILE

fi

 

if [ "$#" -eq 2 ]; then

 RES="$1"

 CMD="$2"

else

 RES="all"

 CMD="$1"

fi

 

## EXIT CODES

# since this is a "legacy heartbeat R1 resource agent" script,

# exit codes actually do not matter that much as long as we conform to

#  http://wiki.linux-ha.org/HeartbeatResourceAgent

# but it does not hurt to conform to lsb init-script exit codes,

# where we can.

#  http://refspecs.linux-foundation.org/LSB_3.1.0/

#LSB-Core-generic/LSB-Core-generic/iniscrptact.html

####

 

drbd_set_role_from_proc_drbd()

{

local out

if ! test -e /proc/drbd; then

ROLE="Unconfigured"

return

fi

 

dev=$( $DRBDADM sh-dev $RES )

minor=${dev#/dev/drbd}

if [[ $minor = *[!0-9]* ]] ; then

# sh-minor is only supported since drbd 8.3.1

minor=$( $DRBDADM sh-minor $RES )

fi

if [[ -z $minor ]] || [[ $minor = *[!0-9]* ]] ; then

ROLE=Unknown

return

fi

 

if out=$(sed -ne "/^ *$minor: cs:/ { s/:/ /g; p; q; }" /proc/drbd); then

set -- $out

ROLE=${5%/**}

: ${ROLE:=Unconfigured} # if it does not show up

else

ROLE=Unknown

fi

}

 

case "$CMD" in

   start)

# try several times, in case heartbeat deadtime

# was smaller than drbd ping time

try=6

while true; do

$DRBDADM primary $RES && break

let "--try" || exit 1 # LSB generic error

sleep 1

done

;;

   stop)

# heartbeat (haresources mode) will retry failed stop

# for a number of times in addition to this internal retry.

try=3

while true; do

$DRBDADM secondary $RES && break

# We used to lie here, and pretend success for anything != 11,

# to avoid the reboot on failed stop recovery for "simple

# config errors" and such. But that is incorrect.

# Don't lie to your cluster manager.

# And don't do config errors...

let "--try" || exit 1 # LSB generic error

sleep 1

done

;;

   status)

if [ "$RES" = "all" ]; then

   echo "A resource name is required for status inquiries."

   exit 10

fi

ST=$( $DRBDADM role $RES )

ROLE=${ST%/**}

case $ROLE in

Primary|Secondary|Unconfigured)

# expected

;;

*)

# unexpected. whatever...

# If we are unsure about the state of a resource, we need to

# report it as possibly running, so heartbeat can, after failed

# stop, do a recovery by reboot.

# drbdsetup may fail for obscure reasons, e.g. if /var/lock/ is

# suddenly readonly.  So we retry by parsing /proc/drbd.

drbd_set_role_from_proc_drbd

esac

case $ROLE in

Primary)

echo "running (Primary)"

exit 0 # LSB status "service is OK"

;;

Secondary|Unconfigured)

echo "stopped ($ROLE)"

exit 3 # LSB status "service is not running"

;;

*)

# NOTE the "running" in below message.

# this is a "heartbeat" resource script,

# the exit code is _ignored_.

echo "cannot determine status, may be running ($ROLE)"

exit 4 #  LSB status "service status is unknown"

;;

esac

;;

   *)

echo "Usage: drbddisk [resource] {start|stop|status}"

exit 1

;;

esac

 

exit 0

Grant execute permission (755):

 

 

 chmod 755 /etc/ha.d/resource.d/drbddisk

Start the Heartbeat service

 

Start the Heartbeat service on both nodes, starting with nfs1 (nfs1, nfs2):

 

 

 service heartbeat start

If other machines can now ping the virtual IP 10.10.10.50, the configuration is working.

 

Configure NFS (nfs1, nfs2)

Install NFS: yum install nfs-utils rpcbind

 

Edit the exports configuration file and add:

 

 

# vi /etc/exports

/data        *(rw,no_root_squash)
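Once the NFS service is running (it is restarted below), the export can be refreshed and verified with:

exportfs -rv
showmount -e localhost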

Restart the NFS service:

 

 

 

 

 

 service rpcbind restart

 service nfs restart

 chkconfig rpcbind on

 chkconfig nfs off

Note: NFS is deliberately not set to start at boot, because the /etc/ha.d/resource.d/killnfsd script controls starting NFS.

 

Testing high availability

 

1. Normal hot-standby failover

Mount the NFS share on a client

 

 

# mount -t nfs 10.10.10.50:/data /tmp

Stop the heartbeat service on the primary node nfs1 to simulate a failure; the standby nfs2 takes over immediately and seamlessly, and reads and writes on the client's mounted NFS share continue to work.
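A minimal set of checks for this test (the commands are standard; the expected values follow this article's setup):

# On nfs1:
service heartbeat stop
# On nfs2:
ifconfig | grep 10.10.10.50      # the VIP should now be bound on nfs2
cat /proc/drbd                   # ro should now show Primary/...
df -h /data                      # /dev/drbd0 should be mounted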

 

DRBD status on the standby nfs2 at this point:

 

 

# service drbd status

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:41

m:res  cs         ro                 ds                 p  mounted     fstype

0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /store      ext4

2. Abnormal crash failover

Force a crash by cutting power to nfs1 directly

 

nfs2 takes over immediately and seamlessly as well; reads and writes on the client's mounted NFS share continue to work.

 

DRBD status on nfs2 at this point:

 

# service drbd status

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:41

m:res  cs         ro                 ds                 p  mounted     fstype

0:r0   Connected  Primary/Unknown    UpToDate/DUnknown  C  /store      ext4

 

 

Do not configure NFS, heartbeat, or DRBD to start automatically at boot.

 

After the failed primary comes back, start drbd first and then heartbeat.
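For example, on the recovered node:

service drbd start
cat /proc/drbd           # wait until the resource reconnects and ds shows UpToDate/UpToDate
service heartbeat start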

 

Original article: http://www.cnblogs.com/deadly/p/7382624.html