Prerequisites:
This setup uses two test nodes, node1 and node2, with IP addresses 202.207.178.6 and 202.207.178.7 respectively, plus a management node at 202.207.178.8. The configuration below is applied to node1 and node2. DRBD has already been set up at this point and is working properly.
(To avoid interference, disable the firewall and SELinux first. For the DRBD setup itself, see http://10927734.blog.51cto.com/10917734/1867283)
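For the commands below to work, all three machines need to resolve each other's hostnames, and the management node (which appears as fsy in the prompts below) needs passwordless SSH access to both nodes. A minimal sketch of these prerequisites, assuming the hostnames above match your environment:
# vim /etc/hosts    (on all three machines)
202.207.178.6   node1
202.207.178.7   node2
202.207.178.8   fsy
[root@fsy ~]# ssh-keygen -t rsa
[root@fsy ~]# ssh-copy-id root@node1
[root@fsy ~]# ssh-copy-id root@node2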
I. Install corosync
1. Stop the DRBD service and disable it at boot
Primary node:
[root@node2 ~]# umount /mydata/
[root@node2 ~]# drbdadm secondary mydrbd
[root@node2 ~]# service drbd stop
[root@node2 ~]# chkconfig drbd off
Secondary node:
[root@node1 ~]# service drbd stop
[root@node1 ~]# chkconfig drbd off
2. Install the required packages
[root@fsy ~]# for I in {1..2}; do ssh node$I 'mkdir /root/corosync/'; scp *.rpm node$I:/root/corosync; ssh node$I 'yum -y --nogpgcheck localinstall /root/corosync/*.rpm'; done
(Copy heartbeat-3.0.4-2.el6.i686.rpm and heartbeat-libs-3.0.4-2.el6.i686.rpm into the home directory on the management node first; the loop above copies every *.rpm found there.)
[root@fsy ~]# for I in {1..2}; do ssh node$I 'yum -y install cluster-glue corosync libesmtp pacemaker pacemaker-cts'; done
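To confirm that the packages landed on both nodes, a quick optional sanity check in the same loop style:
[root@fsy ~]# for I in {1..2}; do ssh node$I 'rpm -q corosync pacemaker'; done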
3. Create the required log directory
[root@node1 corosync]# mkdir /var/log/cluster
[root@node2 ~]# mkdir /var/log/cluster
4. Configure corosync (run the following commands on node1), then try starting it
# cd /etc/corosync
# cp corosync.conf.example corosync.conf
Next, edit corosync.conf and modify the following directives:
bindnetaddr: 202.207.178.0    # network address of the subnet the nodes are on
secauth: on                   # enable secure authentication
threads: 2                    # number of threads to start
to_syslog: no                 # do not log to syslog (the default location)
Then add the following blocks, so that pacemaker starts together with corosync, and to define the user and group corosync runs as:
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
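For the log greps in step 5 below to find anything, corosync must write to /var/log/cluster/corosync.log (which is why we created that directory above). A sketch of what the logging block should look like after the edits, based on the stock example file; your copy may differ slightly:
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    debug: off
    timestamp: on
}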
Generate the authentication key used for inter-node communication (note: corosync-keygen reads from /dev/random and may pause until the system has gathered enough entropy):
# corosync-keygen
Copy corosync.conf and authkey to node2:
# scp -p corosync.conf authkey node2:/etc/corosync/
Try starting it (run the following command on node1):
# service corosync start
Note: node2 should be started from node1 using the command below; do not start corosync directly on node2:
# ssh node2 '/etc/init.d/corosync start'
5. Verify that everything is working
Check that the corosync engine started properly:
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
You should see output like:
Oct 23 00:38:06 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Oct 23 00:38:06 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'
Check that the initial membership notifications went out correctly:
# grep TOTEM /var/log/cluster/corosync.log
Expected output:
Oct 23 00:38:06 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 23 00:38:06 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 23 00:38:06 corosync [TOTEM ] The network interface [202.207.178.6] is now up.
Oct 23 00:39:35 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors occurred during startup:
# grep ERROR: /var/log/messages | grep -v unpack_resources
(unpack_resources errors are filtered out because no STONITH resources are defined yet; they can safely be ignored at this stage)
Check that pacemaker started properly:
# grep pcmk_startup /var/log/cluster/corosync.log
Expected output:
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Oct 23 00:38:06 corosync [pcmk ] Logging: Initialized pcmk_startup
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: Service: 9
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: Local hostname: node1
Use the following command to check the startup status of the cluster nodes:
# crm_mon
Last updated: Tue Oct 25 17:28:10 2016 Last change: Tue Oct 25 17:21:56 2016 by hacluster via crmd on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 0 resources configured, 2 expected votes
Online: [ node1 node2 ]
The output above shows that both nodes have started normally and the cluster is in a healthy working state.
II. Configure resources and constraints
1. Install the crmsh package
Pacemaker itself is only a resource manager; we need an interface to define and manage the resources it controls, and crmsh is such a configuration interface for pacemaker. As of pacemaker 1.1.8, crmsh became an independent project and is no longer shipped with pacemaker. It provides an interactive command-line interface for managing Pacemaker clusters, is both powerful and easy to use, and is widely deployed; pcs is a similar tool.
Add the following to a repo file under /etc/yum.repos.d/:
[ewai]
name=aaa
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
# yum clean all
# yum makecache
[root@node1 yum.repos.d]# yum install crmsh
2. Check the configuration for syntax errors and set cluster options
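The crm(live)configure# prompts below assume you have entered the crm shell's configure level first:
# crm configure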
crm(live)configure# verify
Since no STONITH device is configured yet, we can disable stonith for now with the following command:
# crm configure property stonith-enabled=false
or: crm(live)configure# property stonith-enabled=false
crm(live)configure# commit
Configure how the cluster behaves when it does not have quorum:
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
Configure resource stickiness, so that resources prefer to stay on their current node:
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
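To double-check what has been committed so far, you can dump the live configuration; the three property settings above should appear in the output:
crm(live)configure# show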
3. Configure resources
Define a resource named mysqldrbd:
(interval: how often the resource is monitored)
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
crm(live)configure# verify
Define a master/slave resource named ms_mysqldrbd:
It is a clone of mysqldrbd. master-max=1: at most one master instance exists; master-node-max=1: at most one master instance may run on a single node; clone-max=2: at most two clone instances exist in total; clone-node-max=1: at most one clone instance runs per node.
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit
4. Test
[root@node1 ~]# crm status
Last updated: Sun Oct 23 13:05:43 2016 Last change: Sun Oct 23 13:03:52 2016 by root via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Online: [ node1 node2 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node1 ]
Slaves: [ node2 ]
[root@node1 ~]# drbd-overview
0:mydrbd Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node1 ~]# crm node standby
[root@node1 ~]# crm status
Last updated: Sun Oct 23 13:06:30 2016 Last change: Sun Oct 23 13:06:25 2016 by root via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Node node1: standby
Online: [ node2 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node2 ]
Stopped: [ node1 ]
[root@node1 ~]# crm node online
[root@node1 ~]# crm status
Last updated: Sun Oct 23 13:07:00 2016 Last change: Sun Oct 23 13:06:58 2016 by root via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Online: [ node1 node2 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node2 ]
Slaves: [ node1 ]
The service is working properly!
5. Configure a filesystem resource so the DRBD device is mounted automatically, plus a colocation constraint keeping it with the master node, and an order constraint so that DRBD is promoted first and mystore starts afterwards
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
crm(live)configure# verify
crm(live)configure# commit
Test:
[root@node2 ~]# crm node standby
[root@node2 ~]# crm status
Last updated: Sun Oct 23 13:45:26 2016 Last change: Sun Oct 23 13:45:20 2016 by root via crm_attribute on node2
Stack: classic openais (with plugin)
Current DC: node2 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 3 resources configured, 2 expected votes
Node node2: standby
Online: [ node1 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node1 ]
Stopped: [ node2 ]
mystore (ocf::heartbeat:Filesystem): Started node1
[root@node1 yum.repos.d]# ls /mydata/
fsy lost+found
Everything works as expected!
III. Install MySQL (first on the primary node, then on the standby)
1. Extract the downloaded tarball to /usr/local and change into that directory
#tar xf mysql-5.5.52-linux2.6-i686.tar.gz -C /usr/local
#cd /usr/local/
2. Create a symlink to the extracted directory and change into it
#ln -sv mysql-5.5.52-linux2.6-i686 mysql
#cd mysql
3. Create the mysql group and the mysql system user
#groupadd -r -g 306 mysql
#useradd -g 306 -r -u 306 mysql
4. Make everything under the mysql directory owned by the mysql user and group
#chown -R mysql:mysql /usr/local/mysql/*
5. Create the data directory, make it owned by the mysql user and group, and remove all access for others
#mkdir /mydata/data
#chown -R mysql:mysql /mydata/data/
#chmod o-rx /mydata/data/
6. Everything is ready; initialize the database
#scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
7. After installation, tighten ownership of everything under /usr/local/mysql for security
#chown -R root:mysql /usr/local/mysql/*
8. Install the init script and disable automatic start at boot
#cp support-files/mysql.server /etc/init.d/mysqld
#chkconfig --add mysqld
#chkconfig mysqld off
9. Edit the database configuration file
#cp support-files/my-large.cnf /etc/my.cnf
# vim /etc/my.cnf, then modify and add the following:
thread_concurrency = 2    (my machine has a single CPU, so I set the thread count to 2)
datadir = /mydata/data
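For reference, a minimal sketch of how the [mysqld] section ends up looking after these edits (the port and socket values are whatever your copy of my-large.cnf shipped with; only the last two lines are our changes):
[mysqld]
port            = 3306
socket          = /tmp/mysql.sock
thread_concurrency = 2
datadir         = /mydata/data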
10. Start MySQL
# service mysqld start
# /usr/local/mysql/bin/mysql
11. Verify that it works
mysql> show databases;
mysql> CREATE DATABASE mydb;
mysql> show databases;
12. Stop the MySQL service on the primary node, fail over so the standby becomes the primary, then install MySQL there
[root@node1 mysql]# service mysqld stop
[root@node1 mysql]# crm node standby
[root@node1 mysql]# crm node online
13. Extract the downloaded tarball to /usr/local and change into that directory
#tar xf mysql-5.5.52-linux2.6-i686.tar.gz -C /usr/local
#cd /usr/local/
14. Create a symlink to the extracted directory and change into it
#ln -sv mysql-5.5.52-linux2.6-i686 mysql
#cd mysql
15. Create the mysql group and the mysql system user
#groupadd -r -g 306 mysql
#useradd -g 306 -r -u 306 mysql
16. Make everything under the mysql directory owned by the root user and mysql group
#chown -R root:mysql /usr/local/mysql/*
17. Install the init script and disable automatic start at boot
#cp support-files/mysql.server /etc/init.d/mysqld
#chkconfig --add mysqld
#chkconfig mysqld off
18. Edit the database configuration file
#cp support-files/my-large.cnf /etc/my.cnf
# vim /etc/my.cnf, then modify and add the following (same as step 9):
thread_concurrency = 2    (my machine has a single CPU, so I set the thread count to 2)
datadir = /mydata/data
19. Start MySQL
# service mysqld start
# /usr/local/mysql/bin/mysql
20. Verify that it works
mysql> show databases;
The mydb database is there!
Test successful!
IV. Configure the MySQL resource
1. Stop the MySQL service on the primary node
# service mysqld stop
2. Define the primitive resource
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify
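lsb:mysqld simply wraps the /etc/init.d/mysqld script installed in part III, so the script must exist on both nodes. A quick optional check that the cluster can see it, assuming crmsh as installed above:
# crm ra list lsb | grep mysqld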
3. Define resource constraints
Define a colocation constraint so that mysqld always runs together with mystore:
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
crm(live)configure# verify
Define an order constraint so that mystore starts first and mysqld starts afterwards:
crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
crm(live)configure# verify
crm(live)configure# commit
4. Test
1) Connect to MySQL on the primary node and create a database:
mysql> CREATE DATABASE hellodb;
mysql> show databases;
2) Switch nodes (on the primary node):
# crm node standby
# crm node online
3) On the former standby node (i.e. the new primary), test:
mysql> show databases;
The hellodb database is there!
Test successful!
With that, the drbd + corosync highly available MySQL configuration is complete!
Comments and corrections are welcome!
This article comes from the "10917734" blog; please retain this attribution: http://10927734.blog.51cto.com/10917734/1870222