Introduction to the MySQL MHA architecture:
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton of Japan's DeNA (now at Facebook) and is an excellent piece of high-availability software for failover and master promotion in MySQL HA environments. During a MySQL failover, MHA can complete the database failover automatically within 0-30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving true high availability.
The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can run on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints all other slaves to it. The whole failover process is completely transparent to the application.
During an automatic failover, MHA tries to save the binary logs from the crashed master to minimize data loss, but this is not always possible. For example, if the master suffered a hardware failure or is unreachable over SSH, MHA cannot save the binary logs and performs the failover anyway, losing the most recent data. Combining MHA with MySQL 5.5 semi-synchronous replication greatly reduces this risk: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping every node consistent.
At present MHA mainly supports a one-master, multi-slave topology. Building an MHA cluster requires at least three database servers: one master, one standby (candidate) master, and one slave. Because of this three-server minimum, Taobao modified MHA for cost reasons, and their TMHA already supports a one-master, one-slave setup. (Source: 《深入浅出MySQL(第二版)》)
Architecture diagram:
MHA's working process can be summarized in the following steps:
(1) Save the binary log events (binlog events) from the crashed master;
(2) Identify the slave with the most recent updates;
(3) Apply the differential relay log to the other slaves;
(4) Apply the binlog events saved from the master;
(5) Promote one slave to be the new master;
(6) Point the remaining slaves at the new master and resume replication.
Official project page: https://code.google.com/p/mysql-master-ha/
Lab environment: (CentOS 7.2, MySQL 5.7)
Role IP address Hostname server_id Type
Monitor host 192.168.0.2 server1 - Monitors the replication group
Master 192.168.0.3 server2 1 Writes
Candidate master 192.168.0.4 server3 2 Reads
Slave 192.168.0.5 server4 3 Reads
server3 and server4 are slaves of server2; setting up the replication is demonstrated briefly later. The master serves writes, the candidate master (actually a slave, hostname server3) serves reads, and the slave also serves reads. Once the master goes down, the candidate master is promoted to the new master and the slave is repointed to it.
1. Deploying MHA:
Install the Perl module MHA Node needs (DBD::mysql) on every node. It can be installed via yum; if the EPEL repository is not configured, install EPEL first, as shown below. (Tip: make sure the system clock is correct, otherwise you will hit all kinds of odd problems during installation.)
Preparation for installing MHA on all four servers (using server1 (192.168.0.2) as the example):
#Install the dependencies first
wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@server1 data]# rpm -ivh epel-release-latest-7.noarch.rpm
warning: epel-release-latest-7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:epel-release-7-11 ################################# [100%]
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager -y
(2) Install MHA Node on every node (the example below is server1; remember to do exactly the same on server2, server3 and server4). Both MHA Node and MHA Manager have to be downloaded from the official site.
Download: wget https://github-production-release-asset-2e65be.s3.amazonaws.com/2093258/9d78fb60-2de4-11e8-8f0c-bac507a4e54f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180906T015544Z&X-Amz-Expires=300&X-Amz-Signature=0d3c55cae6f8073b54589eeb035d17c7c82bf525a7eb0b82d7e25766781028fb&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dmha4mysql-node-0.58.tar.gz&response-content-type=application%2Foctet-stream
[root@server1 data]# rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
Preparing... ################################# [100%]
Updating / installing...
1:mha4mysql-node-0.58-0.el7.centos ################################# [100%]
After installation the following scripts are generated under /usr/local/bin:
cd /usr/local/bin/
pwd
/usr/local/bin
ll
total 40576
-r-xr-xr-x 1 root root 15498 Jan 18 11:02 apply_diff_relay_logs
-r-xr-xr-x 1 root root 4807 Jan 18 11:02 filter_mysqlbinlog
-r-xr-xr-x 1 root root 7401 Jan 18 11:02 purge_relay_logs
-r-xr-xr-x 1 root root 7263 Jan 18 11:02 save_binary_logs
Node script descriptions: (these tools are normally invoked by MHA Manager's scripts and need no manual intervention; a hand-run smoke test is sketched below)
save_binary_logs //saves and copies the master's binary logs
apply_diff_relay_logs //identifies differential relay log events and applies the difference to the other slaves
filter_mysqlbinlog //strips unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs //purges relay logs (without blocking the SQL thread)
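Although the Manager drives these tools, they can be exercised by hand. A minimal smoke test (a sketch only: the binlog directory, start file and version below are this lab's values and are assumptions for your environment), mirroring the test call MHA itself issues during masterha_check_repl:
#Hypothetical manual test: --command=test only verifies that the binlog directory and start file are readable
save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/data --start_file=mysql-bin.000004 --output_file=/tmp/save_binary_logs_test --manager_version=0.58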
2. Install MHA Manager. The Manager host also needs MHA Node installed, and MHA Manager depends on the same Perl modules (already installed above together with the Node prerequisites).
Install the MHA Manager package on the manager server, server1 (192.168.0.2):
[root@server1 data]# rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
Preparing... ################################# [100%]
Updating / installing...
1:mha4mysql-manager-0.58-0.el7.cent################################# [100%]
After installation the related scripts appear under /usr/local/bin:
pwd
/usr/local/bin
ll
total 37364
-r-xr-xr-x. 1 root root 15498 Jan 11 22:55 apply_diff_relay_logs
-r-xr-xr-x. 1 root root 4807 Jan 11 22:55 filter_mysqlbinlog
-r-xr-xr-x. 1 root root 1995 Jan 11 22:55 masterha_check_repl
-r-xr-xr-x. 1 root root 1779 Jan 11 22:55 masterha_check_ssh
-r-xr-xr-x. 1 root root 1865 Jan 11 22:55 masterha_check_status
-r-xr-xr-x. 1 root root 3201 Jan 11 22:55 masterha_conf_host
-r-xr-xr-x. 1 root root 2517 Jan 11 22:55 masterha_manager
-r-xr-xr-x. 1 root root 2165 Jan 11 22:55 masterha_master_monitor
-r-xr-xr-x. 1 root root 2373 Jan 11 22:55 masterha_master_switch
-r-xr-xr-x. 1 root root 3749 Jan 11 22:55 masterha_secondary_check
-r-xr-xr-x. 1 root root 1739 Jan 11 22:55 masterha_stop
-r-xr-xr-x. 1 root root 7401 Jan 11 22:55 purge_relay_logs
-r-xr-xr-x. 1 root root 7263 Jan 11 22:55 save_binary_logs
Copy the helper scripts to /usr/local/bin (they are included in the unpacked source tarball). This is not strictly required, because these scripts are incomplete templates that the developers expect you to adapt yourself. However, if you enable any of the corresponding parameters below without having fixed the matching script, MHA will throw errors; I was bitten badly by this.
192.168.2.131 [root scripts]$ ll
total 32
-rwxr-xr-x. 1 root root 3443 Jan 8 2012 master_ip_failover //manages the VIP during automatic failover. Not mandatory: if you use keepalived you can write your own script to manage the VIP, e.g. monitor MySQL and simply stop keepalived when MySQL is down so that the VIP floats away
-rwxr-xr-x. 1 root root 9186 Jan 8 2012 master_ip_online_change //manages the VIP during an online switchover. Not mandatory; a simple shell script of your own will also do
-rwxr-xr-x. 1 root root 11867 Jan 8 2012 power_manager //shuts down the failed host after a failover. Not mandatory
-rwxr-xr-x. 1 root root 1360 Jan 8 2012 send_report //sends an alert after a failover. Not mandatory; a simple shell script of your own will also do
#Create the scripts directory
mkdir -p /etc/mha/scripts
Configure the global configuration file
[root@server1 data]#
cat > /etc/masterha_default.cnf << EOF
[server default]
user=root
password=abc123
ssh_user=root
repl_user=repl
repl_password=mysql
ping_interval=1
#master_binlog_dir= /var/lib/mysql,/var/log/mysql
secondary_check_script=masterha_secondary_check -s 192.168.0.3 -s 192.168.0.4 -s 192.168.0.5
master_ip_failover_script="/etc/mha/scripts/master_ip_failover"
master_ip_online_change_script="/etc/mha/scripts/master_ip_online_change"
report_script="/etc/mha/scripts/send_report"
EOF
####Configure the application configuration file
cat >> /etc/mha/app1.cnf << EOF
[server default]
manager_workdir=/var/log/mha/app1
manager_log=/var/log/mha/app1/manager.log
[server1]
hostname=192.168.0.3
candidate_master=1
master_binlog_dir="/usr/local/mysql/data"
#To locate it: find / -name mysql-bin*
[server2]
hostname=192.168.0.4
candidate_master=1
master_binlog_dir="/usr/local/mysql/data"
[server3]
hostname=192.168.0.5
master_binlog_dir="/usr/local/mysql/data"
#this host will never be promoted to master
no_master=1
EOF
#####Configure the VIP
#To avoid split-brain, managing the virtual IP with a script is recommended in production rather than relying on keepalived. A manual test of the script follows the listing.
cat > /etc/mha/scripts/master_ip_failover << 'EOF'
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host,
$orig_master_ip,$orig_master_port, $new_master_host, $new_master_ip,$new_master_port
);
#Define the VIP variables
my $vip = '192.168.0.6/8';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
return 0 unless ($ssh_user);
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
EOF
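Once the script is in place it is worth exercising it by hand before letting MHA call it. A minimal check (a sketch: the host and port are this lab's values, and --command=status is the same probe masterha_check_repl issues below):
#Hypothetical manual check; it should print the "IN SCRIPT TEST====..." banner and "Checking the Status of the script.. OK"
/etc/mha/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.0.3 --orig_master_ip=192.168.0.3 --orig_master_port=3306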
#####Configure the alert e-mail script
#The mail program must have the sender information configured first (a quick delivery test follows the snippet)
cat >> /etc/mail.rc << EOF
set from=dbmonitor@vcredit.com
set smtp=smtp.vcredit.com
set smtp-auth-user=dbmonitor@vcredit.com
set
EOF
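Before relying on the alert script, confirm that mailx can actually deliver through this relay. A quick hand test (a sketch; the recipient is the address used in send_report below, and the mailx package is assumed to be installed):
#Hypothetical smoke test of the /etc/mail.rc settings
echo "MHA mail test from $(hostname)" | mail -s "MHA mail test" 888118@wsfnk.com
tail -n 20 /var/log/maillog   #check the local mail log if nothing arrives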
#This is the actual mail-sending script
vi /etc/mha/scripts/send_report
#!/bin/bash
source /root/.bash_profile
# Parse the arguments passed in by MHA
orig_master_host=`echo "$1" | awk -F = '{print $2}'`
new_master_host=`echo "$2" | awk -F = '{print $2}'`
new_slave_hosts=`echo "$3" | awk -F = '{print $2}'`
subject=`echo "$4" | awk -F = '{print $2}'`
body=`echo "$5" | awk -F = '{print $2}'`
#Recipient address
email="888118@wsfnk.com"
tac /var/log/mha/app1/manager.log | sed -n 2p | grep 'successfully' > /dev/null
if [ $? -eq 0 ]
then
messages=`echo -e "MHA $subject 主从切换成功\n master:$orig_master_host --> $new_master_host \n $body \n 当前从库:$new_slave_hosts"`
echo "$messages" | mail -s "Mysql 实例宕掉,MHA $subject 切换成功" $email >>/tmp/mailx.log 2>&1
else
messages=`echo -e "MHA $subject 主从切换失败\n master:$orig_master_host --> $new_master_host \n $body" `
echo "$messages" | mail -s ""Mysql 实例宕掉,MHA $subject 切换失败"" $email >>/tmp/mailx.log 2>&1
fi
EOF
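The awk -F = parsing above assumes MHA passes key=value style arguments. A manual invocation under that assumption (all values are made up for illustration; the subject line will read "failed" unless the manager log's last-but-one line contains "successfully", because of the tac/grep check above):
#Hypothetical manual test; the script only looks at positions $1..$5 and splits each on "="
/etc/mha/scripts/send_report --orig_master_host=192.168.0.3 --new_master_host=192.168.0.4 --new_slave_hosts=192.168.0.5 --subject=app1 --body="manual test"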
######Write the online-switchover VIP script
vi /etc/mha/scripts/master_ip_online_change
#!/bin/bash
source /root/.bash_profile
vip=`echo '192.168.0.6/8'` #Set the VIP (must match the VIP used in master_ip_failover)
key=`echo '1'`
command=`echo "$1" | awk -F = '{print $2}'`
orig_master_host=`echo "$2" | awk -F = '{print $2}'`
new_master_host=`echo "$7" | awk -F = '{print $2}'`
orig_master_ssh_user=`echo "${12}" | awk -F = '{print $2}'`
new_master_ssh_user=`echo "${13}" | awk -F = '{print $2}'`
#All servers must use the same NIC name for the VIP (eth0 in this script)
stop_vip=`echo "ssh root@$orig_master_host /usr/sbin/ifconfig eth0:$key down"`
start_vip=`echo "ssh root@$new_master_host /usr/sbin/ifconfig eth0:$key $vip"`
if [ $command = 'stop' ]
then
echo -e "\n\n\n****************************\n"
echo -e "Disabled thi VIP - $vip on old master: $orig_master_host \n"
$stop_vip
if [ $? -eq 0 ]
then
echo "Disabled the VIP successfully"
else
echo "Disabled the VIP failed"
fi
echo -e "***************************\n\n\n"
fi
if [ $command = 'start' -o $command = 'status' ]
then
echo -e "\n\n\n*************************\n"
echo -e "Enabling the VIP - $vip on new master: $new_master_host \n"
$start_vip
if [ $? -eq 0 ]
then
echo "Enabled the VIP successfully"
else
echo "Enabled the VIP failed"
fi
echo -e "***************************\n\n\n"
fi
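The script above ultimately just runs ifconfig over SSH. Before wiring it into MHA it is worth confirming those commands work by hand (a sketch, assuming root SSH trust is already in place and eth0 is the VIP interface on both masters):
#Hypothetical dry run of the VIP add/remove that master_ip_online_change performs
ssh root@192.168.0.3 "/usr/sbin/ifconfig eth0:1 192.168.0.6/8"   #bring the VIP up on the current master
ssh root@192.168.0.3 "ip addr show eth0"                         #confirm 192.168.0.6 is attached
ssh root@192.168.0.3 "/usr/sbin/ifconfig eth0:1 down"            #and remove it again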
######Make the scripts executable
chmod +x /etc/mha/scripts/master_ip_failover
chmod +x /etc/mha/scripts/master_ip_online_change
chmod +x /etc/mha/scripts/send_report
##### Verify passwordless SSH trust with masterha_check_ssh
masterha_check_ssh --conf=/etc/mha/app1.cnf
[root@server1 data]# masterha_check_ssh --conf=/etc/mha/app1.cnf
Thu Sep :: - [info] Reading default configuration from /etc/masterha_default.cnf..
Thu Sep :: - [info] Reading application default configuration from /etc/mha/app1.cnf..
Thu Sep :: - [info] Reading server configuration from /etc/mha/app1.cnf..
Thu Sep :: - [info] Starting SSH connection tests..
Thu Sep :: - [debug]
Thu Sep :: - [debug] Connecting via SSH from root@192.168.0.3(192.168.0.3:) to root@192.168.0.4(192.168.0.4:)..
Thu Sep :: - [debug] ok.
Thu Sep :: - [debug] Connecting via SSH from root@192.168.0.3(192.168.0.3:) to root@192.168.0.5(192.168.0.5:)..
Thu Sep :: - [debug] ok.
Thu Sep :: - [debug]
Thu Sep :: - [debug] Connecting via SSH from root@192.168.0.4(192.168.0.4:) to root@192.168.0.3(192.168.0.3:)..
Thu Sep :: - [debug] ok.
Thu Sep :: - [debug] Connecting via SSH from root@192.168.0.4(192.168.0.4:) to root@192.168.0.5(192.168.0.5:)..
Thu Sep :: - [debug] ok.
Thu Sep :: - [debug]
Thu Sep :: - [debug] Connecting via SSH from root@192.168.0.5(192.168.0.5:) to root@192.168.0.3(192.168.0.3:)..
Thu Sep :: - [debug] ok.
Thu Sep :: - [debug] Connecting via SSH from root@192.168.0.5(192.168.0.5:) to root@192.168.0.4(192.168.0.4:)..
Thu Sep :: - [debug] ok.
Thu Sep :: - [info] All SSH connection tests passed successfully.
Verify MySQL replication with masterha_check_repl (the output below means the check passed)
masterha_check_repl --conf=/etc/mha/app1.cnf
[root@server1 sysbench-0.4.12.14]# masterha_check_repl --conf=/etc/mha/app1.cnf
Fri Sep :: - [info] Reading default configuration from /etc/masterha_default.cnf..
Fri Sep :: - [info] Reading application default configuration from /etc/mha/app1.cnf..
Fri Sep :: - [info] Reading server configuration from /etc/mha/app1.cnf..
Fri Sep :: - [info] MHA::MasterMonitor version 0.58.
Fri Sep :: - [info] GTID failover mode =
Fri Sep :: - [info] Dead Servers:
Fri Sep :: - [info] Alive Servers:
Fri Sep :: - [info] 192.168.0.3(192.168.0.3:)
Fri Sep :: - [info] 192.168.0.4(192.168.0.4:)
Fri Sep :: - [info] 192.168.0.5(192.168.0.5:)
Fri Sep :: - [info] Alive Slaves:
Fri Sep :: - [info] 192.168.0.3(192.168.0.3:) Version=5.7.-log (oldest major version between slaves) log-bin:enabled
Fri Sep :: - [info] GTID ON
Fri Sep :: - [info] Replicating from 192.168.0.4(192.168.0.4:)
Fri Sep :: - [info] Primary candidate for the new Master (candidate_master is set)
Fri Sep :: - [info] 192.168.0.5(192.168.0.5:) Version=5.7.-log (oldest major version between slaves) log-bin:enabled
Fri Sep :: - [info] GTID ON
Fri Sep :: - [info] Replicating from 192.168.0.4(192.168.0.4:)
Fri Sep :: - [info] Not candidate for the new Master (no_master is set)
Fri Sep :: - [info] Current Alive Master: 192.168.0.4(192.168.0.4:)
Fri Sep :: - [info] Checking slave configurations..
Fri Sep :: - [info] read_only= is not set on slave 192.168.0.3(192.168.0.3:).
Fri Sep :: - [info] read_only= is not set on slave 192.168.0.5(192.168.0.5:).
Fri Sep :: - [info] Checking replication filtering settings..
Fri Sep :: - [info] binlog_do_db= , binlog_ignore_db=
Fri Sep :: - [info] Replication filtering check ok.
Fri Sep :: - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Fri Sep :: - [info] Checking SSH publickey authentication settings on the current master..
Fri Sep :: - [info] HealthCheck: SSH to 192.168.0.4 is reachable.
Fri Sep :: - [info]
192.168.0.4(192.168.0.4:) (current master)
+--192.168.0.3(192.168.0.3:)
+--192.168.0.5(192.168.0.5:) Fri Sep :: - [info] Checking replication health on 192.168.0.3..
Fri Sep :: - [info] ok.
Fri Sep :: - [info] Checking replication health on 192.168.0.5..
Fri Sep :: - [info] ok.
Fri Sep :: - [info] Checking master_ip_failover_script status:
Fri Sep :: - [info] /etc/mha/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.0.4 --orig_master_ip=192.168.0.4 --orig_master_port= IN SCRIPT TEST====/sbin/ifconfig eth0: down==/sbin/ifconfig eth0: 192.168.0.6/=== Checking the Status of the script.. OK
Fri Sep :: - [info] OK.
Fri Sep :: - [warning] shutdown_script is not defined.
Fri Sep :: - [info] Got exit code (Not master dead). MySQL Replication Health is OK.
3. Configure passwordless SSH login (key-based login is common in practice; it is best not to disable password login entirely, since doing so can cause problems)
Prerequisite: yum install openssh* (on all four servers)
On server1 192.168.0.2 (Monitor):
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:f0xJpGLFKxaPfmUEqAky86b9Zhw1EoZSQ2JPjLmJ93Q root@server1
The key's randomart image is:
+---[RSA ]----+
| o*=. oo.. |
| .B++.o o..o. |
| . O.o +o+.o. |
|. + + E.*.o.o. |
| . * . =So oo |
| . o . ...o |
| o . .. o |
| = . |
| o |
+----[SHA256]-----+
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.3
[root@server1 /]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: ssh: connect to host 192.168.0.3 port 22: Connection refused
[root@server1 /]# systemctl start sshd
[root@server1 /]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.0.3 (192.168.0.3)' can't be established.
ECDSA key fingerprint is SHA256:Vb6AWhrc/cteMSXrZNk2an+rUdSvqodePu0r5oQlCis.
ECDSA key fingerprint is MD5::bd:8c:ce:::b1:7d:e4::f1:c9:a6::d9:.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.0.3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.0.3'"
and check to make sure that only the key(s) you wanted were added.
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.4
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.5
On server2 192.168.0.3 (Master):
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.4
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.5
On server3 192.168.0.4 (slave):
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.3
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.5
On server4 192.168.0.5 (slave):
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.3
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.4
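With the keys distributed, a quick loop from the Monitor confirms every node is reachable without a password before running masterha_check_ssh (a sketch; the IP list is this lab's):
#Hypothetical check: each ssh should print the remote hostname without prompting for a password
for host in 192.168.0.3 192.168.0.4 192.168.0.5; do ssh -o BatchMode=yes -o ConnectTimeout=5 root@$host hostname || echo "passwordless login to $host NOT working"; done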
4. Set up the replication environment
Note: binlog-do-db and replicate-ignore-db must be identical on all hosts. MHA checks the filtering rules at startup and will not start monitoring or failover if they differ.
(1) Take a full backup on the Master 192.168.0.3 (server2):
mysqldump -uroot -p123456 --master-data=2 --single-transaction -R --triggers -A > all.sql
--master-data=2 records the master's binlog file and position at backup time (as a comment), --single-transaction takes a consistent snapshot, -R includes stored procedures and functions, --triggers includes triggers, and -A backs up all databases. See mysqldump --help for more details.
(2) Create the replication user on the Master 192.168.0.3 (server2):
mysql> grant replication slave on *.* to 'repl'@'%' identified by 'mysql';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
(3) Find the binlog file and position recorded at backup time (MASTER_LOG_FILE and MASTER_LOG_POS):
head -n 30 all.sql | grep 'CHANGE MASTER TO'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=245;
(4) Copy the backup to 192.168.0.4 and 192.168.0.5
192.168.2.128 [root ~]$ scp all.sql 192.168.2.129:/root/
all.sql 100% 500KB 500.5KB/s 00:00
192.168.2.128 [root ~]$ scp all.sql 192.168.2.130:/root/
all.sql
(5) Import the backup on both slaves and run the replication commands
On slave server3 192.168.0.4:
mysql -uroot -p123456 < ./all.sql
mysql> stop slave;
Query OK, 0 rows affected (0.01 sec)
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.3',MASTER_USER='repl', MASTER_PASSWORD='mysql',MASTER_LOG_FILE='mysql-bin.000004',MASTER_LOG_POS=245;
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.3
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 472
Relay_Log_File: localhost-relay-bin.000002
Relay_Log_Pos: 480
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
On slave server4 192.168.0.5, import the backup and start replication the same way:
mysql -uroot -p123456 < ./all.sql
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.3',MASTER_USER='repl', MASTER_PASSWORD='mysql',MASTER_LOG_FILE='mysql-bin.000004',MASTER_LOG_POS=245;
Query OK, 0 rows affected (0.04 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.3
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 472
Relay_Log_File: localhost-relay-bin.000002
Relay_Log_Pos: 480
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
(6) Set read_only on both slaves (the slaves serve reads; it is set at runtime rather than in the config file because either slave may be promoted to master at any time)
mysql -uroot -p123456 -e "set global read_only=1"
mysql -uroot -p123456 -e "set global read_only=1"
(7) Create the monitoring user (on the master, i.e. server2 192.168.0.3):
mysql> grant all privileges on *.* to 'root'@'%' identified by 'abc123';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
At this point the replication cluster is fully in place; all that remains is configuring the MHA software itself.
5. Configure MHA (if you already did this in the earlier steps, nothing more is needed)
(1) Create MHA's working directory and its configuration file (a sample ships in the unpacked source under samples/conf).
mkdir -p /etc/masterha
cp mha4mysql-manager-0.56/samples/conf/app1.cnf /etc/masterha/
Edit app1.cnf; the modified file is shown below (note: the inline comments must be removed in the real file, they are kept here only for explanation):
[root@192.168.2.131 ~]# cat /etc/masterha/app1.cnf
[server default]
manager_workdir=/var/log/masterha/app1.log //manager's working directory
manager_log=/var/log/masterha/app1/manager.log //manager's log file
master_binlog_dir=/data/mysql //where the master keeps its binlogs, so MHA can find them; here it is also the MySQL data directory
master_ip_failover_script= /usr/local/bin/master_ip_failover //script used to switch the VIP during automatic failover
master_ip_online_change_script= /usr/local/bin/master_ip_online_change //script used during a manual online switchover
password=123456 //password of the MySQL root user, i.e. the monitoring user created earlier
user=root //monitoring user
ping_interval=1 //interval in seconds between pings of the master; after three failed attempts MHA triggers a failover automatically (the default interval is 3 seconds)
remote_workdir=/tmp //where the remote MySQL hosts save binlogs during a switchover
repl_password=123456 //password of the replication user
repl_user=repl //replication user name
report_script=/usr/local/bin/send_report //alert script executed after a switchover
secondary_check_script= /usr/local/bin/masterha_secondary_check -s server3 -s server2 --user=root --master_host=server2 --master_ip=192.168.2.128 --master_port=3306 //once MHA's own connection to server2 has a problem, MHA Manager also tries to reach server2 through server3
shutdown_script="" //script to power off the failed host after a failover (its main purpose is to prevent split-brain; not used here)
ssh_user=root //SSH login user
[server1]
hostname=192.168.2.128
port=3306
[server2]
hostname=192.168.2.129
port=3306
candidate_master=1 //marks this host as a candidate master: after a failover this slave is promoted even if it is not the most up-to-date slave in the cluster
check_repl_delay=0 //by default MHA will not pick a slave as the new master if it is more than 100MB of relay logs behind, because recovery would take too long; check_repl_delay=0 makes MHA ignore replication delay when choosing the new master. This is useful for a host with candidate_master=1, which must become the new master during the switchover
[server3]
hostname=192.168.2.130
port=3306
(2) Set the relay log purge mode (on every slave node):
On slave server3 192.168.0.4:
mysql -uroot -p123456 -e "set global relay_log_purge=0"
On slave server4 192.168.0.5:
mysql -uroot -p123456 -e "set global relay_log_purge=0"
Note:
During a switchover MHA relies on the relay logs on the slaves to recover the other slaves, so automatic relay log purging must be turned off and the relay logs purged manually instead. By default a slave deletes its relay logs automatically once the SQL thread has executed them, but in an MHA setup those relay logs may be needed to recover other slaves, which is why automatic deletion is disabled. Periodic manual purging has to take replication delay into account: on an ext3 filesystem deleting a large file takes a noticeable amount of time and can cause serious replication delay. To avoid that, a hard link to the relay log is created first, because on Linux deleting a large file through a hard link is fast. (The same hard-link trick is commonly used when dropping very large tables in MySQL.)
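For reference, the hard-link trick described above looks roughly like this (a generic illustration with made-up file names; purge_relay_logs automates it, so this is not something you normally run by hand):
ln /data/mysql/localhost-relay-bin.000001 /data/link.tmp   #create a hard link on the same filesystem
rm /data/mysql/localhost-relay-bin.000001                  #removing this name is instant, since another link still exists
rm /data/link.tmp                                          #remove the last link later, when the I/O cost can be afforded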
Set up a script to purge the relay logs periodically (on both slaves):
On slave server3 192.168.0.4:
[root ~]$ cat purge_relay_log.sh
#!/bin/bash
user=root
passwd=123456
port=3306
log_dir='/data/masterha/log'
work_dir='/data'
purge='/usr/local/bin/purge_relay_logs'
if [ ! -d $log_dir ]
then
mkdir $log_dir -p
fi
$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1
192.168.2.129 [root ~]$ crontab -l
0 4 * * * /bin/bash /root/purge_relay_log.sh
On slave server4 192.168.0.5 the steps are exactly the same, so they are not repeated here.
Parameter notes:
--user mysql //user name
--password mysql //password
--port //port number
--workdir //where to create the hard links to the relay logs. The default is /var/tmp; creating a hard link across filesystems fails, so point this at a directory on the same filesystem as the relay logs. After the script finishes successfully, the hard-linked relay log files are deleted
--disable_relay_log_purge //by default the script does nothing and exits if relay_log_purge=1; with this option it purges anyway: it temporarily sets relay_log_purge=1, flushes the logs so the SQL thread can drop the old relay logs, and then sets relay_log_purge back to 0 (OFF)
The purge_relay_logs script deletes relay logs without blocking the SQL thread. Let's run it by hand and see what happens:
[root ~]$ purge_relay_logs --user=root --password=123456 --port=3306 --disable_relay_log_purge --workdir=/data/
2015-01-18 12:30:51: purge_relay_logs script started.
Found relay_log.info: /data/mysql/relay-log.info
Removing hard linked relay log files localhost-relay-bin* under /data/.. done.
Current relay log file: /data/mysql/localhost-relay-bin.000002
Archiving unused relay log files (up to /data/mysql/localhost-relay-bin.000001) ...
Creating hard link for /data/mysql/localhost-relay-bin.000001 under /data//localhost-relay-bin.000001 .. ok.
Creating hard links for unused relay log files completed.
Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
Removing hard linked relay log files localhost-relay-bin* under /data/.. done.
2015-01-18 12:30:54: All relay log purging operations succeeded.
6. Check the SSH configuration (run on the Monitor node, server1 192.168.0.2):
192.168.2.131 [root ~]$ masterha_check_ssh --conf=/etc/masterha/app1.cnf
Sun Jan 18 12:31:48 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Jan 18 12:31:48 2015 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
Sun Jan 18 12:31:48 2015 - [info] Reading server configurations from /etc/masterha/app1.cnf..
Sun Jan 18 12:31:48 2015 - [info] Starting SSH connection tests..
Sun Jan 18 12:31:49 2015 - [debug]
Sun Jan 18 12:31:48 2015 - [debug] Connecting via SSH from root@192.168.2.128(192.168.2.128:22) to root@192.168.2.129(192.168.2.129:22)..
Sun Jan 18 12:31:49 2015 - [debug] ok.
Sun Jan 18 12:31:49 2015 - [debug] Connecting via SSH from root@192.168.2.128(192.168.2.128:22) to root@192.168.2.130(192.168.2.130:22)..
Sun Jan 18 12:31:49 2015 - [debug] ok.
Sun Jan 18 12:31:50 2015 - [debug]
Sun Jan 18 12:31:49 2015 - [debug] Connecting via SSH from root@192.168.2.129(192.168.2.129:22) to root@192.168.2.128(192.168.2.128:22)..
Sun Jan 18 12:31:49 2015 - [debug] ok.
Sun Jan 18 12:31:49 2015 - [debug] Connecting via SSH from root@192.168.2.129(192.168.2.129:22) to root@192.168.2.130(192.168.2.130:22)..
Sun Jan 18 12:31:50 2015 - [debug] ok.
Sun Jan 18 12:31:50 2015 - [debug]
Sun Jan 18 12:31:49 2015 - [debug] Connecting via SSH from root@192.168.2.130(192.168.2.130:22) to root@192.168.2.128(192.168.2.128:22)..
Sun Jan 18 12:31:50 2015 - [debug] ok.
Sun Jan 18 12:31:50 2015 - [debug] Connecting via SSH from root@192.168.2.130(192.168.2.130:22) to root@192.168.2.129(192.168.2.129:22)..
Sun Jan 18 12:31:50 2015 - [debug] ok.
Sun Jan 18 12:31:50 2015 - [info] All SSH connection tests passed successfully.
As you can see, the SSH checks between all nodes pass.
7. Check the state of the whole replication setup (run on the Monitor node, server1 192.168.0.2):
192.168.2.131 [root ~]$ masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Jan 18 13:08:11 2015 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000004
Sun Jan 18 13:08:11 2015 - [info] Connecting to root@192.168.2.128(192.168.2.128)..
Creating /tmp if not exists.. ok.
Checking output directory is accessible or not..
ok.
Binlog found at /data/mysql, up to mysql-bin.000004
Sun Jan 18 13:08:11 2015 - [info] Master setting check done.
Sun Jan 18 13:08:11 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Sun Jan 18 13:08:11 2015 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user=root --slave_host=192.168.2.129 --slave_ip=192.168.2.129 --slave_port=3306 --workdir=/tmp --target_version=5.5.60-log --manager_version=0.56 --relay_log_info=/data/mysql/relay-log.info --relay_dir=/data/mysql/ --slave_pass=xxx
Sun Jan 18 13:08:11 2015 - [info] Connecting to root@192.168.2.129(192.168.2.129:22)..
Can't exec "mysqlbinlog": 没有那个文件或目录 at /usr/local/share/perl5/MHA/BinlogManager.pm line 99.
mysqlbinlog version not found!
at /usr/local/bin/apply_diff_relay_logs line 463
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln193] Slaves settings check failed!
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln372] Slave configuration failed.
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln383] Error happend on checking configurations. at /usr/local/bin/masterha_check_repl line 48
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln478] Error happened on monitoring servers.
Sun Jan 18 13:08:12 2015 - [info] Got exit code 1 (Not master dead). MySQL Replication Health is NOT OK!
192.168.2.131 [root ~]$
If you see errors like the following:
Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 99.
mysqlbinlog version not found!
Testing mysql connection and privileges..sh: mysql: command not found
mysql command failed with rc 127:0!
They can be fixed as follows (run on every node):
192.168.2.128 [root ~]$ ln -s /usr/local/mysql/bin/mysqlbinlog /usr/local/bin/mysqlbinlog
192.168.2.128 [root ~]$ ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
192.168.2.129 [root ~]$ ln -s /usr/local/mysql/bin/mysqlbinlog /usr/local/bin/mysqlbinlog
192.168.2.129 [root ~]$ ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
192.168.2.130 [root ~]$ ln -s /usr/local/mysql/bin/mysqlbinlog /usr/local/bin/mysqlbinlog
192.168.2.130 [root ~]$ ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
192.168.2.130 [root ~]$ ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
Run the check again:
192.168.2.131 [root ~]$ masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Jan 18 13:19:41 2015 - [info] Checking replication health on 192.168.2.129..
Sun Jan 18 13:19:41 2015 - [info] ok.
Sun Jan 18 13:19:41 2015 - [info] Checking replication health on 192.168.2.130..
Sun Jan 18 13:19:41 2015 - [info] ok.
Sun Jan 18 13:19:41 2015 - [info] Checking master_ip_failover_script status:
Sun Jan 18 13:19:41 2015 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.2.128 --orig_master_ip=192.168.2.128 --orig_master_port=3306
Bareword "FIXME_xxx" not allowed while "strict subs" in use at /usr/local/bin/master_ip_failover line 88.
Execution of /usr/local/bin/master_ip_failover aborted due to compilation errors.
Sun Jan 18 13:19:41 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln214] Failed to get master_ip_failover_script status with return code 255:0.
Sun Jan 18 13:19:41 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln383] Error happend on checking configurations. at /usr/local/bin/masterha_check_repl line 48
Sun Jan 18 13:19:41 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln478] Error happened on monitoring servers.
Sun Jan 18 13:19:41 2015 - [info] Got exit code 1 (Not master dead). MySQL Replication Health is NOT OK!
192.168.2.131 [root ~]$
Still an error. After quite a bit of head-scratching I found the cause: MHA supports two failover approaches, a virtual IP address or a global catalog, and it does not force you to use either; the virtual IP approach pulls in other software such as keepalived and also requires modifying the master_ip_failover script.
So for now, comment out the master_ip_failover_script= /usr/local/bin/master_ip_failover option. It will be re-enabled later, after keepalived has been introduced and the script has been fixed.
192.168.2.131 [root ~]$ grep master_ip_failover /etc/masterha/app1.cnf
#master_ip_failover_script= /usr/local/bin/master_ip_failover
Check the status again:
192.168.2.131 [root ~]$ masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Jan 18 13:23:57 2015 - [info] Slaves settings check done.
Sun Jan 18 13:23:57 2015 - [info]
192.168.2.128 (current master)
+--192.168.2.129
+--192.168.2.130
Sun Jan 18 13:23:57 2015 - [info] Checking replication health on 192.168.2.129..
Sun Jan 18 13:23:57 2015 - [info] ok.
Sun Jan 18 13:23:57 2015 - [info] Checking replication health on 192.168.2.130..
Sun Jan 18 13:23:57 2015 - [info] ok.
Sun Jan 18 13:23:57 2015 - [warning] master_ip_failover_script is not defined.
Sun Jan 18 13:23:57 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 13:23:57 2015 - [info] Got exit code 0 (Not master dead). MySQL Replication Health is OK.
192.168.2.131 [root ~]$
No obvious errors left, just two warnings, and replication is reported healthy. Time for a quick celebration ^0^
8. Check the status of MHA Manager
Check the Manager's status with the masterha_check_status script:
192.168.2.131 [root ~]$ masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
Note: when the Manager is running normally the command prints "PING_OK"; otherwise it prints "NOT_RUNNING", which means MHA monitoring is not started.
9. Start MHA Manager monitoring (on server1 192.168.2.131):
192.168.2.131 [root ~]$ mkdir -p /var/log/masterha/app1/
192.168.2.131 [root ~]$ nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
[1] 13014
192.168.2.131 [root ~]$
Startup parameter notes:
--remove_dead_master_conf //after a failover, the dead master's entry is removed from the configuration file
--manager_log //where the log is written
--ignore_last_failover //by default MHA refuses to fail over again if two crashes happen within 8 hours of each other, to avoid ping-pong failovers. After each switch MHA writes an app1.failover.complete file into its working directory (the manager_workdir configured above); while that file exists another failover is blocked unless the file is deleted manually after the first switch. --ignore_last_failover makes MHA ignore that file; it is set here for convenience.
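If you prefer not to run with --ignore_last_failover, the alternative after a failover is to delete the marker file by hand before restarting the Manager. A sketch (the marker path is an assumption derived from the manager_workdir configured above):
#Hypothetical cleanup before restarting masterha_manager without --ignore_last_failover
rm -f /var/log/masterha/app1.log/app1.failover.complete   #app1.failover.complete lives in manager_workdir
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &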
Check whether MHA Manager monitoring is working:
192.168.2.131 [root ~]$ masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:13014) is running(0:PING_OK), master:192.168.2.128
It is monitoring now, and the master host is 192.168.2.128.
10. Check the startup log (on server1 192.168.2.131):
192.168.2.131 [root ~]$ tail -n20 /var/log/masterha/app1/manager.log
Sun Jan 18 13:27:22 2015 - [info] Connecting to root@192.168.2.130(192.168.2.130:22)..
Checking slave recovery environment settings..
Opening /data/mysql/relay-log.info ... ok.
Relay log found at /data/mysql, up to localhost-relay-bin.000002
Temporary relay log file is /data/mysql/localhost-relay-bin.000002
Testing mysql connection and privileges.. done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Sun Jan 18 13:27:22 2015 - [info] Slaves settings check done.
Sun Jan 18 13:27:22 2015 - [info]
192.168.2.128 (current master)
+--192.168.2.129
+--192.168.2.130
Sun Jan 18 13:27:22 2015 - [warning] master_ip_failover_script is not defined.
Sun Jan 18 13:27:22 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 13:27:22 2015 - [info] Set master ping interval 1 seconds.
Sun Jan 18 13:27:22 2015 - [info] Set secondary check script: /usr/local/bin/masterha_secondary_check -s server3 -s server2 --user=root --master_host=server2 --master_ip=192.168.2.128 --master_port=3306
Sun Jan 18 13:27:22 2015 - [info] Starting ping health check on 192.168.2.128(192.168.2.128:3306)..
Sun Jan 18 13:27:22 2015 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
192.168.2.131 [root ~]$
The line "Ping(SELECT) succeeded, waiting until MySQL doesn't respond.." shows that monitoring of the whole system has started.
11. Stop MHA Manager monitoring (on server1 192.168.2.131):
Stopping is simple, using the masterha_stop command. (This only demonstrates stopping; during the tests below the Manager must be running, so if you stop it, remember to start it again.)
192.168.2.131 [root ~]$ masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
[1]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1
192.168.2.131 [root ~]$
12. Configure the VIP
The VIP can be managed in two ways: with keepalived handling the floating virtual IP, or with a script that brings the virtual IP up and down itself (no keepalived or heartbeat-style software required).
First, managing the floating VIP with keepalived:
(1) Download and install the software (on both masters; strictly speaking one is the master and the other is the candidate master, which is a slave until the switchover happens). On server2 192.168.2.128:
192.168.2.128 [root ~]$ wget http://www.keepalived.org/software/keepalived-1.2.12.tar.gz
192.168.2.128 [root ~]$ tar xf keepalived-1.2.12.tar.gz
192.168.2.128 [root ~]$ cd keepalived-1.2.12
192.168.2.128 [root keepalived-1.2.12]$ ./configure --prefix=/usr/local/keepalived
192.168.2.128 [root keepalived-1.2.12]$ make && make install
192.168.2.128 [root keepalived-1.2.12]$ cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
192.168.2.128 [root keepalived-1.2.12]$ cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
192.168.2.128 [root keepalived-1.2.12]$ mkdir /etc/keepalived
192.168.2.128 [root keepalived-1.2.12]$ cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
192.168.2.128 [root keepalived-1.2.12]$ cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
server3 192.168.2.129 needs the same installation steps; only its configuration file differs, so the installation is not repeated here.
(2) Write the keepalived configuration. On the master (server2 192.168.2.128):
192.168.2.128 [root keepalived-1.2.12]$ cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
saltstack@163.com
}
notification_email_from dba@dbserver.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MySQL-HA
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 150
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.2.88
}
}
192.168.2.128 [root keepalived-1.2.12]$
router_id MySQL-HA names the keepalived group. The virtual IP 192.168.2.88 is bound to this host's eth0 interface, the state is set to BACKUP, and keepalived is put into non-preemptive mode (nopreempt); priority 150 sets the priority to 150. The configuration below differs slightly but means the same thing. (One more detail to watch: check whether the VIP should live on eth0 or eth1 on your own machines.)
On the candidate master (server3 192.168.2.129):
192.168.2.129 [root keepalived-1.2.12]$ cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
saltstack@163.com
}
notification_email_from dba@dbserver.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MySQL-HA
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 120
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.2.88
}
}
192.168.2.129 [root keepalived-1.2.12]$
(3) Start keepalived. On the master (server2 192.168.2.128), start it and watch the log:
192.168.2.128 [root keepalived-1.2.12]$ /etc/init.d/keepalived start
Starting keepalived: [ OK ]
192.168.2.128 [root keepalived-1.2.12]$ tail -f /var/log/messages
Jan 18 13:47:20 localhost Keepalived_healthcheckers[4638]: Registering Kernel netlink reflector
Jan 18 13:47:20 localhost Keepalived_healthcheckers[4638]: Registering Kernel netlink command channel
Jan 18 13:47:20 localhost Keepalived_healthcheckers[4638]: Opening file '/etc/keepalived/keepalived.conf'.
Jan 18 13:47:20 localhost Keepalived_healthcheckers[4638]: Configuration is using : 7105 Bytes
Jan 18 13:47:20 localhost Keepalived_healthcheckers[4638]: Using LinkWatch kernel netlink reflector...
Jan 18 13:47:23 localhost Keepalived_vrrp[4639]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 18 13:47:24 localhost Keepalived_vrrp[4639]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 18 13:47:24 localhost Keepalived_vrrp[4639]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 18 13:47:24 localhost Keepalived_vrrp[4639]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.88
Jan 18 13:47:24 localhost Keepalived_healthcheckers[4638]: Netlink reflector reports IP 192.168.2.88 added
Jan 18 13:47:29 localhost Keepalived_vrrp[4639]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.88
Start keepalived on the candidate master (server3 192.168.2.129):
192.168.2.129 [root keepalived-1.2.12]$ /etc/init.d/keepalived start
Starting keepalived: [ OK ]
192.168.2.129 [root keepalived-1.2.12]$
192.168.2.129 [root keepalived-1.2.12]$ tail -f /var/log/messages
Jan 18 13:52:31 localhost Keepalived_vrrp[4990]: Registering gratuitous ARP shared channel
Jan 18 13:52:31 localhost Keepalived_healthcheckers[4989]: Registering Kernel netlink command channel
Jan 18 13:52:31 localhost Keepalived_vrrp[4990]: Opening file '/etc/keepalived/keepalived.conf'.
Jan 18 13:52:31 localhost Keepalived_healthcheckers[4989]: Opening file '/etc/keepalived/keepalived.conf'.
Jan 18 13:52:31 localhost Keepalived_healthcheckers[4989]: Configuration is using : 7105 Bytes
Jan 18 13:52:31 localhost Keepalived_healthcheckers[4989]: Using LinkWatch kernel netlink reflector...
Jan 18 13:52:31 localhost Keepalived_vrrp[4990]: Configuration is using : 62850 Bytes
Jan 18 13:52:31 localhost Keepalived_vrrp[4990]: Using LinkWatch kernel netlink reflector...
Jan 18 13:52:31 localhost Keepalived_vrrp[4990]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jan 18 13:52:31 localhost Keepalived_vrrp[4990]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jan 18 13:52:34 localhost Keepalived_vrrp[4990]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 18 13:52:35 localhost Keepalived_vrrp[4990]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 18 13:52:35 localhost Keepalived_vrrp[4990]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 18 13:52:35 localhost Keepalived_vrrp[4990]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.88
Jan 18 13:52:35 localhost Keepalived_healthcheckers[4989]: Netlink reflector reports IP 192.168.2.88 added
Jan 18 13:52:40 localhost Keepalived_vrrp[4990]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.88
(4) Check the VIP binding
192.168.2.128 [root keepalived-1.2.12]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:86:dc:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.2.128/24 brd 192.168.2.255 scope global eth0
inet 192.168.2.88/32 scope global eth0
inet6 fe80::20c:29ff:fe86:dc2a/64 scope link
valid_lft forever preferred_lft forever
192.168.2.128 [root keepalived-1.2.12]$
The virtual IP 192.168.2.88 is now bound to eth0 on the master, 192.168.2.128.
From the output above, keepalived is configured correctly.
Note:
Both servers run keepalived in BACKUP state. keepalived can be arranged in two modes, master-backup and backup-backup, and they behave very differently. In master-backup mode, when the master host goes down the VIP floats to the backup, but once the original master is repaired and keepalived is started again it grabs the VIP back, even if non-preemptive mode (nopreempt) is configured. In backup-backup mode, the VIP floats to the other node when the master goes down, and when the original master and its keepalived come back they do not take the VIP away from the new master, even if their priority is higher. To reduce the number of VIP moves, the repaired old master is usually kept as the new standby.
(5) Integrate keepalived with MHA (when the MySQL process dies, MHA stops keepalived):
To bring keepalived under MHA's control, we only need to modify master_ip_failover, the script triggered during a switchover, adding the handling of keepalived for when the master goes down.
1. Edit /usr/local/bin/master_ip_failover; after the changes it looks as follows (on server1 192.168.2.131):
192.168.2.131 [root ~]$ cat /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.2.88';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip = "/etc/init.d/keepalived stop";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enables the VIP on the new master (starts keepalived there)
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
192.168.2.131 [root ~]$
Uncomment the #master_ip_failover_script= /usr/local/bin/master_ip_failover line:
192.168.2.131 [root ~]$ grep 'master_ip_failover_script' /etc/masterha/app1.cnf
master_ip_failover_script= /usr/local/bin/master_ip_failover
Run the check:
192.168.2.131 [root ~]$ masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Jan 18 14:00:43 2015 - [info] Slaves settings check done.
Sun Jan 18 14:00:43 2015 - [info]
192.168.2.128 (current master)
+--192.168.2.129
+--192.168.2.130
Sun Jan 18 14:00:43 2015 - [info] Checking replication health on 192.168.2.129..
Sun Jan 18 14:00:43 2015 - [info] ok.
Sun Jan 18 14:00:43 2015 - [info] Checking replication health on 192.168.2.130..
Sun Jan 18 14:00:43 2015 - [info] ok.
Sun Jan 18 14:00:43 2015 - [info] Checking master_ip_failover_script status:
Sun Jan 18 14:00:43 2015 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.2.128 --orig_master_ip=192.168.2.128 --orig_master_port=3306
Unmatched right curly bracket at /usr/local/bin/master_ip_failover line 76, at end of line
syntax error at /usr/local/bin/master_ip_failover line 76, near "}"
Execution of /usr/local/bin/master_ip_failover aborted due to compilation errors.
Sun Jan 18 14:00:43 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln214] Failed to get master_ip_failover_script status with return code 255:0.
Sun Jan 18 14:00:43 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln383] Error happend on checking configurations. at /usr/local/bin/masterha_check_repl line 48
Sun Jan 18 14:00:43 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln478] Error happened on monitoring servers.
Sun Jan 18 14:00:43 2015 - [info] Got exit code 1 (Not master dead). MySQL Replication Health is NOT OK!
This error has tripped up a lot of people. Since many of us don't know Perl, we copy the template from someone else's post, mangle it slightly during the paste, then tweak it by hand, and end up with all sorts of problems; that is exactly what happened to me (one line had picked up an extra # during the copy), and I fixed it by hand. Most of the time this error comes from the master_ip_failover script itself, so don't waste time suspecting that your environment is missing some package.
Run the check once more:
192.168.2.131 [root ~]$ masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Jan 18 14:02:21 2015 - [info] Slaves settings check done.
Sun Jan 18 14:02:21 2015 - [info]
192.168.2.128 (current master)
+--192.168.2.129
+--192.168.2.130
Sun Jan 18 14:02:21 2015 - [info] Checking replication health on 192.168.2.129..
Sun Jan 18 14:02:21 2015 - [info] ok.
Sun Jan 18 14:02:21 2015 - [info] Checking replication health on 192.168.2.130..
Sun Jan 18 14:02:21 2015 - [info] ok.
Sun Jan 18 14:02:21 2015 - [info] Checking master_ip_failover_script status:
Sun Jan 18 14:02:21 2015 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.2.128 --orig_master_ip=192.168.2.128 --orig_master_port=3306
IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===
Checking the Status of the script.. OK
Sun Jan 18 14:02:21 2015 - [info] OK.
Sun Jan 18 14:02:21 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 14:02:21 2015 - [info] Got exit code 0 (Not master dead). MySQL Replication Health is OK.
192.168.2.131 [root ~]$
No errors any more; time to celebrate again.
The changes to /usr/local/bin/master_ip_failover mean that when the master database fails and MHA triggers a switchover, MHA Manager stops keepalived on the old master, the VIP floats to the candidate slave, and the switchover completes. Alternatively, keepalived itself can run a script that monitors MySQL and kills the keepalived process when MySQL is down.
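A minimal sketch of that alternative, assuming you want keepalived itself to release the VIP when MySQL dies (the script name, credentials and the way it is scheduled are assumptions; it could be run from cron or from keepalived's vrrp_script mechanism):
#!/bin/bash
#Hypothetical /etc/keepalived/check_mysql.sh: if MySQL stops answering, stop keepalived so the VIP floats away
if ! mysql -uroot -p123456 -e "select 1" >/dev/null 2>&1; then
    /etc/init.d/keepalived stop
fi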
2. Now simulate the primary Master (192.168.2.128) going down:
192.168.2.128 [root keepalived-1.2.12]$ /etc/init.d/mysqld stop
Shutting down MySQL... SUCCESS!
192.168.2.128 [root keepalived-1.2.12]$
Watch the log on the management node (server1 192.168.2.131): (an error shows up)
192.168.2.131 [root ~]$ tail -f /var/log/masterha/app1/manager.log
192.168.2.128 (current master)
+--192.168.2.129
+--192.168.2.130
Sun Jan 18 13:32:37 2015 - [warning] master_ip_failover_script is not defined.
Sun Jan 18 13:32:37 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 13:32:37 2015 - [info] Set master ping interval 1 seconds.
Sun Jan 18 13:32:37 2015 - [info] Set secondary check script: /usr/local/bin/masterha_secondary_check -s server3 -s server2 --user=root --master_host=server2 --master_ip=192.168.2.128 --master_port=3306
Sun Jan 18 13:32:37 2015 - [info] Starting ping health check on 192.168.2.128(192.168.2.128:3306)..
Sun Jan 18 13:32:37 2015 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
Sun Jan 18 14:32:03 2015 - [warning] Got error on MySQL select ping: 2006 (MySQL server has gone away)
Sun Jan 18 14:32:03 2015 - [info] Executing seconary network check script: /usr/local/bin/masterha_secondary_check -s server3 -s server2 --user=root --master_host=server2 --master_ip=192.168.2.128 --master_port=3306 --user=root --master_host=192.168.2.128 --master_ip=192.168.2.128 --master_port=3306
Sun Jan 18 14:32:03 2015 - [info] Executing SSH check script: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --binlog_prefix=mysql-bin
Sun Jan 18 14:32:03 2015 - [info] HealthCheck: SSH to 192.168.2.128 is reachable.
Sun Jan 18 14:32:04 2015 - [warning] Got error on MySQL connect: 2013 (Lost connection to MySQL server at 'reading initial communication packet', system error: 111)
Sun Jan 18 14:32:04 2015 - [warning] Connection failed 1 time(s)..
Sun Jan 18 14:32:05 2015 - [warning] Got error on MySQL connect: 2013 (Lost connection to MySQL server at 'reading initial communication packet', system error: 111)
Sun Jan 18 14:32:05 2015 - [warning] Connection failed 2 time(s)..
Sun Jan 18 14:32:06 2015 - [warning] Got error on MySQL connect: 2013 (Lost connection to MySQL server at 'reading initial communication packet', system error: 111)
Sun Jan 18 14:32:06 2015 - [warning] Connection failed 3 time(s)..
ssh: Could not resolve hostname server3: Name or service not known
Monitoring server server3 is NOT reachable!
Add the hosts entries on the management server 192.168.2.131:
192.168.2.131 [root ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.128 server1
192.168.2.129 server2
192.168.2.130 server3
Check the log again:
192.168.2.131 [root ~]$ tail -f /var/log/masterha/app1/manager.log
3. Check the VIP on the former Master (192.168.2.128):
192.168.2.128 [root keepalived-1.2.12]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:86:dc:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.2.128/24 brd 192.168.2.255 scope global eth0
inet6 fe80::20c:29ff:fe86:dc2a/64 scope link
valid_lft forever preferred_lft forever
192.168.2.128 [root keepalived-1.2.12]$
The VIP is no longer on the machine that went down.
On the candidate master (server3 192.168.2.129), which is now the new master, check whether the VIP has floated over:
192.168.2.129 [root keepalived-1.2.12]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:66:95:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.129/24 brd 192.168.2.255 scope global eth0
inet 192.168.2.88/32 scope global eth0
inet6 fe80::20c:29ff:fe66:9564/64 scope link
valid_lft forever preferred_lft forever
192.168.2.129 [root keepalived-1.2.12]$
The VIP has floated over successfully.
At the end of the tail -f /var/log/masterha/app1/manager.log output you can find lines like these:
The latest slave 192.168.2.129(192.168.2.129:3306) has all relay logs for recovery.
Selected 192.168.2.129 as a new master.
192.168.2.129: OK: Applying all logs succeeded.
192.168.2.129: OK: Activated master IP address.
192.168.2.130: This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.2.130: OK: Applying all logs succeeded. Slave started, replicating from 192.168.2.129.
192.168.2.129: Resetting slave info succeeded.
Master failover to 192.168.2.129(192.168.2.129:3306) completed successfully.
Sun Jan 18 17:11:55 2015 - [info] Sending mail..
Unknown option: conf
Notice the "Sending mail.." line above: the alert e-mail went out normally (screenshot omitted here).
The mail settings have to be configured on the monitoring node, 192.168.2.131:
192.168.2.131 [root bin]$ cat /etc/masterha/app1.cnf |grep "report_script"
report_script=/usr/local/bin/send_report
send_report is shipped with the software, but as mentioned earlier these bundled scripts are rough in places, including the mail-sending logic in send_report; the settings are described above and the code is shared here:
Script code: (this script was shared by 37wan DBA 邓亚运; his blog address is given at the end of the original post)
4. On the management node, look at /etc/masterha/app1.cnf again: the [server1] section has been removed automatically (server1 192.168.2.131):
192.168.2.131 [root ~]$ cat /etc/masterha/app1.cnf
[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1.log
master_binlog_dir=/data/mysql
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=123456
ping_interval=1
remote_workdir=/tmp
repl_password=123456
repl_user=repl
report_script=/usr/local/bin/send_report
secondary_check_script=/usr/local/bin/masterha_secondary_check -s server3 -s server2 --user=root --master_host=server2 --master_ip=192.168.2.128 --master_port=3306
shutdown_script=""
ssh_user=root
user=root
[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.2.129
port=3306
[server3]
hostname=192.168.2.130
port=3306
(2) Managing the VIP with a script instead. Here the same /usr/local/bin/master_ip_failover is modified; it could equally be written in another language such as PHP (a PHP failover script is not covered here). The modified script is shown below. Note that with script-managed VIPs you have to bind the VIP on the master manually once, up front:
192.168.2.128 [root ~]$ /sbin/ifconfig eth0:1 192.168.2.88/24
192.168.2.128 [root ~]$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:86:DC:2A
inet addr:192.168.2.128 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe86:dc2a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:41643 errors:0 dropped:0 overruns:0 frame:0
TX packets:24696 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:31624443 (30.1 MiB) TX bytes:3388815 (3.2 MiB)
eth0:1 Link encap:Ethernet HWaddr 00:0C:29:86:DC:2A
inet addr:192.168.2.88 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
On the management node (server1 192.168.2.131) modify the /usr/local/bin/master_ip_failover script as follows:
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.2.88';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
(1) On the slave (192.168.2.129), stop the slave IO thread to simulate replication lag:
mysql> stop slave io_thread;
Query OK, 0 rows affected (0.03 sec)
(2) On the master (192.168.2.128), install sysbench and generate test data: a sbtest table with 100,000 rows in the sbtest database.
192.168.2.128 [root ~]$ yum install sysbench -y
192.168.2.128 [root ~]$ mysql -uroot -p123456 -e "create database sbtest;"
192.168.2.128 [root ~]$ sysbench --test=oltp --oltp-table-size=100000 --oltp-read-only=off --init-rng=on --num-threads=1 --max-requests=0 --oltp-dist-type=uniform --max-time=1800 --mysql-user=root --mysql-socket=/tmp/mysql.sock --mysql-password=123456 --db-driver=mysql --mysql-table-engine=innodb --oltp-test-mode=complex prepare
The IO thread on the other slave was not stopped, so it keeps receiving the binary logs.
(3) Start the slave IO thread again on the slave (192.168.2.129):
mysql> start slave io_thread;
Query OK, 0 rows affected (0.00 sec)
(4) Shut down MySQL on the master (192.168.2.128):
192.168.2.128 [root ~]$ /etc/init.d/mysqld stop
Shutting down MySQL.... SUCCESS!
(5) Watch the log on the management node:
192.168.2.131 [root ~]$ tail -f /var/log/masterha/app1/manager.log
(6) On the new master 192.168.2.129, check whether the data caught up, since the IO thread had been stopped before the sbtest database was even created:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sbtest |
| test |
+--------------------+
5 rows in set (0.00 sec)
mysql> use sbtest
Database changed
mysql> select count(*) from sbtest;
+----------+
| count(*) |
+----------+
| 100000 |
+----------+
1 row in set (0.03 sec)
As you can see, the data that had fallen behind has been synchronized as well.
(7) Check where the VIP has floated to:
192.168.2.129 [root keepalived-1.2.12]$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:66:95:64
inet addr:192.168.2.129 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe66:9564/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:48779 errors:0 dropped:0 overruns:0 frame:0
TX packets:31696 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:46635239 (44.4 MiB) TX bytes:3067487 (2.9 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:0C:29:66:95:64
inet addr:192.168.2.88 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
虚拟IP已经成功漂移到候选的master 192.168.2.129上了
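除了在新master上看网卡,也可以从Manager节点验证一下VIP是否真的可用(示例命令;假设root账号允许从192.168.2.131远程登录,密码沿用前文的123456):
# 确认VIP可达
ping -c 2 192.168.2.88
# 通过VIP连接MySQL,确认写入口已经落到新master上
mysql -h 192.168.2.88 -uroot -p123456 -e "select @@hostname, @@read_only;"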
在做上面这个通过脚本管理vip的实验时,发现一件很奇怪的事情:切换成功之后,我在Manager节点再去查看MHA的运行状态:
192.168.2.131 [root bin]$ masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
192.168.2.131 [root bin]$
发现MHA Manager进程已经退出了,当时还以为是哪里配置错了,一直没想明白,后来看了同学的博客才知道,官网上对这种情况有解释:masterha_manager在完成一次故障切换后会自动退出,这属于正常行为。
官方建议安装daemontools之类的进程管理工具,让masterha_manager以后台守护进程的方式持续运行,可以参考官方资料:https://code.google.com/p/mysql-master-ha/wiki/Runnning_Background
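如果暂时不想引入daemontools这类工具,也可以先像前文那样用nohup把Manager重新拉起来(示例,启动参数与前文保持一致):
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
# 确认Manager已恢复监控
masterha_check_status --conf=/etc/masterha/app1.cnf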
为了避免大家混乱,我再次把实验环境贴出来:
角色 ip地址 主机名 server_id 类型
Monitor host 192.168.2.131 server1 - 监控复制组
Master 192.168.2.128 server2 1 写入
Candidate master 192.168.2.129 server3 2 读
Slave 192.168.2.130 server4 3 读
二、手动Failover(MHA Manager必须没有运行)
当主服务器故障时,人工手动调用MHA来进行故障切换操作,具体命令如下:
先停MHA Manager:
192.168.2.131 [root ~]$ masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
[1]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 (wd: /usr/local/bin)
(wd now: ~)
192.168.2.131 [root ~]$
在Manager主机上操作如下:
192.168.2.131 [root bin]$ masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=192.168.2.128 --dead_master_port=3306 --new_master_host=192.168.2.129 --new_master_port=3306 --ignore_last_failover
--dead_master_ip=<dead_master_ip> is not set. Using 192.168.2.128.
Mon Jan 19 00:42:18 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Mon Jan 19 00:42:18 2015 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
Mon Jan 19 00:42:18 2015 - [info] Reading server configurations from /etc/masterha/app1.cnf..
Mon Jan 19 00:42:18 2015 - [info] MHA::MasterFailover version 0.56.
Mon Jan 19 00:42:18 2015 - [info] Starting master failover.
Mon Jan 19 00:42:18 2015 - [info]
Mon Jan 19 00:42:18 2015 - [info] * Phase 1: Configuration Check Phase..
Mon Jan 19 00:42:18 2015 - [info]
Mon Jan 19 00:42:19 2015 - [info] Dead Servers:
Mon Jan 19 00:42:19 2015 - [error][/usr/local/share/perl5/MHA/MasterFailover.pm, ln181] None of server is dead. Stop failover.
Mon Jan 19 00:42:19 2015 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln178] Got ERROR: at /usr/local/bin/masterha_master_switch line 53
看到报错了,报错的原因是:MHA Manager检测到没有dead的server,于是报错并结束failover。也就是说,我们要先手动关掉主库,才能正常执行手动切换:
192.168.2.128 [root ~]$ /etc/init.d/mysqld stop
Shutting down MySQL... SUCCESS!
再执行手动failover命令:
192.168.2.131 [root bin]$ masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=192.168.2.128 --dead_master_port=3306 --new_master_host=192.168.2.129 --new_master_port=3306 --ignore_last_failover
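手动failover执行完后,建议顺手验证一下切换结果(仅为检查思路的示例,账号密码沿用前文假设,且允许从Manager节点远程登录):
# 新master(192.168.2.129)应当可写
mysql -h 192.168.2.129 -uroot -p123456 -e "select @@hostname, @@read_only;"
# 剩下的slave(192.168.2.130)应当已指向新master且复制正常
mysql -h 192.168.2.130 -uroot -p123456 -e "show slave status\G" | egrep "Master_Host|Slave_IO_Running|Slave_SQL_Running"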
三、MHA的在线切换
在许多情况下,需要将现有的主服务器迁移到另外一台服务器上,比如主服务器硬件出现隐患、RAID控制卡需要重建、想把主库迁移到性能更好的服务器上等等。按常规方式维护主服务器会引起性能下降甚至停机,至少在切换期间无法写入数据;另外,直接阻塞或杀掉当前正在运行的会话,还容易造成新旧主库之间的数据不一致。MHA提供了快速切换和优雅阻塞写入的能力,整个切换过程只需要0.5~2秒,只有这段时间内数据无法写入。在很多场景下,0.5~2秒的写入阻塞是可以接受的,因此切换主服务器甚至不需要专门安排维护时间窗口。
MHA在线切换的大概过程:
(1)检测复制设置和确定当前主服务器
(2)确定新的主服务器
(3)阻塞写入到当前主服务器
(4)等待所有从服务器赶上复制
(5)授予写入到新的主服务器
(6)重新设置从服务器
注意,在线切换的时候应用架构需要考虑以下两个问题:
1.自动识别master和slave的问题(master的机器可能会切换),如果采用了vip的方式,基本可以解决这个问题。
2.负载均衡的问题(可以定义大概的读写比例,每台机器可承担的负载比例,当有机器离开集群时,需要考虑这个问题)
为了保证数据的完全一致性,并在最短的时间内完成切换,MHA的在线切换必须满足以下条件才会切换成功,否则会切换失败(切换前可以用后面给出的命令先自检一下):
(1)所有slave的IO线程都在运行
(2)所有slave的SQL线程都在运行
(3)所有slave的show slave status输出中,Seconds_Behind_Master参数小于或等于running_updates_limit秒;如果切换时不指定running_updates_limit,默认为1秒。
(4)在master端,通过show processlist输出,没有一个更新花费的时间大于running_updates_limit秒。
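切换前可以在Manager节点用类似下面的命令粗略自检一下这几个条件(仅为示例,账号密码沿用前文假设):
# 逐台确认slave的IO/SQL线程在运行、延迟足够小
for host in 192.168.2.129 192.168.2.130; do
  echo "== $host =="
  mysql -h $host -uroot -p123456 -e "show slave status\G" | egrep "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
done
# 确认当前master(192.168.2.128)上没有长时间未结束的更新语句
mysql -h 192.168.2.128 -uroot -p123456 -e "show processlist;"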
在线切换步骤如下:
在MHA Manager服务器192.168.2.131上操作,首先,停掉MHA监控:
192.168.2.131 [root ~]$ masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
[1]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 (wd: /usr/local/bin)
(wd now: ~)
192.168.2.131 [root ~]$
执行在线切换命令(下面是使用0.53版本的manager和node包时报的错):
Starting master switch from 192.168.2.128(192.168.2.128:3306) to 192.168.2.129(192.168.2.129:3306)? (yes/NO): yes
Sun Jan 18 20:06:17 2015 - [info] Checking whether 192.168.2.129(192.168.2.129:3306) is ok for the new master..
Sun Jan 18 20:06:17 2015 - [info] ok.
Sun Jan 18 20:06:17 2015 - [info] 192.168.2.128(192.168.2.128:3306): SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.
Sun Jan 18 20:06:17 2015 - [info] 192.168.2.128(192.168.2.128:3306): Resetting slave pointing to the dummy host.
Sun Jan 18 20:06:17 2015 - [info] ** Phase 1: Configuration Check Phase completed.
Sun Jan 18 20:06:17 2015 - [info]
Sun Jan 18 20:06:17 2015 - [info] * Phase 2: Rejecting updates Phase..
Sun Jan 18 20:06:17 2015 - [info]
Sun Jan 18 20:06:17 2015 - [info] Executing master ip online change script to disable write on the current master:
Sun Jan 18 20:06:17 2015 - [info] /usr/local/bin/master_ip_online_change --command=stop --orig_master_host=192.168.2.128 --orig_master_ip=192.168.2.128 --orig_master_port=3306 --new_master_host=192.168.2.129 --new_master_ip=192.168.2.129 --new_master_port=3306
Got Error: DBI connect(';host=192.168.2.129;port=3306;mysql_connect_timeout=4','',...) failed: Access denied for user 'root'@'192.168.2.131' (using password: NO) at /usr/local/share/perl5/MHA/DBHelper.pm line 181
at /usr/local/bin/master_ip_online_change line 138
Sun Jan 18 20:06:17 2015 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln178] Got ERROR: at /usr/local/bin/masterha_master_switch line 53
原因是0.53版本自带的master_ip_online_change脚本不完整,需要自己做相应的修改:脚本里new_master_password这个变量取不到值,导致在线切换失败。当时的做法是直接硬编码,把MySQL的root用户密码赋值给new_master_password。而mha4mysql-manager-0.56和mha4mysql-node-0.56版本已经不需要手动赋值了,脚本自己能把密码读出来;之前的版本读new_master_password变量时似乎总是取不到值(perl脚本我也不太懂,需要大家一起来改善,哈哈)。
下面来看来0.56版本的执行情况:
192.168.2.131 [root bin]$ masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.2.129 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
Mon Jan 19 01:51:39 2015 - [info] MHA::MasterRotate version 0.56.
Mon Jan 19 01:51:39 2015 - [info] Starting online master switch..
Mon Jan 19 01:51:39 2015 - [info]
Mon Jan 19 01:51:39 2015 - [info] * Phase 1: Configuration Check Phase..
Mon Jan 19 01:51:39 2015 - [info]
Mon Jan 19 01:51:39 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Mon Jan 19 01:51:39 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Mon Jan 19 01:51:39 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Mon Jan 19 01:51:39 2015 - [info] GTID failover mode = 0
Mon Jan 19 01:51:39 2015 - [info] Current Alive Master: 192.168.2.128(192.168.2.128:3306)
Mon Jan 19 01:51:39 2015 - [info] Alive Slaves:
Mon Jan 19 01:51:39 2015 - [info] 192.168.2.129(192.168.2.129:3306) Version=5.5.30-log (oldest major version between slaves) log-bin:enabled
Mon Jan 19 01:51:39 2015 - [info] Replicating from 192.168.2.128(192.168.2.128:3306)
Mon Jan 19 01:51:39 2015 - [info] Primary candidate for the new Master (candidate_master is set)
Mon Jan 19 01:51:39 2015 - [info] 192.168.2.130(192.168.2.130:3306) Version=5.5.25-log (oldest major version between slaves) log-bin:enabled
Mon Jan 19 01:51:39 2015 - [info] Replicating from 192.168.2.128(192.168.2.128:3306)
It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on 192.168.2.128(192.168.2.128:3306)? (YES/no): yes
Mon Jan 19 01:51:46 2015 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Mon Jan 19 01:51:46 2015 - [info] ok.
Mon Jan 19 01:51:46 2015 - [info] Checking MHA is not monitoring or doing failover..
Mon Jan 19 01:51:46 2015 - [info] Checking replication health on 192.168.2.129..
Mon Jan 19 01:51:46 2015 - [info] ok.
Mon Jan 19 01:51:46 2015 - [info] Checking replication health on 192.168.2.130..
Mon Jan 19 01:51:46 2015 - [info] ok.
Mon Jan 19 01:51:46 2015 - [info] 192.168.2.129 can be new master.
Mon Jan 19 01:51:46 2015 - [info]
From:
192.168.2.128(192.168.2.128:3306) (current master)
+--192.168.2.129(192.168.2.129:3306)
+--192.168.2.130(192.168.2.130:3306)

To:
192.168.2.129(192.168.2.129:3306) (new master)
+--192.168.2.130(192.168.2.130:3306)
+--192.168.2.128(192.168.2.128:3306)

Starting master switch from 192.168.2.128(192.168.2.128:3306) to 192.168.2.129(192.168.2.129:3306)? (yes/NO): yes
Mon Jan 19 01:51:50 2015 - [info] Checking whether 192.168.2.129(192.168.2.129:3306) is ok for the new master..
Mon Jan 19 01:51:50 2015 - [info] ok.
Mon Jan 19 01:51:50 2015 - [info] 192.168.2.128(192.168.2.128:3306): SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.
Mon Jan 19 01:51:50 2015 - [info] 192.168.2.128(192.168.2.128:3306): Resetting slave pointing to the dummy host.
Mon Jan 19 01:51:50 2015 - [info] ** Phase 1: Configuration Check Phase completed.
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] * Phase 2: Rejecting updates Phase..
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] Executing master ip online change script to disable write on the current master:
Mon Jan 19 01:51:50 2015 - [info] /usr/local/bin/master_ip_online_change --command=stop --orig_master_host=192.168.2.128 --orig_master_ip=192.168.2.128 --orig_master_port=3306 --orig_master_user='root' --orig_master_password='123456' --new_master_host=192.168.2.129 --new_master_ip=192.168.2.129 --new_master_port=3306 --new_master_user='root' --new_master_password='123456' --orig_master_ssh_user=root --new_master_ssh_user=root --orig_master_is_new_slave
Mon Jan 19 01:51:50 2015 173112 Set read_only on the new master.. ok.
Mon Jan 19 01:51:50 2015 178943 Drpping app user on the orig master..
Mon Jan 19 01:51:50 2015 180438 Set read_only=1 on the orig master.. ok.
Mon Jan 19 01:51:50 2015 183258 Killing all application threads..
Mon Jan 19 01:51:50 2015 183387 done.
Mon Jan 19 01:51:50 2015 - [info] ok.
Mon Jan 19 01:51:50 2015 - [info] Locking all tables on the orig master to reject updates from everybody (including root):
Mon Jan 19 01:51:50 2015 - [info] Executing FLUSH TABLES WITH READ LOCK..
Mon Jan 19 01:51:50 2015 - [info] ok.
Mon Jan 19 01:51:50 2015 - [info] Orig master binlog:pos is mysql-bin.000017:107.
Mon Jan 19 01:51:50 2015 - [info] Waiting to execute all relay logs on 192.168.2.129(192.168.2.129:3306)..
Mon Jan 19 01:51:50 2015 - [info] master_pos_wait(mysql-bin.000017:107) completed on 192.168.2.129(192.168.2.129:3306). Executed 0 events.
Mon Jan 19 01:51:50 2015 - [info] done.
Mon Jan 19 01:51:50 2015 - [info] Getting new master's binlog name and position..
Mon Jan 19 01:51:50 2015 - [info] mysql-bin.000005:61791
Mon Jan 19 01:51:50 2015 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.2.129', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=61791, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Mon Jan 19 01:51:50 2015 - [info] Executing master ip online change script to allow write on the new master:
Mon Jan 19 01:51:50 2015 - [info] /usr/local/bin/master_ip_online_change --command=start --orig_master_host=192.168.2.128 --orig_master_ip=192.168.2.128 --orig_master_port=3306 --orig_master_user='root' --orig_master_password='123456' --new_master_host=192.168.2.129 --new_master_ip=192.168.2.129 --new_master_port=3306 --new_master_user='root' --new_master_password='123456' --orig_master_ssh_user=root --new_master_ssh_user=root --orig_master_is_new_slave
Mon Jan 19 01:51:50 2015 443208 Set read_only=0 on the new master.
Mon Jan 19 01:51:50 2015 444741 Creating app user on the new master..
Mon Jan 19 01:51:50 2015 - [info] ok.
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] * Switching slaves in parallel..
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] -- Slave switch on host 192.168.2.130(192.168.2.130:3306) started, pid: 23040
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] Log messages from 192.168.2.130 ...
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] Waiting to execute all relay logs on 192.168.2.130(192.168.2.130:3306)..
Mon Jan 19 01:51:50 2015 - [info] master_pos_wait(mysql-bin.000017:107) completed on 192.168.2.130(192.168.2.130:3306). Executed 0 events.
Mon Jan 19 01:51:50 2015 - [info] done.
Mon Jan 19 01:51:50 2015 - [info] Resetting slave 192.168.2.130(192.168.2.130:3306) and starting replication from the new master 192.168.2.129(192.168.2.129:3306)..
Mon Jan 19 01:51:50 2015 - [info] Executed CHANGE MASTER.
Mon Jan 19 01:51:50 2015 - [info] Slave started.
Mon Jan 19 01:51:50 2015 - [info] End of log messages from 192.168.2.130 ...
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] -- Slave switch on host 192.168.2.130(192.168.2.130:3306) succeeded.
Mon Jan 19 01:51:50 2015 - [info] Unlocking all tables on the orig master:
Mon Jan 19 01:51:50 2015 - [info] Executing UNLOCK TABLES..
Mon Jan 19 01:51:50 2015 - [info] ok.
Mon Jan 19 01:51:50 2015 - [info] Starting orig master as a new slave..
Mon Jan 19 01:51:50 2015 - [info] Resetting slave 192.168.2.128(192.168.2.128:3306) and starting replication from the new master 192.168.2.129(192.168.2.129:3306)..
Mon Jan 19 01:51:50 2015 - [info] Executed CHANGE MASTER.
Mon Jan 19 01:51:50 2015 - [info] Slave started.
Mon Jan 19 01:51:50 2015 - [info] All new slave servers switched successfully.
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] * Phase 5: New master cleanup phase..
Mon Jan 19 01:51:50 2015 - [info]
Mon Jan 19 01:51:50 2015 - [info] 192.168.2.129: Resetting slave info succeeded.
Mon Jan 19 01:51:50 2015 - [info] Switching master to 192.168.2.129(192.168.2.129:3306) completed successfully.
192.168.2.131 [root bin]$
参数说明:
--orig_master_is_new_slave:切换时加上此参数,会把原master变为新master的slave节点;如果不加此参数,原来的master将不会作为slave加入新的复制结构。
--running_updates_limit=10000:切换时如果候选master有延迟,MHA切换不能成功;加上此参数表示延迟在此时间范围内(单位为秒)都可以切换,但切换所需时间的长短由recover时relay日志的大小决定。
master_ip_online_change脚本的完整代码这里就不再贴出,可以参考manager源码包samples/scripts目录下自带的样例,按需修改后放到/usr/local/bin下即可;相关说明可以参考官网:https://code.google.com/p/mysql-master-ha/wiki/Parameters#master_ip_online_change_script(自备梯子)
2、修复宕机的Master
通常情况下自动切换以后,原master可能已经废弃掉;待原master主机修复后,如果数据完整,可能想把原来的master重新作为新主库的slave,这时我们可以借助自动切换时刻的MHA日志来完成对原master的修复。下面给出一种提取相关日志信息的方式:
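例如可以直接从Manager日志里把切换时打印的CHANGE MASTER语句抓出来(示例命令,日志路径沿用前文配置):
grep -i "All other slaves should start replication from" /var/log/masterha/app1/manager.log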
从上面信息可以看到:
All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.2.129', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=61791, MASTER_USER='repl', MASTER_PASSWORD='xxx';
意思是说,等原Master主机修复好之后,可以在它上面执行这条CHANGE MASTER操作,让它作为新主库的slave重新加入复制。
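一个最简单的操作示例如下(在修复好的192.168.2.128上执行;MASTER_PASSWORD='xxx'要替换为repl用户的真实密码,日志坐标以自己manager.log里打印的为准):
mysql -uroot -p123456 -e "CHANGE MASTER TO MASTER_HOST='192.168.2.129', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=61791, MASTER_USER='repl', MASTER_PASSWORD='xxx'; START SLAVE;"
# 确认复制线程正常
mysql -uroot -p123456 -e "show slave status\G" | egrep "Slave_IO_Running|Slave_SQL_Running"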
目前的高可用方案都能在一定程度上实现数据库的高可用,比如前面文章介绍过的MMM、heartbeat+drbd、MySQL Cluster,以及Percona的Galera Cluster等,这些高可用软件各有优劣。在进行方案选型时,主要看业务场景以及对数据一致性的要求。综合对高可用和数据一致性两方面的考虑,这里推荐使用MHA架构。
总结:
一、尽信书不如无书,有时照着书本做某个实验,书上能做出来,但不代表我们照着步骤走也一定能成功
二、要学会从官网找资料,这样让你懂得更多,因为有时谷歌、百度都找不出相关的资料
三、MHA的实验要多测试,要理解它的切换过程;很多人搭建完,测试到能切换就完事了,这是远远不够的。
四、MHA环境搭建好后,很多perl脚本是不够完善的,需要自己去修改,很多问题都是perl脚本引起的
五、每个人的搭建环境不一样,遇到的问题可能也不一样,这很正常,只要肯下功夫,终能解决的
参考资料:
https://code.google.com/p/mysql-master-ha/wiki/TableOfContents?tm=6
http://www.cnblogs.com/gomysql/p/3675429.html(大牛的博客,帮助了我很多)
大部分MHA理论知识参考自很赞的书籍《深入浅出MySQL》(第2版)