Hostname | Physical IP   | Cluster role                | server_id
Monitor  | 192.168.1.134 | MMM management node         | none
Master1  | 192.168.1.130 | Primary master (read/write) | 1
Master2  | 192.168.1.131 | Standby master (read/write) | 2
Slave1   | 192.168.1.132 | Slave node (read-only)      | 3
Slave2   | 192.168.1.133 | Slave node (read-only)      | 4
Virtual IP    | Role     | Description
192.168.1.140 | Write IP | Writer VIP
192.168.1.141 | Read IP  | Reader VIPs; read queries can be load balanced across these VIPs with LVS, HAProxy, or similar software
192.168.1.142 | Read IP  |
192.168.1.143 | Read IP  |
192.168.1.144 | Read IP  |
1. Install mysql and mysql-server on all DB nodes.
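On CentOS-style systems this can be done from the default repositories, for example (a minimal sketch; package names vary with distribution and MySQL version):
yum -y install mysql mysql-server
service mysqld start
chkconfig mysqld on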
2. Edit /etc/my.cnf on every DB node:
[mysqld]
read-only=1
server-id=1
log-bin=mysql-bin
relay-log=mysql-relay-bin
replicate-wild-ignore-table=test.%
replicate-wild-ignore-table=information_schema.%
Set server-id to 1, 2, 3, and 4 on the four DB hosts respectively, then restart mysqld so the settings take effect.
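After the restart, it is worth confirming the settings on each node (plain checks using standard MySQL status variables):
mysql> show variables like 'server_id';
mysql> show variables like 'read_only';
mysql> show variables like 'log_bin';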
3. Create the replication user and grant privileges
A) First, create the replication user on Master1:
mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.131' identified by '123456';
mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.132' identified by '123456';
mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.133' identified by '123456';
mysql> show master status;
B) Then, on Master2, set Master1 as its master:
mysql> change master to master_host='192.168.1.130',master_user='repl_user',
master_password='123456',master_log_file='mysql-bin.xxxxx',
master_log_pos=xxx;
The values for master_log_file and master_log_pos come from the show master status output in step A.
mysql> start slave;
mysql> show slave status;
Slave_IO_Running and Slave_SQL_Running are the replication threads running on the slave node; both should be Yes.
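A quick way to check just those two fields from the shell (a convenience one-liner; it assumes the mysql client can log in as root locally):
mysql -uroot -p -e "show slave status\G" | egrep 'Slave_IO_Running|Slave_SQL_Running'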
C) Repeat step B on Slave1 and Slave2.
D) Create the replication user on Master2:
mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.130' identified by '123456';
mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.132' identified by '123456';
mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.133' identified by '123456';
mysql> show master status;
E) Then, on Master1, set Master2 as its master:
mysql> change master to master_host='192.168.1.131',master_user='repl_user',
master_password='123456',master_log_file='mysql-bin.xxxxx',
master_log_pos=xxx;
mysql> start slave;
mysql> show slave status;
Again both replication threads should be Yes; Master1 and Master2 now replicate from each other.
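A quick master-master sanity check (a sketch; mm_check is a throwaway database name chosen to avoid the ignored test.% pattern):
On Master1: mysql> create database mm_check;
On Master2: mysql> show databases like 'mm_check'; (the database should appear)
On Master2: mysql> drop database mm_check;
On Master1: mysql> show databases like 'mm_check'; (it should be gone again)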
4. Installing the MMM suite
A) On the Monitor node (the packages come from the EPEL repository):
yum -y install mysql-mmm*
B) On each MySQL DB node, only mysql-mmm-agent is needed:
yum -y install mysql-mmm-agent
5. MMM cluster configuration
A) Create the monitor user and agent user accounts on all mysql nodes:
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by '123456';
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.1.%' identified by '123456';
mysql> flush privileges;
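Before going further, it is worth checking from the Monitor node that the new accounts can actually log in to each DB node (a simple connectivity probe, nothing more):
mysql -ummm_monitor -p123456 -h192.168.1.130 -e "select 1"
mysql -ummm_agent -p123456 -h192.168.1.130 -e "select 1"
(repeat for 192.168.1.131, .132, and .133)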
B) Configure mmm_common.conf (in /etc/mysql-mmm/), then copy it to every mysql node; a copy loop is sketched after the file.
active_master_role writer
<host default>
cluster_interface eth0
pid_path /var/run/mysql-mmm/mmm_agentd.pid
bin_path /usr/libexec/mysql-mmm/
replication_user repl_user # replication user
replication_password 123456 # replication user's password
agent_user mmm_agent # user the agent uses to toggle read-only state
agent_password 123456 # agent user's password
</host>
<host db1> # db1's settings
ip 192.168.1.130
mode master # db1's role is master
peer db2 # the host paired with db1
</host>
<host db2>
ip 192.168.1.131
mode master
peer db1
</host>
<host db3>
ip 192.168.1.132
mode slave # db3's role is slave
</host>
<host db4>
ip 192.168.1.133
mode slave
</host>
<role writer> # writer role
hosts db1, db2
ips 192.168.1.140 # the writable VIP
mode exclusive # exclusive: only one of db1/db2 holds the writer role at a time
</role>
<role reader> # reader role
hosts db1, db2, db3, db4
ips 192.168.1.141, 192.168.1.142, 192.168.1.143, 192.168.1.144 # the readable VIPs
mode balanced # the reader VIPs are balanced across the hosts
</role>
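One way to push the file to the four DB nodes (a sketch; it copies over SSH by IP and assumes root access):
for h in 192.168.1.130 192.168.1.131 192.168.1.132 192.168.1.133; do
scp /etc/mysql-mmm/mmm_common.conf $h:/etc/mysql-mmm/
done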
C) Configure mmm_agent.conf (on every mysql node)
include mmm_common.conf
this db1 # on each mysql node, replace db1 with that node's own name: db1, db2, db3, or db4
D) Configure mmm_mon.conf (on the MMM monitor node only)
include mmm_common.conf
<monitor>
ip 127.0.0.1
pid_path /var/run/mysql-mmm/mmm_mond.pid
bin_path /usr/libexec/mysql-mmm
status_path /var/lib/mysql-mmm/mmm_mond.status
ping_ips 192.168.1.130,192.168.1.131,192.168.1.132,192.168.1.133
# IPs used to test network availability: the network counts as reachable if any one of them answers ping; do not list the monitor's own IP here
flap_duration 3600 # time window, in seconds, for flap detection
flap_count 3 # a host going down more than this many times within flap_duration is treated as flapping
auto_set_online 0 # seconds before a recovered host is set online automatically; 0 disables this
# The kill_host_bin does not exist by default, though the monitor will
# throw a warning about it missing. See the section 5.10 "Kill Host
# Functionality" in the PDF documentation.
#
# kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
#
</monitor>
<host default>
monitor_user mmm_monitor
monitor_password 123456
</host>
debug 0
E) Edit /etc/default/mysql-mmm-agent on all mysql nodes:
ENABLED=1
6. MMM cluster management
A) On the Monitor node, run:
/etc/init.d/mysql-mmm-monitor start
B) On all mysql nodes, run:
/etc/init.d/mysql-mmm-agent start
C) Set each mysql node online:
mmm_control set_online db1 (repeat for db2, db3, and db4)
D) Check the cluster status:
mmm_control show
mmm_control checks all
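When everything is healthy, mmm_control show prints output along these lines (illustrative only; which host holds which VIP will vary):
db1(192.168.1.130) master/ONLINE. Roles: writer(192.168.1.140), reader(192.168.1.141)
db2(192.168.1.131) master/ONLINE. Roles: reader(192.168.1.142)
db3(192.168.1.132) slave/ONLINE. Roles: reader(192.168.1.143)
db4(192.168.1.133) slave/ONLINE. Roles: reader(192.168.1.144)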
7. Testing MySQL high availability with MMM
Read/write splitting test
When reading tables, operate as an ordinary (non-admin) mysql user.
Failover test
Stop the mysql service on Master1, then check the MMM cluster status again.
Restart mysql on Master1 and check the cluster status once more. To move the master role back manually, run mmm_control move_role writer db1.
Test the slave nodes in the same way. The whole drill is sketched below.
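Putting the failover drill together (a sketch of the steps above; service names assume the CentOS-style init scripts used earlier):
# on Master1
service mysqld stop
# on the Monitor: the writer VIP should have moved to db2
mmm_control show
# on Master1, once done observing
service mysqld start
# on the Monitor: bring db1 back and optionally move the writer role back
mmm_control set_online db1
mmm_control move_role writer db1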
8. MySQL read/write splitting
Implementation: MMM combined with Amoeba
Amoeba sits in front of MySQL as a distributed data proxy layer. At the application layer it acts as a SQL router, providing load balancing, high availability, SQL filtering, and read/write splitting; through Amoeba you get data-source high availability, load balancing, and data sharding.
A) Install and configure the JDK (Java SE 1.5 or later)
Install the JDK under /usr/local/, then set the Java environment variables:
export JAVA_HOME=/usr/local/jdk1.6.0_45
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
B) Install Amoeba
mkdir /usr/local/amoeba
tar xf amoeba-mysql-binary-2.2.0.tar.gz -C /usr/local/amoeba
Start Amoeba:
chmod +x -R /usr/local/amoeba/bin
/usr/local/amoeba/bin/amoeba start
You may hit this problem:
The stack size specified is too small, Specify at least 160k
Could not create the Java virtual machine.
If so, in /usr/local/amoeba/bin/amoeba change
DEFAULT_OPTS="-server -Xms256m -Xmx256m -Xss128k"
to:
DEFAULT_OPTS="-server -Xms256m -Xmx256m -Xss256k"
A normal startup looks like this:
log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml
2016-03-27 11:04:39,568 INFO context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-2.2.0
log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf
2016-03-27 11:04:39,949 INFO net.ServerableConnectionManager - Amoeba for Mysql listening on 0.0.0.0/0.0.0.0:8066.
2016-03-27 11:04:39,954 INFO net.ServerableConnectionManager - Amoeba Monitor Server listening on /127.0.0.1:63260.
Keep an eye on this log regularly; it helps you spot problems early.
C) Configure Amoeba
Read/write splitting only requires configuring dbServers.xml and amoeba.xml.
First, configure dbServers.xml:
<?xml version="1.0" encoding="gbk"?>
<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">
<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">
<!--
Each dbServer needs to be configured into a Pool,
If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:
add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig
such as 'multiPool' dbServer
-->
<dbServer name="abstractServer" abstractive="true">
<factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">
<property name="manager">${defaultManager}</property>
<property name="sendBufferSize">64</property>
<property name="receiveBufferSize">128</property>
<!-- mysql port -->
<property name="port">3306</property>
<!-- The following sets the default database Amoeba connects to; tables must be referenced explicitly as db.table, otherwise operations run against repdb -->
<!-- mysql schema -->
<property name="schema">repdb</property>
<!-- Account and password Amoeba uses to connect to the backend MySQL servers; this user must be created in the mysql cluster and granted access from the Amoeba server (see step D) -->
<!-- mysql user -->
<property name="user">ixdba</property>
<!-- mysql password -->
<property name="password">123456</property>
</factoryConfig>
<poolConfig class="com.meidusa.amoeba.net.poolable.PoolableObjectPool">
<property name="maxActive">500</property>#配置最大连接数
<property name="maxIdle">500</property>#配置最大空闲连接数
<property name="minIdle">10</property>#最小连接数
<property name="minEvictableIdleTimeMillis">600000</property>
<property name="timeBetweenEvictionRunsMillis">600000</property>
<property name="testOnBorrow">true</property>
<property name="testOnReturn">true</property>
<property name="testWhileIdle">true</property>
</poolConfig>
</dbServer>
<!-- Define a writable backend dbServer, named writedb here -->
<dbServer name="writedb" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<!-- the writable VIP exposed by the MMM cluster -->
<property name="ipAddress">192.168.1.140</property>
</factoryConfig>
</dbServer>
<!-- Define the readable dbServers -->
<dbServer name="slave1" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<!-- a readable VIP exposed by the MMM cluster -->
<property name="ipAddress">192.168.1.141</property>
</factoryConfig>
</dbServer>
<dbServer name="slave2" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.142</property>
</factoryConfig>
</dbServer>
<dbServer name="slave3" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.143</property>
</factoryConfig>
</dbServer>
<dbServer name="slave4" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.144</property>
</factoryConfig>
</dbServer>
<!-- dbServer group: gathers the readable database VIPs into one pool -->
<dbServer name="myslaves" virtual="true">
<poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
<!-- Load balancing strategy: 1=ROUNDROBIN, 2=WEIGHTBASED, 3=HA -->
<property name="loadbalance">1</property>
<!-- Separated by commas,such as: server1,server2,server1 -->
<property name="poolNames">slave1,slave2,slave3,slave4</property>
</poolConfig>
</dbServer>
</amoeba:dbServers>
Then configure the other file, amoeba.xml:
<?xml version="1.0" encoding="gbk"?>
<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">
<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">
<proxy>
<!-- service class must implements com.meidusa.amoeba.service.Service -->
<service name="Amoeba for Mysql" class="com.meidusa.amoeba.net.ServerableConnectionManager">
<!-- port -->
<!-- Port Amoeba listens on; the default is 8066 -->
<property name="port">8066</property>
<!-- bind ipAddress -->
<!--
<property name="ipAddress">127.0.0.1</property>
-->
<property name="manager">${clientConnectioneManager}</property>
<property name="connectionFactory">
<bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">
<property name="sendBufferSize">128</property>
<property name="receiveBufferSize">64</property>
</bean>
</property>
<property name="authenticator">
<bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">
<!-- Account and password clients must use when connecting to Amoeba. -->
<!-- In practice: mysql -uroot -p123456 -h192.168.1.134 (the Amoeba server's IP) -P8066 -->
<property name="user">root</property>
<property name="password">123456</property>
<property name="filter">
<bean class="com.meidusa.amoeba.server.IPAccessController">
<property name="ipFile">${amoeba.home}/conf/access_list.conf</property>
</bean>
</property>
</bean>
</property>
</service>
<!-- server class must implements com.meidusa.amoeba.service.Service -->
<service name="Amoeba Monitor Server" class="com.meidusa.amoeba.monitor.MonitorServer">
<!-- port -->
<!-- default value: random number
<property name="port">9066</property>
-->
<!-- bind ipAddress -->
<property name="ipAddress">127.0.0.1</property>
<property name="daemon">true</property>
<property name="manager">${clientConnectioneManager}</property>
<property name="connectionFactory">
<bean class="com.meidusa.amoeba.monitor.net.MonitorClientConnectionFactory"></bean>
</property>
</service>
<runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">
<!-- proxy server net IO Read thread size -->
<property name="readThreadPoolSize">20</property>
<!-- proxy server client process thread size -->
<property name="clientSideThreadPoolSize">30</property>
<!-- mysql server data packet process thread size -->
<property name="serverSideThreadPoolSize">30</property>
<!-- per connection cache prepared statement size -->
<property name="statementCacheSize">500</property>
<!-- query timeout( default: 60 second , TimeUnit:second) -->
<property name="queryTimeout">60</property>
</runtime>
</proxy>
<!--
Each ConnectionManager will start as thread
manager responsible for the Connection IO read , Death Detection
-->
<connectionManagerList>
<connectionManager name="clientConnectioneManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">
<property name="subManagerClassName">com.meidusa.amoeba.net.ConnectionManager</property>
<!--
default value is avaliable Processors
<property name="processors">5</property>
-->
</connectionManager>
<connectionManager name="defaultManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">
<property name="subManagerClassName">com.meidusa.amoeba.net.AuthingableConnectionManager</property>
<!--
default value is avaliable Processors
<property name="processors">5</property>
-->
</connectionManager>
</connectionManagerList>
<!-- default using file loader -->
<dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">
<property name="configFile">${amoeba.home}/conf/dbServers.xml</property>
</dbServerLoader>
<queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">
<property name="ruleLoader">
<bean class="com.meidusa.amoeba.route.TableRuleFileLoader">
<property name="ruleFile">${amoeba.home}/conf/rule.xml</property>
<property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>
</bean>
</property>
<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
<property name="LRUMapSize">1500</property>
<!-- Amoeba's default pool -->
<property name="defaultPool">writedb</property>
<!-- the write and read pools defined in dbServers.xml -->
<property name="writePool">writedb</property>
<property name="readPool">myslaves</property>
<property name="needParse">true</property>
</queryRouter>
</amoeba:configuration>
D) Grant Amoeba access to the databases
Run this on every mysql node in the MMM cluster to authorize the Amoeba server to access all of the database nodes:
GRANT ALL ON repdb.* TO 'ixdba'@'192.168.1.134' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
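A quick sanity check from the Amoeba host, going through the writer VIP (a simple connectivity probe; it assumes the grant above is present on all nodes):
mysql -uixdba -p123456 -h192.168.1.140 -e "select current_user();"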
E) Test read/write splitting and load balancing through Amoeba.
Enable the MySQL query log on all mysql nodes in the MMM cluster so the results can be verified.
Add the following to /etc/my.cnf:
log=/var/log/mysql_query_log # create this file yourself and make it writable by mysql
Create a table named mmm_test in the test database on each mysql node. Because test.% is excluded from replication by replicate-wild-ignore-table, each node keeps its own distinct row, which makes it easy to tell which node served a read:
mysql> use test;
mysql> create table mmm_test (id int,email varchar(60));
mysql> insert into mmm_test (id,email) values (100,'this is <the local real IP>');
On each node, replace <the local real IP> with that node's actual IP, e.g. 192.168.1.130.
From a remote MySQL client, connect to the database through Amoeba using the username, password, and port specified in amoeba.xml together with the Amoeba server's IP address:
mysql -uroot -p123456 -h192.168.1.134 -P8066
mysql> select * from test.mmm_test;
+------+-----------------------+
| id | email |
+------+-----------------------+
| 100 | this is 192.168.1.130 |
+------+-----------------------+
mysql> select * from test.mmm_test;
+------+-----------------------+
| id | email |
+------+-----------------------+
| 100 | this is 192.168.1.132 |
+------+-----------------------+
The same query is answered by different nodes, which shows load balancing is working.
If instead you see this:
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 11416420
then check this part of dbServers.xml:
<!-- mysql user -->
<property name="user">root</property>
<property name="password">password</property>
and make sure the account and password are configured correctly.
Then test read/write splitting by creating two tables, mmm_test1 and mmm_test2:
mysql> create table mmm_test1 (id int,email varchar(60));
mysql> create table mmm_test2 (id int,email varchar(60));
mysql> insert into mmm_test1 (id,email) values (103,'mmm_test3@126.com');
mysql> drop table mmm_test2;
Check the MySQL query logs on each node: the insert and drop statements should appear only on the node currently holding the writer VIP. All nodes are configured with read-only=1, but the MMM agent clears read-only on whichever node holds the writer role, so the active master accepts the writes.
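To confirm, grep the query log on each node (assuming the log path configured above); only the node holding the writer VIP should show the statements:
grep -E 'mmm_test1|mmm_test2' /var/log/mysql_query_log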
The overall approach: first build MySQL master-slave replication; then use MMM to monitor and manage the master-master replication and service state, as well as the slaves' replication and health, with automatic failover when any node fails (a failed node is automatically fenced off from the cluster); finally, use Amoeba on top of MMM for MySQL read/write splitting.
I'm a beginner and wrote this while studying from a book; there are surely many shortcomings, so comments and corrections are very welcome.