Implementing a memcached cluster
Memcached itself has none of the data-persistence features that Redis provides (there is no equivalent of RDB or AOF). It can, however, be clustered so that the individual memcached nodes synchronize their data and stay consistent. As long as one memcached in the cluster is still alive no data is lost, even if one or more of the other nodes fail, and when a failed memcached rejoins the cluster it automatically pulls the data back from a node that still holds it and resumes serving. The detailed steps follow below.
The memcached API computes a 32-bit cyclic redundancy check (CRC-32) over each key and uses it to spread data across the available machines. Once the cache is full, newly added data replaces old entries according to an LRU policy. Because memcached is normally used purely as a cache, applications that also write to a slower backing store (such as a backend database) need extra code to keep the data held in memcached up to date.
memcached has client libraries for many languages, including Perl, PHP, Java, C, Python, Ruby, C# and MySQL.
1.1: Environment preparation:
1.1.1: Logical architecture:
In the architecture below, clients access a VIP on Haproxy, with the VIP provided by keepalived. Haproxy proxies two Magent instances to keep the service highly available, so the failure of either Magent does not interrupt access. Each Magent in turn proxies two Memcached instances as a master/backup pair: when the master Memcached behind a Magent fails, the Magent automatically forwards requests to the backup Memcached, so client connections are again unaffected. At the back end, the two Memcached instances replicate data to each other bidirectionally via repcached, each acting as both master and slave of the other, which guarantees that the failure of either memcached loses no data; once the failed memcached recovers, it rejoins its Magent automatically and restores its data from the other Memcached. Taken together, this layout amounts to a basic cluster deployment: it removes the single point of failure and provides both high availability and load balancing.
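A sketch of the topology just described (IP addresses and ports are the ones used later in this section; the Haproxy/keepalived tier is shown for context only):

                      Client
                        |
                 VIP (keepalived)
                        |
                     Haproxy
                    /       \
             Magent 1       Magent 2
           (.101:10211)   (.102:10211)
              |    \         /    |
              |     \       /     |
        Memcached 1 <===========> Memcached 2
        (.101:11211)  repcached   (.102:11211)
                     (port 16000)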
1.1.2: Software:
OS: CentOS 7.2.1511
Memcached: 1.4.34 #official download: http://memcached.org/
Libevent: 2.0.22-stable #official download: http://libevent.org/
magent:0.6
repcached:2.2
1.2: Install libevent:
# yum install gcc gcc-c++ automake
# tar xvf libevent-2.0.22-stable.tar.gz
# cd libevent-2.0.22-stable
# ./configure --prefix=/usr/local/libevent
# make && make install
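A quick sanity check that the library landed under the chosen prefix (the path follows the --prefix used above):
# ls /usr/local/libevent/lib/ #expect libevent.a and the libevent-2.0 shared libraries here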
1.3: Install memcached:
1.3.1: Extract and build:
# tar xvf memcached-1.4.34.tar.gz
# cd memcached-1.4.34
# ./configure --help
# ./configure --prefix=/usr/local/memcache --with-libevent=/usr/local/libevent
# make && make install
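Because libevent was installed under a non-standard prefix, the dynamic linker may not find it when memcached starts. A quick smoke test of the build (the ld.so.conf step is only needed if memcached reports a missing libevent shared library; nc here is from the nmap-ncat package shipped with CentOS 7):
# echo "/usr/local/libevent/lib" > /etc/ld.so.conf.d/libevent.conf && ldconfig
# /usr/local/memcache/bin/memcached -d -m 64 -p 11211 -u nobody #start a throwaway test instance
# printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -3 #should print STAT lines
# pkill memcached #stop the test instance before deploying the repcached build below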
1.3.2: memcached command-line options (an example invocation follows the list):
-p  TCP port to listen on, default is 11211
-l  IP address to bind to; by default memcached listens on all local addresses
-d start          start the memcached service
-d restart        restart the memcached service
-d stop|shutdown  stop a running memcached service
-d install        install memcached as a service
-d uninstall      uninstall the memcached service
(the -d subcommands above come from the Windows port; on Linux, -d on its own simply runs memcached as a daemon)
-u  run as the given user (only meaningful when started as root)
-m  maximum memory to use in MB, default is 64MB
-M  return an error when memory is exhausted, instead of evicting items
-c  maximum number of simultaneous connections, default is 1024
-f  chunk size growth factor, default is 1.25
-n  minimum space allocated per item (key+value+flags), default is 48
-h  show this help
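For example, a typical daemonized start combining the options above (values are illustrative; adapt the bind address, memory and run-as user to your environment):
# /usr/local/memcache/bin/memcached -d -m 1024 -c 2048 -p 11211 -l 192.168.10.101 -u nobody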
1.4: Install magent, a proxy for memcached:
magent implements an N-master/N-backup layout: when a master memcached fails, its backup memcached takes over serving requests:
1.4.1: Download the package:
Download: https://code.google.com/archive/p/memagent/downloads
# mkdir magent #the tarball has no top-level directory, so extract it inside a dedicated directory
# mv magent-0.6.tar.gz magent
# cd magent
# tar xvf magent-0.6.tar.gz
# yum install libevent*
# /sbin/ldconfig
1.4.2: The first make attempt fails as follows:
[root@mem1 magent]# make
gcc -Wall -g -O2 -I/usr/local/include -m64 -c -o magent.o magent.c
magent.c: In function ‘writev_list’:
magent.c:729:17: error: ‘SSIZE_MAX’ undeclared (first use in this function)
if (toSend > SSIZE_MAX ||
^
magent.c:729:17: note: each undeclared identifier is reported only once for each function it appears in
make: *** [magent.o] Error 1
Fix:
# vim ketama.h #add the following three lines at the top:
#ifndef SSIZE_MAX
# define SSIZE_MAX 32767
#endif
1.4.3: The second make fails as follows:
[root@mem1 magent]# make
gcc -Wall -g -O2 -I/usr/local/include -m64 -c -o magent.o magent.c
gcc -Wall -g -O2 -I/usr/local/include -m64 -c -o ketama.o ketama.c
gcc -Wall -g -O2 -I/usr/local/include -m64 -o magent magent.o ketama.o /usr/lib64/libevent.a /usr/lib64/libm.a
gcc: error: /usr/lib64/libevent.a: No such file or directory
gcc: error: /usr/lib64/libm.a: No such file or directory
make: *** [magent] Error 1
#Fix:
# ln -s /usr/lib64/libm.so /usr/lib64/libm.a
1.4.4: The third make fails as follows:
[root@mem1 magent]# make
gcc -Wall -g -O2 -I/usr/local/include -m64 -o magent magent.o ketama.o /usr/lib64/libevent.a /usr/lib64/libm.a
gcc: error: /usr/lib64/libevent.a: No such file or directory
make: *** [magent] Error 1
#Fix:
# ln -sv /usr/local/libevent/lib/libevent.a /usr/lib64/libevent.a
1.4.5: The fourth make succeeds:
[root@mem1 magent]# make
gcc -Wall -g -O2 -I/usr/local/include -m64 -o magent magent.o ketama.o /usr/lib64/libevent.a /usr/lib64/libm.a
1.4.6: Copy the magent binary into a directory on the system PATH:
# cp magent /usr/bin/
1.4.7: Verify the magent command:
[root@mem1 magent]# magent -h
memcached agent v0.6 Build-Date: Jan 18 2017 16:40:43
Usage:
-h this message
-u uid
-g gid
-p port, default is 11211. (0 to disable tcp support)
-s ip:port, set memcached server ip and port
-b ip:port, set backup memcached server ip and port
-l ip, local bind ip address, default is 0.0.0.0
-n number, set max connections, default is 4096
-D don't go to background
-k use ketama key allocation algorithm
-f file, unix socket path to listen on. default is off
-i number, set max keep alive connections for one memcached server, default is 20
-v verbose
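A minimal smoke test of the freshly built proxy, assuming a memcached is already listening locally on 11211 (addresses here are illustrative; the real deployment follows in section 1.6):
# magent -u root -n 4096 -l 127.0.0.1 -p 10211 -s 127.0.0.1:11211
# telnet 127.0.0.1 10211 #then try: set foo 0 0 3 / bar / get foo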
1.5: Deploy repcached:
repcached is a patch for memcached that replicates in-memory data between memcached instances. It resembles master/slave replication, except that every memcached stays writable and replication runs in both directions, so rather than a simple master/slave pair it is closer to a mutual master-master arrangement. Because the data always exists on the peer as well, repcached works around the problem that a single memcached cannot persist its in-memory data. Project page: http://repcached.sourceforge.net/
1.5.1: Download repcached:
# wget http://downloads.sourceforge.net/repcached/memcached-1.2.8-repcached-2.2.tar.gz
1.5.2: Extract and build:
# tar xvf memcached-1.2.8-repcached-2.2.tar.gz
# cd memcached-1.2.8-repcached-2.2
# ./configure --prefix=/usr/local/repcached --enable-replication
# make #on CentOS 7 this fails with an 'IOV_MAX' undeclared compile error in memcached.c; fix as follows:
1.5.3: Edit the source:
# vim memcached.c
55 /* FreeBSD 4.x doesn't have IOV_MAX exposed. */
56 #ifndef IOV_MAX
57 #if defined(__FreeBSD__) || defined(__APPLE__)
58 # define IOV_MAX 1024
59 #endif
60 #endif
Change it to:
55 /* FreeBSD 4.x doesn't have IOV_MAX exposed. */
56 #ifndef IOV_MAX
57 # define IOV_MAX 1024
58 #endif
1.5.4: Configure, build and install again:
# ./configure --prefix=/usr/local/repcached --enable-replication
# make && make install
1.5.5: Verify the installation and that the command works:
[root@mem1 memcached-1.2.8-repcached-2.2]# /usr/local/repcached/bin/memcached -h
memcached 1.2.8
repcached 2.2
-p <num> TCP port number to listen on (default: 11211)
-U <num> UDP port number to listen on (default: 11211, 0 is off)
-s <file> unix socket path to listen on (disables network support)
-a <mask> access mask for unix socket, in octal (default 0700)
-l <ip_addr> interface to listen on, default is INDRR_ANY
-d run as a daemon
-r maximize core file limit
-u <username> assume identity of <username> (only when run as root)
-m <num> max memory to use for items in megabytes, default is 64 MB
-M return error on memory exhausted (rather than removing items)
-c <num> max simultaneous connections, default is 1024
-k lock down all paged memory. Note that there is a
limit on how much memory you may lock. Trying to
allocate more than that would fail, so be sure you
set the limit correctly for the user you started
the daemon with (not for -u <username> user;
under sh this is done with 'ulimit -S -l NUM_KB').
-v verbose (print errors/warnings while in event loop)
-vv very verbose (also print client commands/reponses)
-h print this help and exit
-i print memcached and libevent license
-P <file> save PID in <file>, only used with -d option
-f <factor> chunk size growth factor, default 1.25
-n <bytes> minimum space allocated for key+value+flags, default 48
-R Maximum number of requests per event
limits the number of requests process for a given connection
to prevent starvation. default 20
-b Set the backlog queue limit (default 1024)
-x <ip_addr> hostname or IP address of peer repcached
-X <num> TCP port number for replication (default: 11212)
1.5.6: Use the repcached-built memcached binary to start the memcached service and form the master/backup pair; -x is the IP of the peer (master) memcached, and -X is the local port used for data replication:
1.5.6.1: Server 1:
[root@mem1 ~]# /usr/local/repcached/bin/memcached -d -m 1024 -p 11211 -u root -c 1024 -x 192.168.10.102 -X 16000 #start the memcached service and tell it where to replicate data from
#Verify the ports came up. 16000 is the replication port: on startup each node first checks its peer, and once both sides are connected the 16000 listener is closed; a node only keeps listening on 16000 when it finds that its peer has not started yet:
[root@mem1 ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:11211 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 127.0.0.1:2812 *:*
LISTEN 0 128 *:16000 *:*
LISTEN 0 128 :::11211 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
1.5.6.2: Server 2:
[root@mem2 ~]# /usr/local/repcached/bin/memcached -d -m 1024 -p 11211 -u root -c 1024 -x 192.168.10.101 -X 16000 #mirrors the first memcached on Server 1; the two are master and backup for each other
[root@mem2 ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:11211 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 :::11211 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
1.5.6.3: Check Server 1's listening ports again: once the replication connection is established, the 16000 listener disappears:
[root@mem1 ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:11211 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 127.0.0.1:2812 *:*
LISTEN 0 128 :::11211 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
1.5.6.4: How it works:
On the master, -X sets the replication port; on the slave, -x/-X locate the master and connect to it. In fact, whenever -x/-X are both given, repcached always tries to connect first, and only if the connection fails does it use the -X port to listen itself (becoming the master). If the master dies, the slave detects the broken connection and automatically starts listening, becoming the master; if the slave dies, the master likewise detects the broken connection and goes back to listening, waiting for a new slave to join.
In terms of implementation this is a single-master, single-slave scheme, but both master and slave are readable and writable and synchronize with each other, so functionally it can also be regarded as a dual-node master-master setup.
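The takeover can be observed directly: stop the memcached on one node and the survivor, on losing the replication connection, goes back to listening on its -X port and waits for a new peer (a sketch using the two nodes above):
[root@mem2 ~]# pkill -f '/usr/local/repcached/bin/memcached' #simulate a failure on Server 2
[root@mem1 ~]# ss -tnl | grep 16000 #the survivor is listening on 16000 again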
1.5.7: Connect to the memcached instances and verify that data written to one node can be read from the other:
[root@mem1 ~]# telnet 192.168.10.101 11211
Trying 192.168.10.101...
Connected to 192.168.10.101.
Escape character is '^]'.
set name1 0 0 4
jack
STORED
get name1
VALUE name1 0 4
jack
END
quit
Connection closed by foreign host.
[root@mem1 ~]# telnet 192.168.10.102 11211
Trying 192.168.10.102...
Connected to 192.168.10.102.
Escape character is '^]'.
get name1
VALUE name1 0 4
jack
END
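The same two-node check can also be scripted non-interactively (nc is from the nmap-ncat package; appending quit makes the server close the connection so nc exits):
[root@mem1 ~]# printf 'set name2 0 0 4\r\ntoms\r\nquit\r\n' | nc 192.168.10.101 11211
[root@mem1 ~]# printf 'get name2\r\nquit\r\n' | nc 192.168.10.102 11211 #expect VALUE name2 0 4 followed by toms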
1.6: Using magent:
magent is an open-source memcached proxy: to a client, connecting to magent is exactly the same as connecting to memcached directly. magent provides high availability through master/backup groups: each group holds one memcached master and one memcached slave whose data repcached keeps in sync. When the master dies, magent forwards requests to the slave; since the slave holds the same data, nothing is lost and clients notice no difference, and when the master recovers it automatically restores its data from the slave, so data loss is avoided entirely. Two magent instances can be started to proxy the backend memcached servers, with haproxy in front proxying the magents; users then only need to access haproxy's VIP. When one magent dies, haproxy's health checks automatically remove it from the rotation and add it back once it recovers, completing the solution:
1.6.1: Start the magent proxy service:
[root@mem1 ~]# scp /usr/bin/magent 192.168.10.102:/usr/bin/ #copy the magent binary to Server 2 so it can be used there directly
[root@mem1 ~]# scp /usr/local/repcached/bin/memcached 192.168.10.102:/usr/bin/ #copy the repcached-built memcached binary to Server 2 as well
[root@mem1 ~]# magent -u root -n 65536 -l 192.168.10.101 -p 10211 -s 192.168.10.101:11211 -b 192.168.10.102:11211 #Server 1 proxies one master/backup group of memcached; more groups can be proxied by adding further -s options
[root@mem2 ~]# magent -u root -n 65536 -l 192.168.10.102 -p 10211 -s 192.168.10.102:11211 -b 192.168.10.101:11211 #Server 2 runs the same proxy setup as Server 1, with master and backup swapped
1.6.2: Verify that port 10211 is listening on both Server1 and Server2:
#Server1:
[root@mem1 ~]# magent -u root -n 65536 -l 192.168.10.101 -p 10211 -s 192.168.10.101:11211 -b 192.168.10.102:11211 #master is .101, backup is .102; several masters are possible, each given with its own -s
[root@mem1 ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:11211 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 127.0.0.1:2812 *:*
LISTEN 0 128 192.168.10.101:10211 *:*
LISTEN 0 128 :::11211 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
#Server2:
[root@mem2 ~]# magent -u root -n 65536 -l 192.168.10.102 -p 10211 -s 192.168.10.102:11211 -b 192.168.10.101:11211 #master is .102, backup is .101
[root@mem2 ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:11211 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 192.168.10.102:10211 *:*
LISTEN 0 128 :::11211 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
1.7: Connect to the magent port and test:
1.7.1: Connect to Server1's magent (port 10211):
[root@mem1 ~]# telnet 192.168.10.101 10211
Trying 192.168.10.101...
Connected to 192.168.10.101.
Escape character is '^]'.
set name 0 0 4
jack
STORED
get name
VALUE name 0 4
jack
END
get key1
VALUE key1 0 2
ab
END
1.7.2: Connect to Server2: the same test can be repeated against Server2's magent at 192.168.10.102:10211.
1.8: Verify that when a memcached dies and later rejoins the cluster, its data is recovered:
1.8.1: Steps on Server1:
[root@mem1 ~]# ps -ef | grep memcached
root 35142 1 0 00:26 ? 00:00:00 /usr/local/repcached/bin/memcached -d -m 1024 -p 11211 -u root -c 1024 -x 192.168.10.102 -X 16000
root 35194 34883 0 01:17 pts/0 00:00:00 grep --color=auto memcached
[root@mem1 ~]# kill -9 35142 #forcibly kill the memcached process on Server1
[root@mem1 ~]# telnet 192.168.10.101 10211 #through the magent proxy, Server2's memcached can still be operated on
Trying 192.168.10.101...
Connected to 192.168.10.101.
Escape character is '^]'.
get key1
VALUE key1 0 2
ab
END
get name
VALUE name 0 4
jack
END
quit
Connection closed by foreign host.
[root@mem1 ~]# telnet 192.168.10.101 11211 #Server1's own memcached service can no longer be reached
Trying 192.168.10.101...
telnet: connect to address 192.168.10.101: Connection refused
[root@mem1 ~]# /usr/local/repcached/bin/memcached -d -m 1024 -p 11211 -u root -c 1024 -x 192.168.10.102 -X 16000 #restart the memcached service on Server1
[root@mem1 ~]# telnet 192.168.10.101 11211 #connect to the local memcached
Trying 192.168.10.101...
Connected to 192.168.10.101.
Escape character is '^]'.
get key1 #the data is back: it was restored neither from persistence nor from a backup, but replicated in real time from the memcached on Server2
VALUE key1 0 2
ab
END
get name
VALUE name 0 4
jack
END
1.8.2: Steps on Server2:
[root@mem2 ~]# ps -ef | grep memcached
root 19823 1 0 01:03 ? 00:00:00 /usr/local/repcached/bin/memcached -d -m 1024 -p 11211 -u root -c 1024 -x 192.168.10.101 -X 16000
root 19832 19696 0 01:09 pts/0 00:00:00 grep --color=auto memcached
[root@mem2 ~]# kill -9 19823 #forcibly kill the memcached process on Server2
[root@mem2 ~]# telnet 192.168.10.102 10211 #connect to the magent on Server 2 and verify the service is still usable,
Trying 192.168.10.102...
Connected to 192.168.10.102. #the backup memcached on Server1, reached through magent, can still be operated on
Escape character is '^]'.
get key1
VALUE key1 0 2
ab
END
get name
VALUE name 0 4
jack
END
quit
Connection closed by foreign host.
[root@mem2 ~]# telnet 192.168.10.102 11211 #Server2's memcached port 11211 is still unreachable
Trying 192.168.10.102...
telnet: connect to address 192.168.10.102: Connection refused
[root@mem2 ~]# /usr/local/repcached/bin/memcached -d -m 1024 -p 11211 -u root -c 1024 -x 192.168.10.101 -X 16000 #restart the memcached service on Server2
[root@mem2 ~]# telnet 192.168.10.102 11211 #connect directly to Server2's memcached and verify the data has been recovered
Trying 192.168.10.102...
Connected to 192.168.10.102.
Escape character is '^]'.
get key1
VALUE key1 0 2
ab
END
get name
VALUE name 0 4
jack
END
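For repeatable testing, the failover-and-recovery check above can be condensed into a small script; the following is a sketch to run on Server 1 (the IPs, paths and sleep interval are assumptions to adjust for your environment):
#!/bin/bash
# failover-check.sh -- verify that a key survives killing and restarting the local memcached
LOCAL=192.168.10.101
PEER=192.168.10.102
printf 'set probe 0 0 2\r\nok\r\nquit\r\n' | nc $LOCAL 11211 #seed a key; repcached copies it to the peer
pkill -f '/usr/local/repcached/bin/memcached' #kill the local node
printf 'get probe\r\nquit\r\n' | nc $LOCAL 10211 #magent should still serve the key from the backup
/usr/local/repcached/bin/memcached -d -m 1024 -p 11211 -u root -c 1024 -x $PEER -X 16000 #restart the local node
sleep 2 #give the initial replication sync a moment
printf 'get probe\r\nquit\r\n' | nc $LOCAL 11211 #the key should be back locally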