Redis high availability with master-slave replication + Sentinel

Date: 2022-08-10 01:20:44

Redis provides a master-slave mode (that is, replication); if you are not familiar with how to set it up, see my earlier post on replication. A one-master, multi-slave setup only separates reads from writes: if the master fails, the whole Redis deployment becomes unavailable. For this Redis introduced Sentinel, a distributed service that runs independently of the Redis servers and provides real-time monitoring, failure detection and automatic failover. If you are not familiar with Sentinel, see my earlier post about it.

This post records how replication and Sentinel work together to give Redis high availability (HA).

Topology

       +----+
       | M1 |
       | S1 |
       +----+
          |
+----+    |    +----+
| R2 |----+----| R3 |
| S2 |         | S3 |
+----+         +----+

Configuration: quorum = 2 (M1 is the master on 6381, R2/R3 are the replicas on 6379 and 6380, and S1-S3 are the three sentinels on 26379-26381)

The sentinels are started like this:

redis-sentinel /path/to/sentinel_6379.conf --sentinel 
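Per the Redis documentation, the same can also be done through the redis-server binary in sentinel mode, which is equivalent:

redis-server /path/to/sentinel_6379.conf --sentinel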

Redis version

The version used is 3.2.1:

127.0.0.1:6381>
127.0.0.1:6381> info server
# Server
redis_version:3.2.1
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:b3132c8ce7b475fa
redis_mode:standalone
os:Linux 3.13.0-32-generic x86_64
arch_bits:64
multiplexing_api:epoll

Configuring the sentinels

Create three sentinel configuration files. Here I use:

sentinel_26379.conf

sentinel_26380.conf

sentinel_26381.conf

The configuration (for sentinel_26379.conf) looks like this:

port 26379

sentinel monitor mymaster 127.0.0.1 6381 2

sentinel down-after-milliseconds mymaster 5000

# required only if the master has a password set
sentinel auth-pass mymaster ****

sentinel failover-timeout mymaster 180000

sentinel parallel-syncs mymaster 5

For the other two sentinels, just change the port.
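As a sketch, sentinel_26380.conf would then contain the same monitoring settings with only the port line changed:

port 26380

sentinel monitor mymaster 127.0.0.1 6381 2
sentinel down-after-milliseconds mymaster 5000
sentinel auth-pass mymaster ****
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 5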

Sentinel startup scripts

Create a startup script for each sentinel: sentinel_26379, sentinel_26380 and sentinel_26381.

The contents (of sentinel_26379) are as follows:

#!/bin/sh
#
# Simple Redis init.d-style script conceived to work on Linux systems
# as it does use of the /proc filesystem.

case "$1" in
    start)
        # launch the sentinel in the background
        redis-sentinel /path/to/sentinel_26379.conf --sentinel &
        ;;
    *)
        echo "Usage: $0 start"
        ;;
esac

Verifying failover

Start the three sentinels.
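For example, assuming the three scripts are in the current directory and marked executable:

chmod +x sentinel_26379 sentinel_26380 sentinel_26381
./sentinel_26379 start
./sentinel_26380 start
./sentinel_26381 start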

Log in to 6381 (the current master) and shut it down with SHUTDOWN.

Replication status before the master goes down:

root@ubuntu:~/programs# redis-cli -p 6381
127.0.0.1:6381>
127.0.0.1:6381> auth *****
OK
127.0.0.1:6381>
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6379,state=online,offset=1446113,lag=1
slave1:ip=127.0.0.1,port=6380,state=online,offset=1446113,lag=1
master_repl_offset:1446262
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:397687
repl_backlog_histlen:1048576
127.0.0.1:6381>

Execute the shutdown:

127.0.0.1:6381> 
127.0.0.1:6381>
127.0.0.1:6381> shutdown save
not connected>
not connected>

The sentinels' log output:

2078:X 28 Jul 19:50:11.967 # +sdown master mymaster 127.0.0.1 6381
2078:X 28 Jul 19:50:12.108 # +new-epoch 28
2078:X 28 Jul 19:50:12.175 # +vote-for-leader 1d9a0fc928f84a79b2e2aaa686db2ae735b6958d 28
2078:X 28 Jul 19:50:13.026 # +odown master mymaster 127.0.0.1 6381 #quorum 3/2
2078:X 28 Jul 19:50:13.026 # Next failover delay: I will not start a failover before Thu Jul 28 19:56:13 2016
2078:X 28 Jul 19:50:13.261 # +config-update-from sentinel 1d9a0fc928f84a79b2e2aaa686db2ae735b6958d 127.0.0.1 26380 @ mymaster 127.0.0.1 6381
2078:X 28 Jul 19:50:13.261 # +switch-master mymaster 127.0.0.1 6381 127.0.0.1 6380
2078:X 28 Jul 19:50:13.261 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
2078:X 28 Jul 19:50:13.262 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380
2078:X 28 Jul 19:50:18.279 # +sdown slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380

From the log above: one sentinel marked 6381 as subjectively down (+sdown), a new epoch began and a leader was voted for; with quorum 3/2, all three sentinels agreed that 6381 was down, so it was marked objectively down (+odown). Failover then ran, and 6380 was promoted to master (+switch-master).
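This can also be confirmed by asking any of the sentinels which master it currently tracks; after this failover the answer should look something like:

root@ubuntu:~/programs# redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6380"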

Log in to 6380 and check its replication info:

127.0.0.1:6381> shutdown save
not connected>
not connected>
not connected>
not connected> quit
root@ubuntu:~/programs#
root@ubuntu:~/programs# redis-cli -p 6380
127.0.0.1:6380> auth ****
OK
127.0.0.1:6380>
127.0.0.1:6380> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6379,state=online,offset=56725,lag=0
master_repl_offset:56860
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:56859
127.0.0.1:6380>

6380 is now the master, with one slave on 6379. Next, restart 6381 and see whether it attaches itself to the new master, 6380.

127.0.0.1:6380> 
127.0.0.1:6380> quit
root@ubuntu:~/programs#
root@ubuntu:~/programs# /etc/init.d/redis_6380 start
/var/run/redis_6380.pid exists, process is already running or crashed
root@ubuntu:~/programs#
root@ubuntu:~/programs#
root@ubuntu:~/programs# /etc/init.d/redis_6381 start
Starting Redis server...
root@ubuntu:~/programs#
root@ubuntu:~/programs# redis-cli -p 6380
127.0.0.1:6380> auth ****
OK
127.0.0.1:6380> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6379,state=online,offset=91546,lag=0
slave1:ip=127.0.0.1,port=6381,state=online,offset=91560,lag=0
master_repl_offset:91560
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:91559
127.0.0.1:6380>

6381 is now a slave of 6380.

Sentinel log output while 6381 was starting up:

2104:X 28 Jul 19:57:38.988 # -sdown slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380
2104:X 28 Jul 19:57:48.947 * +convert-to-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380

Once 6381 came back up, the sentinels determined that it was no longer the master and converted it into a slave of 6380 (+convert-to-slave).

At this point the Redis master-slave + Sentinel high-availability setup is complete.
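As a final sanity check, any of the sentinels can be asked for a summary of what it is monitoring; the # Sentinel section of INFO should show mymaster pointing at 127.0.0.1:6380 with two slaves and three sentinels (the output below is what I would expect, and the exact fields may vary by version):

root@ubuntu:~/programs# redis-cli -p 26379 info sentinel
# Sentinel
sentinel_masters:1
master0:name=mymaster,status=ok,address=127.0.0.1:6380,slaves=2,sentinels=3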

Continuing the test, shut down 6380 as well; the sentinel logs show:

2104:X 28 Jul 20:04:01.733 # +new-epoch 29
2104:X 28 Jul 20:04:01.767 # +vote-for-leader b24714af31039c93f6ad4173c059c4d11e86f302 29
2104:X 28 Jul 20:04:01.767 # +odown master mymaster 127.0.0.1 6380 #quorum 2/2
2104:X 28 Jul 20:04:01.767 # Next failover delay: I will not start a failover before Thu Jul 28 20:10:02 2016
2104:X 28 Jul 20:04:02.801 # +config-update-from sentinel b24714af31039c93f6ad4173c059c4d11e86f302 127.0.0.1 26379 @ mymaster 127.0.0.1 6380
2104:X 28 Jul 20:04:02.801 # +switch-master mymaster 127.0.0.1 6380 127.0.0.1 6381
2104:X 28 Jul 20:04:02.801 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
2104:X 28 Jul 20:04:02.801 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
2104:X 28 Jul 20:04:07.834 # +sdown slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381

Comments and feedback are welcome.