Common Redis Tools: redis-shake

Date: 2023-02-07 11:22:35

Introduction

redis-shake is an open-source tool from the Alibaba Cloud Redis team for Redis data migration and data filtering.

Basic Features

redis-shake supports five modes: decode, restore, dump, sync, and rump.

  1. restore: restore an RDB file into the target Redis database.
  2. dump: back up the full data set of the source Redis into an RDB file.
  3. decode: read an RDB file and parse it into JSON output.
  4. sync: synchronize data from a source Redis to a target Redis; supports full and incremental migration, and sync between standalone, master-replica, and cluster deployments.
  5. rump: synchronize data from a source Redis to a target Redis; full migration only, implemented with the SCAN and RESTORE commands, and supports migration across cloud vendors and Redis versions.

How It Works


RedisShake Sync Principle

  1. The source Redis instance acts as a master and redis-shake acts as a replica: redis-shake sends the PSYNC command to the source instance.
  2. The source instance first transfers an RDB file to redis-shake, and redis-shake forwards the RDB file to the target instance.
  3. The source instance then streams incremental commands to redis-shake, which replays them against the target instance.

RedisShake Execution Process

  1. The redis-shake process starts and impersonates a Redis instance. Its basic principle is to join the source Redis as a slave (replica) node, then pull incremental data via the PSYNC command.
  2. The redis-shake process performs a full data pull from the source instance and replays it, much like the full synchronization between a Redis master and replica.
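The handshake behind these steps can be sketched in a few lines. `parse_psync_reply` below is a hypothetical helper (not part of redis-shake) that parses the master's `+FULLRESYNC` reply, after which the RDB snapshot follows on the same connection:

```python
# Illustrative sketch of the replica handshake: the fake replica sends
# "PSYNC ? -1" and the master replies "+FULLRESYNC <replid> <offset>".
# parse_psync_reply is a made-up helper for demonstration only.

def parse_psync_reply(line: str) -> tuple[str, int]:
    """Parse a +FULLRESYNC reply into (replication id, offset)."""
    if not line.startswith("+FULLRESYNC"):
        raise ValueError(f"not a full resync reply: {line!r}")
    _, replid, offset = line.strip().split()
    return replid, int(offset)

replid, offset = parse_psync_reply(
    "+FULLRESYNC fdb52e2bb825223140e9eca34e67d1cd234f0aa8 225842514\r\n"
)
print(replid, offset)
```

After this reply, the replication offset is what redis-shake keeps advancing as it replays the incremental command stream.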

Supported Redis Architectures

  1. Standalone: pull from a single source (standalone or master-replica)
  2. Sentinel: obtain addresses from Sentinel and pull from them
  3. Cluster: open-source Cluster mode
  4. Proxy: pull through a proxy

Version History

redis-shake currently has two major versions:

  1. redis-shake 2.x: maintained for 3 years; updates and support have stopped, and users who hit problems are advised to try 3.x.
  2. redis-shake 3.x: a rewrite based on redis-shake 2.x, with more readable code and better performance.

Use Cases

1. Data Synchronization: SYNC/PSYNC


SYNC mode supports both full and incremental synchronization. Its one constraint is that the source must support the SYNC/PSYNC commands, i.e. redis-shake must receive a reply when it sends SYNC/PSYNC to the source. The source and target can each be a master-replica setup, a Proxy, or a Cluster, and migration also works in hybrid-cloud scenarios: on-premises to cloud, or cloud to on-premises.

2. Active-Active

Active-active deployments address issues introduced by cross-region network transport, i.e. geo-redundancy. For example, if the business needs the Beijing and Shanghai data centers to see the latest data in near real time, in either direction, an active-active setup is required.

3. Data Backup and Restore


Uses

  1. Data recovery (restore mode)
  2. Migrating a self-hosted Redis to the cloud
  3. Migrating cloud data back on-premises
  4. Rolling a database back to an earlier state
  5. Combining with filters to restore only a subset of keys

4. Data Migration Between Cloud Vendors

Some managed cloud Redis services do not support the SYNC/PSYNC commands, so how do you migrate to or from such an instance? RedisShake covers this scenario as well, with a synchronization mode that bypasses SYNC and PSYNC: it reads the full data set from the source with SCAN and writes it to the target.
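A minimal sketch of this scan-based flow, with plain dicts standing in for the two Redis instances; a real migration would issue SCAN against the source and DUMP/RESTORE through a Redis client library:

```python
# Dict-backed simulation of the scan-based (rump) migration flow.

def scan(db: dict, cursor: int, count: int = 2):
    """Crude stand-in for SCAN: return (next_cursor, batch_of_keys)."""
    keys = sorted(db)[cursor:cursor + count]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(db) else next_cursor), keys

def migrate(source: dict, target: dict) -> int:
    """Copy every key in SCAN batches; returns the number of keys moved."""
    moved, cursor = 0, 0
    while True:
        cursor, keys = scan(source, cursor)
        for key in keys:
            target[key] = source[key]   # DUMP on source + RESTORE on target
            moved += 1
        if cursor == 0:                 # SCAN signals completion with cursor 0
            return moved

src = {"k1": "v1", "k2": "v2", "k3": "v3"}
dst = {}
print(migrate(src, dst), dst == src)
```

Because SCAN only guarantees a point-in-time-ish view and there is no incremental stream, this mode is full-migration only, as noted above.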

Deployment and Usage

Prerequisites

Install Go. See the official downloads page (https://go.dev/dl/?spm=a2c4e.10696291.0.0.24e019a4HrBgn5) and check whether a newer release is available.

Version Requirement

go version >=go1.17
Ubuntu:

1. Download the Go tarball

root@ubuntu20-171:~# wget -c https://go.dev/dl/go1.20.linux-amd64.tar.gz -O - | sudo tar -xz -C /usr/local

2. Update the environment variables

Adding the Go directory to the $PATH environment variable tells the system where to find the Go executables:

root@ubuntu20-171:~# export PATH=$PATH:/usr/local/go/bin
root@ubuntu20-171:~# source ~/.profile

3. Verify the Go installation

root@ubuntu20-171:~# go version
go version go1.20 linux/amd64

Deployment

root@ubuntu20-171:~# git clone https://github.com/alibaba/RedisShake
root@ubuntu20-171:~# cd RedisShake
root@ubuntu20-171:~# sh build.sh
root@ubuntu20-171:~# 
root@ubuntu20-171:RedisShake# ll
total 80
drwxr-xr-x 10 root root 4096 Feb  6 16:36 ./
drwxr-xr-x  3 root root 4096 Feb  6 16:33 ../
drwxr-xr-x  4 root root 4096 Feb  6 16:40 bin/  ### build.sh成功生成bin目录
-rwxr-xr-x  1 root root  966 Feb  6 16:32 build.sh*
drwxr-xr-x  3 root root 4096 Feb  6 16:32 cmd/
drwxr-xr-x  2 root root 4096 Feb  6 16:32 filters/
drwxr-xr-x  8 root root 4096 Feb  6 16:34 .git/
drwxr-xr-x  4 root root 4096 Feb  6 16:32 .github/
-rw-r--r--  1 root root   55 Feb  6 16:32 .gitignore
-rw-r--r--  1 root root  424 Feb  6 16:32 go.mod
-rw-r--r--  1 root root 3232 Feb  6 16:32 go.sum
drwxr-xr-x 13 root root 4096 Feb  6 16:32 internal/
-rw-r--r--  1 root root 1078 Feb  6 16:32 license.txt
-rw-r--r--  1 root root 3499 Feb  6 16:32 README.md
-rw-r--r--  1 root root 1924 Feb  6 16:32 restore.toml
-rw-r--r--  1 root root 1918 Feb  6 16:32 scan.toml
drwxr-xr-x  4 root root 4096 Feb  6 16:32 scripts/
-rw-r--r--  1 root root 2031 Feb  6 16:32 sync.toml
drwxr-xr-x  5 root root 4096 Feb  6 16:32 test/
root@ubuntu20-171:RedisShake# cd bin/
root@ubuntu20-171:bin# ll
total 9760
drwxr-xr-x  4 root root    4096 Feb  6 16:40 ./
drwxr-xr-x 10 root root    4096 Feb  6 16:36 ../
drwxr-xr-x  2 root root    4096 Feb  6 16:34 cluster_helper/
drwxr-xr-x  2 root root    4096 Feb  6 16:34 filters/
-rwxr-xr-x  1 root root 9962342 Feb  6 16:34 redis-shake*   ## executable
-rw-r--r--  1 root root    1968 Feb  6 16:40 restore.toml   ## config template
-rw-r--r--  1 root root    1918 Feb  6 16:34 scan.toml
-rw-r--r--  1 root root    2031 Feb  6 16:34 sync.toml

Usage

Basic usage

1. Edit the matching configuration file (sync.toml or restore.toml), or create your own .toml migration configuration file.
2. Start the redis-shake data synchronization process:
root@ubuntu20-171:RedisShake# ./bin/redis-shake sync.toml
# or
root@ubuntu20-171:RedisShake# ./bin/redis-shake restore.toml

Workflow

Create an xx.toml configuration file  ---> define the sync rules in the toml file  ---> run the redis-shake process  ---> verify the synchronized data

Configuration Parameters

type = "sync" # sync mode

[source] # source Redis instance
version = 5.0 # source Redis version, e.g. 2.8, 4.0, 5.0, 6.0, 6.2, 7.0, ...
address = "127.0.0.1:6379" # source Redis address:port
username = "" # leave empty unless ACLs are configured
password = "" # leave empty unless authentication is required
tls = false # whether to enable TLS
elasticache_psync = "" # PSYNC support for AWS ElastiCache

[target]
type = "standalone" # Redis deployment type: "standalone" or "cluster"
version = 5.0  # target Redis version, e.g. 2.8, 4.0, 5.0, 6.0, 6.2, 7.0, ...
# If the target is a cluster, the address of any one node is enough;
# redis-shake discovers the other nodes via the `cluster nodes` command.
address = "127.0.0.1:6380" # target Redis address:port
username = "" # leave empty unless ACLs are configured
password = "" # leave empty unless authentication is required
tls = false # whether to enable TLS

[advanced]
dir = "data" # working directory for the sync

# maximum number of CPU cores to use; 0 means runtime.NumCPU(),
# i.e. the actual number of cores
ncpu = 4

# pprof port for profiling; 0 disables it
pprof_port = 0

# metrics port; 0 disables it
metrics_port = 0

# logging
log_file = "redis-shake.log" # log file name
log_level = "info" # debug, info or warn
log_interval = 5 # in seconds

# redis-shake gets keys and values from the RDB file and uses the RESTORE
# command to create each key in the target Redis. Redis RESTORE returns a
# "Target key name is busy" error when the key already exists. This setting
# controls what happens in that case:
# panic:   redis-shake stops on a "Target key name is busy" error.
# rewrite: redis-shake replaces the key with the new value.
# skip:    redis-shake skips the key when it already exists.
rdb_restore_command_behavior = "rewrite"  # panic, rewrite or skip

# pipeline size threshold
pipeline_count_limit = 1024

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default. This amount is normally 1gb.
target_redis_client_max_querybuf_len = 1024_000_000

# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb.
target_redis_proto_max_bulk_len = 512_000_000

Sync Log Notes

When the log prints send RDB finished, full data migration is complete and incremental data migration begins.

The log fields mean the following:

allowOps: number of commands sent to the target per second.
Note: allowOps staying at 0 usually means the migration has caught up and redis-shake can be stopped, but the source sends periodic PING commands, so allowOps is occasionally non-zero.
disallowOps: number of commands filtered out per second.
entryId: counts from 1; the total number of commands redis-shake has processed.
InQueueEntriesCount: number of commands still waiting to be sent.


Stop writing to the source, wait until allowOps in the log stays at 0 for several consecutive intervals, then press Ctrl+C to stop redis-shake.
At that point the target holds exactly the same data as the source, and you can switch your application from the self-hosted Redis to the Tair or Redis instance.
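A hypothetical monitoring helper can automate the "several consecutive zeros" check described above; the sample line mirrors the JSON log format redis-shake emits:

```python
# Watch redis-shake's "syncing aof" log lines for a run of zero allowOps.
import json
import re

def allow_ops(log_line: str) -> float:
    """Extract the allowOps value from one JSON log line."""
    msg = json.loads(log_line)["message"]
    m = re.search(r"allowOps=\[([\d.]+)\]", msg)
    if m is None:
        raise ValueError("not a syncing aof line")
    return float(m.group(1))

def safe_to_stop(lines, zero_streak: int = 3) -> bool:
    """True once allowOps has been 0 for zero_streak consecutive lines."""
    streak = 0
    for line in lines:
        streak = streak + 1 if allow_ops(line) == 0.0 else 0
        if streak >= zero_streak:
            return True
    return False

sample = ('{"level":"info","time":"2023-02-02T17:17:43+08:00",'
          '"message":"syncing aof. allowOps=[0.00], disallowOps=[0.00]"}')
print(allow_ops(sample))
```

The streak threshold is arbitrary; because of the periodic PING traffic noted above, requiring several consecutive zeros is safer than reacting to a single one.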

Writing the toml Configuration

Standalone -> standalone configuration:
type = "sync"
[source]  
address = "127.0.0.1:6379"
password = "123456"  ## use your own password

[target]  
type = "standalone" # type must be standalone here
address = "127.0.0.1:6379" 
password = "123456" ## use your own password

Start redis-shake:
root@ubuntu20-171:RedisShake# ./redis-shake sync.toml
Standalone -> cluster configuration:
type = "sync"
[source]  # source configuration
address = "127.0.0.1:6379"
password = "1234566"

[target]  # target configuration
type = "cluster"   # type must be cluster here
address = "127.0.0.1:6379" # any one node of the cluster will do
password = "1234566"

Start redis-shake:
root@ubuntu20-171:RedisShake# ./redis-shake sync.toml
Cluster -> cluster configuration:

In the RedisShake V3 series, currently the most widely used version, cluster-to-cluster data synchronization builds on the standalone-to-cluster case: you start one shake process per master node on the source side.

Method 1: start multiple redis-shake processes by hand
Example: to synchronize data between two 3-master/3-replica clusters, start one shake process per source master, i.e. three standalone-to-cluster shake processes, for a seamless migration and synchronization between the two Redis clusters.
Cluster C has 3 nodes:
192.168.0.1:6379
192.168.0.2:6379
192.168.0.3:6379

Treat the 3 nodes as 3 standalone instances and, following the standalone-to-cluster setup, deploy 3 redis-shake processes to synchronize the data.

⚠️ Note: do not start multiple redis-shake processes in the same directory. redis-shake stores temporary files locally, and multiple processes would interfere with each other's files; the correct approach is to create a separate directory for each one.
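The per-process bookkeeping from this note can be scripted. This sketch only generates one working directory and a minimal sync.toml per source master; the master addresses are the example ones from cluster C, and the target address is made up for illustration:

```python
# Create one working directory plus a minimal sync.toml per source master,
# so each redis-shake process runs isolated from the others.
import os
import tempfile

MASTERS = ["192.168.0.1:6379", "192.168.0.2:6379", "192.168.0.3:6379"]
TARGET = "192.168.1.1:6380"  # any one node of the target cluster

TEMPLATE = """type = "sync"

[source]
address = "{source}"

[target]
type = "cluster"
address = "{target}"
"""

def prepare_dirs(base: str) -> list:
    """Write shake-<i>/sync.toml for each master; return the config paths."""
    configs = []
    for i, master in enumerate(MASTERS):
        workdir = os.path.join(base, f"shake-{i}")
        os.makedirs(workdir, exist_ok=True)  # one directory per redis-shake
        path = os.path.join(workdir, "sync.toml")
        with open(path, "w") as f:
            f.write(TEMPLATE.format(source=master, target=TARGET))
        configs.append(path)
    return configs

base = tempfile.mkdtemp()
paths = prepare_dirs(base)
print(len(paths))
```

Each redis-shake process would then be launched from its own shake-&lt;i&gt; directory with its own config, matching the warning above.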
Method 2: start via cluster_helper.py

The cluster_helper.py script makes it easy to start multiple redis-shake processes when migrating from a cluster; the effect is the same as method 1. ⚠️ Notes:

  1. cluster_helper.py starts one redis-shake process per source shard, so if the source has many shards, check whether the machine can handle that many processes.
  2. If cluster_helper.py exits abnormally, it may leave redis-shake processes running; check with ps aux | grep redis-shake.
  3. Each redis-shake process logs to RedisShake/cluster_helper/data/xxxxx; include these logs when reporting problems.
Dependencies

Python 3.6 or later is required. Install the Python dependencies:

root@ubuntu20-171:RedisShake# cd bin/cluster_helper
root@ubuntu20-171:cluster_helper# pip3 install -r requirements.txt

Configuration

Edit sync.toml:

type = "sync"

[source]
address = "192.168.0.1:6379" # any one node of cluster C
password = "r-ccccc:xxxxx"

[target]
type = "cluster"
address = "192.168.1.1:6380" # any one node of cluster D
password = "r-ddddd:xxxxx"

Start redis-shake:
root@ubuntu20-171:RedisShake# cd bin/cluster_helper
root@ubuntu20-171:cluster_helper# python3 cluster_helper.py ../redis-shake ../sync.toml 

Argument 1 is the path to the redis-shake executable.
Argument 2 is the path to the configuration file.

Hands-On (sync mode example)

root@ubuntu20-171:~# cd /data/tool/
root@ubuntu20-171:/data/tool# cd RedisShake/bin
root@ubuntu20-171:bin# ll
total 9760
drwxr-xr-x  4 root root    4096 Feb  6 16:40 ./
drwxr-xr-x 10 root root    4096 Feb  6 16:36 ../
drwxr-xr-x  2 root root    4096 Feb  6 16:34 cluster_helper/
drwxr-xr-x  2 root root    4096 Feb  6 16:34 filters/
-rwxr-xr-x  1 root root 9962342 Feb  6 16:34 redis-shake*
-rw-r--r--  1 root root    1968 Feb  6 16:40 restore.toml
-rw-r--r--  1 root root    1918 Feb  6 16:34 scan.toml
-rw-r--r--  1 root root    2031 Feb  6 16:34 sync.toml
root@ubuntu20-171:bin# vim sync.toml
type = "sync"
[source]
version = 4.0 # redis version, such as 2.8, 4.0, 5.0, 6.0, 6.2, 7.0, ...
address = "10.150.38.200:6379"
##username = "@a123" # keep empty if not using ACL
password = "********" # keep empty if no authentication is required
tls = false
[target]
type = "standalone" # "standalone" or "cluster"
version = 6.2 # redis version, such as 2.8, 4.0, 5.0, 6.0, 6.2, 7.0, ...
address = "10.150.3.38:6379"
password = "@a123" # keep empty if no authentication is required
tls = false

[advanced]
dir = "/data/redis"
ncpu = 2
pprof_port = 0
metrics_port = 0

# log
log_file = "/data/backup/redis-shake.log"
log_level = "info" # debug, info or warn
log_interval = 1 # in seconds
# pip
rdb_restore_command_behavior = "rewrite" # panic, rewrite or skip
pipeline_count_limit = 1024
target_redis_client_max_querybuf_len = 1024_000_000
target_redis_proto_max_bulk_len = 512_000_000

Start

root@xgsdk-dev-mysql8-transfer-01:/data/tool/RedisShake/bin# ./redis-shake sync.toml

Log Output

Synchronization goes through three stages:

Waiting for the source to finish its RDB save. Log output:
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"GOOS: linux, GOARCH: amd64"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"Ncpu: 2, GOMAXPROCS: 2"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"pid: 245326"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"pprof_port: 0"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"No lua file specified, will not filter any cmd."}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"auth successful. address=[10.150.3.38:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"redisWriter connected to redis successful. address=[10.150.3.38:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"auth successful. address=[10.150.38.200:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"psyncReader connected to redis successful. address=[10.150.38.200:6379]"}
{"level":"warn","time":"2023-02-02T17:17:42+08:00","message":"remove file. filename=[225789358.aof]"}
{"level":"warn","time":"2023-02-02T17:17:42+08:00","message":"remove file. filename=[6379.rdb]"}
{"level":"warn","time":"2023-02-02T17:17:42+08:00","message":"remove file. filename=[dump.rdb]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"start save RDB. address=[10.150.38.200:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"send [replconf listening-port 10007]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"send [PSYNC ? -1]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"receive [FULLRESYNC fdb52e2bb825223140e9eca34e67d1cd234f0aa8 225842514]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"source db is doing bgsave. address=[10.150.38.200:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"source db bgsave finished. timeUsed=[0.01]s, address=[10.150.38.200:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"received rdb length. length=[62050]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"create dump.rdb file. filename_path=[dump.rdb]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"save RDB finished. address=[10.150.38.200:6379], total_bytes=[62050]"}

Full sync stage

With larger data sets a progress percentage is shown (my dev environment holds little data). Log output:

{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"start send RDB. address=[10.150.38.200:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB version: 8"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB AUX fields. key=[redis-ver], value=[4.0.9]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"start save AOF. address=[10.150.38.200:6379]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB AUX fields. key=[redis-bits], value=[64]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"AOFWriter open file. filename=[225842514.aof]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB AUX fields. key=[ctime], value=[1675329462]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB AUX fields. key=[used-mem], value=[118817296]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB repl-stream-db: 6"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB AUX fields. key=[repl-id], value=[fdb52e2bb825223140e9eca34e67d1cd234f0aa8]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB AUX fields. key=[repl-offset], value=[225842514]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB AUX fields. key=[aof-preamble], value=[0]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB resize db. db_size=[427], expire_size=[421]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB resize db. db_size=[2], expire_size=[0]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"RDB resize db. db_size=[58], expire_size=[56]"}
{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"send RDB finished. address=[10.150.38.200:6379], repl-stream-db=[6]"}   ## full sync finished

## The line send RDB finished indicates the full sync is complete

Incremental sync

Log output:

{"level":"info","time":"2023-02-02T17:17:42+08:00","message":"send RDB finished. address=[10.150.38.200:6379], repl-stream-db=[6]"}
{"level":"info","time":"2023-02-02T17:17:43+08:00","message":"syncing aof. allowOps=[487.00], disallowOps=[0.00], entryId=[486], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[225842655], aofReceivedOffset=[225842655], aofAppliedOffset=[0]"}   ### incremental sync has started
{"level":"info","time":"2023-02-02T17:17:43+08:00","message":"AOFReader open file. aof_filename=[225842514.aof]"}
{"level":"info","time":"2023-02-02T17:17:43+08:00","message":"syncing aof. allowOps=[0.00], disallowOps=[0.00], entryId=[12], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[242], aofAppliedOffset=[242]"}
{"level":"info","time":"2023-02-02T17:17:44+08:00","message":"syncing aof. allowOps=[0.20], disallowOps=[0.00], entryId=[13], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[256], aofAppliedOffset=[256]"}
{"level":"info","time":"2023-02-02T17:17:45+08:00","message":"syncing aof. allowOps=[0.00], disallowOps=[0.00], entryId=[13], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[256], aofAppliedOffset=[256]"}
{"level":"info","time":"2023-02-02T17:17:46+08:00","message":"syncing aof. allowOps=[0.20], disallowOps=[0.00], entryId=[14], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[270], aofAppliedOffset=[270]"}
{"level":"info","time":"2023-02-02T17:17:47+08:00","message":"syncing aof. allowOps=[0.00], disallowOps=[0.00], entryId=[14], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[270], aofAppliedOffset=[270]"}
{"level":"info","time":"2023-02-02T17:17:48+08:00","message":"syncing aof. allowOps=[0.20], disallowOps=[0.00], entryId=[15], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[284], aofAppliedOffset=[284]"}
{"level":"info","time":"2023-02-02T17:17:49+08:00","message":"syncing aof. allowOps=[0.00], disallowOps=[0.00], entryId=[15], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[284], aofAppliedOffset=[284]"}
{"level":"info","time":"2023-02-02T17:17:50+08:00","message":"syncing aof. allowOps=[0.20], disallowOps=[0.00], entryId=[16], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[298], aofAppliedOffset=[298]"}
{"level":"info","time":"2023-02-02T17:17:51+08:00","message":"syncing aof. allowOps=[0.00], disallowOps=[0.00], entryId=[16], InQueueEntriesCount=[0], unansweredBytesCount=[0]bytes, diff=[0], aofReceivedOffset=[298], aofAppliedOffset=[298]"}

When the log keeps showing syncing aof. allowOps=[0.00], or such lines appear at a high frequency, the incremental sync has caught up and you can stop the redis-shake process with Ctrl+C. In practice allowOps rarely stays at exactly 0, for example while clients are still connected to the source.

Check the source and the target
Source:
127.0.0.1:6379> info Replication
# Replication
role:master
connected_slaves:1  ### the source gains a redis-shake replica
slave0:ip=127.0.0.1,port=10007,state=online,offset=1040,lag=0 ### the shake process
master_failover_state:no-failover
master_replid:b20dfd6f617c867cfaf0910c08be627fe7b922d7
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1040
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:104857600
repl_backlog_first_byte_offset:229
repl_backlog_histlen:812

127.0.0.1:6379> info  Keyspace   ## compare key counts
# Keyspace
db0:keys=6,expires=0,avg_ttl=0
db2:keys=3,expires=0,avg_ttl=0
db10:keys=3,expires=0,avg_ttl=0

Target:
127.0.0.1:7000> info Replication
# Replication
role:master   ### the target remains a master
connected_slaves:0
master_failover_state:no-failover
master_replid:1017a92f7fd8447d304172796ea6044a1f801e26
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:104857600
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

127.0.0.1:7000> info  Keyspace  ## compare key counts
# Keyspace
db0:keys=6,expires=0,avg_ttl=0
db2:keys=3,expires=0,avg_ttl=0
db10:keys=3,expires=0,avg_ttl=0
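The two Keyspace sections can also be compared programmatically; parse_keyspace below is an illustrative helper based on the info Keyspace output format shown above:

```python
# Compare per-database key counts from two "info Keyspace" outputs.

def parse_keyspace(info: str) -> dict:
    """Parse 'dbN:keys=X,expires=Y,avg_ttl=Z' lines into {db: key count}."""
    counts = {}
    for line in info.splitlines():
        if line.startswith("db"):
            db, rest = line.split(":", 1)
            fields = dict(kv.split("=") for kv in rest.split(","))
            counts[db] = int(fields["keys"])
    return counts

source_info = """# Keyspace
db0:keys=6,expires=0,avg_ttl=0
db2:keys=3,expires=0,avg_ttl=0
db10:keys=3,expires=0,avg_ttl=0"""

target_info = source_info  # identical once the migration has caught up
print(parse_keyspace(source_info) == parse_keyspace(target_info))
```

Keep the caveat from the notes below in mind: expired-but-undeleted keys can make the target count legitimately lower than the source's.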

Notes

  1. If the target's eviction policy (maxmemory-policy) is set to anything other than noeviction, the target's data may diverge from the source.
  2. If some keys in the source use expiration (expire), keys may have expired without yet being deleted, so the key count observed on the target (e.g. via the info command) may be lower than on the source.

References

Official documentation: https://github.com/alibaba/RedisShake