Installing Redis and Configuring a Cluster on CentOS 6

Date: 2023-12-13 23:56:38

Installing Redis

First, make sure gcc and tcl are installed:

sudo yum install gcc-c++
sudo yum install tcl

Extract, build, and install:

tar zxvf redis-3.0..tar.gz
sudo mv redis-3.0. /usr/src/
cd /usr/src/redis-3.0./
sudo make
sudo make test
sudo make install   # installs to /usr/local/bin
# If you don't want to install to /usr/local/bin, the following installs to
# /opt/redis/redis-3.0./bin instead; then create symlinks in /usr/bin
sudo make PREFIX=/opt/redis/redis-3.0. install

The installation is simple and can be built anywhere; what matters are the binaries produced at the end. Redis consists of four executables: redis-benchmark, redis-cli, redis-server, and redis-stat. These four files plus a redis.conf make up the complete usable package. Their roles:

  • redis-server: the Redis server daemon
  • redis-cli: the Redis command-line client. You can also speak its plain-text protocol over telnet
  • redis-benchmark: a benchmarking tool that measures Redis read/write performance on your system with your configuration
  • redis-stat: a status tool that reports current Redis state and latency
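
A quick sanity check that the binaries landed on the PATH (the exact version strings will of course differ):

redis-server --version
redis-cli --version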

Now Redis can be started. redis-server takes a single argument, the path to its configuration file; the source tree ships a redis.conf to use as a starting point:

redis-server /etc/redis.conf

The daemonize parameter in the default redis.conf is no, so Redis will not run in the background; change it to yes to daemonize. The configuration file also sets the locations of the pid file, log file, and data files; adjust these first if needed. By default, log output goes to stdout.

The main redis.conf parameters and their meanings (a sample configuration follows the list):

  • daemonize: whether to run as a background daemon
  • pidfile: pid file location
  • port: port to listen on
  • timeout: request timeout
  • loglevel: log level
  • logfile: log file location
  • databases: number of databases to open
  • save <seconds> <changes>: snapshot frequency; the first value is a time window and the second a number of write operations. A snapshot is saved automatically when at least that many writes occur within the window. Multiple conditions may be set.
  • rdbcompression: whether to compress snapshots
  • dbfilename: snapshot file name (name only, no directory)
  • dir: directory where snapshots are saved (this one is a directory)
  • appendonly: whether to enable the append-only log, which records every write operation. It improves durability at some cost in throughput.
  • appendfsync: how the append-only log is synced to disk (three options: force fsync on every write, fsync once per second, or never call fsync and let the OS flush on its own)
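
A minimal sketch pulling the parameters above together; the values are illustrative assumptions, not tuned recommendations:

daemonize yes
pidfile /var/run/redis.pid
port 6379
timeout 300
loglevel notice
logfile /var/redis/log/redislog
databases 16
# snapshot after 900s if at least 1 key changed, or after 300s if 10 changed
save 900 1
save 300 10
rdbcompression yes
dbfilename dump.rdb
dir /var/redis/data
appendonly no
appendfsync everysec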

Create a user and log directory for Redis

groupadd redis
useradd -g redis -s /bin/bash redis
mkdir -p /var/redis/data
chown redis:redis /var/redis/data

To change the snapshot directory, edit redis.conf:

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# Also the Append Only File will be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /var/redis/data

Change the log file location:

mkdir -p /var/redis/log
chown redis:redis /var/redis/log

# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/redis/log/redislog

Set a password: uncomment the line below and change foobared to a long string of 32+ characters; Redis is fast enough that short passwords are easy to brute-force.

# requirepass foobared
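
One way to generate a suitably long random value (any strong generator works; openssl just happens to be on most systems):

openssl rand -hex 32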

Testing from the command line

~]$ ./bin/redis-cli
127.0.0.1:6379> auth foobared
OK
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> get foo
"bar"
127.0.0.1:6379> exit

The password can also be given on the command line with -a, e.g. redis-cli -a foobared.

Benchmarking

/opt/redis/redis-3.2./bin/redis-benchmark -l -p 6379 -a foobared

The -l flag runs the benchmark suite in a loop.
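
For a more targeted run, the standard benchmark flags can bound the workload; the values here are illustrative (-n is the total request count, -c the number of parallel clients, -t limits which tests run):

redis-benchmark -h 127.0.0.1 -p 6379 -a foobared -n 100000 -c 50 -t set,get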

Warnings in the log

# WARNING: The TCP backlog setting of 511 cannot be enforced because
/proc/sys/net/core/somaxconn is set to the lower value of 128.
# Server initialized
# WARNING overcommit_memory is set to 0! Background save may fail under low
memory condition. To fix this issue add 'vm.overcommit_memory = 1' to
/etc/sysctl.conf and then reboot or run the command 'sysctl
vm.overcommit_memory=1' for this to take effect.
# WARNING you have Transparent Huge Pages (THP) support enabled in your
kernel. This will create latency and memory usage issues with Redis. To fix
this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
as root, and add it to your /etc/rc.local in order to retain the setting
after a reboot. Redis must be restarted after THP is disabled.

Fix: add the following to /etc/sysctl.conf

net.core.somaxconn = 511
vm.overcommit_memory = 1

then run sysctl -p to apply.
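
To confirm the kernel picked up the new values:

sysctl net.core.somaxconn vm.overcommit_memory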

About the transparent_hugepage warning

Only 64-bit systems have the transparent_hugepage feature. Starting with RHEL 6, the kernel enables transparent hugepages for all programs by default: it tries to allocate 2 MB pages where possible, and kernel space itself is 2 MB-aligned in memory. The goal is to reduce pressure on the kernel TLB (modern CPUs use a small associative memory, the translation lookaside buffer, to cache the PTEs of recently accessed virtual pages); larger pages naturally mean fewer TLB entries. If no contiguous 2 MB region is available, the kernel falls back to 4 KB pages. THP pages can still be swapped out, which works by splitting the 2 MB huge page back into ordinary 4 KB pages. The kernel adds a khugepaged thread that continually scans for contiguous, sufficiently large, suitably aligned regions to satisfy allocation requests, and occasionally tries to replace runs of small contiguous pages with one huge allocation to maximize THP use. Database workloads are advised not to enable transparent hugepages: Oracle, MongoDB, and Redis all recommend disabling THP, because THP allocates memory dynamically at run time and can introduce allocation latency.

To check whether THP is enabled:

[root@middle ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@middle ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never

To disable it immediately:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

To make it persist across reboots:

# Add the following to /etc/rc.local

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

An /etc/init.d/redis script for registering Redis as a service.
Update 2017-11-10: the redis-4.0.2 source includes an init.d example at utils/redis_init_script.

#!/bin/bash
#
# redis        Start up the Redis server daemon
#
# chkconfig:
# description:
#
# processname: /opt/redis/redis-3.2./bin/redis-server
# pidfile: /var/redis/redis.pid

PATH=/sbin:/bin:/usr/bin:/usr/sbin
USER=redis
REDISPORT=6379    # adjust to your port
EXEC=/opt/redis/redis-3.2./bin/redis-server
REDIS_CLI=/opt/redis/redis-3.2./bin/redis-cli
SECURE=foobar
PIDFILE=/var/redis/redis.pid
CONF=/opt/redis/redis-3.2./conf/redis.conf

case "$1" in
    start)
        # disable transparent hugepages before starting Redis
        if [ -d /sys/kernel/mm/transparent_hugepage ]; then
            echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
            echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag
        elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
            echo 'never' > /sys/kernel/mm/redhat_transparent_hugepage/enabled
            echo 'never' > /sys/kernel/mm/redhat_transparent_hugepage/defrag
        fi
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            su - $USER -c "$EXEC $CONF"
            if [ $? -eq 0 ]
            then
                echo "Redis is running..."
            fi
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            su - $USER -c "$REDIS_CLI -p $REDISPORT -a $SECURE SHUTDOWN"
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    restart|force-reload)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: /etc/init.d/redis {start|stop|restart|force-reload}" >&2
        exit 1
esac

/etc/init.d/redis for Ubuntu 16.04

#!/bin/sh
#
### BEGIN INIT INFO
# Provides:          redis
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Should-Start:      $network $time
# Should-Stop:       $network $time
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start/Stop Redis Server daemon
# Description:       This service script facilitates startup and shutdown of
#                    the redis daemon
### END INIT INFO
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

REDISPORT=6379    # adjust to your port
EXEC=/opt/redis/redis-4.0.2/bin/redis-server
CLIEXEC=/opt/redis/redis-4.0.2/bin/redis-cli
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/opt/redis/redis-4.0.2/conf/${REDISPORT}.conf"
SECURE=biggbang

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            $CLIEXEC -p $REDISPORT -a $SECURE shutdown
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac

Register it as an Ubuntu system service:

sudo update-rc.d redis defaults
# to remove it later, use
sudo update-rc.d redis remove

Once registered, the service responds to start, stop, and status.

Configuring a Redis Cluster

Redis Cluster automatically shards the dataset across multiple nodes and provides a degree of availability under partitions: the service keeps running when some nodes fail or become unreachable.
Every Redis node in a cluster needs two open TCP ports: one for clients, e.g. 6379, and one for node-to-node cluster communication, e.g. 16379.

How the cluster works
The cluster uses a hash slot scheme rather than consistent hashing. Redis uses 16384 hash slots (2^14). Each key is mapped to a slot by taking CRC16 of the key modulo 16384, and each node in the cluster is responsible for a subset of the slots. With three machines, Node A might serve slots 0-5500, Node B 5501-11000, and Node C 11001-16383. This design makes adding and removing nodes convenient. For example:

To add a new node D, just move some slots from A, B, and C onto D.
To remove node A, migrate A's slots to B and C.

Because hash slots can migrate between nodes without stopping the service, adding or removing cluster nodes and rebalancing the share of slots each node owns can all be done online. You can inspect the key-to-slot mapping directly, as shown below.
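
CLUSTER KEYSLOT exposes the CRC16-based mapping, which is handy for predicting which node will own a key (the port assumes the 7000-7005 test cluster built later in this post):

redis-cli -c -p 7000 cluster keyslot foo
(integer) 12182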

Redis Cluster Master-Slave Model
To keep serving when some nodes fail or become unreachable, Redis Cluster replicates data in a master-slave model: every hash slot can have 1 to N replicas.
In a cluster formed by nodes A, B, and C, you can add A1, B1, and C1 as their respective slaves; if B goes down, the former slave B1 is promoted to master and the cluster keeps serving.

On consistency, the cluster can lose writes, e.g.:
- A client writes to node A
- Node A acknowledges the write to the client
- Node A crashes before replicating the write to its slave A1
- A1 is promoted to master, and the write is lost

To address this, Redis Cluster offers support for synchronous writes, sketched below.
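
Synchronous writes are exposed through the WAIT command: it blocks until the preceding writes on the connection have been acknowledged by at least the given number of replicas, or until the timeout in milliseconds expires, and returns the number of replicas reached. A minimal sketch, assuming the key's master has one replica:

127.0.0.1:7000> set foo bar
OK
127.0.0.1:7000> wait 1 100
(integer) 1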

Cluster configuration parameters
All cluster parameters live in redis.conf; the main ones are:

cluster-enabled: whether to run in cluster mode
cluster-config-file: the file the cluster automatically rewrites whenever its configuration changes
cluster-node-timeout: node timeout; a node that cannot reach a majority of masters for longer than this stops accepting queries
cluster-slave-validity-factor: controls slave failover. Set to 0, a slave will always attempt failover; set to a positive number, a slave disconnected from its master for longer than factor * node timeout will no longer attempt failover
cluster-migration-barrier: the minimum number of slaves a master must retain
cluster-require-full-coverage: defaults to yes, meaning the cluster stops accepting writes once some portion of the key space is lost (e.g. a node is down or unreachable); set to no to keep serving queries for the keys still covered

Setup method 1

A cluster needs at least 3 nodes; the recommended layout is 6 nodes, 3 masters and 3 slaves. First start 6 independent Redis nodes, then link them into a cluster.

A minimal cluster configuration file; in real use, give each node its own data file and log file names:

port 7000   # one port per node, e.g. 7000-7005
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

Create 6 empty directories, one per node:

mkdir cluster-test
cd cluster-test
mkdir 7000 7001 7002 7003 7004 7005

Put a config file (with the matching port) in each directory and start the 6 standalone instances:

cd 7000
../redis-server ./redis.conf

After startup you will see a log line like the one below: no nodes.conf exists yet, so each node generates its own node ID:

* No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1

Linking the nodes into a cluster
The src directory of the source tree ships a script named redis-trib.rb, a Ruby tool for creating clusters, checking them, resharding, and so on.

./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
# create builds a new cluster
# --replicas 1 gives every master in the cluster one replica

A message like the following means each of the 16384 hash slots is served by at least one master and the cluster can serve requests:

[OK] All 16384 slots covered
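
To double-check, redis-trib.rb can audit the cluster, and CLUSTER INFO / CLUSTER NODES report the cluster state from any node's point of view:

./redis-trib.rb check 127.0.0.1:7000
redis-cli -p 7000 cluster info
redis-cli -p 7000 cluster nodes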

Setup method 2
Use the create-cluster script, located in the utils/create-cluster directory. Run:

create-cluster start
create-cluster create

By default the first node listens on port 30001. When you are finished, run:

create-cluster stop
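
The same script can also tear down everything it generated; its clean subcommand removes the config files, logs, and data files the nodes wrote:

create-cluster clean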

Verify the cluster works

$ redis-cli -c -p 7000
redis 127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
redis 127.0.0.1:7002> set hello world
-> Redirected to slot [866] located at 127.0.0.1:7000
OK
redis 127.0.0.1:7000> get foo
-> Redirected to slot [12182] located at 127.0.0.1:7002
"bar"
redis 127.0.0.1:7002> get hello
-> Redirected to slot [866] located at 127.0.0.1:7000
"world"

Operating on a Redis Cluster

Adding a new master node

Adding a new node basically means adding an empty node and then either moving some data into it, in which case it becomes a new master, or explicitly configuring it as a replica, in which case it is a slave.

[root@ecs-d6b3- cluster-test]# cp -R 7000 7006
[root@ecs-d6b3- cluster-test]# vi 7006/redis.conf   # change the port to 7006
[root@ecs-d6b3- cluster-test]# cd 7006
[root@ecs-d6b3- 7006]# ../redis-server redis.conf
# use redis-trib to add the node to an existing cluster
./redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000

The first address is the new node; the second is any node already in the cluster.
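
The new node joins with zero hash slots, so it serves no keys until slots are resharded onto it; its own view of the cluster confirms this (7006 being the assumed port of the new node from the example above):

redis-cli -p 7006 cluster nodes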

Adding a new node as a replica

./redis-trib.rb add-node --slave 127.0.0.1:7006 127.0.0.1:7000

Removing a node

./redis-trib.rb del-node 127.0.0.1:7000 `<node-id>`
./redis-trib.rb del-node 127.0.0.1:7000 7c7b7f68bc56bf24cbb36b599d2e2d97b26c5540

Resharding the cluster

./redis-trib.rb reshard 127.0.0.1:7000
# --from takes node ids (multiple allowed, comma-separated, or all); --to takes exactly one destination node id
./redis-trib.rb reshard --from <node-id> --to <node-id> --slots <number of slots> --yes <host>:<port>

Before removing a node you must empty it by resharding its slots away. The first command above starts an interactive session; the second form runs non-interactively. Below is a transcript of a run:

$ruby redis-trib.rb reshard --from all --to 80b661ecca260c89e3d8ea9b98f77edaeef43dcd --slots 11 10.180.157.199:6379
>>> Performing Cluster Check (using node 10.180.157.199:6379)
S: b2506515b38e6bbd3034d540599f4cd2a5279ad1 10.180.157.199:6379
slots: (0 slots) slave
replicates 460b3a11e296aafb2615043291b7dd98274bb351
S: d376aaf80de0e01dde1f8cd4647d5ac3317a8641 10.180.157.205:6379
slots: (0 slots) slave
replicates e36c46dbe90960f30861af00786d4c2064e63df2
M: 15126fb33796c2c26ea89e553418946f7443d5a5 10.180.157.201:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 59fa6ee455f58a5076f6d6f83ddd74161fd7fb55 10.180.157.208:6379
slots: (0 slots) slave
replicates 15126fb33796c2c26ea89e553418946f7443d5a5
M: 460b3a11e296aafb2615043291b7dd98274bb351 10.180.157.202:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 80b661ecca260c89e3d8ea9b98f77edaeef43dcd 10.180.157.200:6380
slots: (0 slots) master
0 additional replica(s)
M: e36c46dbe90960f30861af00786d4c2064e63df2 10.180.157.200:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered. Ready to move 11 slots.
Source nodes:
M: 15126fb33796c2c26ea89e553418946f7443d5a5 10.180.157.201:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: 460b3a11e296aafb2615043291b7dd98274bb351 10.180.157.202:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: e36c46dbe90960f30861af00786d4c2064e63df2 10.180.157.200:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
Destination node:
M: 80b661ecca260c89e3d8ea9b98f77edaeef43dcd 10.180.157.200:6380
slots: (0 slots) master
0 additional replica(s)
Resharding plan:
Moving slot 5461 from e36c46dbe90960f30861af00786d4c2064e63df2
Moving slot 5462 from e36c46dbe90960f30861af00786d4c2064e63df2
Moving slot 5463 from e36c46dbe90960f30861af00786d4c2064e63df2
Moving slot 5464 from e36c46dbe90960f30861af00786d4c2064e63df2
Moving slot 0 from 460b3a11e296aafb2615043291b7dd98274bb351
Moving slot 1 from 460b3a11e296aafb2615043291b7dd98274bb351
Moving slot 2 from 460b3a11e296aafb2615043291b7dd98274bb351
Moving slot 10923 from 15126fb33796c2c26ea89e553418946f7443d5a5
Moving slot 10924 from 15126fb33796c2c26ea89e553418946f7443d5a5
Moving slot 10925 from 15126fb33796c2c26ea89e553418946f7443d5a5
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 5461 from 10.180.157.200:6379 to 10.180.157.200:6380:
Moving slot 5462 from 10.180.157.200:6379 to 10.180.157.200:6380:
Moving slot 5463 from 10.180.157.200:6379 to 10.180.157.200:6380:
Moving slot 5464 from 10.180.157.200:6379 to 10.180.157.200:6380:
Moving slot 0 from 10.180.157.202:6379 to 10.180.157.200:6380:
Moving slot 1 from 10.180.157.202:6379 to 10.180.157.200:6380:
Moving slot 2 from 10.180.157.202:6379 to 10.180.157.200:6380:
Moving slot 10923 from 10.180.157.201:6379 to 10.180.157.200:6380:
Moving slot 10924 from 10.180.157.201:6379 to 10.180.157.200:6380:
Moving slot 10925 from 10.180.157.201:6379 to 10.180.157.200:6380:

Connecting to the Redis cluster from Java

Connecting with Jedis

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

public class JedisClusterTest {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(20);
        config.setMaxIdle(2);

        // Listing one or two masters is enough; Jedis discovers the rest
        Set<HostAndPort> hps = new HashSet<HostAndPort>();
        hps.add(new HostAndPort("localhost", 7000));
        hps.add(new HostAndPort("localhost", 7001));
        hps.add(new HostAndPort("localhost", 7002));
        hps.add(new HostAndPort("localhost", 7003));
        hps.add(new HostAndPort("localhost", 7004));
        hps.add(new HostAndPort("localhost", 7005));

        // 5000 ms connection timeout, at most 10 attempts/redirections
        JedisCluster jedisCluster = new JedisCluster(hps, 5000, 10, config);

        long start = System.currentTimeMillis();
        for (int i = 0; i < 100; i++) {
            jedisCluster.set("sn" + i, "n" + i);
        }
        long end = System.currentTimeMillis();
        System.out.println("Time : " + (end - start) + " ms");

        for (int i = 0; i < 100; i++) {
            System.out.println(jedisCluster.get("sn" + i));
        }
        jedisCluster.close();
    }
}
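
Note that JedisCluster keeps an internal connection pool per node and is safe to share across threads, so a single long-lived instance per application is the usual pattern. If requirepass is set on the cluster nodes, newer Jedis releases (2.9+, as far as I know) add JedisCluster constructor overloads that accept a password.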