Redis Cluster Installation (Three Masters, Three Slaves)

Redis Cluster

  Before setting up the cluster, the Redis service must already be installed on the servers; a three-master, three-slave cluster needs Redis installed on at least three servers. For installation instructions, see Redis learning notes (1): installation and configuration.
  Before Redis 5, cluster setup depends on Ruby (the version-dependency details are not covered here); for installing Ruby, see the Ruby installation notes.
  After installing Ruby, you also need to install the Redis interface for Ruby (the redis gem). With network access, it can be installed online:

gem install redis

  If the server has no network connection, download redis-4.1.3.gem in advance from https://rubygems.org/downloads/redis-4.1.3.gem, upload it to the server, and then run the following command in the directory containing the file:

gem install -l ./redis-4.1.3.gem
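
To confirm the gem is visible to Ruby afterwards, you can list the installed redis gem (a quick sanity check; the version shown will depend on what you installed):

gem list redis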

Now for the cluster setup itself. Assume the server nodes are as follows:

10.195.173.29:7200
10.195.173.29:7201
10.195.173.36:7200
10.195.173.36:7201
10.195.173.40:7200
10.195.173.40:7201

On each server, create the directories redis-cluster/7200 and redis-cluster/7201 under /usr/local/redis:

[root@localhost redis]# mkdir -p redis-cluster/7200 redis-cluster/7201

Then create redis-7200.conf and redis-7201.conf in the 7200 and 7201 directories respectively; the simplest way is to copy the stock redis.conf from the source package into each directory:

[root@localhost 7200]# cp /opt/redis-4.0.14/redis.conf redis-7200.conf
[root@localhost 7201]# cp /opt/redis-4.0.14/redis.conf redis-7201.conf

Edit redis-7200.conf and redis-7201.conf; note that the two files use different ports (the port, dir, cluster-config-file, and pidfile values all embed the port number):

port 7200
daemonize yes
protected-mode no
dir /usr/local/redis/redis-cluster/7200/
cluster-enabled yes
cluster-config-file nodes-7200.conf
cluster-node-timeout 5000
appendonly yes
pidfile redis_7200.pid
loglevel notice
logfile "notice.log"
#bind 127.0.0.1
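
Since the two config files differ only in the values that embed the port, one shortcut is to write the 7200 file first and derive the 7201 file from it. This is a convenience sketch; it assumes it is run from /usr/local/redis/redis-cluster and that every occurrence of 7200 in the file is meant to change:

[root@localhost redis-cluster]# sed 's/7200/7201/g' 7200/redis-7200.conf > 7201/redis-7201.conf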

Once all three servers are configured as above, start each Redis node:

[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-server /usr/local/redis/redis-cluster/7200/redis-7200.conf
[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-server /usr/local/redis/redis-cluster/7201/redis-7201.conf
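
Before creating the cluster, it is worth confirming that every instance actually started; a quick check (assuming redis-cli was built alongside redis-server under /usr/local/redis/bin) is to look for the processes and ping each port:

[root@i-212a7852 redis-cluster]# ps -ef | grep redis-server
[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -p 7200 ping
PONG
[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -p 7201 ping
PONG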

Once all nodes are running, run the cluster creation command on one of the servers. Redis proposes an allocation plan that splits the 6 nodes into 3 masters and 3 slaves; if the plan looks right, type yes to confirm. Note: use the servers' real IP addresses, not 127.0.0.1.

[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-trib.rb create --replicas 1 10.195.173.29:7200 10.195.173.29:7201 10.195.173.36:7200 10.195.173.36:7201 10.195.173.40:7200 10.195.173.40:7201
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.195.173.29:7200
10.195.173.36:7200
10.195.173.40:7200
Adding replica 10.195.173.29:7201 to 10.195.173.29:7200
Adding replica 10.195.173.36:7201 to 10.195.173.36:7200
Adding replica 10.195.173.40:7201 to 10.195.173.40:7200
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: fb116678b41904d9fe458382cf0c4f1a135b3c81 10.195.173.29:7200
slots:0-5460 (5461 slots) master
M: b3abcdc8db3321032925ba67db8f2cfee53e038d 10.195.173.36:7200
slots:5461-10922 (5462 slots) master
M: 3b3a89b0324132f47a049a2aad0c7781ed4eb0de 10.195.173.40:7200
slots:10923-16383 (5461 slots) master
S: 585b94dc3cf3d18d12d8c21ccbe4080e9372e867 10.195.173.29:7201
replicates b3abcdc8db3321032925ba67db8f2cfee53e038d
S: ad8bc0cc83db06a68705e0127960654a7e9f7dc0 10.195.173.36:7201
replicates 3b3a89b0324132f47a049a2aad0c7781ed4eb0de
S: 4ba7b4503298f451da4f021857ec9e5136b4c492 10.195.173.40:7201
replicates fb116678b41904d9fe458382cf0c4f1a135b3c81
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 172.18.32.39:7201)
M: fb116678b41904d9fe458382cf0c4f1a135b3c81 10.195.173.29:7200
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: b3abcdc8db3321032925ba67db8f2cfee53e038d 10.195.173.36:7200
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: ad8bc0cc83db06a68705e0127960654a7e9f7dc0 10.195.173.40:7200
slots: (0 slots) slave
replicates 3b3a89b0324132f47a049a2aad0c7781ed4eb0de
S: 4ba7b4503298f451da4f021857ec9e5136b4c492 10.195.173.29:7201
slots: (0 slots) slave
replicates fb116678b41904d9fe458382cf0c4f1a135b3c81
M: 3b3a89b0324132f47a049a2aad0c7781ed4eb0de 10.195.173.36:7201
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 585b94dc3cf3d18d12d8c21ccbe4080e9372e867 10.195.173.40:7201
slots: (0 slots) slave
replicates b3abcdc8db3321032925ba67db8f2cfee53e038d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
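
With the cluster reported as healthy, a quick functional check from any node is to connect with redis-cli in cluster mode (-c), which follows redirections automatically. This is just a sketch; the key name foo is arbitrary, and depending on which slot it hashes to, the set may be transparently redirected to another master:

[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -c -h 10.195.173.29 -p 7200
10.195.173.29:7200> cluster info
10.195.173.29:7200> set foo bar
10.195.173.29:7200> get foo

cluster info should report cluster_state:ok and cluster_slots_assigned:16384.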

One more note: starting with Redis 5.x, redis-trib.rb has been deprecated, and the same functionality is available directly through redis-cli's --cluster option:

[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli --cluster create --cluster-replicas 1 10.195.173.29:7200 10.195.173.29:7201 10.195.173.36:7200 10.195.173.36:7201 10.195.173.40:7200 10.195.173.40:7201

Once the command completes successfully, the cluster is up.
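
The other redis-trib.rb operations moved into redis-cli as well; for example, a consistency check of the running cluster can be run manually against any node (any of the six node addresses works):

[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli --cluster check 10.195.173.29:7200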

Appendix:

Cluster commands

cluster info: print information about the cluster.
cluster nodes: list all nodes currently known to the cluster, along with their details.
cluster meet <ip> <port>: add the node at the given ip and port to the cluster, making it part of the cluster.
cluster forget <node_id>: remove the node identified by node_id from the cluster (make sure it holds no slots first).
cluster replicate <node_id>: turn the current node into a slave of the node identified by node_id.
cluster saveconfig: save the node's cluster configuration file to disk.
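
For reference, these are run through redis-cli against any node, for example:

[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -p 7200 cluster info
[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -p 7200 cluster nodes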

Slot commands

cluster addslots <slot> [slot ...]: assign one or more slots to the current node.
cluster delslots <slot> [slot ...]: remove the assignment of one or more slots from the current node.
cluster flushslots: remove all slots assigned to the current node, leaving it with no assigned slots.
cluster setslot <slot> node <node_id>: assign the slot to the node identified by node_id; if the slot is already assigned to another node, that node must drop it first before it can be reassigned.
cluster setslot <slot> migrating <node_id>: migrate the slot from the current node to the node identified by node_id.
cluster setslot <slot> importing <node_id>: import the slot into the current node from the node identified by node_id.
cluster setslot <slot> stable: cancel an in-progress import or migration of the slot.

Key commands

cluster keyslot <key>: compute which slot the given key maps to.
cluster countkeysinslot <slot>: return the number of keys currently held in the slot.
cluster getkeysinslot <slot> <count>: return up to count keys from the slot.
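
A small usage sketch of the key commands: mykey is an arbitrary key name, <slot> stands for the slot number returned by the keyslot call, and countkeysinslot/getkeysinslot only report keys for slots served by the node you are connected to:

[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -p 7200 cluster keyslot mykey
[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -p 7200 cluster countkeysinslot <slot>
[root@i-212a7852 redis-cluster]# /usr/local/redis/bin/redis-cli -p 7200 cluster getkeysinslot <slot> 10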
