
Redis Cluster: A Distributed Redis Cluster

Redis · zkinogg · 2020-08-09

I. Redis Cluster Concepts

1. What Is Redis Cluster?

1. A Redis cluster is an installation that shares data across multiple Redis nodes.
2. Redis Cluster does not support commands that touch multiple keys in different slots, because executing them would require moving data between nodes; under high load they would degrade the cluster's performance and lead to unpredictable behavior. (Cross-slot access is instead handled by the MOVED/ASK redirect protocol.)
3. Redis Cluster uses partitioning to provide a degree of availability: even if some nodes fail or become unreachable, the cluster can keep serving requests.
4. Redis Cluster automatically splits (shards) data across multiple nodes.

2. Characteristics of Redis Cluster

# High performance:
1. The 16384 hash slots are distributed evenly across the shard nodes.
2. When a key is stored, the cluster computes crc16(key) modulo 16384, yielding a slot value between 0 and 16383.
3. That slot value determines which shard's master node stores the key.
4. If the client happens to be connected to a node that does not own the slot, the cluster redirects the client to the node that actually stores it.
5. Clients connect directly to Redis nodes with no intermediate proxy layer, and a client does not need to connect to every node: any one reachable node is enough.
6. Redis Cluster improves the resource utilization of Redis.
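The key-to-slot mapping described above can be sketched in Python. Redis Cluster uses the CRC16-CCITT (XModem) variant, whose check value for "123456789" is 0x31C3; note that the real server also extracts hash tags ({...}) before hashing, which this sketch omits:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT/XModem (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots (0-16383)."""
    return crc16(key.encode()) % 16384

print(key_slot("k1"))  # 12706, matching the MOVED reply shown later in this article
```

This also explains why the slot count, not the slot numbers, is what matters: any uniform hash spread over 0-16383 works the same way.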

# High availability
7. When the cluster is built, each shard's master is paired with a replica (via the slaveof mechanism), and when a master goes down the cluster performs an automatic failover similar to Sentinel's.

3. The Slot Concept

1. The cluster divides the entire keyspace into 16384 hash slots, which are spread across the nodes.
2. Slots are numbered 0-16383; which specific numbers a node holds matters less than how many it holds.
3. Every slot has the same probability of receiving any given key.

4. Redis Failover

1. Within the cluster, each node runs failure detection against the other nodes.
2. When a master goes down, the remaining masters in the cluster carry out a failover for it.
3. In other words, cluster nodes have built-in failure detection and failover, much like Sentinel.
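As a rough illustration only (the real mechanism is a gossip protocol with PFAIL/FAIL states; the function and names here are illustrative), the majority-vote idea behind declaring a node failed can be sketched like this:

```python
def is_objectively_down(reports: dict, n_masters: int) -> bool:
    """A node is considered failed once a majority of masters report it as down.

    `reports` maps a reporting master's name to whether it saw the node as down.
    """
    down_votes = sum(1 for saw_down in reports.values() if saw_down)
    return down_votes > n_masters // 2

# 2 of 3 masters consider the node down -> objective failure, failover can start
print(is_objectively_down({"m1": True, "m2": True, "m3": False}, 3))  # True
```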

II. Building a Redis Cluster by Hand

1. Environment

Node    IP           Ports
Node 1  172.16.1.51  6379, 6380
Node 2  172.16.1.52  6379, 6380
Node 3  172.16.1.53  6379, 6380

2. Set Up Redis

# Remove any old Redis data
[root@db01 ~]# rm -rf /service/redis/*

# Create the multi-instance directories
[root@db01 ~]# mkdir /service/redis/{6379,6380}
[root@db02 ~]# mkdir /service/redis/{6379,6380}
[root@db03 ~]# mkdir /service/redis/{6379,6380}

# Configure every instance (repeat on db02/db03 with their own bind addresses)
[root@db01 ~]# vim /service/redis/6379/redis.conf
bind 172.16.1.51 127.0.0.1
port 6379
daemonize yes
pidfile /service/redis/6379/redis.pid
loglevel notice
logfile /service/redis/6379/redis.log
dbfilename dump.rdb
dir /service/redis/6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000

[root@db01 ~]# vim /service/redis/6380/redis.conf
bind 172.16.1.51 127.0.0.1
port 6380
daemonize yes
pidfile /service/redis/6380/redis.pid
loglevel notice
logfile /service/redis/6380/redis.log
dbfilename dump.rdb
dir /service/redis/6380
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000

3. Start All Instances

[root@db01 ~]# redis-server /service/redis/6379/redis.conf 
[root@db01 ~]# redis-server /service/redis/6380/redis.conf
[root@db02 ~]# redis-server /service/redis/6379/redis.conf 
[root@db02 ~]# redis-server /service/redis/6380/redis.conf
[root@db03 ~]# redis-server /service/redis/6379/redis.conf 
[root@db03 ~]# redis-server /service/redis/6380/redis.conf

4. Join the Nodes into a Cluster

1) Log in to every node

[root@db01 ~]# redis-cli -h 172.16.1.51 -p 6379
[root@db01 ~]# redis-cli -h 172.16.1.51 -p 6380
[root@db02 ~]# redis-cli -h 172.16.1.52 -p 6379
[root@db02 ~]# redis-cli -h 172.16.1.52 -p 6380
[root@db03 ~]# redis-cli -h 172.16.1.53 -p 6379
[root@db03 ~]# redis-cli -h 172.16.1.53 -p 6380

2) View the cluster nodes

# View the cluster nodes; at this point each node can only see itself
172.16.1.51:6379> CLUSTER NODES
28faba09f4c0ec8cdb90d92e09636796427b7143 :6379 myself,master - 0 0 0 connected

3) Meet all the nodes

172.16.1.51:6379> CLUSTER MEET 172.16.1.51 6380
OK
172.16.1.51:6379> CLUSTER MEET 172.16.1.52 6379
OK
172.16.1.51:6379> CLUSTER MEET 172.16.1.52 6380
OK
172.16.1.51:6379> CLUSTER MEET 172.16.1.53 6379
OK
172.16.1.51:6379> CLUSTER MEET 172.16.1.53 6380
OK

# View the cluster nodes again; all six nodes are now known
172.16.1.51:6379> CLUSTER NODES
aee9f4e6e09a452fd44bca7639be442b5138f141 172.16.1.52:6380 master - 0 1596687131655 4 connected
777412c8d6554e3390e1083bf1f55002be08cf62 172.16.1.51:6380 master - 0 1596687131352 2 connected
ef18ab5bab6d8bc06917a0cf2dc9bffa8b431087 172.16.1.52:6379 master - 0 1596687132362 3 connected
f2747c92813ea06b25c3e9c8d5232b46b3cf9d3d 172.16.1.53:6379 master - 0 1596687131856 0 connected
25f735f08ac62b2f758c1e2c21e178cc46279087 172.16.1.53:6380 master - 0 1596687131251 5 connected
28faba09f4c0ec8cdb90d92e09636796427b7143 172.16.1.51:6379 myself,master - 0 0 1 connected

5. Assign Slots

# Check the cluster state (still fail, because no slots have been assigned yet)
172.16.1.51:6379> CLUSTER INFO
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:0
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:1168
cluster_stats_messages_received:1168

# Slot plan
db01:     5462 slots  (0-5461)
db02:     5461 slots  (5462-10922)
db03:     5461 slots  (10923-16383)
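A plan like the one above can be generated mechanically. This small sketch splits the slot space into contiguous ranges, handing any remainder slots to the earlier nodes (which is why db01 gets 5462 here):

```python
def plan_slots(total: int, n_nodes: int) -> list:
    """Split `total` slots into contiguous (first, last) ranges per node."""
    base, extra = divmod(total, n_nodes)
    ranges, start = [], 0
    for i in range(n_nodes):
        size = base + (1 if i < extra else 0)  # earlier nodes absorb the remainder
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(plan_slots(16384, 3))  # [(0, 5461), (5462, 10922), (10923, 16383)]
```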

# Assign the slots
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 CLUSTER ADDSLOTS {0..5461}
OK
[root@db02 ~]# redis-cli -p 6379 -h 172.16.1.52 CLUSTER ADDSLOTS {5462..10922}
OK
[root@db03 ~]# redis-cli -p 6379 -h 172.16.1.53 CLUSTER ADDSLOTS {10923..16383}
OK

6. Test the Cluster by Inserting Data

# Insert one key
172.16.1.51:6379> set k1 v1
(error) MOVED 12706 172.16.1.53:6379
# Error: this key hashes to slot 12706, which is owned by 172.16.1.53:6379

[root@db03 ~]# redis-cli -h 172.16.1.53
172.16.1.53:6379> set k1 v1
OK

# With -c (cluster mode), redis-cli follows MOVED/ASK redirects and stores the key on the correct node automatically
[root@db03 ~]# redis-cli -h 172.16.1.53
172.16.1.53:6379> set k2 v2
(error) MOVED 449 172.16.1.51:6379
172.16.1.53:6379> quit
[root@db03 ~]# redis-cli -c -h 172.16.1.53
172.16.1.53:6379> set k2 v2
-> Redirected to slot [449] located at 172.16.1.51:6379
OK
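What `redis-cli -c` does under the hood can be approximated as: parse the MOVED reply, reconnect to the indicated node, and retry the command. A minimal parser for the redirect (note the `(error)` prefix is only redis-cli display output, not part of the wire reply):

```python
def parse_moved(error: str) -> tuple:
    """Parse a 'MOVED <slot> <host>:<port>' error reply into its parts."""
    _, slot, addr = error.split()
    host, port = addr.rsplit(":", 1)
    return int(slot), host, int(port)

# A cluster-aware client reconnects to this address and retries the command.
print(parse_moved("MOVED 449 172.16.1.51:6379"))  # (449, '172.16.1.51', 6379)
```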

# Insert test data with a script
[root@db03 ~]# vim data.sh 
#!/bin/bash
for i in {1..1000};do
    redis-cli -c -p 6379 -h 172.16.1.51 set k${i} v${i}
done
[root@db03 ~]# sh data.sh

# Check how the data was distributed
172.16.1.51:6379> DBSIZE
(integer) 341
172.16.1.52:6379> DBSIZE
(integer) 332
172.16.1.53:6379> DBSIZE
(integer) 327
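The spread can be double-checked offline: recomputing the slot of every key k1..k1000 against the ranges assigned above should reproduce the DBSIZE numbers just shown (the CRC16 routine is repeated so the sketch is self-contained):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT/XModem, the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

# The contiguous slot ranges assigned above, per node
RANGES = {"db01": range(0, 5462), "db02": range(5462, 10923), "db03": range(10923, 16384)}

counts = {node: 0 for node in RANGES}
for i in range(1, 1001):                      # the same k1..k1000 keys as data.sh
    slot = crc16(f"k{i}".encode()) % 16384
    for node, slots in RANGES.items():
        if slot in slots:
            counts[node] += 1
print(counts)  # roughly a third of the 1000 keys per node
```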

7. Add Replica Nodes

1) View the nodes

172.16.1.51:6379> CLUSTER NODES
5a7f0cf95e1850b5b5ae81d873c4c76fd366d604 172.16.1.51:6380 master - 0 1596763193422 4 connected
5eb9e5356534ff4acda736d13f0dc9fc3d40049b 172.16.1.52:6379 master - 0 1596763192412 5 connected 5462-10922
50878ef6a4d8141c8dbca3e2bf7c84ed48a73ee2 172.16.1.53:6380 master - 0 1596763192512 3 connected
acc3a4d0e6e43fc74630c1f0714865fdcbdaf677 172.16.1.53:6379 master - 0 1596763191908 0 connected 10923-16383
2325be6f1f9c1c9f57d5a033fc05e0d798ea823a 172.16.1.51:6379 myself,master - 0 0 1 connected 0-5461
381b54584572e8013becdae2eeaff48bf6eb5450 172.16.1.52:6380 master - 0 1596763193925 2 connected

2) Configure replication

# Make db01's 6380 a replica of db02's 6379
172.16.1.51:6380> CLUSTER REPLICATE 5eb9e5356534ff4acda736d13f0dc9fc3d40049b
OK

# Make db02's 6380 a replica of db03's 6379
172.16.1.52:6380> CLUSTER REPLICATE acc3a4d0e6e43fc74630c1f0714865fdcbdaf677
OK

# Make db03's 6380 a replica of db01's 6379
172.16.1.53:6380> CLUSTER REPLICATE 2325be6f1f9c1c9f57d5a033fc05e0d798ea823a
OK

3) View the node info again

172.16.1.51:6379> CLUSTER NODES
5a7f0cf95e1850b5b5ae81d873c4c76fd366d604 172.16.1.51:6380 slave 5eb9e5356534ff4acda736d13f0dc9fc3d40049b 0 1596763362696 5 connected
5eb9e5356534ff4acda736d13f0dc9fc3d40049b 172.16.1.52:6379 master - 0 1596763363202 5 connected 5462-10922
50878ef6a4d8141c8dbca3e2bf7c84ed48a73ee2 172.16.1.53:6380 slave 2325be6f1f9c1c9f57d5a033fc05e0d798ea823a 0 1596763362192 3 connected
acc3a4d0e6e43fc74630c1f0714865fdcbdaf677 172.16.1.53:6379 master - 0 1596763363203 0 connected 10923-16383
2325be6f1f9c1c9f57d5a033fc05e0d798ea823a 172.16.1.51:6379 myself,master - 0 0 1 connected 0-5461
381b54584572e8013becdae2eeaff48bf6eb5450 172.16.1.52:6380 slave acc3a4d0e6e43fc74630c1f0714865fdcbdaf677 0 1596763364211 2 connected

8. Failure Demonstration

# Take one machine down
[root@db03 ~]# reboot

# On another machine, check the cluster state: the cluster is still healthy
172.16.1.51:6379> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:327031
cluster_stats_messages_received:326973

# View the node info: the failed master's replica has been promoted to master
172.16.1.51:6379> CLUSTER NODES
5a7f0cf95e1850b5b5ae81d873c4c76fd366d604 172.16.1.51:6380 slave 5eb9e5356534ff4acda736d13f0dc9fc3d40049b 0 1596763771309 5 connected
5eb9e5356534ff4acda736d13f0dc9fc3d40049b 172.16.1.52:6379 master - 0 1596763771310 5 connected 5462-10922
50878ef6a4d8141c8dbca3e2bf7c84ed48a73ee2 172.16.1.53:6380 slave,fail 2325be6f1f9c1c9f57d5a033fc05e0d798ea823a 1596763736458 1596763734245 3 disconnected
acc3a4d0e6e43fc74630c1f0714865fdcbdaf677 172.16.1.53:6379 master,fail - 1596763736458 1596763735246 0 disconnected
2325be6f1f9c1c9f57d5a033fc05e0d798ea823a 172.16.1.51:6379 myself,master - 0 0 1 connected 0-5461
381b54584572e8013becdae2eeaff48bf6eb5450 172.16.1.52:6380 master - 0 1596763772319 6 connected 10923-16383

9. Node Recovery

# Bring the machine back and restart its instances
[root@db03 ~]# redis-server /service/redis/6379/redis.conf 
[root@db03 ~]# redis-server /service/redis/6380/redis.conf

# View the node info once more
172.16.1.51:6379> CLUSTER NODES
5a7f0cf95e1850b5b5ae81d873c4c76fd366d604 172.16.1.51:6380 slave 5eb9e5356534ff4acda736d13f0dc9fc3d40049b 0 1596764061287 5 connected
5eb9e5356534ff4acda736d13f0dc9fc3d40049b 172.16.1.52:6379 master - 0 1596764060781 5 connected 5462-10922
50878ef6a4d8141c8dbca3e2bf7c84ed48a73ee2 172.16.1.53:6380 slave 2325be6f1f9c1c9f57d5a033fc05e0d798ea823a 0 1596764059770 3 connected
acc3a4d0e6e43fc74630c1f0714865fdcbdaf677 172.16.1.53:6379 slave 381b54584572e8013becdae2eeaff48bf6eb5450 0 1596764062094 6 connected
2325be6f1f9c1c9f57d5a033fc05e0d798ea823a 172.16.1.51:6379 myself,master - 0 0 1 connected 0-5461
381b54584572e8013becdae2eeaff48bf6eb5450 172.16.1.52:6380 master - 0 1596764061789 6 connected 10923-16383

# After recovery, the former master rejoins as a replica

III. Building a Redis Cluster with a Tool

1. Environment

Node    IP           Ports
Node 1  172.16.1.51  6379, 6380
Node 2  172.16.1.52  6379, 6380
Node 3  172.16.1.53  6379, 6380

2. Build and Start the Redis Instances (with Ansible)


# 1. Distribute the SSH key
[root@m01 ~]# ssh-keygen
[root@m01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.51
[root@m01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.52
[root@m01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.53

# 2. Create the roles directory
[root@m01 ~]# mkdir ansible_redis
[root@m01 ~]# cd ansible_redis/
[root@m01 ansible_redis]# ansible-galaxy init redis

# 3. Write the hosts and site.yml files
[root@m01 ansible_redis]# ll
total 8
-rw-r--r--  1 root root 116 Aug  5 23:12 hosts
drwxr-xr-x 10 root root 154 Aug  5 22:50 redis
-rw-r--r--  1 root root  80 Aug  5 23:11 site.yml
[root@m01 ansible_redis]# cat hosts 
[db_group]
db01 ansible_ssh_host=172.16.1.51
db02 ansible_ssh_host=172.16.1.52 
db03 ansible_ssh_host=172.16.1.53 

[root@m01 ansible_redis]# cat site.yml 
- hosts: all
  roles:
    - { role: redis , when: ansible_fqdn is match 'db*'}

# 4. Prepare the files
[root@m01 redis]# ll
total 4
drwxr-xr-x 2 root root   22 Aug  5 22:50 defaults
drwxr-xr-x 2 root root   49 Aug  5 23:08 files
drwxr-xr-x 2 root root   22 Aug  5 22:50 handlers
drwxr-xr-x 2 root root   22 Aug  5 22:50 meta
-rw-r--r-- 1 root root 1328 Aug  5 22:50 README.md
drwxr-xr-x 2 root root   43 Aug  8 03:28 tasks
drwxr-xr-x 2 root root   27 Aug  8 03:02 templates
drwxr-xr-x 2 root root   39 Aug  5 22:50 tests
drwxr-xr-x 2 root root   22 Aug  8 03:19 vars
[root@m01 redis]# ll files/
total 1520
-rw-r--r-- 1 root root 1551468 Aug  3 09:58 redis-3.2.12.tar.gz  # the Redis tarball
-rw-r--r-- 1 root root      39 Aug  5 23:08 redis.sh            # the Redis environment-variable file

# 5. Prepare the variables
[root@m01 redis]# cat vars/main.yml 
---
# vars file for redis
redis_port:
  - 6379
  - 6380

# 6. Prepare the config template
[root@m01 redis]# cat templates/redis.conf.j2 
bind {{ ansible_eth1.ipv4.address }} 127.0.0.1
port {{ port }}
daemonize yes
pidfile /service/redis/{{ port }}/redis_{{ port }}.pid
loglevel notice
logfile /service/redis/{{ port }}/redis_{{ port }}.log
dir /service/redis/{{ port }}
dbfilename dump.rdb
save 900 1
save 300 10
save 60 10000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000

# 7. Write the tasks
[root@m01 redis]# cat tasks/Yum_redis.yml 
- name: Unarchive redis tgz                     # unpack the Redis tarball
  unarchive:
    src: redis-3.2.12.tar.gz
    dest: /usr/local/
  when: ansible_fqdn is match 'db*'

- name: link redis                              # create the symlink
  file:
    src: /usr/local/redis-3.2.12
    dest: /usr/local/redis
    state: link
  when: ansible_fqdn is match 'db*'

- name: Make redis                            # compile Redis
  shell: 'cd /usr/local/redis && make'

- name: Push redis sh                           # push the environment-variable file
  copy:
    src: redis.sh
    dest: /etc/profile.d/redis.sh
    mode: 0755
  when: ansible_fqdn is match 'db*'

- name: Source profile                          # reload environment variables (only affects this task's shell)
  shell: 'source /etc/profile'
  when: ansible_fqdn is match 'db*'

- name: Mkdir redis service                    # create the multi-instance directories
  file:
    path: "{{ item }}"
    state: directory
  with_items: 
    - /service/redis/6379
    - /service/redis/6380
  when: ansible_fqdn is match 'db*'

- name: redis config overwrite            # render the main config per port (6 files in total)
  vars:
    port: "{{ item }}"
  template:
    src: redis.conf.j2
    dest: /service/redis/{{ item }}/redis.conf
  with_items: "{{ redis_port }}"
  when: ansible_fqdn is match 'db*'

- name: Start redis                         # start the instances (same per-port loop: 6 in total)
  shell: 'redis-server /service/redis/{{ item }}/redis.conf'
  with_items: "{{ redis_port }}"
  when: ansible_fqdn is match 'db*'



# 8. Build all the Redis instances in one shot
[root@m01 ansible_redis]# ansible-playbook site.yml  -i hosts 

3. Install the Cluster Tooling

# Install Ruby support from the EPEL repo
[root@db01 ~]# yum install ruby rubygems -y

# List the gem sources
[root@db01 ~]# gem sources -l
*** CURRENT SOURCES ***

http://rubygems.org/

# Add the Aliyun gem mirror
[root@db01 ~]# gem sources -a http://mirrors.aliyun.com/rubygems/
http://mirrors.aliyun.com/rubygems/ added to sources 

# Remove the default (overseas) gem source
[root@db01 ~]# gem sources  --remove https://rubygems.org/
http://rubygems.org/ removed from sources

# List the gem sources again
[root@db01 ~]# gem sources -l

# Install the redis Ruby gem (required by redis-trib.rb)
[root@db01 ~]# gem install redis -v 3.3.3
Successfully installed redis-3.3.3
1 gem installed
Installing ri documentation for redis-3.3.3...
Installing RDoc documentation for redis-3.3.3...

4. redis-trib.rb Commands

[root@db01 ~]# redis-trib.rb 
create          # create a cluster
check           # check a cluster
info            # show cluster status
fix             # repair a cluster
reshard         # reassign slots
rebalance       # even out slot counts
add-node        # add a node
del-node        # remove a node
set-timeout     # set the node timeout
call            # run a command on every node in the cluster
import          # import data
help            # show help

5. Create the Cluster (join all nodes)

[root@db01 ~]# redis-trib.rb create --replicas 1 172.16.1.51:6379 172.16.1.52:6379 172.16.1.53:6379 172.16.1.52:6380 172.16.1.53:6380 172.16.1.51:6380 
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.1.51:6379
172.16.1.52:6379
172.16.1.53:6379
Adding replica 172.16.1.52:6380 to 172.16.1.51:6379
Adding replica 172.16.1.51:6380 to 172.16.1.52:6379
Adding replica 172.16.1.53:6380 to 172.16.1.53:6379
M: 5ad7bd957133eac9c3a692b35f8ae72258cf0ece 172.16.1.51:6379
   slots:0-5460 (5461 slots) master
M: 7c79559b280db9d9c182f3a25c718efe9e934fc7 172.16.1.52:6379
   slots:5461-10922 (5462 slots) master
M: d27553035a3e91c78d375208c72b756e9b2523d4 172.16.1.53:6379
   slots:10923-16383 (5461 slots) master
S: fee551a90c8646839f66fa0cd1f6e5859e9dd8e0 172.16.1.52:6380
   replicates 5ad7bd957133eac9c3a692b35f8ae72258cf0ece
S: e4794215d9d3548e9c514c10626ce618be19ebfb 172.16.1.53:6380
   replicates d27553035a3e91c78d375208c72b756e9b2523d4
S: 1d10edbc5ed08f85d2afc21cd338b023b9dd61b4 172.16.1.51:6380
   replicates 7c79559b280db9d9c182f3a25c718efe9e934fc7
Can I set the above configuration? (type 'yes' to accept): yes   
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 172.16.1.51:6379)
M: 5ad7bd957133eac9c3a692b35f8ae72258cf0ece 172.16.1.51:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: e4794215d9d3548e9c514c10626ce618be19ebfb 172.16.1.53:6380
   slots: (0 slots) slave
   replicates d27553035a3e91c78d375208c72b756e9b2523d4
M: d27553035a3e91c78d375208c72b756e9b2523d4 172.16.1.53:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: fee551a90c8646839f66fa0cd1f6e5859e9dd8e0 172.16.1.52:6380
   slots: (0 slots) slave
   replicates 5ad7bd957133eac9c3a692b35f8ae72258cf0ece
S: 1d10edbc5ed08f85d2afc21cd338b023b9dd61b4 172.16.1.51:6380
   slots: (0 slots) slave
   replicates 7c79559b280db9d9c182f3a25c718efe9e934fc7
M: 7c79559b280db9d9c182f3a25c718efe9e934fc7 172.16.1.52:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

6. View the Cluster State

[root@db01 ~]# redis-cli -h 172.16.1.51 -p 6379 CLUSTER NODES
e4794215d9d3548e9c514c10626ce618be19ebfb 172.16.1.53:6380 slave d27553035a3e91c78d375208c72b756e9b2523d4 0 1596767315453 5 connected
d27553035a3e91c78d375208c72b756e9b2523d4 172.16.1.53:6379 master - 0 1596767315453 3 connected 10923-16383
5ad7bd957133eac9c3a692b35f8ae72258cf0ece 172.16.1.51:6379 myself,master - 0 0 1 connected 0-5460
fee551a90c8646839f66fa0cd1f6e5859e9dd8e0 172.16.1.52:6380 slave 5ad7bd957133eac9c3a692b35f8ae72258cf0ece 0 1596767313429 4 connected
1d10edbc5ed08f85d2afc21cd338b023b9dd61b4 172.16.1.51:6380 slave 7c79559b280db9d9c182f3a25c718efe9e934fc7 0 1596767313935 6 connected
7c79559b280db9d9c182f3a25c718efe9e934fc7 172.16.1.52:6379 master - 0 1596767314949 2 connected 5461-10922

7. Redo the Replication

# The tool left one replica (172.16.1.53:6380) on the same machine as its master, so reassign the replicas
172.16.1.52:6380> CLUSTER REPLICATE d27553035a3e91c78d375208c72b756e9b2523d4
OK
172.16.1.53:6380> CLUSTER REPLICATE 5ad7bd957133eac9c3a692b35f8ae72258cf0ece
OK

8. Insert Data to Test

[root@db01 ~]# redis-trib.rb info 172.16.1.52:6379
172.16.1.52:6379 (7c79559b...) -> 332 keys | 5462 slots | 1 slaves.
172.16.1.51:6379 (5ad7bd95...) -> 341 keys | 5461 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 327 keys | 5461 slots | 1 slaves.
[OK] 1000 keys in 3 masters.
0.06 keys per slot on average.

[root@db01 ~]# redis-trib.rb info 172.16.1.52:6379
172.16.1.52:6379 (7c79559b...) -> 661 keys | 5462 slots | 1 slaves.
172.16.1.51:6379 (5ad7bd95...) -> 674 keys | 5461 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 665 keys | 5461 slots | 1 slaves.
[OK] 2000 keys in 3 masters.
0.12 keys per slot on average.

IV. Modifying Cluster Nodes

# The flow for adding and removing nodes:
1. A slot is assigned to the new node
2. The data in that slot is migrated from its source node
3. The source node finishes migrating the slot's data
4. The next slot is migrated, and so on in a loop
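The loop above can be pictured with a toy model. In real Redis the bookkeeping is driven by CLUSTER SETSLOT MIGRATING/IMPORTING plus per-key MIGRATE commands; this sketch only models the ownership hand-off for one slot:

```python
def migrate_slot(slot: int, source: dict, target: dict) -> None:
    """Move every key of one slot from source to target, then finalize ownership."""
    for key in [k for k, s in source["keys"].items() if s == slot]:
        target["keys"][key] = slot      # MIGRATE, key by key
        del source["keys"][key]
    source["slots"].discard(slot)       # CLUSTER SETSLOT <slot> NODE <target>
    target["slots"].add(slot)

src = {"slots": {100, 101}, "keys": {"a": 100, "b": 100, "c": 101}}
dst = {"slots": set(), "keys": {}}
migrate_slot(100, src, dst)
print(dst)  # {'slots': {100}, 'keys': {'a': 100, 'b': 100}}
```

Interrupting a reshard mid-loop leaves a slot half moved, which is exactly the "importing/migrating state" warning shown in the failure simulation below.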

1. Add a Node

1) Prepare the new instances

[root@db02 ~]# mkdir /service/redis/{6381,6382}
[root@db02 ~]# vim /service/redis/6381/redis.conf
[root@db02 ~]# vim /service/redis/6382/redis.conf

# Start the new instances
[root@db02 ~]# redis-server /service/redis/6381/redis.conf 
[root@db02 ~]# redis-server /service/redis/6382/redis.conf 

2) Add the new node to the cluster

[root@db01 ~]# redis-trib.rb add-node 172.16.1.52:6381 172.16.1.51:6379
# or
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster meet 172.16.1.52 6381

# View the node info
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes

3) Reassign slots

[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
# How many slots do you want to move to the new node?
How many slots do you want to move (from 1 to 16384)? 4096
# What is the new node's ID?
What is the receiving node ID? a298dbd22c10b8492d9ff4295504c50666f4fb2e
# Enter the source node IDs, or 'all' to pull slots from every node
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
source node #1:all
# Confirm the proposed plan
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# When the reshard finishes, view the result
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes
e4794215d9d3548e9c514c10626ce618be19ebfb 172.16.1.53:6380 slave 5ad7bd957133eac9c3a692b35f8ae72258cf0ece 0 1596769469815 5 connected
d27553035a3e91c78d375208c72b756e9b2523d4 172.16.1.53:6379 master - 0 1596769468805 3 connected 12288-16383
5ad7bd957133eac9c3a692b35f8ae72258cf0ece 172.16.1.51:6379 myself,master - 0 0 1 connected 1365-5460
fee551a90c8646839f66fa0cd1f6e5859e9dd8e0 172.16.1.52:6380 slave d27553035a3e91c78d375208c72b756e9b2523d4 0 1596769467797 4 connected
1d10edbc5ed08f85d2afc21cd338b023b9dd61b4 172.16.1.51:6380 slave 7c79559b280db9d9c182f3a25c718efe9e934fc7 0 1596769469513 6 connected
a298dbd22c10b8492d9ff4295504c50666f4fb2e 172.16.1.52:6381 master - 0 1596769468302 7 connected 0-1364 5461-6826 10923-12287
7c79559b280db9d9c182f3a25c718efe9e934fc7 172.16.1.52:6379 master - 0 1596769468302 2 connected 6827-10922

[root@db01 ~]# 
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 502 keys | 4096 slots | 0 slaves.
172.16.1.52:6379 (7c79559b...) -> 498 keys | 4096 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.

4) Add a replica for the new node

[root@db02 ~]# redis-cli -h 172.16.1.52 -p 6382 cluster replicate a298dbd22c10b8492d9ff4295504c50666f4fb2e
# or
[root@db01 ~]# redis-trib.rb add-node --slave --master-id a298dbd22c10b8492d9ff4295504c50666f4fb2e 172.16.1.52:6382 172.16.1.51:6379
              (run from any machine already in the cluster)
        
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 502 keys | 4096 slots | 1 slaves.
172.16.1.52:6379 (7c79559b...) -> 498 keys | 4096 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.

# Adjust replication so that, as far as possible, no replica lives on the same machine as its master

5) Simulate a failure

# While a reshard is in progress...
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379

# ...interrupt it with Ctrl+C

# These status commands do not reveal any problem
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster info
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379

# You must check the cluster with the tool
[root@db01 ~]# redis-trib.rb check 172.16.1.51:6379

# Error 1: 172.16.1.52:6379 is stuck migrating a slot out, and 172.16.1.52:6381 is stuck importing it
>>> Check for open slots...
[WARNING] Node 172.16.1.52:6381 has slots in importing state (6885).
[WARNING] Node 172.16.1.52:6379 has slots in migrating state (6885).
[WARNING] The following slots are open: 6885
>>> Check slots coverage...

# Error 2: 172.16.1.52:6379 is stuck migrating a slot out
>>> Check for open slots...
[WARNING] Node 172.16.1.52:6379 has slots in migrating state (6975).
[WARNING] The following slots are open: 6975
>>> Check slots coverage...

# Error 3: 172.16.1.52:6381 is stuck importing a slot
>>> Check for open slots...
[WARNING] Node 172.16.1.52:6381 has slots in importing state (7093).
[WARNING] The following slots are open: 7093
>>> Check slots coverage...

6) Fix the failure

# Repair the cluster with fix
[root@db01 ~]# redis-trib.rb fix 172.16.1.52:6379

# Rebalance the slots evenly
1. Before rebalancing
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 648 keys | 5320 slots | 1 slaves.
172.16.1.52:6379 (7c79559b...) -> 352 keys | 2872 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.

2. Rebalance (if the slot counts differ only slightly, rebalance does nothing)
[root@db01 ~]# redis-trib.rb rebalance 172.16.1.51:6379
>>> Performing Cluster Check (using node 172.16.1.51:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 4 nodes. Total weight = 4
Moving 1224 slots from 172.16.1.52:6381 to 172.16.1.52:6379
#####################################################################################################

3. After rebalancing
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 492 keys | 4096 slots | 1 slaves.
172.16.1.52:6379 (7c79559b...) -> 508 keys | 4096 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.

2. Remove a Node

1) Reassign its slots away

# First reshard
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
How many slots do you want to move (from 1 to 16384)? 1365   # number of slots to move
What is the receiving node ID? # ID of the node that will receive the slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: <ID of the node being removed>
Source node #2: done
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# Second reshard
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
How many slots do you want to move (from 1 to 16384)? 1365
What is the receiving node ID? 7c79559b280db9d9c182f3a25c718efe9e934fc7
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:a298dbd22c10b8492d9ff4295504c50666f4fb2e
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# Third reshard
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
How many slots do you want to move (from 1 to 16384)? 1366
What is the receiving node ID? d27553035a3e91c78d375208c72b756e9b2523d4
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:a298dbd22c10b8492d9ff4295504c50666f4fb2e
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes

2) View the node info after the reshards

[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 664 keys | 5461 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 665 keys | 5462 slots | 2 slaves.
172.16.1.52:6381 (a298dbd2...) -> 0 keys | 0 slots | 0 slaves.
172.16.1.52:6379 (7c79559b...) -> 671 keys | 5461 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.

3) Delete the nodes

# Delete the now-empty master
[root@db01 ~]# redis-trib.rb del-node 172.16.1.52:6381 a298dbd22c10b8492d9ff4295504c50666f4fb2e
>>> Removing node a298dbd22c10b8492d9ff4295504c50666f4fb2e from cluster 172.16.1.52:6381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

# Delete the replica
[root@db01 ~]# redis-trib.rb del-node 172.16.1.52:6382 47e3638a203488218d8c62deb82e768598977ba4
>>> Removing node 47e3638a203488218d8c62deb82e768598977ba4 from cluster 172.16.1.52:6382
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

# A master that still holds data cannot be deleted; replicas can be deleted freely
[root@db01 ~]# redis-trib.rb del-node 172.16.1.53:6379 d27553035a3e91c78d375208c72b756e9b2523d4
>>> Removing node d27553035a3e91c78d375208c72b756e9b2523d4 from cluster 172.16.1.53:6379
[ERR] Node 172.16.1.53:6379 is not empty! Reshard data away and try again.

V. Redis Data Migration

1. Install the Migration Tool

# 1. Install the dependencies
[root@db02 ~]# yum install -y automake libtool autoconf bzip2

# 2. Clone the tool
[root@db02 ~]# git clone https://github.com/vipshop/redis-migrate-tool
# or upload a tarball

# 3. Build
[root@db02 ~]# cd redis-migrate-tool/
[root@db02 redis-migrate-tool]# autoreconf -fvi
[root@db02 redis-migrate-tool]# ./configure
[root@db02 redis-migrate-tool]# make

2. Write the Migration Config

[root@db02 redis-migrate-tool]# vim tocluster.sh

[source]
type: single
servers:
 - 172.16.1.52:6381

[target]
type: redis cluster
servers:
 - 172.16.1.51:6379

[common]
listen: 0.0.0.0:8888

3. Generate Data on the Standalone Node

[root@db03 ~]# vim data.sh 
#!/bin/bash
for i in {1001..2000};do
    redis-cli -c -p 6381 -h 172.16.1.52 set k${i} v${i}
done

[root@db03 ~]# sh data.sh 

4. Migrate the Data

[root@db02 redis-migrate-tool]# src/redis-migrate-tool -c tocluster.sh &

VI. Data Auditing

1. Install the Tools

# 1. Install the dependencies
[root@db02 ~]# yum install -y python-pip python-devel

# 2. Install the tools with pip
[root@db02 ~]# pip install rdbtools python-lzf

# 3. Clone the repo
[root@db02 ~]# git clone https://github.com/sripathikrishnan/redis-rdb-tools
# or upload a tarball
[root@db02 ~]# tar xf redis-rdb-tools.tar.gz

# 4. Install
[root@db02 ~]# cd redis-rdb-tools
[root@db02 redis-rdb-tools]# python setup.py install

2. Make Sure an RDB File Exists

[root@db02 6381]# redis-cli -p 6381
127.0.0.1:6381> bgsave
Background saving started
127.0.0.1:6381> quit
[root@db02 6381]# ll
total 44
-rw-r--r-- 1 root root 26206 Aug  7 15:18 dump.rdb

3. Analyze the File with the Tool

# Generate a CSV report with the tool, then download it for analysis
[root@db02 6381]# rdb -c memory ./dump.rdb -f memory.csv

极客公园, all rights reserved. Unless otherwise noted, content is original and licensed under CC BY-NC-SA.
When reposting, please credit the original article: Redis Cluster 分布式集群