Sentinel is designed for high availability; Cluster is for scaling out horizontally.
Purpose
To avoid repeatedly walking through a tedious manual procedure, shell and Python scripts are used here so that a Redis cluster can be built by running a single command.
Approach
Two ways of building a cluster are covered: the native commands and the redis-cli command.
Basic commands
If you already know these, skip ahead.
Adding a master node
There are two ways to add one.
Method 1:
Pick any container in any cluster (a freshly created container is itself a cluster that contains only that node).
First run docker exec -ti <container ID or name> /bin/bash to enter the container.
Then log in with redis-cli -a <password> -p <port> -h <ip>.
Finally run cluster meet <ip> <port> so the node discovers the other master container (do not meet a replica container).
The cluster now has two nodes; run cluster nodes to check, as in the sketch below.
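A minimal sketch of method 1, assuming two master containers named redis6372 (172.18.0.2) and redis6375 (172.18.0.5) with password 123456; the names, IPs and ports are placeholders:

```bash
docker exec -ti redis6372 /bin/bash          # enter the first master container
redis-cli -a 123456 -h 172.18.0.2 -p 6379    # log in to the node running inside it

# inside redis-cli:
#   cluster meet 172.18.0.5 6379   # introduce the other master (never a replica)
#   cluster nodes                  # both masters should now be listed
```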
Method 2:
First run docker exec -ti <container ID or name> /bin/bash to enter the container.
Run redis-cli -a <password> -p <port> -h <ip> --cluster add-node <ip:port of the master to add> <ip:port of any node already in the cluster>.
This joins the node to the cluster; log in to any cluster node and run cluster nodes to check, as in the sketch below.
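A minimal sketch of method 2, assuming the new empty master listens on 172.18.0.11:6379 and 172.18.0.2:6379 is already part of the cluster (IPs, ports and the password 123456 are placeholders):

```bash
# join the new master to the existing cluster
redis-cli -a 123456 --cluster add-node 172.18.0.11:6379 172.18.0.2:6379

# verify from any node
redis-cli -a 123456 -h 172.18.0.2 -p 6379 cluster nodes
```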
Adding a replica (slave) node
There are two ways to add a replica:
Method 1:
Run docker exec -ti <container ID or name> /bin/bash to enter the container that is to become the replica.
Log in with redis-cli -a <password> -p <port> -h <ip>.
Run cluster replicate <nodeId of the master>.
The node now replicates that master (it must already have joined the cluster, e.g. via cluster meet); see the sketch below.
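A minimal sketch of method 1, assuming the would-be replica runs in container redis6373 at 172.18.0.3:6379, has already joined the cluster, and the password is 123456 (all placeholders); the master's nodeId is taken from cluster nodes:

```bash
docker exec -ti redis6373 /bin/bash          # enter the future replica's container
redis-cli -a 123456 -h 172.18.0.3 -p 6379    # log in to it

# inside redis-cli:
#   cluster replicate <master-nodeId>   # start replicating the chosen master
#   cluster nodes                       # the node should now be listed as a slave
```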
Method 2:
Run docker exec -ti <container ID or name> /bin/bash to enter the container that is to become the replica.
Run redis-cli -a <password> -p <port> -h <ip> --cluster add-node <replica ip:port> <ip:port of any cluster node> --cluster-slave --cluster-master-id <nodeId of the master it should replicate>, as in the sketch below.
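A minimal sketch of method 2, assuming 172.18.0.12:6379 should become a replica and 172.18.0.2:6379 is any node already in the cluster (IPs, ports, password and the nodeId placeholder are assumptions):

```bash
redis-cli -a 123456 --cluster add-node 172.18.0.12:6379 172.18.0.2:6379 \
  --cluster-slave --cluster-master-id <master-nodeId>
```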
Removing a replica node
redis-cli -a <password> -p <port> -h <ip> --cluster del-node <node ip:port> <nodeId>, as sketched below.
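A minimal sketch, assuming the replica to remove listens on 172.18.0.12:6379 (its nodeId comes from cluster nodes; the password and address are placeholders):

```bash
redis-cli -a 123456 --cluster del-node 172.18.0.12:6379 <replica-nodeId>
```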
Removing a master node
Before removing a master, make sure it no longer owns any hash slots. If it owns none, run directly:
redis-cli -a <password> -p <port> -h <ip> --cluster del-node <node ip:port> <nodeId>
If it still owns slots, move them away first:
Run redis-cli -a <password> -p <port> -h <ip> --cluster reshard <ip:port of any cluster node>
Enter the total number of slots owned by the master that is being removed.
For the receiving node ID, enter the nodeId of one of the remaining masters (any master except the one being removed).
For the source node ID, enter the nodeId of the master being removed, press Enter, then type done and press Enter again; this hands all of its slots over to another master in the cluster.
Finally remove the master (see the sketch below):
redis-cli -a <password> -p <port> -h <ip> --cluster del-node <node ip:port> <nodeId>
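A minimal sketch of the whole flow, assuming the master to remove is 172.18.0.8:6379 holding 5461 slots, <old-master-id> and <target-master-id> stand for nodeIds read from cluster nodes, and 172.18.0.2:6379 is any other cluster node (all placeholders):

```bash
# move the 5461 slots away first (interactive prompts)
redis-cli -a 123456 --cluster reshard 172.18.0.2:6379
#   slots to move        -> 5461
#   receiving node ID    -> <target-master-id>
#   source node #1       -> <old-master-id>
#   source node #2       -> done

# once the master owns no slots, drop it
redis-cli -a 123456 --cluster del-node 172.18.0.8:6379 <old-master-id>
```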
Cluster setup
A single command, ./run.sh, builds a cluster of 3 masters and 6 replicas (two replicas per master); for a different ratio, adjust the value described below.
To add or remove nodes afterwards, use the manual commands above.
Scripts
Script layout:
Launch command:
./run.sh
The run.sh script
#! /bin/bash
NET_NUM=172.18.0.1
# Start docker
function start_docker(){
docker_process=`ps -ef|grep -v grep|grep -E docker`
if [ -n "$docker_process" ];then
echo "docker is already running"
else
systemctl start docker
fi
if [ $? -ne 0 ];then
echo "docker command failed, exiting script"
exit
fi
}
function restart_docker(){
systemctl restart docker
if [ $? -ne 0 ];then
echo "failed to restart docker"
else
echo "docker restarted successfully"
fi
}
function create_network(){
docker_network=`docker network ls |grep -v grep |grep -E mynetwork`
if [ -n "$docker_network" ];then
echo "network already exists, skipping creation"
else
docker network create --subnet=${NET_NUM}/16 mynetwork
if [ $? -ne 0 ];then
echo "failed to create network"
exit
else
echo "network created, subnet ${NET_NUM}"
fi
fi
}
function create_container(){
redis_container_id=`docker images |grep -v grep|grep -E redis | awk '{print $3}'`
python container_create.py $redis_container_id
}
function create_file_direction(){
get_conf
create_log
create_data
create_node
}
function get_conf(){
echo "#######################################开始联网下载配置文件###############################################"
declare -i index=0
declare record_name=""
for name in `ls /etc/redis`
do
if [ "$index" -eq 0 ];then
record_name=$name
wget -t 3 http://download.redis.io/redis-stable/redis.conf -O /etc/redis/${name}/redis.conf
else
cp /etc/redis/${record_name}/redis.conf /etc/redis/${name}/redis.conf
fi
index=$(( $index + 1))
done
echo "############################################配置文件下载完成###############################################"
echo "#################################################修改更新配置文件权限################################################"
ls /etc/redis | tr -s " " | xargs -n1 -d " "|awk '{ if($1!=""){cmd="chmod o+rwx /etc/redis/"$1"/redis.conf";system(cmd)}}'
echo "####################################################配置文件权限修改成功################################################"
echo "##########################################################更新配置文件配置项#############################################"
python update_conf.py
echo "##############################################################配置文件配置项更新成功##############################################"
}
# Create log files
function create_log(){
ls /etc/redis | tr -s " " | xargs -n1 -d " "|awk '{ if($1!=""){cmd="touch /etc/redis/"$1"/redis.log;chmod o+rwx /etc/redis/"$1"/redis.log";system(cmd)}}'
}
# Create cluster node config files
function create_node(){
ls /etc/redis | tr -s " " | xargs -n1 -d " "|awk '{ if($1!=""){cmd="touch /etc/redis/"$1"/nodes.conf;chmod o+rwx /etc/redis/"$1"/nodes.conf";system(cmd)}}'
}
function create_data(){
ls /etc/redis | tr -s " " | xargs -n1 -d " "|awk '{ if($1!=""){cmd="mkdir /etc/redis/"$1"/data;chmod o+rwx /etc/redis/"$1"/data";system(cmd)}}'
}
# Remove the /etc/redis directory
function remove_etc_redis(){
if [ -d "/etc/redis" ];then
rm -fr /etc/redis
fi
}
# Start docker
start_docker
# Remove old containers
./remove_container.sh
sleep 2s
# Remove old directories
remove_etc_redis
# Create the docker network
create_network
# Create and start the containers
create_container
# Create the configuration files
create_file_direction
# Restart docker
restart_docker
# Create the cluster
python ./redis_cluster_create.py
The remove_container.sh script
#!/bin/bash
if [ -f "success.log" ];then
rm -fr success.log
fi
if [ -e "lock.lock" ];then
rm -fr lock.lock
fi
docker ps -a |grep -v grep |grep -E redis| tr -s " "|cut -d" " -f1 1>success.log 2>fail.log
if [ $? -ne 0 ];then
echo "command failed"
exit 0
fi
if [ ! -f "success.log" ];then
echo "success.log does not exist"
exit 0
fi
# Semaphore lock file
lock=lock.lock
file_url=success.log
# Create a named pipe
mkfifo $lock
# Open a file descriptor on it
exec 88<>$lock
# Allow at most 4 workers in parallel
for index in {1..4}
do
echo >&88
done
for index in {1..10}
do
{
read -u 88
#echo sed -n "${index}"p success.log
var=`sed -n ${index}p success.log`
if [ -n "$var" ];then
docker stop $var >& /dev/null
if [ $? -ne 0 ];then
echo "container $var: removal failed"
else
docker rm $var >& /dev/null
echo "container $var: removed"
fi
fi
echo >&88
}&
done
wait
exec 88<&-
rm -fr ./lock.lock ./success.log
echo "容器删除成功"
The container_create.py script
In this script you can define the master/replica layout and the ports and other settings you need.
#!/usr/bin/python
#-*-coding:utf-8-*-
import subprocess
import sys
container_dict = [
{
'identy': 'master',
'ip': '172.18.0.2',
'name': 'redis6372',
'port': '6379',
'j_port': '16379',
'announce_port': '6372',
'bus_port': '16372',
'requirepass': '123456',
'network': 'mynetwork',
'slaves': [
{
'identy': 'slave',
'ip': '172.18.0.3',
'name': 'redis6373',
'port': '6379',
'j_port': '16379',
'announce_port': '6373',
'bus_port': '16373',
'requirepass': '123456',
'masterauth': '123456',
'network': 'mynetwork'
},
{
'identy': 'slave',
'ip': '172.18.0.4',
'name': 'redis6374',
'port': '6379',
'j_port': '16379',
'announce_port': '6374',
'bus_port': '16374',
'requirepass': '123456',
'masterauth': '123456',
'network': 'mynetwork'
}
]
},
{
'identy': 'master',
'ip': '172.18.0.5',
'name': 'redis6375',
'port': '6379',
'j_port': '16379',
'announce_port': '6375',
'bus_port': '16375',
'requirepass': '123456',
'network': 'mynetwork',
'slaves': [
{
'identy': 'slave',
'ip': '172.18.0.6',
'name': 'redis6376',
'port': '6379',
'j_port': '16379',
'announce_port': '6376',
'bus_port': '16376',
'requirepass': '123456',
'masterauth': '123456',
'network': 'mynetwork'
},
{
'identy': 'slave',
'ip': '172.18.0.7',
'name': 'redis6377',
'port': '6379',
'j_port': '16379',
'announce_port': '6377',
'bus_port': '16377',
'requirepass': '123456',
'masterauth': '123456',
'network': 'mynetwork'
}
]
},
{
'identy': 'master',
'ip': '172.18.0.8',
'name': 'redis6378',
'port': '6379',
'j_port': '16379',
'announce_port': '6378',
'bus_port': '16378',
'requirepass': '123456',
'network': 'mynetwork',
'slaves': [
{
'identy': 'slave',
'ip': '172.18.0.9',
'name': 'redis6379',
'port': '6379',
'j_port': '16379',
'announce_port': '6379',
'bus_port': '16379',
'requirepass': '123456',
'masterauth': '123456',
'network': 'mynetwork'
},
{
'identy': 'slave',
'ip': '172.18.0.10',
'name': 'redis5310',
'port': '6379',
'j_port': '16379',
'announce_port': '5310',
'bus_port': '15310',
'requirepass': '123456',
'masterauth': '123456',
'network': 'mynetwork'
}
]
}
]
class Docker:
    @classmethod
    def create_container(cls, *args):
        ip_dict = list(*args)
        for entry in ip_dict:
            cmd = list()
            cmd.append("docker run -ti")
            cmd.append(" --name %s" % entry.get("name"))
            cmd.append(" -p %s:%s" % (entry.get("announce_port"), entry.get("port")))
            cmd.append(" -p %s:%s" % (entry.get("bus_port"), entry.get("j_port")))
            cmd.append(" -v /etc/redis/%s:/etc/redis" % entry.get("name"))
            cmd.append(" --net %s" % entry.get("network"))
            cmd.append(" --ip %s" % entry.get("ip"))
            cmd.append(" -d --restart=always "+sys.argv[1]+" redis-server /etc/redis/redis.conf")
            cmd.append(" --appendonly yes")
            cmd.append(" --requirepass %s" % entry.get("requirepass"))
            cmd_str = ""
            for item in cmd:
                cmd_str += item
            if len(cmd) > 0:
                print(cmd_str)
                subprocess.call(cmd_str, shell=True)
            if not entry.get("slaves") is None and len(entry.get("slaves")) > 0:
                cls.create_container(entry.get("slaves"))

if "__main__" == __name__:
    print("Start creating containers")
    Docker.create_container(container_dict)
The update_conf.py script
from container_create import container_dict
import subprocess
public_conf_map = {"oom-score-adj-values 0 200 800":"#oom-score-adj-values 0 200 800",
"oom-score-adj no":"#oom-score-adj no",
"# cluster-enabled yes":"cluster-enabled yes",
'logfile ""':'logfile "\/etc\/redis\/redis.log"',
"# cluster-announce-ip 10.1.1.5":"cluster-announce-ip 192.168.249.130",
"protected-mode yes":"protected-mode no",
"dir .\/":"dir \/etc\/redis\/data",
"# cluster-config-file nodes-6379.conf":"cluster-config-file \/etc\/redis\/nodes.conf",
"# cluster-node-timeout 15000":"cluster-node-timeout 15000",
"bind 127.0.0.1":"bind 0.0.0.0"
}
def conf_update(redis_conf_map, container_dict):
    for container_conf in container_dict:
        map = dict(public_conf_map)
        if not container_conf.get("requirepass") is None:
            map.update({"# requirepass foobared": "requirepass %s" % container_conf.get("requirepass")})
        if not container_conf.get("masterauth") is None:
            map.update({"# masterauth <master-password>": "masterauth %s" % container_conf.get("masterauth")})
        if not container_conf.get("announce_port") is None:
            map.update(
                {"# cluster-announce-port 6379": "cluster-announce-port %s" % container_conf.get("announce_port")})
        if not container_conf.get("bus_port") is None:
            map.update(
                {"# cluster-announce-bus-port 6380": "cluster-announce-bus-port %s" % container_conf.get("bus_port")})
        redis_conf_map.update({container_conf.get("name"): map})
        if not container_conf.get("slaves") is None and len(container_conf.get("slaves")) > 0:
            conf_update(redis_conf_map, container_conf.get("slaves"))

if "__main__" == __name__:
    redis_conf_map = dict()
    conf_update(redis_conf_map, container_dict)
    for redisK, redisV in redis_conf_map.items():
        for confK, confV in redisV.items():
            subprocess.call("sed -i 's/"+confK+"/"+confV+"/g' /etc/redis/"+redisK+"/redis.conf", shell=True)
The redis_cluster_create.py script
#!/usr/bin/python
#-*-coding:utf-8-*-
from container_create import container_dict
import subprocess
global_model = None
class Temp_model:
    def __init__(self, ip, port, requirepass, name, announce_port):
        self.ip = ip
        self.port = port
        self.requirepass = requirepass
        self.name = name
        self.announce_port = announce_port

def temp_model(model_map, container_dict):
    global global_model
    for item in container_dict:
        temp_model = Temp_model(item.get("ip"), item.get("port"), item.get("requirepass"), item.get("name"), item.get("announce_port"))
        global_model = temp_model
        child_model_list = list()
        if not item.get("slaves") is None and len(item.get("slaves")) > 0:
            for slave in item.get("slaves"):
                child_model_list.append(Temp_model(slave.get("ip"), slave.get("port"), slave.get("requirepass"), slave.get("name"), slave.get("announce_port")))
        model_map.update({temp_model: child_model_list})

if "__main__" == __name__:
    model_map = dict()
    temp_model(model_map, container_dict)
    redis_str = ""
    subprocess.call("echo '#!/bin/bash' >> /etc/redis/%s/into_container_execute.sh" % global_model.name, shell=True)
    # Create the master nodes
    str = ""
    for k, v in model_map.items():
        str += "%s:%s " % (k.ip, k.port)
    cmd = "echo redis-cli -a %s -h %s -p %s --cluster create %s >> /etc/redis/%s/into_container_execute.sh" % (global_model.requirepass, global_model.ip, global_model.port, str, global_model.name)
    subprocess.call(cmd, shell=True)
    # Add the configured replicas to each master
    slave_str = ""
    for k, v in model_map.items():
        subprocess.call("echo 'declare %s_container_id=$(cat /etc/redis/nodes.conf |grep -E master| grep -E %s |tr -s \" \" |cut -d \" \" -f1)' >> /etc/redis/%s/into_container_execute.sh" % (k.name, k.announce_port, global_model.name), shell=True)
        for item in v:
            slave_str = "echo 'redis-cli -a %s -h %s -p %s --cluster add-node %s:%s %s:%s --cluster-slave --cluster-master-id %s' >> /etc/redis/%s/into_container_execute.sh" % (global_model.requirepass, global_model.ip, global_model.port, item.ip, item.port, k.ip, k.port, "$%s_container_id" % k.name, global_model.name)
            subprocess.call(slave_str, shell=True)
    subprocess.call("chmod o+rwx /etc/redis/%s/into_container_execute.sh" % global_model.name, shell=True)
    subprocess.call("docker exec -ti %s /bin/bash -c 'cd /etc/redis && ./into_container_execute.sh'" % global_model.name, shell=True)
Result:
root@user-virtual-machine:/usr/src/cluster# ./run.sh
docker is already running
container c875a51df60c: removed
container 36874e6d62bd: removed
container 119355fdba9d: removed
container 7f6704a3f6ad: removed
container 6087465dd7c3: removed
container a55b16fa3cec: removed
container 3c86fae7041b: removed
container 30f9f8cdf467: removed
container f70f357dc67c: removed
containers removed
network already exists, skipping creation
Start creating containers
docker run -ti --name redis6372 -p 6372:6379 -p 16372:16379 -v /etc/redis/redis6372:/etc/redis --net mynetwork --ip 172.18.0.2 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
fbb080aed863e0805213c57e10172416249fd3bf076cd41ff6973613b10560d0
docker run -ti --name redis6373 -p 6373:6379 -p 16373:16379 -v /etc/redis/redis6373:/etc/redis --net mynetwork --ip 172.18.0.3 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
a7954c0cefa60218c1b4ba9b55702cac0f9b6ec9c1887fb9cda76cbe81cce1c8
docker run -ti --name redis6374 -p 6374:6379 -p 16374:16379 -v /etc/redis/redis6374:/etc/redis --net mynetwork --ip 172.18.0.4 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
fb84becbae2c07329c9f856b06f9a1f2741f76879af777fc77bbd46af836cbae
docker run -ti --name redis6375 -p 6375:6379 -p 16375:16379 -v /etc/redis/redis6375:/etc/redis --net mynetwork --ip 172.18.0.5 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
0643237e7798712f816716cd01b366b4f0a0a4905e2a0f42195bc9ec58ca4137
docker run -ti --name redis6376 -p 6376:6379 -p 16376:16379 -v /etc/redis/redis6376:/etc/redis --net mynetwork --ip 172.18.0.6 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
ac048fab188d011d48e7b8c3d4feee083551ac2bae324696660e436ca05ee109
docker run -ti --name redis6377 -p 6377:6379 -p 16377:16379 -v /etc/redis/redis6377:/etc/redis --net mynetwork --ip 172.18.0.7 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
4d2fd375031571e918ac6653f7cdd816b0598a40cee3d672087834669fa287d3
docker run -ti --name redis6378 -p 6378:6379 -p 16378:16379 -v /etc/redis/redis6378:/etc/redis --net mynetwork --ip 172.18.0.8 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
b802654f8a06af291abde7d0e195d2fc1e5468d021db43b3a568b0978b54f575
docker run -ti --name redis6379 -p 6379:6379 -p 16379:16379 -v /etc/redis/redis6379:/etc/redis --net mynetwork --ip 172.18.0.9 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
60901f3ea38df3afa9289a170488763aa4595f9c66aba1d1ca3b658354d151b8
docker run -ti --name redis5310 -p 5310:6379 -p 15310:16379 -v /etc/redis/redis5310:/etc/redis --net mynetwork --ip 172.18.0.10 -d --restart=always 987b78fc9e38 redis-server /etc/redis/redis.conf --appendonly yes --requirepass 123456
b904b0ed5306ee82a29aae40a4575d8427ea9af80be903cff356055f08860fc2
####################################### Downloading configuration file from the network #######################################
--2020-10-11 23:00:27-- http://download.redis.io/redis-stable/redis.conf
Resolving download.redis.io (download.redis.io)... 45.60.125.1
Connecting to download.redis.io (download.redis.io)|45.60.125.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84642 (83K) [application/octet-stream]
Saving to: ‘/etc/redis/redis5310/redis.conf’
/etc/redis/redis5310/redis.conf 100%[================================================================================================================>] 82.66K 24.7KB/s in 3.4s
2020-10-11 23:00:31 (24.7 KB/s) - ‘/etc/redis/redis5310/redis.conf’ saved [84642/84642]
####################################### Configuration file downloaded #######################################
####################################### Updating configuration file permissions #######################################
####################################### Configuration file permissions updated #######################################
####################################### Updating configuration entries #######################################
####################################### Configuration entries updated #######################################
docker restarted successfully
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 172.18.0.8:6379
slots:[0-5460] (5461 slots) master
M: 15d674bc4acf61291a2b39e8965be785707cd296 172.18.0.2:6379
slots:[5461-10922] (5462 slots) master
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 172.18.0.5:6379
slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.18.0.8:6379)
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 172.18.0.8:6379
slots:[0-5460] (5461 slots) master
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 192.168.249.130:6375
slots:[10923-16383] (5461 slots) master
M: 15d674bc4acf61291a2b39e8965be785707cd296 192.168.249.130:6372
slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 172.18.0.9:6379 to cluster 172.18.0.8:6379
>>> Performing Cluster Check (using node 172.18.0.8:6379)
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 172.18.0.8:6379
slots:[0-5460] (5461 slots) master
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 192.168.249.130:6375
slots:[10923-16383] (5461 slots) master
M: 15d674bc4acf61291a2b39e8965be785707cd296 192.168.249.130:6372
slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.18.0.9:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 172.18.0.8:6379.
[OK] New node added correctly.
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 172.18.0.10:6379 to cluster 172.18.0.8:6379
>>> Performing Cluster Check (using node 172.18.0.8:6379)
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 172.18.0.8:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 192.168.249.130:6375
slots:[10923-16383] (5461 slots) master
S: 88f9233819c2b657dae713dbfb76a1d0a68c7e76 192.168.249.130:6379
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
M: 15d674bc4acf61291a2b39e8965be785707cd296 192.168.249.130:6372
slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.18.0.10:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 172.18.0.8:6379.
[OK] New node added correctly.
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 172.18.0.3:6379 to cluster 172.18.0.2:6379
>>> Performing Cluster Check (using node 172.18.0.2:6379)
M: 15d674bc4acf61291a2b39e8965be785707cd296 172.18.0.2:6379
slots:[5461-10922] (5462 slots) master
S: 7084d567ecfac276a25981a71b179cab3dc3bd09 192.168.249.130:5310
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 192.168.249.130:6375
slots:[10923-16383] (5461 slots) master
S: 88f9233819c2b657dae713dbfb76a1d0a68c7e76 192.168.249.130:6379
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 192.168.249.130:6378
slots:[0-5460] (5461 slots) master
2 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.18.0.3:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 172.18.0.2:6379.
[OK] New node added correctly.
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 172.18.0.4:6379 to cluster 172.18.0.2:6379
>>> Performing Cluster Check (using node 172.18.0.2:6379)
M: 15d674bc4acf61291a2b39e8965be785707cd296 172.18.0.2:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 7084d567ecfac276a25981a71b179cab3dc3bd09 192.168.249.130:5310
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
S: 26d487772b4e637874fe745b6793bd28281fdfb0 192.168.249.130:6373
slots: (0 slots) slave
replicates 15d674bc4acf61291a2b39e8965be785707cd296
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 192.168.249.130:6375
slots:[10923-16383] (5461 slots) master
S: 88f9233819c2b657dae713dbfb76a1d0a68c7e76 192.168.249.130:6379
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 192.168.249.130:6378
slots:[0-5460] (5461 slots) master
2 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.18.0.4:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 172.18.0.2:6379.
[OK] New node added correctly.
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 172.18.0.6:6379 to cluster 172.18.0.5:6379
>>> Performing Cluster Check (using node 172.18.0.5:6379)
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 172.18.0.5:6379
slots:[10923-16383] (5461 slots) master
S: 26d487772b4e637874fe745b6793bd28281fdfb0 192.168.249.130:6373
slots: (0 slots) slave
replicates 15d674bc4acf61291a2b39e8965be785707cd296
M: 15d674bc4acf61291a2b39e8965be785707cd296 192.168.249.130:6372
slots:[5461-10922] (5462 slots) master
2 additional replica(s)
S: 7084d567ecfac276a25981a71b179cab3dc3bd09 192.168.249.130:5310
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 192.168.249.130:6378
slots:[0-5460] (5461 slots) master
2 additional replica(s)
S: 7be7347971f84336408fd411a305b0e0842a0dc3 192.168.249.130:6374
slots: (0 slots) slave
replicates 15d674bc4acf61291a2b39e8965be785707cd296
S: 88f9233819c2b657dae713dbfb76a1d0a68c7e76 192.168.249.130:6379
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.18.0.6:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 172.18.0.5:6379.
[OK] New node added correctly.
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 172.18.0.7:6379 to cluster 172.18.0.5:6379
>>> Performing Cluster Check (using node 172.18.0.5:6379)
M: 96d39de3d66412e4c6ae94696472ee0a4d3648cc 172.18.0.5:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 26d487772b4e637874fe745b6793bd28281fdfb0 192.168.249.130:6373
slots: (0 slots) slave
replicates 15d674bc4acf61291a2b39e8965be785707cd296
M: 15d674bc4acf61291a2b39e8965be785707cd296 192.168.249.130:6372
slots:[5461-10922] (5462 slots) master
2 additional replica(s)
S: 2dc1b93d354e5161767920327fdfab478110f6f9 192.168.249.130:6376
slots: (0 slots) slave
replicates 96d39de3d66412e4c6ae94696472ee0a4d3648cc
S: 7084d567ecfac276a25981a71b179cab3dc3bd09 192.168.249.130:5310
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
M: 42ea12e1df641900bfa6ffd7b909fd3925f8c63d 192.168.249.130:6378
slots:[0-5460] (5461 slots) master
2 additional replica(s)
S: 7be7347971f84336408fd411a305b0e0842a0dc3 192.168.249.130:6374
slots: (0 slots) slave
replicates 15d674bc4acf61291a2b39e8965be785707cd296
S: 88f9233819c2b657dae713dbfb76a1d0a68c7e76 192.168.249.130:6379
slots: (0 slots) slave
replicates 42ea12e1df641900bfa6ffd7b909fd3925f8c63d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.18.0.7:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 172.18.0.5:6379.
[OK] New node added correctly.
Key command explained
echo "redis-cli -a $password -h $ip -p $port --cluster create $str --cluster-replicas 2 " >>/etc/redis/redis637$1/into_container_execute.sh
--cluster-replicas 2 means replicas / masters = 2
masters = 9 * 1/(1+2) = 3
replicas = 9 * 2/(1+2) = 6
In other words, every master gets two replicas.
The ratio can be adjusted, but there must be at least three masters.
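For example, switching to one replica per master only needs --cluster-replicas 1 and six nodes, since 6/(1+1) = 3 masters and 6*1/(1+1) = 3 replicas. A sketch with placeholder IPs, ports and password:

```bash
redis-cli -a 123456 -h 172.18.0.2 -p 6379 --cluster create \
  172.18.0.2:6379 172.18.0.3:6379 172.18.0.4:6379 \
  172.18.0.5:6379 172.18.0.6:6379 172.18.0.7:6379 \
  --cluster-replicas 1
```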
Running the command
Midway through you will be prompted to type "yes" or "no"; type "yes" and press Enter, and the whole setup is done.
## Cluster overview:
If the cluster were built the native way it would not be usable at this point, because the slots would still have to be assigned. This script does not create the cluster the native way: the command "redis-cli -a $password -h $ip -p $port --cluster create $str --cluster-replicas 2" spreads the slots evenly for us and saves the manual slot assignment. If you want to assign slots by hand, see the next section.
## **Assigning slots**
```bash
redis-cli -a <password> -p <port> -h <ip> cluster addslots {0..500}
```
The command above assigns 501 slots to that node. Slots range from 0 to 16383 (16384 in total), and every slot must be assigned to one of the masters.
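A sketch of spreading all 16384 slots across three masters by hand, assuming masters at 172.18.0.2, 172.18.0.5 and 172.18.0.8 with password 123456 (all placeholders; the brace ranges are expanded by bash):

```bash
redis-cli -a 123456 -h 172.18.0.2 -p 6379 cluster addslots {0..5460}
redis-cli -a 123456 -h 172.18.0.5 -p 6379 cluster addslots {5461..10922}
redis-cli -a 123456 -h 172.18.0.8 -p 6379 cluster addslots {10923..16383}
```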
Slot-to-node assignment diagram:
That completes all the steps; the cluster is ready to use.
~~~~~~~~~~~~~~~ Of all things, hope is the most beautiful ~~~~~~~~~~~~~~~