Master-Slave Replication
   
    
    
    Introduction
   
Replication keeps the data of two or more MySQL instances "in sync" by shipping the binary log.
    
    
    Prerequisites
   
    
    
    Two or more database instances, running the same version.
   
[root@db01 ~]# systemctl start mysqld3307
[root@db01 ~]# systemctl start mysqld3308
[root@db01 ~]# systemctl start mysqld3309
    
    
    Distinguish the roles (check each instance's server_id)
   
[root@db01 ~]# mysql -S /data/3307/mysql.sock -e "select @@server_id"
[root@db01 ~]# mysql -S /data/3308/mysql.sock -e "select @@server_id"
[root@db01 ~]# mysql -S /data/3309/mysql.sock -e "select @@server_id"
    
    
    Enable binary logging on the master
   
[root@db01 ~]# mysql -S /data/3307/mysql.sock -e "select @@log_bin"
+-----------+
| @@log_bin |
+-----------+
|         1 |
+-----------+
[root@db01 ~]# mysql -S /data/3307/mysql.sock -e "select @@log_bin_basename"
+----------------------+
| @@log_bin_basename   |
+----------------------+
| /data/3307/mysql-bin |
+----------------------+
    
    
    Create a dedicated replication user on the master
   
[root@db01 ~]# mysql -S /data/3307/mysql.sock -e "grant replication slave on *.* to repl@'10.0.0.%' identified by '123'"
    
    
    Back up the master's data and restore it to the slaves
   
[root@db01 data]# mysqldump -S /data/3307/mysql.sock -A --master-data=2 >/tmp/full.sql
[root@db01 data]# mysql -S /data/3308/mysql.sock </tmp/full.sql
[root@db01 data]# mysql -S /data/3309/mysql.sock </tmp/full.sql
    
    
    Start replication on the slaves (connection info + replication starting point)
   
grep "-- CHANGE MASTER TO" /tmp/full.sql
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=674;
CHANGE MASTER TO
  MASTER_HOST='10.0.0.51',
  MASTER_USER='repl',
  MASTER_PASSWORD='123',
  MASTER_PORT=3307,
  MASTER_LOG_FILE='mysql-bin.000004',
  MASTER_LOG_POS=674,
  MASTER_CONNECT_RETRY=10;
start slave; 
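The binlog file and position recorded by --master-data=2 can be parsed out of the dump instead of copied by hand. A minimal sketch; the file path and the coordinates in the demo are the illustrative values from this section:

```shell
#!/bin/sh
# Extract the binlog coordinates that mysqldump --master-data=2
# wrote as a commented-out CHANGE MASTER TO line.
parse_master_coords() {
    # $1 = path to the dump file; prints "FILE POS"
    grep -m1 '^-- CHANGE MASTER TO' "$1" |
        sed -n "s/.*MASTER_LOG_FILE='\([^']*\)', MASTER_LOG_POS=\([0-9]*\);.*/\1 \2/p"
}

# Demo with a fake dump header (illustrative values)
printf -- "-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=674;\n" > /tmp/gr_full_header.sql
parse_master_coords /tmp/gr_full_header.sql   # prints: mysql-bin.000004 674
```

The two fields can then be substituted into the CHANGE MASTER TO statement above.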
    
    
    Check replication status
   
[root@db01 ~]# mysql -S /data/3309/mysql.sock  -e "show slave status \G"|grep "Running"
[root@db01 ~]# mysql -S /data/3308/mysql.sock  -e "show slave status \G"|grep "Running"
    
    
    How Replication Works
   
    
    
    Files involved
   
Master: binlog
Slave:
	relay-log      : stores the binlog events fetched from the master
	master.info    : saves master connection info (IP, PORT, USER, PASSWORD, binlog position)
	relay-log.info : records how far the slave has replayed the relay log
    
    
    Threads involved
   
Master:
	dump thread: ships the binlog to slaves
Slave:
	IO thread : connects to the master and requests the binlog
	SQL thread: replays the relay log
    
    
    Replication workflow (step by step)
   
# 1. The slave runs CHANGE MASTER TO: IP, PORT, USER, PASSWORD and the binlog starting point are recorded in master.info.
# 2. The slave runs START SLAVE, starting the IO and SQL replication threads.
# 3. The slave's IO thread reads master.info (IP, PORT, USER, PASSWORD) and connects to the master.
# 4. The master's connection layer accepts the request; after authentication it spawns a DUMP thread to talk to the IO thread.
# 5. Using the binlog position from master.info, the IO thread asks the master's DUMP thread for new binlog events.
# 6. The DUMP thread watches the binlog for changes; on request it extracts the newest events and sends them to the IO thread.
# 7. The IO thread receives the binlog events into the TCP/IP buffer. The master's part ends here.
# 8. The IO thread writes the received events to the relay log and updates master.info. The IO thread's work is done.
# 9. The SQL thread reads relay-log.info to find where replay last stopped.
# 10. The SQL thread replays the new relay log events and updates relay-log.info. The SQL thread's work is done.
# 11. The relay_log_purge thread automatically cleans up replayed relay logs.
# 12. The master's DUMP thread monitors the binlog in real time and notifies the slave's IO thread of changes.
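Whether the IO and SQL threads described above are healthy is the first thing to check in practice. A small sketch that inspects captured `show slave status` output for both thread states; the sample text is illustrative:

```shell
#!/bin/sh
# Given `show slave status\G` output on stdin, succeed only if both
# the IO and SQL threads report Yes.
threads_ok() {
    yes_count=$(grep -cE 'Slave_(IO|SQL)_Running: Yes')
    [ "$yes_count" -eq 2 ]
}

# Demo with a captured snippet (illustrative)
printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: Yes\n' | threads_ok && echo "replication threads healthy"
```

In a real check the input would come from `mysql -e "show slave status \G"`.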
    
    
    Replication Monitoring
   
    
    
    On the master
   
mysql> show processlist;
mysql> show slave hosts;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID                           |
+-----------+------+------+-----------+--------------------------------------+
|         9 |      | 3309 |         7 | d1492ae9-6728-11ea-ad4b-000c29248f69 |
|         8 |      | 3308 |         7 | ced6749c-6728-11ea-ab51-000c29248f69 |
+-----------+------+------+-----------+--------------------------------------+
    
    
    On the slave
   
mysql> show slave status \G
    
    
    Master connection info
   
Master connection info: master.info
Master_Host: 10.0.0.51
Master_User: repl
Master_Port: 3307
Connect_Retry: 10
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 674
    
    
    Relay log replay position
   
The slave's relay log replay position: relay-log.info
Relay_Log_File: db01-relay-bin.000004
Relay_Log_Pos: 320
    
    
    Slave thread state
   
Slave thread state (errors also go to log_error):
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_IO_Errno: 0
Last_IO_Error: 
Last_SQL_Errno: 0
Last_SQL_Error: 
    
    
    Replication filter settings
   
Replication filter settings:
Replicate_Do_DB: 
Replicate_Ignore_DB: 
Replicate_Do_Table: 
Replicate_Ignore_Table: 
Replicate_Wild_Do_Table: 
Replicate_Wild_Ignore_Table: 
    
    
    Replication lag
   
Replication lag:
Seconds_Behind_Master: 0
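Seconds_Behind_Master is the usual alerting metric. A hedged sketch of a threshold check over captured status output; the 30-second threshold is an arbitrary example:

```shell
#!/bin/sh
# Extract Seconds_Behind_Master from `show slave status\G` output
# (on stdin) and compare it against a threshold in seconds.
lag_exceeds() {
    lag=$(sed -n 's/.*Seconds_Behind_Master: \([0-9]*\).*/\1/p')
    [ -n "$lag" ] && [ "$lag" -gt "$1" ]
}

echo "Seconds_Behind_Master: 120" | lag_exceeds 30 && echo "ALERT: slave is lagging"
```

Note that Seconds_Behind_Master reads NULL when the SQL thread is stopped, so the thread-state check still matters.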
    
    
    Delayed-replica state
   
Delayed-replica state:
SQL_Delay: 0
SQL_Remaining_Delay: NULL
    
    
    GTID replication state
   
GTID replication state:
Retrieved_Gtid_Set: 
Executed_Gtid_Set: 
    
    
    Troubleshooting Replication Failures
   
    
    
    Monitoring method
   
show slave status \G
Slave thread state (errors also go to log_error):
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_IO_Errno: 0
Last_IO_Error: 
Last_SQL_Errno: 0
Last_SQL_Error: 
    
    
    IO thread failures
   
    
    
    Connecting to the master
   
Connecting to the master: state shows "Connecting"
# Possible causes
Wrong connection information.
Network failure.
Firewall.
Max connections limit reached.
# How to diagnose:
[root@db01 data]# mysql -urepl -p123 -h10.0.0.51 -P 3307
# How to fix:
mysql -S /data/3308/mysql.sock -e "stop slave;reset slave all;"
CHANGE MASTER TO
  MASTER_HOST='10.0.0.51',
  MASTER_USER='repl',
  MASTER_PASSWORD='123',
  MASTER_PORT=3307,
  MASTER_LOG_FILE='mysql-bin.000004',
  MASTER_LOG_POS=674,
  MASTER_CONNECT_RETRY=10;
start slave;
    
    
    Requesting the binlog
   
Requesting the binlog: IO thread shows No
The master's binlog is corrupted.
Wrong binlog starting point.
Duplicate server_id.
# How to diagnose
show slave status \G 
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 674
Last_IO_Error: xxxx
    
    
    SQL thread failures
   
    
    
    Corrupted relay log
   
# 1. On the slave, stop the SQL thread
stop slave sql_thread;
# 2. Generate new activity on the master
create database test1;
# 3. On the slave, delete the relay logs
rm -rf /data/3308/data/db01-relay-bin.00000*
# 4. Restart the SQL thread
start slave sql_thread;
To repair:
1. cat /data/3308/data/relay-log.info   ----> gives the binlog position to restart from
2. Rebuild replication:
stop slave ;
reset slave all;
CHANGE MASTER TO
  MASTER_HOST='10.0.0.51',
  MASTER_USER='repl',
  MASTER_PASSWORD='123',
  MASTER_PORT=3307,
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=154,
  MASTER_CONNECT_RETRY=10;
start slave;
    
    
    Replay failures
   
# 1. The object being modified does not exist
# 2. The object being created already exists
# 3. Constraint violation
# 4. Master and slave configured differently
# 5. Incompatible SQL_MODE
# 6. Master/slave version mismatch
Method 0:
Apply the reverse operation on the slave.
Method 1:
stop slave; 
set global sql_slave_skip_counter = 1;
# Moves the replication pointer forward by one event; repeat if still out of sync.
start slave;
Method 2:
/etc/my.cnf
slave-skip-errors = 1032,1062,1007
Common error codes:
1007: object already exists
1032: cannot apply DML (row not found)
1062: duplicate key or constraint violation
However, these operations can be risky; the safest fix is to rebuild replication from scratch. The guiding principle: the master is the source of truth.
Method 3: Percona Toolkit
pt-table-checksum
pt-table-sync
Method 4: make the slave read-only
mysql> select @@read_only;
mysql> select @@super_read_only;
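To reduce the chance of writes landing on a replica in the first place, the replica can be made read-only in its my.cnf. A sketch of the relevant settings; note super_read_only (available from 5.7) additionally blocks users with the SUPER privilege, while plain read_only does not:

```ini
[mysqld]
read_only=1          # block writes from ordinary users
super_read_only=1    # also block users with SUPER (MySQL 5.7+)
```

Replication's SQL thread is exempt from both flags, so replay still works.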
    
    
    Replication Lag
   
The master commits an operation; the slave takes a long time to replay it.
    
    
    On the master side
   
    
    
    Producing the binlog
   
Binlog events may not be flushed to disk promptly; sync_binlog=1 forces a flush per commit.
    
    
    Shipping the binlog
   
In classic mode (no GTID), the dump thread ships the binlog serially, while the master can run many transactions in parallel.
Large transactions and a high volume of concurrent transactions both cause significant lag.
Version 5.6 added GTID, which makes parallel log shipping possible.
Version 5.7 generates an Anonymous_Gtid automatically even when GTID is not enabled.
    
    
    On the slave side
   
    
    
    Writing the relay log
   
    
    
    SQL replay
   
A single SQL thread can only replay the relay log serially, even though the master executes transactions concurrently and can ship the log in parallel.
Large transactions or a high volume of concurrent transactions therefore cause significant replay lag.
In 5.6, GTID mode allows multiple SQL threads, but they can only parallelize across different databases.
In 5.7, GTID mode supports multiple SQL threads with true parallel replay (MTS).
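A sketch of the replica-side my.cnf settings that enable 5.7 MTS; the worker count is an illustrative value, and the TABLE repositories are the usual companions for crash-safe parallel replay:

```ini
# Replica-side settings for multi-threaded replay (MySQL 5.7 MTS)
slave_parallel_type=LOGICAL_CLOCK   # parallelize by group commit, not by database
slave_parallel_workers=4            # number of SQL applier threads (illustrative)
master_info_repository=TABLE        # crash-safe metadata storage
relay_log_info_repository=TABLE
relay_log_recovery=ON
```

With slave_parallel_type=DATABASE instead, behavior falls back to the 5.6-style per-database parallelism described above.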
    
    
    External factors
   
    Slow network
    
    Large hardware/configuration gap between master and slave
   
    
    
    Filtered Replication
   
    
    
    Introduction
   
Replicate only part of the data.
    
    
    Configuration
   
Master:
binlog_do_db=world
binlog_ignore_db
Slave:
Database level:
replicate_do_db=world
replicate_do_db=test
replicate_ignore_db=
Table level:
replicate_do_table=world.city
replicate_ignore_table=
Wildcard (uses SQL LIKE patterns, % and _):
replicate_wild_do_table=world.t%
replicate_wild_ignore_table=
    
    
    Demonstration
   
[root@db01 3309]# cat /data/3309/my.cnf 
[mysqld]
basedir=/data/app/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
log_error=/data/3309/mysql.log
port=3309
server_id=9
log_bin=/data/3309/mysql-bin
replicate_do_db=oldguo
replicate_do_db=oldboy
systemctl restart mysqld3309
    
    
    Changing filters online
   
stop slave sql_thread ;
change replication filter replicate_do_db=(oldguo,oldboy);
start slave sql_thread ;
stop slave sql_thread ;
change replication filter replicate_do_db=();
start slave sql_thread ;
    
    
    Delayed Replica
   
    
    
    Introduction
   
A deliberately configured replica: after a change on the master, the replica waits for the configured delay before applying it.
    
    
    What counts as data damage
   
Logical damage: DROP, DELETE, TRUNCATE, UPDATE
Physical damage: disks, files
    
    
    Configuration
   
mysql>stop slave;
mysql>CHANGE MASTER TO MASTER_DELAY = 300;
mysql>start slave;
mysql> show slave status \G
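MASTER_DELAY takes seconds, so a production-style delay (for example the 3 hours used in the scenario below) needs converting; a trivial sketch:

```shell
#!/bin/sh
# MASTER_DELAY is specified in seconds; convert a human-friendly delay.
hours_to_seconds() { echo $(( $1 * 3600 )); }

hours_to_seconds 3   # prints: 10800  ->  CHANGE MASTER TO MASTER_DELAY = 10800;
```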
    
    
    How to use it
   
    
    
    Approach (3-hour delay)
   
# 1. Scenario:
Master runs: drop database oldguo     ---> 9:00
# 2. Failure detected                 ---> 9:01
The oldguo application stops working.
# 3. Put up a maintenance page
# 4. Stop replication                 ---> 9:05
Check whether traffic is still coming in, then stop replication.
# 5. Repair the data
Mimic the SQL thread: replay the relay log up to (but not including) the drop, recovering the data from it.
Start point: relay-log.info  ---> the position the SQL thread has executed to
End point  : just before the drop
# 6. Restore service
The replica takes over the master's work.
# 7. Repair replication (later)
    
    
    Simulating a failure and recovering with the delayed replica
   
# 1. Create base data
# On the master:
create database delaydb charset utf8mb4;
use delaydb;
create table t1 (id int);
insert into t1 values(1),(2),(3);
commit;
create table t2 (id int);
insert into t2 values(1),(2),(3);
commit;
create table t3 (id int);
insert into t3 values(1),(2),(3);
commit;
drop database delaydb;
# On the slave
# 1. Stop the threads
mysql> stop slave;
# 2. Extract the needed relay log
Start point:
mysql> show slave status\G
Relay_Log_File: db01-relay-bin.000002
Relay_Log_Pos: 476
End point:
| db01-relay-bin.000002 | 2039 | Query          |         7 |        3284 | drop database delaydb   
[root@db01 data]# cd /data/3309/data/
[root@db01 data]# mysqlbinlog --start-position=476 --stop-position=2039 db01-relay-bin.000002  >/tmp/relay.sql
# 3. Apply the relay log on the slave
mysql> stop slave;
mysql> reset slave all;
mysql> reset master;
mysql> set sql_log_bin=0;
mysql> source /tmp/relay.sql
mysql> set sql_log_bin=1;
    
    
    Master-Slave Consistency: Semi-sync, Lossless Replication, MGR
   
    Version 5.5 introduced semi-synchronous replication.
    
    1. It cannot guarantee 100% master-slave consistency.
    
    2. It significantly reduces performance.
    
    5.6 GTID helps mitigate the serial log shipping and serial SQL replay.
    
    5.7 enhanced semi-sync into lossless replication.
   
https://dev.mysql.com/doc/refman/5.7/en/replication-semisync.html
    5.7.17 introduced MGR, which continues in 8.0.
    
    https://www.jianshu.com/p/8c66e0f65324
   
    
    
    GTID Replication
   
    
    
    Advantages
   
    (1) Every transaction has a unique logical ID and is idempotent.
    
    (2) Binlog extraction becomes more flexible and convenient (--include-gtids / --exclude-gtids).
    
    (3) Better replication performance: parallel dump transfer and parallel SQL replay.
    
    (4) Simpler replication setup, lag monitoring, and consistency guarantees.
   
    
    
    Setup
   
    
    
    Prepare three independent virtual machine nodes
   
    
    
    Clean the environment
   
pkill mysqld 
rm -rf /data/3306/*
mv /etc/my.cnf /tmp 
    
    
    Create the required directories
   
mkdir -p /data/3306/data /data/3306/binlog
chown -R mysql.mysql /data
    
    
    Prepare the configuration files
   
# db01 
cat > /etc/my.cnf <<EOF
[mysqld]
user=mysql
basedir=/data/app/mysql
datadir=/data/3306/data
server_id=6
socket=/tmp/mysql.sock
log_bin=/data/3306/binlog/mysql-bin
gtid_mode=ON
enforce_gtid_consistency
log-slave-updates=1
secure-file-priv=/tmp
innodb_data_file_path=ibdata1:128M;ibdata2:128M:autoextend
innodb_temp_data_file_path=ibtmp1:128M;ibtmp2:128M:autoextend:max:500M
innodb_undo_tablespaces=3
innodb_max_undo_log_size=128M
innodb_undo_log_truncate=ON
innodb_purge_rseg_truncate_frequency=32
autocommit=0
innodb_flush_method=O_DIRECT
slow_query_log=ON
slow_query_log_file=/data/3306/data/db01-slow.log
long_query_time=0.1
log_queries_not_using_indexes
[client]
socket=/tmp/mysql.sock
[mysql]
prompt=db01 [\\d]>
socket=/tmp/mysql.sock
EOF
# db02 
cat > /etc/my.cnf <<EOF
[mysqld]
user=mysql
basedir=/data/app/mysql
datadir=/data/3306/data
server_id=7
socket=/tmp/mysql.sock
log_bin=/data/3306/binlog/mysql-bin
gtid_mode=ON
enforce_gtid_consistency
log-slave-updates=1
secure-file-priv=/tmp
innodb_data_file_path=ibdata1:128M;ibdata2:128M:autoextend
innodb_temp_data_file_path=ibtmp1:128M;ibtmp2:128M:autoextend:max:500M
innodb_undo_tablespaces=3
innodb_max_undo_log_size=128M
innodb_undo_log_truncate=ON
innodb_purge_rseg_truncate_frequency=32
autocommit=0
innodb_flush_method=O_DIRECT
slow_query_log=ON
slow_query_log_file=/data/3306/data/db01-slow.log
long_query_time=0.1
log_queries_not_using_indexes
[client]
socket=/tmp/mysql.sock
[mysql]
prompt=db02 [\\d]>
socket=/tmp/mysql.sock
EOF
# db03 
cat > /etc/my.cnf <<EOF
[mysqld]
user=mysql
basedir=/data/app/mysql
datadir=/data/3306/data
server_id=8
socket=/tmp/mysql.sock
log_bin=/data/3306/binlog/mysql-bin
gtid_mode=ON
enforce_gtid_consistency
log-slave-updates=1
secure-file-priv=/tmp
innodb_data_file_path=ibdata1:128M;ibdata2:128M:autoextend
innodb_temp_data_file_path=ibtmp1:128M;ibtmp2:128M:autoextend:max:500M
innodb_undo_tablespaces=3
innodb_max_undo_log_size=128M
innodb_undo_log_truncate=ON
innodb_purge_rseg_truncate_frequency=32
autocommit=0
innodb_flush_method=O_DIRECT
slow_query_log=ON
slow_query_log_file=/data/3306/data/db01-slow.log
long_query_time=0.1
log_queries_not_using_indexes
[client]
socket=/tmp/mysql.sock
[mysql]
prompt=db03 [\\d]>
socket=/tmp/mysql.sock
EOF
    
    
    Initialize the data
   
mysqld --initialize-insecure --user=mysql --basedir=/data/app/mysql  --datadir=/data/3306/data 
    
    
    Start the databases
   
/etc/init.d/mysqld start 
    
    
    Build Replication
   
    
    
    Create the replication user on db01
   
db01 [(none)]>grant replication slave on *.*  to repl@'10.0.0.%' identified by '123';
    
    
    Configure replication on db02 and db03
   
change master to 
master_host='10.0.0.51',
master_user='repl',
master_password='123' ,
MASTER_AUTO_POSITION=1;
start slave;
    
    
    What differs when building with GTID
   
    MASTER_AUTO_POSITION=1;
    
    What it does: on the first build, the replica automatically checks the GTID info of its last relay log, and checks for a SET @@GLOBAL.GTID_PURGED='1c35b73a-7321-11ea-8974-000c29248f69:1-10' setting.
    
    If neither has any information, replication starts fresh from the master's first GTID event.
   
Note: when building a GTID replica by backing up the master and restoring to the replica, do not use --set-gtid-purged=OFF.
    
    
    Reading the monitoring output
   
Last_SQL_Error: Error 'Can't create database 'oldboy'; database exists' on query. Default database: 'oldboy'. Query: 'create database oldboy'
Retrieved_Gtid_Set: 71bfa52e-4aae-11e9-ab8c-000c293b577e:1-3
Executed_Gtid_Set:  71bfa52e-4aae-11e9-ab8c-000c293b577e:1-2
How to inject an empty transaction:
stop slave;
set gtid_next='71bfa52e-4aae-11e9-ab8c-000c293b577e:3';
begin;commit;
set gtid_next='AUTOMATIC';
    
    
    Evolution of Replication Architectures
   
    
    
    Basic architectures
   
    1 master, 1 slave
    
    1 master, multiple slaves
    
    Multi-level (chained) replication
    
    Dual masters
   
    
    
    Advanced architectures
   
    High availability:
    
    MHA (Facebook; RDS MySQL (TMHA); Oracle's official Operator (K8s+MHA))
    
    PXC (Percona)
    
    MGC (MariaDB)
    
    MySQL Cluster
    
    InnoDB Cluster (8.0.17 clone plugin), likely mainstream within the next 2-3 years
   
    Read/write splitting:
    
    Atlas
    
    ProxySQL
    
    Maxscale
    
    Mycat
   
    Distributed architectures:
    
    Mycat, DBLE, sharding-jdbc
   
    
    
    Setting Up MHA High Availability
   
    
    
    Layout
   
    Master: 51  (node)
    
    Slaves:
    
    52  (node)
    
    53  (node + manager)
   
    
    
    Prepare the environment (1 master, 2 slaves, GTID)
   
    
    
    Create symlinks for the key binaries
   
ln -s /data/app/mysql/bin/mysqlbinlog    /usr/bin/mysqlbinlog
ln -s /data/app/mysql/bin/mysql          /usr/bin/mysql
    
    
    Configure mutual trust (passwordless SSH between all nodes)
   
# db01:
rm -rf /root/.ssh 
ssh-keygen
cd /root/.ssh 
mv id_rsa.pub authorized_keys
scp  -r  /root/.ssh  10.0.0.52:/root 
scp  -r  /root/.ssh  10.0.0.53:/root 
Verify from each node:
db01:
ssh 10.0.0.51 date
ssh 10.0.0.52 date
ssh 10.0.0.53 date
db02:
ssh 10.0.0.51 date
ssh 10.0.0.52 date
ssh 10.0.0.53 date
db03:
ssh 10.0.0.51 date
ssh 10.0.0.52 date
ssh 10.0.0.53 date
    
    
    Install the software
   
    
    
    Download MHA
   
MHA homepage: https://code.google.com/archive/p/mysql-master-ha/
GitHub downloads: https://github.com/yoshinorim/mha4mysql-manager/wiki/Downloads
    
    
    Install the Node package and its dependencies on all nodes
   
yum install perl-DBD-MySQL -y
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
    
    
    Create the user MHA needs on the master (db01)
   
grant all privileges on *.* to mha@'10.0.0.%' identified by 'mha';
    
    
    Install the Manager package (db03)
   
yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm
    
    
    Prepare the configuration (db03)
   
    
    
    Create the config directory
   
mkdir -p /etc/mha
    
    
    Create the log directory
   
 mkdir -p /var/log/mha/app1
    
    
    Edit the MHA config file
   
vim /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager        
manager_workdir=/var/log/mha/app1            
master_binlog_dir=/data/3306/binlog
user=mha                                   
password=mha                               
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root                               
[server1]                                   
hostname=10.0.0.51
port=3306                                  
[server2]            
hostname=10.0.0.52
port=3306
[server3]
hostname=10.0.0.53
port=3306
    
    
    Sanity checks
   
    
    
    SSH trust check
   
masterha_check_ssh  --conf=/etc/mha/app1.cnf 
    
    
    Replication check
   
masterha_check_repl --conf=/etc/mha/app1.cnf 
    
    
    Start MHA (db03)
   
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
    
    
    Check MHA status
   
[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
    
    
    MySQL High Availability and Read/Write Splitting
   
    
    
    What is high availability
   
The enterprise HA standard: failure-free time per year
Uptime               Downtime
99.9%                0.1%     = 525.6  min        KA + dual master: manual intervention
99.99%               0.01%    = 52.56  min        MHA: semi-automated
99.999%              0.001%   = 5.256  min        PXC, MGR, MGC
99.9999%             0.0001%  = 0.5256 min        automated, cloud-based, platform-based
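The downtime figures above follow from 365 × 24 × 60 = 525600 minutes per year; a quick sketch reproducing them:

```shell
#!/bin/sh
# Allowed downtime per year, in minutes, for a given availability fraction.
downtime_min() { awk -v a="$1" 'BEGIN { printf "%.4g\n", (1 - a) * 365 * 24 * 60 }'; }

downtime_min 0.999     # prints: 525.6
downtime_min 0.9999    # prints: 52.56
downtime_min 0.99999   # prints: 5.256
```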
    
    
    MHA's software structure
   
A collection of Perl scripts.
    
    
    Manager components
   
    masterha_manager: starts MHA
    
    masterha_check_ssh: checks MHA's SSH configuration
    
    masterha_check_repl: checks MySQL replication status
    
    masterha_master_monitor: detects whether the master is down
    
    masterha_check_status: reports MHA's current running state
    
    masterha_master_switch: controls failover (automatic or manual)
    
    masterha_conf_host: adds or removes server entries in the config file
   
    
    
    Node components
   
    save_binary_logs: saves and copies the master's binary logs
    
    apply_diff_relay_logs: identifies differential relay log events and applies them to the other slaves
    
    purge_relay_logs: purges relay logs (without blocking the SQL thread)
   
    
    
    What HA software must provide
   
    Monitoring
    
    Master election
    
    Data compensation
    
    Failover
    
    Application transparency
    
    Automatic notification
    
    Self-healing
   
    
    
    How MHA Failover Works
   
    
    
    Monitoring
   
    masterha_master_monitor probes the master's heartbeat every ping_interval seconds.
    
    If the heartbeat is missed, the master gets 4 chances in total before failover is triggered.
   
    
    
    Master election
   
    
    
    Weights
   
candidate_master=1  force this node to be the preferred new master; it is abandoned if its log lags more than 100M behind.
check_repl_delay=0  do not check the log lag.
    
    
    Log position
   
How far each slave has replayed the log.
Without GTID:
[root@db02 ~]#  mysql -e "show slave status\G" |grep "Master_Log"
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 194
Relay_Master_Log_File: mysql-bin.000003
Exec_Master_Log_Pos: 194
With GTID:
[root@db02 ~]#  mysql -e "show slave status\G" |grep "Executed_Gtid_Set"
Executed_Gtid_Set: 1c35b73a-7321-11ea-8974-000c29248f69:1-6
[root@db02 ~]# 
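With GTID the Executed_Gtid_Set encodes how far a replica has executed. A sketch extracting the highest transaction number from a single-range set like the one above; multi-range sets would need more parsing and are out of scope here:

```shell
#!/bin/sh
# Highest executed transaction number from a single-range GTID set,
# e.g. "uuid:1-6" -> 6. Reads the status line on stdin.
gtid_high() { sed -n 's/.*:[0-9]*-\([0-9]*\)$/\1/p'; }

echo "Executed_Gtid_Set: 1c35b73a-7321-11ea-8974-000c29248f69:1-6" | gtid_high   # prints: 6
```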
    
    
    No weights, identical log positions
   
The new master is chosen by order of appearance in the config file.
    
    
    Data compensation
   
    
    
    If the master is reachable over SSH
   
Each slave immediately saves the missing binlog segment to /var/tmp/xxxxx via save_binary_logs.
How is the missing part determined?
With GTID:
[root@db01 ~]# mysql -e "show master status;"
[root@db02 ~]# mysql -e "show slave status\G" |grep "Retrieved_Gtid_Set"
    
    
    Else: the master is unreachable over SSH
   
The slave nodes call apply_diff_relay_logs to compute the relay log differences between the two slaves.
    
    
    Failover
   
- Clear the slave state on all nodes
- Build the new replication topology
    
    
    Automatically remove the failed node from the config file
   
--remove_dead_master_conf
    
    
    Self-termination
   
The manager process exits automatically.
    
    
    Application transparency: VIP
   
    
    
    Extra data compensation: binlog_server
   
    
    
    Failover notification: send_report
   
    
    
    Simulating a Failure and Recovering
   
    
    
    Check working state
   
[root@db03 app1]# masterha_check_status --conf=/etc/mha/app1.cnf 
app1 (pid:17501) is running(0:PING_OK), master:10.0.0.51
    
    
    Kill the master
   
[root@db01 ~]# /etc/init.d/mysqld stop
Shutting down MySQL............ SUCCESS! 
[root@db01 ~]# 
    
    
    Read the log
   
[root@db03 app1]# vim /var/log/mha/app1/manager
    
    
    Recovery
   
    
    
    Repair the failed node
   
[root@db01 ~]# /etc/init.d/mysqld start
Starting MySQL.. SUCCESS! 
    What about in production?
    
    Handle it according to the actual situation.
   
    
    
    Rebuild replication
   
change master to 
master_host='10.0.0.52',
master_user='repl',
master_password='123' ,
MASTER_AUTO_POSITION=1;
start slave;
    
    
    Repair the config file
   
Method 1:
vim /etc/mha/app1.cnf
[server1]
hostname=10.0.0.51
port=3306
Method 2:
[root@db03 ~]# masterha_conf_host --command=add --conf=/etc/mha/app1.cnf --hostname=10.0.0.51 --block=server10 --params="port=3306"
masterha_conf_host --command=delete --conf=/etc/mha/app1.cnf --block=server1
    
    
    Pre-flight checks
   
[root@db03 ~]# masterha_check_ssh  --conf=/etc/mha/app1.cnf
[root@db03 ~]# masterha_check_repl  --conf=/etc/mha/app1.cnf 
    
    
    Start MHA
   
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf 
app1 (pid:24316) is running(0:PING_OK), master:10.0.0.52
[root@db03 ~]# 
    
    
    Application Transparency: VIP
   
vip : 10.0.0.55/24
    
    
    VIP failover script
   
Upload the mha_script.tar file to /usr/local/bin and extract it.
    
    
    Set permissions
   
[root@db03 bin]# chmod +x /usr/local/bin/*
    
    
    Edit the script
   
[root@db03 bin]# cp master_ip_failover master_ip_failover.bak
my $vip = '10.0.0.55/24';
my $key = '1';
my $if  = 'ens33';
my $ssh_start_vip = "/sbin/ifconfig $if:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig  $if:$key down";
my $ssh_Bcast_arp= "/sbin/arping -I $if -c 3 -A 10.0.0.55";
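The failover script simply interpolates these variables into ifconfig/arping calls; a sketch showing what the start and stop commands expand to, with values mirroring the configuration above:

```shell
#!/bin/sh
# How master_ip_failover assembles the VIP commands from its variables.
vip='10.0.0.55/24'; key='1'; iface='ens33'
ssh_start_vip="/sbin/ifconfig $iface:$key $vip"
ssh_stop_vip="/sbin/ifconfig $iface:$key down"

echo "$ssh_start_vip"   # prints: /sbin/ifconfig ens33:1 10.0.0.55/24
echo "$ssh_stop_vip"    # prints: /sbin/ifconfig ens33:1 down
```

MHA runs the start command on the new master and the stop command on the old one over SSH.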
    
    
    Update the Manager config
   
vim /etc/mha/app1.cnf 
master_ip_failover_script=/usr/local/bin/master_ip_failover
    
    
    Restart MHA
   
[root@db03 bin]# masterha_stop  --conf=/etc/mha/app1.cnf 
[root@db03 bin]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
    
    
    Manually add the VIP on the master
   
[root@db02 ~]# ifconfig ens33:1 10.0.0.55/24
    
    
    Failure Notification
   
    
    
    Prepare the script
   
[root@db03 bin]# cp send_report send_report.bak1
my $smtp='smtp.qq.com';             # SMTP server
my $mail_from='22654481@qq.com';    # sender address
my $mail_user='22654481';           # user name (QQ number)
my $mail_pass='gemghsvgkeyzcagh';   # authorization code
my $mail_to=['22654481@qq.com'];    # recipient address
#my $mail_to=['to1@qq.com','to2@qq.com'];
    
    
    Update the config
   
vim /etc/mha/app1.cnf 
# Add one line:
report_script=/usr/local/bin/send_report
    
    
    Restart MHA
   
[root@db03 bin]# masterha_stop  --conf=/etc/mha/app1.cnf 
[root@db03 bin]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
    
    
    Simulate a master crash
   
    
    
    Confirm the current master
   
[root@db03 bin]# masterha_check_status  --conf=/etc/mha/app1.cnf 
app1 (pid:27096) is running(0:PING_OK), master:10.0.0.52
    
    
    Kill the master
   
[root@db02 ~]# /etc/init.d/mysqld stop 
Shutting down MySQL............ SUCCESS! 
    
    
    Watch the VIP move
   
    
    
    Watch for the email
   
    
    
    Restore the 1-master-2-slave MHA topology
   
    
    
    Redundant Data Compensation: binlog_server
   
    
    
    Create the directory (db03)
   
mkdir -p /data/binlog_server/
chown -R mysql.mysql /data/*
cd  /data/binlog_server/
[root@db03 ~]# mysql -e "show slave status \G"|grep "Master_Log"
              Master_Log_File: mysql-bin.000008
          Read_Master_Log_Pos: 194
        Relay_Master_Log_File: mysql-bin.000008
          Exec_Master_Log_Pos: 194
[root@db03 ~]# 
mysqlbinlog  -R --host=10.0.0.51 --user=mha --password=mha --raw  --stop-never mysql-bin.000008 &
Note:
the pull must start from the binlog file the slave has currently reached, as shown by its replication status above.
    
    
    Configure
   
vim /etc/mha/app1.cnf 
[binlog1]
no_master=1
hostname=10.0.0.53
master_binlog_dir=/data/binlog_server/
    
    
    Restart MHA
   
[root@db03 bin]# masterha_stop  --conf=/etc/mha/app1.cnf 
[root@db03 bin]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
    
    
    MHA Maintenance: Online Switchover
   
    
    
    Switch roles only
   
masterha_master_switch  --conf=/etc/mha/app1.cnf --master_state=alive --new_master_host=10.0.0.52 --orig_master_is_new_slave --running_updates_limit=10000
Notes:
master_ip_online_change_script is not defined. If you do not disable writes on the current master manually, applications keep writing on the current master. Is it ok to proceed? (yes/NO): yes
1. When switching this way, remember to run FLUSH TABLES WITH READ LOCK (FTWRL) on the original master, or master and slave will diverge.
2. Move the VIP by hand.
3. Re-pull the binlog from the new master.
    
    
    Implementing master_ip_online_change_script
   
Purpose: during an online switchover, automatically lock the original master and move the VIP.
    
    
    Prepare the switchover script
   
vim /usr/local/bin/master_ip_online_change
my $vip = "10.0.0.55/24";
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key $vip down";
my $ssh_Bcast_arp= "/sbin/arping -I ens33 -c 3 -A 10.0.0.55";
    
    
    Update the MHA config
   
 vim /etc/mha/app1.cnf
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
    
    
    Stop MHA
   
[root@db03 bin]# masterha_stop  --conf=/etc/mha/app1.cnf 
    
    
    Check replication
   
[root@db03 bin]# masterha_check_repl   --conf=/etc/mha/app1.cnf 
    
    
    Switch online
   
masterha_master_switch  --conf=/etc/mha/app1.cnf --master_state=alive --new_master_host=10.0.0.51 --orig_master_is_new_slave --running_updates_limit=10000
    
    
    Rebuild the binlog server
   
[root@db03 bin]# ps -ef |grep mysqlbinlog
root      28144  16272  0 17:50 pts/1    00:00:00 mysqlbinlog -R --host=10.0.0.52 --user=mha --password=x x --raw --stop-never mysql-bin.000005
root      28529  16272  0 18:03 pts/1    00:00:00 grep --color=auto mysqlbinlog
[root@db03 bin]# kill -9 28144
[root@db03 bin]# cd /data/binlog_server/
[root@db03 binlog_server]# ll
total 4
-rw-r----- 1 root root 194 Apr  1 17:50 mysql-bin.000005
[root@db03 binlog_server]# rm -rf *
[root@db03 binlog_server]# mysqlbinlog  -R --host=10.0.0.51 --user=mha --password=mha --raw  --stop-never mysql-bin.000009 &
[1] 28534
    
    
    Start MHA
   
[root@db03 bin]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
[root@db03 binlog_server]# masterha_check_status   --conf=/etc/mha/app1.cnf 
app1 (pid:28535) is running(0:PING_OK), master:10.0.0.51
    
    
    Testing the Effect of candidate_master Weights
   
    
    
    One node with the flag
   
[server1]
hostname=10.0.0.51
port=3306
[server2]
hostname=10.0.0.52
port=3306
[server3]
hostname=10.0.0.53
port=3306
candidate_master=1
Conclusion: a node with candidate_master set is preferred as the new master.
    
    
    Multiple nodes with the same weight
   
[server1]
hostname=10.0.0.51
port=3306
candidate_master=1
[server2]
hostname=10.0.0.52
port=3306
candidate_master=1
[server3]
hostname=10.0.0.53
port=3306
Conclusion: with equal weights, the new master follows section order in the config file.
    
    
    Different weights, test 1
   
[server1]
hostname=10.0.0.51
port=3306
[server2]
candidate_master=10
hostname=10.0.0.52
port=3306
[server3]
candidate_master=1
hostname=10.0.0.53
port=3306
Observed: failover went to server2
[server2]
candidate_master=10
hostname=10.0.0.52
port=3306
Hypotheses:
	1. A larger candidate_master value means higher priority
	2. Section order outranks the weight value
    
    
    Different weights, test 2
   
[server2]
hostname=10.0.0.52
port=3306
[server1]
hostname=10.0.0.51
port=3306
[server3]
hostname=10.0.0.53
port=3306
candidate_master=10
[server4]
hostname=10.0.0.54
port=3306
candidate_master=1
Conclusion: candidate_master priority outranks section order.
    
    
    Different weights, test 3
   
[server3]
hostname=10.0.0.53
port=3306
[server2]
hostname=10.0.0.52
port=3306
[server4]
hostname=10.0.0.54
port=3306
candidate_master=10
[server1]
hostname=10.0.0.51
port=3306
candidate_master=20
Final conclusions:
1. candidate_master outranks section order.
2. Without candidate_master: the most up-to-date log wins, then section order.
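The experiments above can be summarized as: flagged nodes win over plain config order, with config order breaking ties. A toy sketch of that rule over "host candidate_flag" lines; this models the observed behavior only, not MHA's actual source code:

```shell
#!/bin/sh
# Toy model of the observed election rule: nodes flagged with
# candidate_master are preferred; otherwise config-file order decides.
pick_new_master() {
    # stdin: one "host candidate_flag" pair per line, in config order
    awk '$2 == 1 { print $1; found=1; exit }
         { if (!first) first=$1 }
         END { if (!found) print first }'
}

printf '10.0.0.51 0\n10.0.0.52 0\n10.0.0.53 1\n' | pick_new_master   # prints: 10.0.0.53
```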
    
    
    Read/Write Splitting: Atlas
   
    
    
    Install and configure
   
yum install -y Atlas*
cd /usr/local/mysql-proxy/conf
mv test.cnf test.cnf.bak
vim  test.cnf
[mysql-proxy]
admin-username = user
admin-password = pwd
proxy-backend-addresses = 10.0.0.55:3306
proxy-read-only-backend-addresses = 10.0.0.51:3306,10.0.0.53:3306
pwds = repl:3yb5jEku5h4=,mha:O2jBXONX098=
daemon = true
keepalive = true
event-threads = 8
log-level = message
log-path = /usr/local/mysql-proxy/log
sql-log=ON
proxy-address = 0.0.0.0:33060
admin-address = 0.0.0.0:2345
charset=utf8
Start Atlas:
/usr/local/mysql-proxy/bin/mysql-proxyd test start
ps -ef |grep proxy
    
    
    Testing read/write splitting
   
    
    
    Reads
   
db03 [(none)]>select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           8 |
+-------------+
db03 [(none)]>select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           6 |
+-------------+
db03 [(none)]>select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           8 |
+-------------+
db03 [(none)]>select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           6 |
+-------------+
    
    
    Writes
   
db03 [(none)]>begin;select @@server_id; commit;
+-------------+
| @@server_id |
+-------------+
|           7 |
+-------------+
db03 [(none)]>begin;select @@server_id; commit;
+-------------+
| @@server_id |
+-------------+
|           7 |
+-------------+
    
    
    Managing Atlas
   
[root@db03 conf]# mysql -uuser -ppwd -h 10.0.0.53 -P 2345
    
    
    View help
   
db03 [(none)]>select * from help;
+----------------------------+---------------------------------------------------------+
| command                    | description                                             |
+----------------------------+---------------------------------------------------------+
| SELECT * FROM help         | shows this help                                         |
| SELECT * FROM backends     | lists the backends and their state                      |
| SET OFFLINE $backend_id    | offline backend server, $backend_id is backend_ndx's id |
| SET ONLINE $backend_id     | online backend server, ...                              |
| ADD MASTER $backend        | example: "add master 127.0.0.1:3306", ...               |
| ADD SLAVE $backend         | example: "add slave 127.0.0.1:3306", ...                |
| REMOVE BACKEND $backend_id | example: "remove backend 1", ...                        |
| SELECT * FROM clients      | lists the clients                                       |
| ADD CLIENT $client         | example: "add client 192.168.1.2", ...                  |
| REMOVE CLIENT $client      | example: "remove client 192.168.1.2", ...               |
| SELECT * FROM pwds         | lists the pwds                                          |
| ADD PWD $pwd               | example: "add pwd user:raw_password", ...               |
| ADD ENPWD $pwd             | example: "add enpwd user:encrypted_password", ...       |
| REMOVE PWD $pwd            | example: "remove pwd user", ...                         |
| SAVE CONFIG                | save the backends to config file                        |
| SELECT VERSION             | display the version of Atlas                            |
+----------------------------+---------------------------------------------------------+
    
    
    List backend nodes
   
db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.51:3306 | up    | ro   |
|           3 | 10.0.0.53:3306 | up    | ro   |
+-------------+----------------+-------+------+
    
    
    Take a node offline or bring it online
   
SET OFFLINE 3;
SET ONLINE  3;
    
    
    Add and remove nodes
   
db03 [(none)]>add slave 10.0.0.54:3306;
db03 [(none)]>REMOVE BACKEND 4;
    
    
    User management
   
(1) Create the user on the backend master
db02 [(none)]>grant all on *.* to root@'10.0.0.%' identified by '123';
(2) Add the user in Atlas
db03 [(none)]>ADD PWD root:123;
db03 [(none)]>select * from pwds;
+----------+--------------+
| username | password     |
+----------+--------------+
| repl     | 3yb5jEku5h4= |
| mha      | O2jBXONX098= |
| root     | 3yb5jEku5h4= |
+----------+--------------+
Delete a user:
db03 [(none)]>remove pwd mha ;
Empty set (0.00 sec)
db03 [(none)]>select * from pwds;
+----------+--------------+
| username | password     |
+----------+--------------+
| repl     | 3yb5jEku5h4= |
| root     | 3yb5jEku5h4= |
+----------+--------------+
    
    
    Persist the configuration
   
save config;
    
    
    Distributed Architecture: Mycat
   
    
    
    Architecture overview
   
     
   
    
    
    Prepare the Environment
   
    
    
    Environment
   
    Two virtual machines: db01 and db02
    
    Four MySQL instances on each: 3307 3308 3309 3310
   
    
    
    Create directories and initialize the data
   
mkdir /data/33{07..10}/data -p
mysqld --initialize-insecure  --user=mysql --datadir=/data/3307/data --basedir=/data/app/mysql
mysqld --initialize-insecure  --user=mysql --datadir=/data/3308/data --basedir=/data/app/mysql
mysqld --initialize-insecure  --user=mysql --datadir=/data/3309/data --basedir=/data/app/mysql
mysqld --initialize-insecure  --user=mysql --datadir=/data/3310/data --basedir=/data/app/mysql
    
    
    Prepare config files and startup scripts
   
    
    
    db01
   
========db01==============
cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
log-error=/data/3307/mysql.log
log_bin=/data/3307/mysql-bin
binlog_format=row
skip-name-resolve
server-id=7
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3308/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3308/data
port=3308
socket=/data/3308/mysql.sock
log-error=/data/3308/mysql.log
log_bin=/data/3308/mysql-bin
binlog_format=row
skip-name-resolve
server-id=8
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3309/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
port=3309
log-error=/data/3309/mysql.log
log_bin=/data/3309/mysql-bin
binlog_format=row
skip-name-resolve
server-id=9
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3310/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3310/data
socket=/data/3310/mysql.sock
port=3310
log-error=/data/3310/mysql.log
log_bin=/data/3310/mysql-bin
binlog_format=row
skip-name-resolve
server-id=10
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
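The four my.cnf files above differ only in the port number and server-id (on db01, server-id = port − 3300). As a sketch, the same files can be generated in one loop; this version writes under /tmp/demo_mycnf (a hypothetical scratch path) so it does not touch the real /data configs:

```shell
# Generate the four instance configs in one loop instead of copy-pasting.
# Writes under /tmp/demo_mycnf so the real /data configs stay untouched.
base=/tmp/demo_mycnf
for port in 3307 3308 3309 3310; do
    mkdir -p "$base/$port"
    cat > "$base/$port/my.cnf" <<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/$port/data
socket=/data/$port/mysql.sock
port=$port
log-error=/data/$port/mysql.log
log_bin=/data/$port/mysql-bin
binlog_format=row
skip-name-resolve
server-id=$((port - 3300))
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
done
```

On db02 the same loop works with `server-id=$((port - 3290))`, which yields 17-20.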
cat >/etc/systemd/system/mysqld3307.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3307/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3308.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3308/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3309.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3309/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3310.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3310/my.cnf
LimitNOFILE = 5000
EOF
    
    
    db02
   
========db02===============
cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
log-error=/data/3307/mysql.log
log_bin=/data/3307/mysql-bin
binlog_format=row
skip-name-resolve
server-id=17
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3308/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3308/data
port=3308
socket=/data/3308/mysql.sock
log-error=/data/3308/mysql.log
log_bin=/data/3308/mysql-bin
binlog_format=row
skip-name-resolve
server-id=18
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3309/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
port=3309
log-error=/data/3309/mysql.log
log_bin=/data/3309/mysql-bin
binlog_format=row
skip-name-resolve
server-id=19
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3310/my.cnf<<EOF
[mysqld]
basedir=/data/app/mysql
datadir=/data/3310/data
socket=/data/3310/mysql.sock
port=3310
log-error=/data/3310/mysql.log
log_bin=/data/3310/mysql-bin
binlog_format=row
skip-name-resolve
server-id=20
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/etc/systemd/system/mysqld3307.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3307/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3308.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3308/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3309.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3309/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3310.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/data/app/mysql/bin/mysqld --defaults-file=/data/3310/my.cnf
LimitNOFILE = 5000
EOF
    
    
    Fix ownership and start all instances
   
chown -R mysql.mysql /data/*
systemctl start mysqld3307
systemctl start mysqld3308
systemctl start mysqld3309
systemctl start mysqld3310
mysql -S /data/3307/mysql.sock -e "show variables like 'server_id'"
mysql -S /data/3308/mysql.sock -e "show variables like 'server_id'"
mysql -S /data/3309/mysql.sock -e "show variables like 'server_id'"
mysql -S /data/3310/mysql.sock -e "show variables like 'server_id'"
    
    
    Replication Topology Plan
   
The arrow points to the master:
    10.0.0.51:3307    <----->  10.0.0.52:3307
    10.0.0.51:3309    ------>  10.0.0.51:3307
    10.0.0.52:3309    ------>  10.0.0.52:3307
    10.0.0.52:3308  <----->    10.0.0.51:3308
    10.0.0.52:3310  ----->     10.0.0.52:3308
    10.0.0.51:3310  ----->     10.0.0.51:3308
    
    
    Shard Plan
   
shard1:
    Master:10.0.0.51:3307
    slave1:10.0.0.51:3309
    Standby Master:10.0.0.52:3307
    slave2:10.0.0.52:3309
	
	
shard2:
    Master:10.0.0.52:3308
    slave1:10.0.0.52:3310
    Standby Master:10.0.0.51:3308
    slave2:10.0.0.51:3310
	
    
    
    Configure the Replication Topology
   
    
    
    shard1
   
# shard1
##  10.0.0.51:3307 <-----> 10.0.0.52:3307
# db02
mysql  -S /data/3307/mysql.sock -e "grant replication slave on *.* to repl@'10.0.0.%' identified by '123';"
mysql  -S /data/3307/mysql.sock -e "grant all  on *.* to root@'10.0.0.%' identified by '123'  with grant option;"
# db01
mysql  -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3307/mysql.sock -e "start slave;"
mysql  -S /data/3307/mysql.sock -e "show slave status\G"
# db02
mysql  -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3307/mysql.sock -e "start slave;"
mysql  -S /data/3307/mysql.sock -e "show slave status\G"
## 10.0.0.51:3309 ------> 10.0.0.51:3307
# db01
mysql  -S /data/3309/mysql.sock  -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3309/mysql.sock  -e "start slave;"
mysql  -S /data/3309/mysql.sock  -e "show slave status\G"
## 10.0.0.52:3309 ------> 10.0.0.52:3307
# db02
mysql  -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3309/mysql.sock -e "start slave;"
mysql  -S /data/3309/mysql.sock -e "show slave status\G"
    
    
    shard2
   
# shard2
## 10.0.0.52:3308 <-----> 10.0.0.51:3308
# db01
mysql  -S /data/3308/mysql.sock -e "grant replication slave on *.* to repl@'10.0.0.%' identified by '123';"
mysql  -S /data/3308/mysql.sock -e "grant all  on *.* to root@'10.0.0.%' identified by '123'  with grant option;"
# db02
mysql  -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3308/mysql.sock -e "start slave;"
mysql  -S /data/3308/mysql.sock -e "show slave status\G"
# db01
mysql  -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3308/mysql.sock -e "start slave;"
mysql  -S /data/3308/mysql.sock -e "show slave status\G"
## 10.0.0.52:3310 -----> 10.0.0.52:3308
# db02
mysql  -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3310/mysql.sock -e "start slave;"
mysql  -S /data/3310/mysql.sock -e "show slave status\G"
##10.0.0.51:3310 -----> 10.0.0.51:3308
# db01
mysql  -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3310/mysql.sock -e "start slave;"
mysql  -S /data/3310/mysql.sock -e "show slave status\G"
    
    
    Verify Replication Status
   
mysql -S /data/3307/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3308/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3309/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3310/mysql.sock -e "show slave status\G"|grep Yes
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Note: if anything goes wrong along the way, run the following on every node and redo the replication setup from the beginning:
mysql -S /data/3307/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3308/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3309/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3310/mysql.sock -e "stop slave; reset slave all;"
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    
    
    Installing Mycat
   
    
    
    Install the Java runtime first
   
yum install -y java
    
    
    Download
   
Mycat-server-xxxxx.linux.tar.gz
http://dl.mycat.io/
    
    
    Extract the archive
   
tar xf Mycat-server-*
    
    
    Directory layout
   
ls
bin  catlet  conf  lib  logs  version.txt
    
    
    Start and Connect
   
    
    
    Configure environment variables
   
vim /etc/profile
export PATH=/data/mycat/bin:$PATH
source /etc/profile
    
    
    Start
   
mycat start
    
    
    Connect to mycat
   
mysql -uroot -p123456 -h 127.0.0.1 -P8066
    
    
    Configuration Files
   
    
    
    bin directory
   
Program binaries.
    
    
    conf
   
Configuration file directory.
    
    
    schema.xml
   
Main configuration file: node definitions, read/write splitting, high availability, sharding-rule references, etc.
    
    
    rule.xml
   
Defines the sharding rules: what they do and how they are used.
    
    
    server.xml
   
Mycat service settings: users, networking, privileges, policies, resources, etc.
    
    
    xx.txt files
   
Sharding parameter definition files (referenced from rule.xml).
    
    
    log4j2.xml
   
Mycat logging configuration.
    
    
    logs
   
    wrapper.log : startup log
    
    mycat.log : runtime log
   
    
    
    Mycat Core Features
   
    
    
    Inside schema.xml
   
    
    
    schema (logical database) definition
   
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1"> 
</schema>
    
    
    dataNode (shard node) definition
   
 <dataNode name="sh1" dataHost="oldguo1" database= "world" />  
    
    
    dataHost (backend host group) definition
   
  <dataHost name="oldguo1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">    
                <heartbeat>select user()</heartbeat>  
        <writeHost host="db1" url="10.0.0.51:3307" user="root" password="123"> 
                        <readHost host="db2" url="10.0.0.51:3309" user="root" password="123" /> 
        </writeHost> 
        </dataHost>  
    
    
    Read/Write Splitting
   
db01:
mysql -S /data/3307/mysql.sock 
grant all on *.* to root@'10.0.0.%' identified by '123';
source /root/world.sql
mysql -S /data/3308/mysql.sock 
grant all on *.* to root@'10.0.0.%' identified by '123';
source /root/world.sql
vim schema.xml 
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">  
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1"> 
</schema>  
        <dataNode name="sh1" dataHost="oldguo1" database= "world" />         
        <dataHost name="oldguo1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">    
                <heartbeat>select user()</heartbeat>  
        <writeHost host="db1" url="10.0.0.51:3307" user="root" password="123"> 
                        <readHost host="db2" url="10.0.0.51:3309" user="root" password="123" /> 
        </writeHost> 
        </dataHost>  
</mycat:schema>
mycat restart  
Test:
[root@db01 ~]# mysql -uroot -p123456 -h 10.0.0.51 -P8066
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           9 |
+-------------+
mysql> begin;select @@server_id;commit;
+-------------+
| @@server_id |
+-------------+
|           7 |
+-------------+
1 row in set (0.00 sec)
mysql> 
    
    
    
    
    Configure Read/Write Splitting with High Availability
   
[root@db01 conf]# mv schema.xml schema.xml.rw
[root@db01 conf]# vim schema.xml
<?xml version="1.0"?>  
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">  
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1"> 
</schema>  
    <dataNode name="sh1" dataHost="oldguo1" database= "world" />  
    <dataHost name="oldguo1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1"> 
        <heartbeat>select user()</heartbeat>  
    <writeHost host="db1" url="10.0.0.51:3307" user="root" password="123"> 
            <readHost host="db2" url="10.0.0.51:3309" user="root" password="123" /> 
    </writeHost> 
    <writeHost host="db3" url="10.0.0.52:3307" user="root" password="123"> 
            <readHost host="db4" url="10.0.0.52:3309" user="root" password="123" /> 
    </writeHost>        
    </dataHost>  
</mycat:schema>
Active writeHost: the writeHost that handles write operations.
Standby writeHost: like a readHost, it only serves reads.
When the active write node goes down, its readHost stops serving as well; the standby
writeHost then takes over writes, and its readHost serves reads.
Test read/write splitting:
mycat restart
mysql -uroot -p123456 -h 10.0.0.51  -P 8066
mysql> select @@server_id;
mysql> begin;select @@server_id;commit;
Test high availability:
[root@db01 conf]# systemctl stop mysqld3307
mysql -uroot -p123456 -h 10.0.0.51  -P 8066
mysql> select @@server_id;
mysql> begin;select @@server_id;commit;
[root@db01 conf]# systemctl start mysqld3307
mysql -uroot -p123456 -h 10.0.0.51  -P 8066
mysql> select @@server_id;
mysql> begin;select @@server_id;commit;
    
    
    Parameter Reference
   
    
    
    The balance attribute
   
Read-operation load-balancing mode; currently three values:
- balance="0": read/write splitting is disabled; all reads go to the currently available writeHost.
- balance="1": all readHosts and the standby writeHost take part in SELECT load balancing. Put simply, in a dual-master dual-slave setup (M1->S1, M2->S2, with M1 and M2 as mutual masters), M2, S1 and S2 all serve SELECTs under normal conditions.
- balance="2": all reads are distributed randomly across writeHosts and readHosts.
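As an illustration only (not Mycat's actual code), read routing for the dual-master example can be sketched with hypothetical host names M1 (active writeHost), S1 (its readHost), M2 (standby writeHost) and S2 (its readHost):

```shell
# Hypothetical sketch of read routing per balance mode; M1/S1/M2/S2 stand for
# the dual-master dual-slave example described above.
route_read() {
    case "$1" in
        0) set -- M1 ;;           # no splitting: reads hit the active writeHost
        1) set -- S1 M2 S2 ;;     # readHosts plus the standby writeHost
        2) set -- M1 S1 M2 S2 ;;  # random across all write/read hosts
    esac
    shift $((RANDOM % $#))        # pick one candidate at random
    echo "$1"
}
route_read 0    # always prints M1
```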
    
    
    The writeType attribute
   
Write-operation load-balancing mode; currently two values:
- writeType="0": all writes go to the first configured writeHost. If it fails, writes switch to the surviving second writeHost; the switch is recorded in dnindex.properties, and after a restart the switched-to host remains the writer.
- writeType="1": writes are sent randomly to the configured writeHosts (not recommended).
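The "switch and remember" behaviour of writeType="0" can be sketched with a toy model; /tmp/dnindex_demo here is a stand-in for conf/dnindex.properties, and the up/down arguments are hypothetical health-check results:

```shell
# Toy model of writeType="0": writes go to the writeHost whose index is
# recorded in dnindex; on failure the router switches and persists the index.
state=/tmp/dnindex_demo
echo 0 > "$state"                  # start on the first writeHost
pick_writer() {                    # args: status of writeHost 0 and writeHost 1
    idx=$(cat "$state")
    eval "st=\$$((idx + 1))"       # status of the currently recorded writeHost
    if [ "$st" != up ]; then       # it is down: switch and persist the change
        idx=$((1 - idx))
        echo "$idx" > "$state"
    fi
    echo "writeHost $idx"
}
pick_writer up up      # writeHost 0
pick_writer down up    # writeHost 1 (switched)
pick_writer up up      # writeHost 1 (the switch survives, as after a restart)
```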
    
    
    The switchType attribute
   
    -1: never switch automatically
    
    1: switch automatically (the default)
    
    2: switch based on MySQL replication status; the heartbeat statement is show slave status
    
    Other dataHost settings
   
    
    
    Connection settings
   
    maxCon="1000": maximum number of concurrent connections
    
    minCon="10": number of connections Mycat opens to each backend node at startup
   
    
    
    tempReadHostAvailable="1"
   
    With one master and one slave (one writeHost, one readHost), this parameter can be enabled so the readHost keeps serving reads when its writeHost is down; with two writeHosts and two readHosts it is not needed.
    
    select user() serves as the heartbeat check.
   
    
    
    Mycat Distributed Architecture: Vertical Table Splitting
   
cd /data/mycat/conf
mv  schema.xml  schema.xml.ha 
vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
        <table name="user" dataNode="sh1"/>
        <table name="order_t" dataNode="sh2"/>
</schema>
    <dataNode name="sh1" dataHost="oldguo1" database= "taobao" />
    <dataNode name="sh2" dataHost="oldguo2" database= "taobao" />
    <dataHost name="oldguo1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="10.0.0.51:3307" user="root" password="123">
            <readHost host="db2" url="10.0.0.51:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="10.0.0.52:3307" user="root" password="123">
            <readHost host="db4" url="10.0.0.52:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
	
    <dataHost name="oldguo2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="10.0.0.51:3308" user="root" password="123">
            <readHost host="db2" url="10.0.0.51:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="10.0.0.52:3308" user="root" password="123">
            <readHost host="db4" url="10.0.0.52:3310" user="root" password="123" />
    </writeHost>
    </dataHost>
	
</mycat:schema>
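With vertical splitting every table lives on exactly one dataNode, so routing a statement is nothing more than a lookup on the table name. A minimal sketch of that decision (illustrative, not Mycat's code):

```shell
# Sketch of vertical-split routing: user lives on sh1, order_t on sh2, so the
# router only has to look at the table name in the statement.
route_table() {
    case "$1" in
        user)    echo sh1 ;;
        order_t) echo sh2 ;;
        *)       echo "unknown table $1" >&2; return 1 ;;
    esac
}
route_table user       # sh1
route_table order_t    # sh2
```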
Create the test database and tables:
mysql -S /data/3307/mysql.sock -e "create database taobao charset utf8;"
mysql -S /data/3308/mysql.sock -e "create database taobao charset utf8;"
mysql -S /data/3307/mysql.sock -e "use taobao;create table user(id int,name varchar(20))";
mysql -S /data/3308/mysql.sock -e "use taobao;create table order_t(id int,name varchar(20))"
# restart mycat
mycat restart
# insert rows into user and order_t through mycat
[root@db01 conf]# mysql -uroot -p123456 -h 10.0.0.51  -P 8066
mysql> insert into user values(1,'a');
mysql> insert into user values(2,'b');
mysql> insert into user values(3,'c');
mysql> commit;
mysql> insert into order_t values(1,'x'),(2,'y');
mysql> commit;
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "show tables from taobao"
+------------------+
| Tables_in_taobao |
+------------------+
| user             |
+------------------+
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "show tables from taobao"
+------------------+
| Tables_in_taobao |
+------------------+
| order_t          |
+------------------+
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "select * from taobao.user"
+------+------+
| id   | name |
+------+------+
|    1 | a    |
|    2 | b    |
|    3 | c    |
+------+------+
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "select * from taobao.order_t"
+------+------+
| id   | name |
+------+------+
|    1 | x    |
|    2 | y    |
+------+------+
[root@db01 conf]# 
    
    
    Mycat Distributed Architecture: Horizontal Sharding
   
    
    
    Key Concepts
   
Sharding rules: Mycat ships ready-made algorithms covering most classic sharding strategies, so they are easy to apply:
- range sharding
- modulus
- enumeration
- date
- hash
- and more
Sharding key: the column used as the sharding condition.
    
    
    Range Sharding
   
Take table t3 as an example:
(1) very many rows, e.g. 20 million (rows 1-10M on sh1, rows 10M+1-20M on sh2)
(2) accessed very frequently, with scattered user access
# 1. edit schema.xml and declare the sharding rule for the table
cp schema.xml schema.xml.1  
vim schema.xml
Add:
<table name="t3" dataNode="sh1,sh2" rule="auto-sharding-long" />
# 2. define and reference the sharding rule
vim rule.xml
<tableRule name="auto-sharding-long">
                <rule>
                        <columns>id</columns>
                        <algorithm>rang-long</algorithm>
                </rule>
</tableRule>

<function name="rang-long"
    class="io.mycat.route.function.AutoPartitionByLong">
    <property name="mapFile">autopartition-long.txt</property>
</function>
  
# 3. define the ranges
  
vim autopartition-long.txt
0-10=0
10-20=1
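With the ranges above (0-10 on node 0, 10-20 on node 1), the rang-long lookup behaves roughly like this sketch (illustrative only; in the real mapFile the first matching range wins):

```shell
# Sketch of the rang-long lookup for autopartition-long.txt (0-10=0, 10-20=1);
# a value outside every range is rejected.
route_range() {
    id=$1
    if [ "$id" -ge 0 ] && [ "$id" -le 10 ]; then
        echo 0                               # dataNode sh1
    elif [ "$id" -gt 10 ] && [ "$id" -le 20 ]; then
        echo 1                               # dataNode sh2
    else
        echo "id $id matches no range" >&2
        return 1
    fi
}
route_range 4     # prints 0 -> sh1
route_range 14    # prints 1 -> sh2
```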
# 4. create the test tables
mysql -S /data/3307/mysql.sock -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock  -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"
# 5. test
# restart mycat
mycat restart
mysql -uroot -p123456 -h 10.0.0.51 -P 8066
insert into t3(id,name) values(1,'a');
insert into t3(id,name) values(2,'b');
insert into t3(id,name) values(3,'c');
insert into t3(id,name) values(4,'d');
insert into t3(id,name) values(11,'aa');
insert into t3(id,name) values(12,'bb');
insert into t3(id,name) values(13,'cc');
insert into t3(id,name) values(14,'dd');
[root@db01 conf]# mysql -S /data/3308/mysql.sock  -e "select * from taobao.t3"
[root@db01 conf]# mysql -S /data/3307/mysql.sock  -e "select * from taobao.t3"
    
    
    Modulus Sharding
   
Modulus sharding: the sharding key (one column) is taken modulo the number of nodes; the remainder decides which node the row is written to.
# 1. edit the config file
vim schema.xml
<table name="t4" dataNode="sh1,sh2" rule="mod-long" />
# 2. check the rule definition in rule.xml (count = number of nodes)
vim rule.xml
<property name="count">2</property>
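The mod-long rule is just the sharding key modulo count; with count=2 the routing can be sketched as (illustrative only):

```shell
# Sketch of mod-long with count=2: remainder 0 -> first dataNode (sh1),
# remainder 1 -> second dataNode (sh2).
route_mod() { echo $(( $1 % 2 )); }
route_mod 4    # prints 0 -> sh1
route_mod 3    # prints 1 -> sh2
```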
# 3. prepare the test environment
     
Create the test tables:
mysql -S /data/3307/mysql.sock -e "use taobao;create table t4 (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table t4 (id int not null primary key auto_increment,name varchar(20) not null);"
restart mycat
mycat restart 
# 4. test:
mysql -uroot -p123456 -h10.0.0.51 -P8066
use TESTDB
insert into t4(id,name) values(1,'a');
insert into t4(id,name) values(2,'b');
insert into t4(id,name) values(3,'c');
insert into t4(id,name) values(4,'d');
insert into t4(id,name) values(6,'x'),(8,'y'),(10,'z');
Query each backend node to check the data distribution:
	mysql -S /data/3308/mysql.sock  -e "select * from taobao.t4"
	mysql -S /data/3307/mysql.sock  -e "select * from taobao.t4"
    
    
    Enumeration Sharding
   
Table t5:
id name telnum
1   bj   1212
2   sh   22222
3   bj   3333
4   sh   44444
5   bj   5555
sharding-by-intfile
# 1. design the sharding rule
vim schema.xml
<table name="t5" dataNode="sh1,sh2" rule="sharding-by-intfile" />
# 2. apply the sharding rule
vim rule.xml
        <tableRule name="sharding-by-intfile">
                <rule>
                        <columns>name</columns>
                        <algorithm>hash-int</algorithm>
                </rule>
        </tableRule>
<function name="hash-int" class="org.opencloudb.route.function.PartitionByFileMap"> 
<property name="mapFile">partition-hash-int.txt</property> 
  <property name="type">1</property>
</function> 
Edit partition-hash-int.txt:
bj=0 
sh=1
DEFAULT_NODE=1 
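The hash-int rule is a dictionary lookup on the column value; with the mapping above, unmatched values (such as 'tj' inserted later) fall back to DEFAULT_NODE=1. A sketch of that lookup (illustrative only):

```shell
# Sketch of the hash-int (enumeration) lookup for partition-hash-int.txt:
# bj=0, sh=1, anything else falls back to DEFAULT_NODE=1.
route_enum() {
    case "$1" in
        bj) echo 0 ;;   # dataNode sh1
        sh) echo 1 ;;   # dataNode sh2
        *)  echo 1 ;;   # DEFAULT_NODE
    esac
}
route_enum bj    # prints 0
route_enum tj    # prints 1 (default node)
```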
# 3. prepare the test environment
mysql -S /data/3307/mysql.sock -e "use taobao;create table t5 (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table t5 (id int not null primary key auto_increment,name varchar(20) not null);"
restart mycat
mycat restart 
# 4. insert test data:
mysql -uroot -p123456 -h10.0.0.51 -P8066
use TESTDB
insert into t5(id,name) values(1,'bj');
insert into t5(id,name) values(2,'sh');
insert into t5(id,name) values(3,'bj');
insert into t5(id,name) values(4,'sh');
insert into t5(id,name) values(5,'tj');
mysql -S /data/3308/mysql.sock  -e "select * from taobao.t5"
mysql -S /data/3307/mysql.sock  -e "select * from taobao.t5"
    
    
    Mycat Global Tables
   
Example: a business table a joined with a small reference table b:
a
id   name   age
1    zs     18
2    ls     19
b
id    addr  aid
1001  bj    1
1002  sh    2
Typical use case: tables that behave like data dictionaries, e.g. configuration data or small, rarely changing reference tables. They are not large, yet most business queries touch them. Such tables suit Mycat global tables: the data is not sharded, and a full copy is kept on every shard. For joins, Mycat prefers joining a business table with the global-table copy on the same shard, avoiding cross-shard joins. On insert, Mycat sends the statement to every shard that holds the global table; on read, it picks one node at random.
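That routing behaviour can be sketched as follows (illustrative only): writes fan out to every node holding the global table, reads are served by one node:

```shell
# Sketch of global-table routing: every shard keeps a full copy, so a write is
# replayed on all nodes and a read may be served by any one of them.
nodes="sh1 sh2"
write_global() { for n in $nodes; do echo "execute on $n: $1"; done; }
read_global()  { set -- $nodes; shift $((RANDOM % $#)); echo "read from $1"; }
write_global "insert into t_area values(1,'a')"   # one line per node
read_global                                       # one randomly chosen node
```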
# 1. declare the global table
vim schema.xml 
<table name="t_area" primaryKey="id"  type="global" dataNode="sh1,sh2" /> 
# 2. prepare the backend tables
mysql -S /data/3307/mysql.sock -e "use taobao;create table t_area (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table t_area (id int not null primary key auto_increment,name varchar(20) not null);"
restart mycat
mycat restart 
# 3. test:
mysql -uroot -p123456 -h10.0.0.51 -P8066
	use TESTDB
	insert into t_area(id,name) values(1,'a');
	insert into t_area(id,name) values(2,'b');
	insert into t_area(id,name) values(3,'c');
	insert into t_area(id,name) values(4,'d');
mysql -S /data/3308/mysql.sock  -e "select * from taobao.t_area"
mysql -S /data/3307/mysql.sock  -e "select * from taobao.t_area"
    
    
    E-R Sharding
   
For joins like "a join b on a.xx = b.yy", E-R mode can be used to avoid cross-shard joins:
<table name="a" dataNode="sh1,sh2" rule="mod-long"> 
       <childTable name="b" joinKey="aid" parentKey="id" /> 
</table> 
a
id  name             
1   a    
2   b    
3   c    
4   d   
b 
id   addr  aid
1001  bj   1
1002  sh   2 
1003  tj   3
1004  wh   4 
select * from a join b on a.id = b.aid where a.name='d';
Example:
1. edit the config file
vim schema.xml
<table name="a" dataNode="sh1,sh2" rule="mod-long_oldguo"> 
       <childTable name="b" joinKey="aid" parentKey="id" /> 
</table> 
2. define the mod-long sharding rule in rule.xml:
vim rule.xml
        <tableRule name="mod-long_oldguo">
                <rule>
                        <columns>id</columns>
                        <algorithm>mod-long_oldguo</algorithm>
                </rule>
        </tableRule>
        <function name="mod-long_oldguo" class="io.mycat.route.function.PartitionByMod">
                <!-- how many data nodes -->
                <property name="count">2</property>
        </function>
3. create the test tables
mysql -S /data/3307/mysql.sock -e "use taobao;create table a (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3307/mysql.sock -e "use taobao;create table b (id int not null primary key auto_increment,addr varchar(20) not null ,aid int );"
mysql -S /data/3308/mysql.sock -e "use taobao;create table a (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table b (id int not null primary key auto_increment,addr varchar(20) not null ,aid int );"
4. restart mycat and test
mycat restart
mysql -uroot -p123456 -h10.0.0.51 -P8066
use TESTDB
insert into a(id,name) values(1,'a');
insert into a(id,name) values(2,'b');
insert into a(id,name) values(3,'c');
insert into a(id,name) values(4,'d');
insert into a(id,name) values(5,'e');
insert into b(id,addr,aid) values(1001,'bj',1);
insert into b(id,addr,aid) values(1002,'sj',3);
insert into b(id,addr,aid) values(1003,'sd',4);
insert into b(id,addr,aid) values(1004,'we',2);
insert into b(id,addr,aid) values(1005,'er',5);
========
Data distribution on the backend nodes:
mysql -S /data/3307/mysql.sock  -e "select * from taobao.a"
mysql -S /data/3307/mysql.sock  -e "select * from taobao.b"
mysql -S /data/3308/mysql.sock  -e "select * from taobao.a"
mysql -S /data/3308/mysql.sock  -e "select * from taobao.b"
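The distribution seen above follows from one rule: a child row of b is routed by its joinKey (aid) through the parent table's rule, so it always lands on the node that holds its parent row in a. A sketch, assuming mod-long with count=2 as configured (illustrative only):

```shell
# Sketch of E-R routing: parent table a uses mod-long (count=2); child rows of
# b are placed by joinKey aid, so they share a node with their parent.
parent_node() { echo $(( $1 % 2 )); }   # a.id % 2
child_node()  { parent_node "$1"; }     # b.aid routed through the parent rule
parent_node 3    # prints 1 -> second dataNode
child_node 3     # prints 1 -> same node, so the join stays local
```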
    
    
    Administrative Operations
   
[root@db01 conf]# mysql -uroot -p123456 -h10.0.0.51 -P9066
Show help:
show @@help;
Show Mycat server status:
show @@server;
Show shard (dataNode) information:
show @@datanode;
Show data sources:
show @@datasource;
Reload configuration:
reload @@config          : reloads schema.xml
reload @@config_all      : reloads all configuration files
    
    
    Modifying Logical Databases
   
1. Rename the logical database
# main config file
schema.xml 
<schema name="oldboy" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
</schema>
# mycat service config
server.xml
 <user name="root" defaultAccount="true">
                <property name="password">123456</property>
                <property name="schemas">oldboy</property>
                <property name="defaultSchema">oldboy</property>
                <!-- schema tried before raising "No MyCAT Database selected"; if unset it is null and the error is raised -->
                <!-- table-level DML privilege settings -->
                <!--            
                <privileges check="false">
                        <schema name="TESTDB" dml="0110" >
                                <table name="tb01" dml="0000"></table>
                                <table name="tb02" dml="1111"></table>
                        </schema>
                </privileges>           
                 -->
        </user>
        <user name="user">
                <property name="password">user</property>
                <property name="schemas">oldboy</property>
                <property name="readOnly">true</property>
                <property name="defaultSchema">oldboy</property>
        </user>
reload @@config_all      : reloads all configuration files
2. Add another logical database
schema.xml 
<schema name="oldguo" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
</schema>
server.xml
 <property name="schemas">oldboy,oldguo</property>
mycat restart
 
