Introduction to the MyCAT Distributed Architecture
MyCAT is a database middleware: a server that implements the MySQL protocol. To front-end users it looks like a database proxy that can be accessed with the ordinary MySQL client tools and command line, while on the back end it talks to multiple MySQL servers over the native MySQL protocol, or to most mainstream database servers over JDBC. Its core feature is splitting databases and tables (sharding); combined with MySQL master/slave replication it can also provide read/write splitting.
The key verb for MyCAT is "intercept": it intercepts the SQL sent by the user and analyzes it (shard analysis, routing analysis, read/write-split analysis, cache analysis, and so on), forwards the SQL to the real back-end databases, post-processes the returned results as needed, and finally returns them to the user.
Types of splitting in MyCAT
- Schema splitting / splitting by business: a table with many columns can be split by column into several narrower tables, reducing coupling.
- Vertical splitting: tables belonging to different business domains are moved into different databases (split by database and table; each database holds its own tables).
- Horizontal splitting: a very large table is split by row according to some rule and the rows are stored across different databases (sharding, which allows higher read/write concurrency). A brief SQL sketch of both approaches follows this list.
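A minimal SQL sketch of the two ideas; the table and column names here are illustrative assumptions, not part of the deployment below:
-- Vertical splitting: move the wide profile columns of a user table into a separate table
-- (and, at the database level, keep user-related and order-related tables in different databases).
CREATE TABLE user_base    (id INT PRIMARY KEY, name VARCHAR(20));
CREATE TABLE user_profile (id INT PRIMARY KEY, address VARCHAR(200), bio TEXT);
-- Horizontal splitting: the same order table exists on every shard, and each row is routed
-- to one shard by a rule on the sharding key, e.g. id ranges (shard 1: id 0-10, shard 2: id 11-20).
CREATE TABLE order_t (id INT PRIMARY KEY, name VARCHAR(20));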
Typical MyCAT use cases
- Read/write splitting: simple to configure, with support for read/write splitting and master/slave switchover.
- Sharding: tables with more than 10 million rows can be sharded, with support for up to 100 billion rows in a single sharded table.
- Multi-tenant applications: one database per tenant; applications connect only to MyCAT, so multi-tenancy is achieved without changing the databases themselves.
- Reporting systems: MyCAT's table-splitting capability helps with large-scale report aggregation.
- A simple and effective option for real-time queries over massive data sets, e.g. 10 billion frequently queried rows that must return within 3 seconds, where besides primary-key lookups there are also range queries or queries on other attributes; MyCAT may be the simplest workable choice here.
- Database router: thanks to its connection-pool reuse across MySQL instances, MyCAT lets every application share all of a MySQL instance's connection pools to the greatest extent, greatly improving concurrent database access.
Building the Basic MyCAT Architecture
Environment preparation
Master/slave plan for the nodes:
- The 3307 instances on db01 and db02 are masters of each other; each 3309 instance is a slave of the local 3307 instance:
192.168.1.5:3307 ⇔ 192.168.1.6:3307
192.168.1.5:3309 → 192.168.1.5:3307
192.168.1.6:3309 → 192.168.1.6:3307
- The 3308 instances on db01 and db02 are masters of each other; each 3310 instance is a slave of the local 3308 instance:
192.168.1.5:3308 ⇔ 192.168.1.6:3308
192.168.1.5:3310 → 192.168.1.5:3308
192.168.1.6:3310 → 192.168.1.6:3308
Shard plan for the nodes: shard1 is served by the 3307/3309 instances, shard2 by the 3308/3310 instances (see the deployment below).
Deployment steps:
- Create the MySQL instances. Two virtual machines, db01 (192.168.1.5) and db02 (192.168.1.6), each run four MySQL instances: 3307, 3308, 3309 and 3310. Setup script (shown for db01; on db02 use the same script but with unique server-id values, e.g. 17-20, which is what the tests later in this document show):
pkill mysqld
rm -rf /data
mv /etc/my.cnf /etc/my.cnf.bak
mkdir /data/33{07..10}/data -p
chown -R mysql.mysql /data
mysqld --initialize-insecure --user=mysql --datadir=/data/3307/data --basedir=/app/mysql &> /dev/null && echo "3307 initialized complete."
mysqld --initialize-insecure --user=mysql --datadir=/data/3308/data --basedir=/app/mysql &> /dev/null && echo "3308 initialized complete."
mysqld --initialize-insecure --user=mysql --datadir=/data/3309/data --basedir=/app/mysql &> /dev/null && echo "3309 initialized complete."
mysqld --initialize-insecure --user=mysql --datadir=/data/3310/data --basedir=/app/mysql &> /dev/null && echo "3310 initialized complete."
cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
log-error=/data/3307/mysql.log
log_bin=/data/3307/mysql-bin
binlog_format=row
skip-name-resolve
server-id=7
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3308/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3308/data
port=3308
socket=/data/3308/mysql.sock
log-error=/data/3308/mysql.log
log_bin=/data/3308/mysql-bin
binlog_format=row
skip-name-resolve
server-id=8
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3309/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
port=3309
log-error=/data/3309/mysql.log
log_bin=/data/3309/mysql-bin
binlog_format=row
skip-name-resolve
server-id=9
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3310/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3310/data
socket=/data/3310/mysql.sock
port=3310
log-error=/data/3310/mysql.log
log_bin=/data/3310/mysql-bin
binlog_format=row
skip-name-resolve
server-id=10
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/etc/systemd/system/mysqld3307.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3307/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3308.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3308/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3309.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3309/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3310.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3310/my.cnf
LimitNOFILE = 5000
EOF
chown -R mysql.mysql /data
systemctl start mysqld3307
systemctl start mysqld3308
systemctl start mysqld3309
systemctl start mysqld3310
mysql -S /data/3307/mysql.sock -e "select @@server_id"
mysql -S /data/3308/mysql.sock -e "select @@server_id"
mysql -S /data/3309/mysql.sock -e "select @@server_id"
mysql -S /data/3310/mysql.sock -e "select @@server_id"
- Set up replication for shard1: 192.168.1.5:3307 ⇔ 192.168.1.6:3307
On db02 (create the replication and admin accounts; they replicate to db01 once replication is running):
mysql -S /data/3307/mysql.sock -e "grant replication slave on *.* to repl@'192.168.1.%' identified by '123';"
mysql -S /data/3307/mysql.sock -e "grant all on *.* to root@'192.168.1.%' identified by '123' with grant option;"
On db01 (replicate from db02):
mysql -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.6', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3307/mysql.sock -e "start slave;"
mysql -S /data/3307/mysql.sock -e "show slave status\G"
On db02 (replicate from db01):
mysql -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.5', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3307/mysql.sock -e "start slave;"
mysql -S /data/3307/mysql.sock -e "show slave status\G"
192.168.1.5:3309 → 192.168.1.5:3307 (run on db01):
mysql -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.5', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3309/mysql.sock -e "start slave;"
mysql -S /data/3309/mysql.sock -e "show slave status\G"
192.168.1.6:3309 → 192.168.1.6:3307 (run on db02):
mysql -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.6', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3309/mysql.sock -e "start slave;"
mysql -S /data/3309/mysql.sock -e "show slave status\G"
- Set up replication for shard2: 192.168.1.5:3308 ⇔ 192.168.1.6:3308
On db01 (create the replication and admin accounts; they replicate to db02 once replication is running):
mysql -S /data/3308/mysql.sock -e "grant replication slave on *.* to repl@'192.168.1.%' identified by '123';"
mysql -S /data/3308/mysql.sock -e "grant all on *.* to root@'192.168.1.%' identified by '123' with grant option;"
On db02 (replicate from db01):
mysql -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.5', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3308/mysql.sock -e "start slave;"
mysql -S /data/3308/mysql.sock -e "show slave status\G"
On db01 (replicate from db02):
mysql -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.6', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3308/mysql.sock -e "start slave;"
mysql -S /data/3308/mysql.sock -e "show slave status\G"
192.168.1.5:3310 → 192.168.1.5:3308 (run on db01):
mysql -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.5', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3310/mysql.sock -e "start slave;"
mysql -S /data/3310/mysql.sock -e "show slave status\G"
192.168.1.6:3310 → 192.168.1.6:3308 (run on db02):
mysql -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.1.6', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3310/mysql.sock -e "start slave;"
mysql -S /data/3310/mysql.sock -e "show slave status\G"
- Check replication status on both hosts; every channel should show both threads running:
[root@db01 ~]# mysql -S /data/3307/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[root@db01 ~]# mysql -S /data/3308/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[root@db01 ~]# mysql -S /data/3309/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[root@db01 ~]# mysql -S /data/3310/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[root@db02 ~]# mysql -S /data/3307/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[root@db02 ~]# mysql -S /data/3308/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[root@db02 ~]# mysql -S /data/3309/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[root@db02 ~]# mysql -S /data/3310/mysql.sock -e "show slave status\G" | grep Running:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
- Import the world test database on the shard masters; it replicates to the other instances (a non-interactive alternative is sketched below):
[root@db01 ~]# mysql -S /data/3307/mysql.sock
mysql> source /root/world.sql
[root@db01 ~]# mysql -S /data/3308/mysql.sock
mysql> source /root/world.sql
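Equivalently, the same dump file can be loaded non-interactively:
mysql -S /data/3307/mysql.sock < /root/world.sql
mysql -S /data/3308/mysql.sock < /root/world.sql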
Downloading and installing MyCAT
- Install a Java runtime; MyCAT is a Java application.
- Download the MyCAT release tarball and extract it so that the software lives under /application/mycat.
- The unpacked MyCAT directory contains:
bin catlet conf lib logs version.txt
- Start MyCAT and check that it is listening. Port 8066 is the service port and 9066 the management port (an example of connecting to both follows below):
[root@db01 ~]# export PATH=/application/mycat/bin:$PATH
[root@db01 ~]# mycat start
Starting Mycat-server...
[root@db01 ~]# netstat -lntp | grep 8066
tcp6 0 0 :::8066 :::* LISTEN 6719/java
[root@db01 ~]# netstat -lntp | grep 9066
tcp6 0 0 :::9066 :::* LISTEN 6719/java
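To connect, point a normal MySQL client at those ports. A minimal sketch, assuming the account shipped in the default server.xml (user root, password 123456, schema TESTDB); adjust to whatever server.xml actually defines:
mysql -uroot -p123456 -h127.0.0.1 -P8066    # service port: run SQL against the logical schema TESTDB
mysql -uroot -p123456 -h127.0.0.1 -P9066    # management port: admin commands such as show @@help;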
- MyCAT files of interest. Log files, under /application/mycat/logs/:
mycat.log
mycat.pid
switch.log
wrapper.log
Configuration files, under /application/mycat/conf/:
schema.xml
server.xml
rule.xml
- Edit the main configuration file:
[root@db01 ~]# cd /application/mycat/conf
[root@db01 conf]# vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"> </schema>
<dataNode name="dn1" dataHost="localhost1" database= "wordpress" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1">
<heartbeat>select user()</heartbeat>
<writeHost host="db01" url="192.168.1.5:3307" user="root" password="123">
<readHost host="db02" url="192.168.1.5:3309" user="root" password="123" />
</writeHost>
</dataHost>
</mycat:schema>
Configuration hierarchy: schema (TESTDB) → dataNode (dn1) → dataHost (one writeHost and one readHost). The account that is allowed to use TESTDB is defined separately in server.xml, as sketched below.
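For reference, the user block in the default server.xml has this shape (the values in an actual installation may differ):
<user name="root">
        <property name="password">123456</property>
        <property name="schemas">TESTDB</property>
</user>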
Basic MyCAT Features
Configuring read/write splitting
- Edit schema.xml (change the dataNode's database attribute and the writeHost/readHost entries):
[root@db01 conf]# vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
</schema>
<dataNode name="dn1" dataHost="localhost1" database= "world" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1">
<heartbeat>select user()</heartbeat>
<writeHost host="db01" url="192.168.1.5:3307" user="root" password="123">
<readHost host="db02" url="192.168.1.5:3309" user="root" password="123" />
</writeHost>
</dataHost>
</mycat:schema>
- Restart MyCAT:
[root@db01 conf]# mycat restart
- Test read/write splitting. Connect to MyCAT on the service port (8066) and compare where plain reads and transactional reads land:
mysql> use TESTDB;
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
| 9 |
+-------------+
1 row in set (0.02 sec)
mysql> begin ;select @@server_id; commit;
Query OK, 0 rows affected (0.01 sec)
+-------------+
| @@server_id |
+-------------+
| 7 |
+-------------+
1 row in set (0.01 sec)
Query OK, 0 rows affected (0.01 sec)
Note: the setup above provides read/write splitting with one master and one slave: writes land on the master and reads on the slave. If the master goes down, the slave can no longer serve reads either, because the readHost is nested inside the writeHost in this configuration.
Configuring high availability
- By default MyCAT sends writes to the first writeHost; the second writeHost is a standby that only serves reads, so in this architecture three nodes serve reads.
- When the active write node goes down, the readHost behind it stops serving too; the standby writeHost takes over writes and its readHost keeps serving reads.
- No switchover action is needed when the write node fails, because the standby writeHost has been serving all along and its data is consistent.
Deployment steps:
- Edit schema.xml:
[root@db01 conf]# vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
</schema>
<dataNode name="dn1" dataHost="localhost1" database= "world" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1">
<heartbeat>select user()</heartbeat>
<writeHost host="db01" url="192.168.1.5:3307" user="root" password="123"> <readHost host="db02" url="192.168.1.5:3309" user="root" password="123" />
</writeHost>
<writeHost host="db03" url="192.168.1.6:3307" user="root" password="123">
<readHost host="db04" url="192.168.1.6:3309" user="root" password="123" />
</writeHost>
</dataHost>
</mycat:schema>
- Restart MyCAT:
[root@db01 conf]# mycat restart
- Test high availability. Before the write node goes down:
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
| 17 |
+-------------+
1 row in set (0.11 sec)
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
| 19 |
+-------------+
1 row in set (0.10 sec)
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
| 9 |
+-------------+
1 row in set (0.20 sec)
mysql> begin; select @@server_id; commit;
Query OK, 0 rows affected (0.01 sec)
+-------------+
| @@server_id |
+-------------+
| 7 |
+-------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
After the write node (the 3307 instance on db01) goes down:
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
| 19 |
+-------------+
1 row in set (0.08 sec)
mysql> begin; select @@server_id; commit;
Query OK, 0 rows affected (0.01 sec)
+-------------+
| @@server_id |
+-------------+
| 17 |
+-------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
MyCAT dataHost attributes
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1">
- balance: read load-balancing policy. Three values are used:
1. balance="0": read/write splitting is disabled; all reads go to the currently available writeHost.
2. balance="1": the default configuration. All readHosts and the standby writeHost take part in load balancing of SELECT statements. In a dual-master/dual-slave layout (M1->S1, M2->S2, with M1 and M2 masters of each other), M2, S1 and S2 all serve SELECTs under normal conditions.
3. balance="2": all reads are spread randomly across the writeHosts and readHosts (suitable when write pressure is low).
- writeType: write policy. Two values are used:
1. writeType="0": the default configuration. All writes go to the first configured writeHost; if it fails, writes switch to the surviving second writeHost, and the switched-to host stays the write host even after a restart. The switch is recorded in dnindex.properties.
2. writeType="1": writes are sent randomly to the configured writeHosts; not recommended.
- switchType: switchover policy.
1. switchType="-1": no automatic switchover.
2. switchType="1": the default; automatic switchover.
3. switchType="2": switchover is decided from MySQL replication status; the heartbeat statement is show slave status.
- maxCon: maximum number of concurrent back-end connections.
- minCon: number of connections MyCAT opens to each back-end node at startup (setting it too high puts pressure on memory).
- tempReadHostAvailable: with one writeHost and one readHost, enabling this lets reads continue temporarily when the write node is down; with two writeHosts and two readHosts it is unnecessary.
- heartbeat (select user()): the statement used to probe whether a back-end database is down.
A configuration sketch combining these attributes follows this list.
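A hedged sketch of a dataHost combining these attributes; the hosts and URLs reuse the ones from this document, and the attribute values are illustrative rather than recommended:
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1" writeType="0"
          dbType="mysql" dbDriver="native" switchType="2" tempReadHostAvailable="1">
        <heartbeat>show slave status</heartbeat>
        <writeHost host="db01" url="192.168.1.5:3307" user="root" password="123">
                <readHost host="db02" url="192.168.1.5:3309" user="root" password="123" />
        </writeHost>
</dataHost>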
Core MyCAT Features
Vertical table splitting
Requirement: put different tables of one schema on different back-end nodes, e.g. the city and country tables of the world database; the demo below illustrates the same idea with the user and order_t tables of a taobao database.
- Edit schema.xml:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
<table name="user" dataNode="dn1" />
<table name="order" dataNode="dn2" />
</schema>
<dataNode name="dn1" dataHost="localhost1" database= "taobao" />
<dataNode name="dn2" dataHost="localhost2" database= "taobao" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1">
<heartbeat>select user()</heartbeat>
<writeHost host="db01" url="192.168.1.5:3307" user="root" password="123">
<readHost host="db02" url="192.168.1.5:3309" user="root" password="123" />
</writeHost>
<writeHost host="db03" url="192.168.1.6:3307" user="root" password="123">
<readHost host="db04" url="192.168.1.6:3309" user="root" password="123" />
</writeHost>
</dataHost>
<dataHost name="localhost2" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1">
<heartbeat>select user()</heartbeat>
<writeHost host="db01" url="192.168.1.5:3308" user="root" password="123">
<readHost host="db02" url="192.168.1.5:3310" user="root" password="123" />
</writeHost>
<writeHost host="db03" url="192.168.1.6:3308" user="root" password="123">
<readHost host="db04" url="192.168.1.6:3310" user="root" password="123" />
</writeHost>
</dataHost>
</mycat:schema>
- Create the taobao database on both shard masters, then create the user table on shard1 (3307) and the order_t table on shard2 (3308), as sketched below.
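A minimal sketch of those commands; the database and table names come from the schema.xml above, while the column definitions are illustrative assumptions:
mysql -S /data/3307/mysql.sock -e "create database taobao charset utf8;"
mysql -S /data/3308/mysql.sock -e "create database taobao charset utf8;"
mysql -S /data/3307/mysql.sock -e "use taobao;create table user (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table order_t (id int not null primary key auto_increment,name varchar(20) not null);"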
- Restart MyCAT:
[root@db01 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
- Check the result through MyCAT. From the application's point of view both order_t and user live in the TESTDB schema, but physically they sit on different back-end nodes, which is exactly the vertical split (a direct check against the back ends is sketched after the output below). Connect to MyCAT (port 8066) and run: mysql> show databases;
+----------+
| DATABASE |
+----------+
| TESTDB |
+----------+
1 row in set (0.01 sec)
mysql> use TESTDB;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables ;
+------------------+
| Tables_in_taobao |
+------------------+
| order_t |
| user |
+------------------+
2 rows in set (0.14 sec)
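To confirm the physical placement, query the back-end masters directly (a sketch; expected results noted in the comments):
mysql -S /data/3307/mysql.sock -e "show tables from taobao;"   # expected: user
mysql -S /data/3308/mysql.sock -e "show tables from taobao;"   # expected: order_t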
Horizontal splitting (sharding) with MyCAT
When to shard a table:
- it has a very large number of rows
- it is accessed very frequently
Goals of sharding:
- distribute a large data set across nodes
- provide balanced routing of accesses
Sharding strategies:
- range sharding (range)
- modulo sharding (mod)
- hash sharding (hash)
- enumeration sharding (by enumerated values)
- time sharding (e.g. by month)
Optimizing joins across shards: global tables and ER sharding, covered at the end of this section.
Range sharding
- Edit schema.xml and add a table entry:
[root@db01 conf]# vim schema.xml
<table name="t1" dataNode="dn1,dn2" rule="auto-sharding-long" />
- Check rule.xml: the auto-sharding-long rule shards on the id column and takes its ranges from autopartition-long.txt:
[root@db01 conf]# cat rule.xml
<tableRule name="auto-sharding-long">
<rule>
<columns>id</columns>
<algorithm>rang-long</algorithm>
</rule>
</tableRule>
<function name="rang-long"
class="io.mycat.route.function.AutoPartitionByLong">
<property name="mapFile">autopartition-long.txt</property>
</function>
- View the default autopartition-long.txt, then edit it to match the test data:
[root@db01 conf]# cat autopartition-long.txt
0-500M=0
500M-1000M=1
1000M-1500M=2
[root@db01 conf]# vim autopartition-long.txt
0-10=0
11-20=1
- Create the test table on both shard masters:
mysql -S /data/3307/mysql.sock -e "use taobao;create table t1 (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table t1 (id int not null primary key auto_increment,name varchar(20) not null);"
- Restart MyCAT:
[root@db01 conf]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
- Test. Connect to MyCAT (port 8066) and insert rows on both sides of the range boundary:
mysql> insert into t1(id,name) values(1,'a'),(2,'b');
Query OK, 2 rows affected (2.22 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> insert into t1(id,name) values(11,'aa'),(12,'bb');
Query OK, 2 rows affected (0.13 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> select * from t1;
+----+------+
| id | name |
+----+------+
| 1 | a |
| 2 | b |
| 11 | aa |
| 12 | bb |
+----+------+
4 rows in set (0.20 sec)
Then query the 3307 and 3308 instances directly:
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "select * from taobao.t1;"
+----+------+
| id | name |
+----+------+
| 1 | a |
| 2 | b |
+----+------+
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "select * from taobao.t1;"
+----+------+
| id | name |
+----+------+
| 11 | aa |
| 12 | bb |
+----+------+
Modulo sharding
Modulo sharding: the sharding key (a single column) is taken modulo the number of data nodes, and the remainder decides which node the row is written to. With two nodes, rows where id % 2 = 0 go to the first dataNode (dn1) and rows where id % 2 = 1 go to the second (dn2), which matches the test output below.
- Edit schema.xml:
[root@db01 conf]# vim schema.xml
<table name="t2" dataNode="dn1,dn2" rule="mod-long" />
- Check rule.xml:
[root@db01 conf]# cat rule.xml
<tableRule name="mod-long">
<rule>
<columns>id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<!-- how many data nodes -->
<property name="count">2</property>
</function>
- Create the test table on both shard masters:
mysql -S /data/3307/mysql.sock -e "use taobao;create table t2 (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table t2 (id int not null primary key auto_increment,name varchar(20) not null);"
- Restart MyCAT:
[root@db01 conf]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
- Test. Connect to MyCAT (port 8066) and run:
mysql> insert into t2(id,name) values(1,'a'),(2,'b');
Query OK, 2 rows affected (2.22 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> insert into t2(id,name) values(3,'c'),(4,'d');
Query OK, 2 rows affected (0.13 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> select * from t2;
+----+------+
| id | name |
+----+------+
| 1 | a |
| 2 | b |
| 3 | c |
| 4 | d |
+----+------+
4 rows in set (0.20 sec)
Then query the 3307 and 3308 instances directly:
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "select * from taobao.t2;"
+----+------+
| id | name |
+----+------+
| 2 | b |
| 4 | d |
+----+------+
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "select * from taobao.t2;"
+----+------+
| id | name |
+----+------+
| 1 | a |
| 3 | c |
+----+------+
Enumeration sharding
- Edit schema.xml:
vim schema.xml
<table name="t3" dataNode="dn1,dn2" rule="sharding-by-intfile" />
- Check rule.xml: the sharding-by-intfile rule shards on the name column using the map in partition-hash-int.txt:
cat rule.xml
<tableRule name="sharding-by-intfile">
<rule>
<columns>name</columns>
<algorithm>hash-int</algorithm>
</rule>
</tableRule>
<function name="hash-int"
class="io.mycat.route.function.PartitionByFileMap">
<property name="mapFile">partition-hash-int.txt</property>
<property name="type">1</property>
</function>
- Edit partition-hash-int.txt so that each enumerated value maps to a data node index:
vim partition-hash-int.txt
beijing=0
shanghai=1
- Test: the flow is the same as for the previous rules; a sketch follows below.
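A hedged sketch of such a test; the t3 column definitions are illustrative assumptions, and name is the sharding column configured above:
mysql -S /data/3307/mysql.sock -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"
mycat restart
Then, through MyCAT (port 8066):
mysql> insert into t3(id,name) values(1,'beijing'),(2,'shanghai');
Rows with name='beijing' should land on dn1 (the 3307 shard) and rows with name='shanghai' on dn2 (the 3308 shard):
mysql -S /data/3307/mysql.sock -e "select * from taobao.t3;"
mysql -S /data/3308/mysql.sock -e "select * from taobao.t3;"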
Global tables in MyCAT
<table name="country" primaryKey="id" type="global" dataNode="dn1,dn2" />
Use case
- Some data is essentially a data dictionary: configuration tables, lookup data for common business settings, or small tables that rarely change yet are needed by most business scenarios. Such tables are a good fit for MyCAT global tables. A global table is not split; a full copy is kept on every shard. When a business table is joined with a global table, MyCAT prefers the copy of the global table within the same shard, avoiding cross-shard joins. On insert, MyCAT sends the data to the global table on every shard; on read, it picks one node at random.
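A sketch of putting this into practice: add the <table ... type="global"> line above to the TESTDB schema in schema.xml, then create the table on both shard masters (the column definitions are illustrative assumptions):
mysql -S /data/3307/mysql.sock -e "use taobao;create table country (id int not null primary key,name varchar(20) not null);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table country (id int not null primary key,name varchar(20) not null);"
mycat restart
Then, through MyCAT (port 8066):
mysql> insert into country(id,name) values(1,'china');
Both back ends should now hold the same row:
mysql -S /data/3307/mysql.sock -e "select * from taobao.country;"
mysql -S /data/3308/mysql.sock -e "select * from taobao.country;"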
ER sharding in MyCAT
To avoid cross-shard joins between a parent table and its detail (child) table, the E-R model can be used: child rows are stored on the same shard as the parent row they join to. In the snippet below, rows of B whose joinKey yy matches a parent A row's parentKey xx are placed on that A row's shard (a short sketch follows the snippet).
<table name="A" dataNode="dn1,dn2" rule="mod-long">
<childTable name="B" joinKey="yy" parentKey="xx" />
</table>
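A hedged sketch of the co-location, reusing the A/B names, joinKey and parentKey from the snippet above (column definitions are illustrative):
mysql -S /data/3307/mysql.sock -e "use taobao;create table A (id int not null primary key,xx int);create table B (id int not null primary key,yy int);"
mysql -S /data/3308/mysql.sock -e "use taobao;create table A (id int not null primary key,xx int);create table B (id int not null primary key,yy int);"
mycat restart
Then, through MyCAT (port 8066):
mysql> insert into A(id,xx) values(1,1);
mysql> insert into B(id,yy) values(10,1);
The B row (yy=1) is routed to the same shard as the A row whose xx=1 (A itself is sharded by mod-long on id), so a join such as select * from A join B on A.xx=B.yy never has to cross shards.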