Directory layout of the learn database in MySQL:
First, look at the definition of the table sales:
show create table sales \G
*************************** 1. row ***************************
Table: sales
Create Table: CREATE TABLE `sales` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`amount` double NOT NULL,
`order_day` datetime NOT NULL,
PRIMARY KEY (`id`,`order_day`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (YEAR(order_day))
(PARTITION p_2010 VALUES LESS THAN (2010) ENGINE = InnoDB,
PARTITION p_2011 VALUES LESS THAN (2011) ENGINE = InnoDB,
PARTITION p_2012 VALUES LESS THAN (2012) ENGINE = InnoDB,
PARTITION p_catchall VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)
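Note that order_day is part of the primary key because MySQL requires every unique key of a partitioned table to include all columns used in the partitioning expression. To see partition pruning in action, one can ask the optimizer which partitions a query would touch; a minimal sketch (the date value is made up; on MySQL 5.6 and earlier use EXPLAIN PARTITIONS, while from 5.7 on plain EXPLAIN already includes a partitions column):
explain partitions select * from sales where order_day = '2009-06-01 00:00:00';
-- the partitions column should list only p_2010, since YEAR('2009-06-01') = 2009 is less than 2010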
The definition of the table p_key:
show create table p_key \G
*************************** 1. row ***************************
Table: p_key
Create Table: CREATE TABLE `p_key` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`keyname` char(20) DEFAULT NULL,
`keyval` varchar(1000) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=12 DEFAULT CHARSET=utf8
/*!50100 PARTITION BY KEY (id)
PARTITIONS 4 */
1 row in set (0.01 sec)
With the MyISAM engine, a table normally corresponds to three files: .frm stores the table structure, .MYI stores the indexes, and .MYD stores the data. The partitioned table p_key, however, has one additional file, p_key.par, which records its partition definitions.
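The partition layout can also be read from the data dictionary instead of the file system; a minimal sketch, assuming the database is named learn as above:
select table_name, partition_name, partition_method, table_rows
from information_schema.partitions
where table_schema = 'learn' and table_name = 'p_key';
-- should list four KEY partitions (p0 .. p3) for p_key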
Next, create a RANGE-partitioned table, fuhui_log, to get a feel for querying a partitioned table:
DROP TABLE IF EXISTS fuhui_log;
CREATE TABLE fuhui_log (
object_id int(11),
title varchar(20) NOT NULL ,
content varchar(20) ,
time int(11),
primary key (object_id)
)
PARTITION BY range (object_id)
(
PARTITION p1 VALUES less than (5000),
PARTITION p2 VALUES less than (10000),
PARTITION p3 VALUES less than MAXVALUE
);
Define a stored procedure to insert about 20,000 rows into the database (the loop below actually inserts 19,999):
delimiter //
create procedure fun_fuhui_log()
begin
declare i int;
set i = 1;
while i < 20000 do
insert into fuhui_log(object_id,title,content,time) values (i,concat('title_',i),'test content',i);
set i = i+1;
end while;
end
//
Call the stored procedure to insert the data:
delimiter ;
call fun_fuhui_log();
Check how many rows were inserted:
select count(*) from fuhui_log;
The count comes back as 19999; timing: 1 row in set (0.01 sec).
select * from fuhui_log where object_id = 13588;
Timing: 0.00 sec
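The point lookup is fast partly because of the primary key, but also because the optimizer only needs to consider one partition. A hedged way to confirm which partition is read:
explain partitions select * from fuhui_log where object_id = 13588;
-- 13588 is >= 10000, so the partitions column should show only p3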
Following the same steps, create an ordinary non-partitioned table and modify the stored procedure to insert the same data into it:
DROP TABLE IF EXISTS fuhui_log2;
CREATE TABLE fuhui_log2 (
object_id int(11),
title varchar(20) NOT NULL ,
content varchar(20) ,
time int(11),
primary key (object_id)
);
The schema is too simple and the data volume too small to show any difference, so modify the stored procedure again and bump the inserts up to 80,000 rows:
while i < 80000 do
replace into fuhui_log2(object_id,title,content,time) values (i,concat('title_',i),'test content',i);
set i = i+1;
end while;
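The original only shows the modified loop body; here is a sketch of the complete second procedure, assuming it is named fun_fuhui_log2 (the name is not given in the original). Presumably an analogous loop was also re-run against fuhui_log, since the later queries look up object_id = 56770 in both tables.
delimiter //
create procedure fun_fuhui_log2()
begin
declare i int;
set i = 1;
while i < 80000 do
-- replace into avoids duplicate-key errors if the procedure is re-run
replace into fuhui_log2(object_id,title,content,time) values (i,concat('title_',i),'test content',i);
set i = i+1;
end while;
end
//
delimiter ;
call fun_fuhui_log2();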
select count(*) from fuhui_log2;
Result: 1 row in set (0.02 sec)
select count(*) from fuhui_log;
Result: 1 row in set (0.03 sec) (not playing by the expected logic)
This example is a pretty poor showing, so change the table structure and drop the primary keys:
alter table fuhui_log drop primary key;
alter table fuhui_log2 drop primary key;
The example is still unconvincing; the difference in execution efficiency is hard to see:
select * from fuhui_log where object_id = 56770 \G
Timing: 0.05 sec
select * from fuhui_log2 where object_id = 56770 \G
Timing: 0.06 sec
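The near-identical timings make more sense once the plans are compared: with the primary keys gone, both lookups degrade to scans, and partition pruning only spares fuhui_log the partitions that cannot contain 56770, which is still most of the data because p3 covers everything from 10000 up to MAXVALUE. A hedged comparison:
explain partitions select * from fuhui_log where object_id = 56770;
-- expected: type ALL, partitions column showing only p3 (roughly 70,000 of the ~80,000 rows)
explain select * from fuhui_log2 where object_id = 56770;
-- expected: type ALL, a full scan of all ~80,000 rows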
For the COUNT(*) queries, fuhui_log actually takes longer than fuhui_log2 every time; so much for the per-partition counting I was hoping would help.
Modify the partition layout and measure again:
alter table fuhui_log reorganize partition p3 into (
partition p3_1 values less than (30000),
partition p3_2 values less than (50000),
partition p3_3 values less than MAXVALUE);
Check the layout after repartitioning:
select table_schema, table_name, partition_name, PARTITION_METHOD
from information_schema.partitions where table_name='fuhui_log';
Then run the count again:
select count(*) from fuhui_log ;
Execution now takes 0.04 sec, about on par with the count time on fuhui_log2. However,
select * from fuhui_log where object_id = 56770 \G
now runs in 0.02 sec.
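The speedup is plausible: after the reorganization, object_id = 56770 falls into the much smaller p3_3 partition (50000 up to MAXVALUE, roughly 30,000 rows), so the scan covers far fewer rows than the old p3 did. One hedged way to verify which partition is now read:
explain partitions select * from fuhui_log where object_id = 56770;
-- the partitions column should now show only p3_3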
I have somehow been writing for quite a while already, so I will stop here for today.
Original post: http://www.cnblogs.com/lxjshuju/p/7354212.html