
Differences Between HiveQL and SQL



Reposted from: http://www.aboutyun.com/thread-7327-1-1.html

1. Hive does not support implicit (comma-style) equi-joins

    In SQL, an inner join of two tables can be written as: select * from dual a, dual b where a.key = b.key;
    In Hive this must use explicit JOIN syntax: select * from dual a join dual b on a.key = b.key;
    rather than the traditional form: SELECT t1.a1 AS c1, t2.b1 AS c2 FROM t1, t2 WHERE t1.a2 = t2.b2;
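The same explicit JOIN ... ON form chains across more than two tables. A minimal sketch (not from the original post), assuming hypothetical tables a, b, and c that share join keys:

SELECT a.val, b.val, c.val
FROM a
JOIN b ON (a.key = b.key)
JOIN c ON (c.key = b.key);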
2. The semicolon character
    A semicolon marks the end of a statement in SQL, and in HiveQL as well, but HiveQL's parser is not as smart about semicolons that appear inside a statement. For example: select concat(key, concat(';', key)) from dual;
HiveQL fails to parse this, reporting: FAILED: Parse Error: line 0:-1 mismatched input '<EOF>' expecting ) in function specification. The workaround is to escape the semicolon with its octal ASCII code, so the statement becomes: select concat(key, concat('\073', key)) from dual;
3. IS [NOT] NULL
    In SQL, NULL represents a missing value. Be warned that in HiveQL, if a String-typed field holds an empty string (length 0), it is not NULL: applying IS NULL to it evaluates to false.
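A minimal sketch of the difference, assuming a hypothetical table t with a STRING column s; to treat empty strings as missing, test for them explicitly:

-- matches only rows where s is truly NULL
SELECT count(1) FROM t WHERE s IS NULL;
-- matches the empty-string rows that IS NULL misses
SELECT count(1) FROM t WHERE s = '';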
4. Hive does not support appending data to an existing table or partition;
it only supports overwriting a whole table or partition, for example:

INSERT OVERWRITE TABLE t1  
SELECT * FROM t2; 
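The overwrite can also be scoped to a single partition rather than the whole table. A sketch under the assumption (not part of the original example) that t1 is partitioned by a string column ds:

INSERT OVERWRITE TABLE t1 PARTITION (ds = '2015-07-16')
SELECT * FROM t2;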

5. Hive does not support INSERT INTO ... VALUES(), UPDATE, or DELETE
    This lets Hive avoid complex locking mechanisms for reading and writing data. (INSERT INTO would mean appending rows to an existing table or partition.)
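An append can still be simulated by overwriting the table with its old rows plus the new ones. A minimal sketch, assuming t1 and t2 share the same schema (note the extra scan of t1 this costs):

INSERT OVERWRITE TABLE t1
SELECT * FROM (
  SELECT * FROM t1
  UNION ALL
  SELECT * FROM t2
) u;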
6. Hive supports embedding MapReduce programs to handle complex logic

FROM (
  MAP doctext USING 'python wc_mapper.py' AS (word, cnt)
  FROM docs
  CLUSTER BY word
) a
REDUCE word, cnt USING 'python wc_reduce.py';

Here doctext is the input column, and word and cnt are the output columns of the map script. CLUSTER BY hashes the rows by word and feeds them to the reduce script as its input. The map and reduce scripts can also be used independently of each other, for example:

FROM (
  FROM session_table
  SELECT sessionid, tstamp, data
  DISTRIBUTE BY sessionid SORT BY tstamp
) a
REDUCE sessionid, tstamp, data USING 'session_reducer.sh';

DISTRIBUTE BY assigns rows to the reduce script (all rows with the same sessionid go to the same reducer), and SORT BY orders them within each reducer.
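As an aside, CLUSTER BY x is shorthand for DISTRIBUTE BY x SORT BY x on the same column, so the two shuffle forms above are closely related. A sketch with a hypothetical table words:

-- these two queries distribute and order rows identically
SELECT word, cnt FROM words CLUSTER BY word;
SELECT word, cnt FROM words DISTRIBUTE BY word SORT BY word;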

7. Hive can write transformed data into multiple tables in one statement, as well as into partitions, HDFS directories, and local directories
    This eliminates the overhead of scanning the input table multiple times. In the example below, t3 is scanned once but feeds three different outputs:

FROM t3

INSERT OVERWRITE TABLE t2
SELECT t3.c2, count(1)
WHERE t3.c1 <= 20
GROUP BY t3.c2

INSERT OVERWRITE DIRECTORY '/output_dir'
SELECT t3.c2, avg(t3.c1)
WHERE t3.c1 > 20 AND t3.c1 <= 30
GROUP BY t3.c2

INSERT OVERWRITE LOCAL DIRECTORY '/home/dir'
SELECT t3.c2, sum(t3.c1)
WHERE t3.c1 > 30
GROUP BY t3.c2;

A Practical Example

Step 1. Create a table
CREATE TABLE u_data (
  userid INT,
  movieid INT,
  rating INT,
  unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
Download the sample data file and unpack it:
wget http://www.grouplens.org/system/files/ml-data.tar__0.gz
tar xvzf ml-data.tar__0.gz

Step 2. Load the data into the table:
LOAD DATA LOCAL INPATH 'ml-data/u.data' OVERWRITE INTO TABLE u_data;

Step 3. Count the total number of rows:
SELECT COUNT(1) FROM u_data;

Step 4. Now do some more complex analysis:
Create a script, weekday_mapper.py, that will re-key each record by weekday (the script continues in the next step):
import sys
import datetime

for line in sys.stdin:
    line = line.strip()
    userid, movieid, rating, unixtime = line.split('\t')

Step 5. Generate the weekday for each record
The remainder of weekday_mapper.py, still inside the loop (isoweekday() returns 1 for Monday through 7 for Sunday):
    weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
    print '\t'.join([userid, movieid, rating, str(weekday)])

Step 6. Use the mapper script
-- create the new table, with fields split on the tab delimiter
CREATE TABLE u_data_new (
  userid INT,
  movieid INT,
  rating INT,
  weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
-- make the Python script available to Hive
ADD FILE weekday_mapper.py;

Step 7. Re-key the data by weekday, then summarize it:
INSERT OVERWRITE TABLE u_data_new
SELECT
TRANSFORM (userid, movieid, rating, unixtime)
USING 'python weekday_mapper.py'
AS (userid, movieid, rating, weekday)
FROM u_data;

SELECT weekday, COUNT(1)
FROM u_data_new
GROUP BY weekday;

Step 8. Process Apache weblog data
Parse each web log line with a regular expression, then load the captured fields into the table's columns:
ADD JAR ../build/contrib/hive_contrib.jar;

CREATE TABLE apachelog (
  host STRING,
  identity STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  referer STRING,
  agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
  "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
)
STORED AS TEXTFILE;
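Once log lines are loaded, the fields captured by the regular expression can be queried like ordinary columns. A minimal sketch (not part of the original walkthrough) counting requests per HTTP status code:

SELECT status, count(1) AS hits
FROM apachelog
GROUP BY status;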


Original post: http://www.cnblogs.com/sunfie/p/4650879.html
