1. Basic operations in Hive:
DDL, DML
2. Functions in Hive
User-Defined Functions: UDF (user-defined functions)
UDF: one row in, one row out, e.g. upper, lower, substring (one record goes in, one record comes out)
UDAF: User-Defined Aggregation Function: many rows in, one row out, e.g. count, max, min, sum ...
UDTF: User-Defined Table-Generating Function: one row in, many rows out (a minimal sketch of all three follows)
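A quick sketch of the three kinds, assuming Hive 0.13+ (which allows SELECT without FROM); some_table is a placeholder name, not from the original session:

select upper('hello');              -- UDF: one output value per input row
select count(1) from some_table;    -- UDAF: aggregates many rows into one
select explode(array('a','b','c')); -- UDTF: one input row becomes three output rows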
3. Examples
show functions lists the functions the system supports (see the commands below)
Function examples: split(), explode()
Exercise: use Hive to count word occurrences
explode turns an array into multiple rows
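To check what a built-in function does before using it, Hive can describe it; a minimal sketch (any function name can replace split):

show functions;
desc function split;
desc function extended split;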
[hadoop@hadoop000 data]$ vi hive-wc.txt
hello,world,welcome
hello,welcome
hive> create table hive_wc(sentence string);
OK
Time taken: 1.083 seconds
hive> load data local inpath '/home/hadoop/data/hive-wc.txt' into table hive_wc;
Loading data to table default.hive_wc
Table default.hive_wc stats: [numFiles=, totalSize=]
OK
Time taken: 1.539 seconds
hive> select * from hive_wc;
OK
hello,world,welcome
hello,welcome
Time taken: 0.536 seconds, Fetched: row(s)
hive> select split(sentence,",") from hive_wc;
OK
["hello","world","welcome"]
["hello","welcome"]
[""]
Time taken: 0.161 seconds, Fetched: row(s)
"hello"
"world"
"welcome"
"hello"
"welcome"
Word count in a single SQL statement:
hive> select word, count(1) as c
    > from (select explode(split(sentence,",")) as word from hive_wc) t
    > group by word ;
Query ID = hadoop_20180613094545_920c2e72--47eb-9a9c-5e5a30ebb1ae
Number of reduce tasks not specified. Estimated from input data size.
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1528851144815_0001, Tracking URL = http://hadoop000:8088/proxy/application_1528851144815_0001/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job -kill job_1528851144815_0001
(MapReduce progress lines trimmed; Cumulative CPU 4.31 sec)
Ended Job = job_1528851144815_0001
OK
hello
welcome
world
Time taken: 26.859 seconds, Fetched: row(s)
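The same word count is often written with LATERAL VIEW, which joins each source row to the rows its explode produces; a sketch against the hive_wc table above (the where clause drops the empty-string word coming from the blank line in the file):

select word, count(1) as c
from hive_wc
lateral view explode(split(sentence, ',')) w as word
where word <> ''
group by word;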
4. JSON-type data
File used: rating.json
Create a table rating_json, load the data, and look at the first ten rows:
hive> create table rating_json(json string);
OK
hive> load data local inpath '/home/hadoop/data/rating.json' into table rating_json;
Loading data to table default.rating_json
Table default.rating_json stats: [numFiles=, totalSize=]
OK
hive> select * from rating_json limit 10;
OK
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
{"movie":"","rate":"","time":"","userid":""}
Time taken: 0.195 seconds, Fetched: row(s)
To process the JSON data we use json_tuple, a UDTF introduced in Hive 0.7:
hive> select
> json_tuple(json,"movie","rate","time","userid") as (movie,rate,time,userid)
> from rating_json limit 10;
OK
Time taken: 0.189 seconds, Fetched: row(s)
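json_tuple extracts all requested fields in a single pass; the built-in UDF get_json_object does the same one field at a time via a JSONPath expression. A sketch of both, including materializing the parsed fields into a new table (rating_width is a hypothetical name, not from the original session):

-- one field at a time
select get_json_object(json, '$.movie') from rating_json limit 10;

-- parse once and store the result as a regular table
create table rating_width as
select movie, rate, time, userid
from rating_json
lateral view json_tuple(json, 'movie', 'rate', 'time', 'userid') t as movie, rate, time, userid;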
5. Converting time types:
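Hive ships built-in functions for moving between unix timestamps, date strings, and date parts; a minimal sketch with illustrative values (these queries are not from the original session):

select unix_timestamp('2018-06-14 20:25:00');             -- date string -> unix timestamp
select from_unixtime(1528975858, 'yyyy-MM-dd HH:mm:ss');  -- unix timestamp -> date string
select to_date('2018-06-14 20:25:00');                    -- full timestamp string -> '2018-06-14'
select year('2018-06-14'), month('2018-06-14'), day('2018-06-14');  -- date parts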
6. Per-group TopN with the row_number() window function:
[hadoop@hadoop000 data]$ more hive_row_number.txt
,,ruoze,M
,,jepson,M
,,wangwu,F
,,zhaoliu,F
,,tianqi,M
,,wangba,F
[hadoop@hadoop000 data]$
hive> create table hive_rownumber(id int,age int, name string, sex string)
> row format delimited fields terminated by ',';
OK
Time taken: 0.451 seconds
hive> load data local inpath '/home/hadoop/data/hive_row_number.txt' into table hive_rownumber;
Loading data to table hive3.hive_rownumber
Table hive3.hive_rownumber stats: [numFiles=, totalSize=]
OK
Time taken: 1.381 seconds
hive> select * from hive_rownumber ;
OK
ruoze M
jepson M
wangwu F
zhaoliu F
tianqi M
wangba F
Time taken: 0.455 seconds, Fetched: row(s)
Requirement: for each gender, return the two records with the greatest age --> TopN.
Analysis: order by sorts globally, so it cannot order rows within each group; for in-group ordering you need a window function (analytic function):
select id,age,name,sex
from
(select id,age,name,sex,
        row_number() over(partition by sex order by age desc) as rank
 from hive_rownumber) t
where rank<=2;
hive> select id,age,name,sex
> from
> (select id,age,name,sex,
> row_number() over(partition by sex order by age desc) as rank
> from hive_rownumber) t
> where rank<=2;
Query ID = hadoop_20180614202525_9829dc42-3c37--8b12-89c416589ebc
Number of reduce tasks not specified. Estimated from input data size.
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1528975858636_0001, Tracking URL = http://hadoop000:8088/proxy/application_1528975858636_0001/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job -kill job_1528975858636_0001
(MapReduce progress lines trimmed; Cumulative CPU 3.86 sec)
Ended Job = job_1528975858636_0001
OK
wangba F
wangwu F
tianqi M
jepson M
Time taken: 29.262 seconds, Fetched: row(s)
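row_number always numbers rows 1,2,3,... within each partition, while rank and dense_rank treat ties differently; a sketch against the same table (not from the original session):

select id, age, name, sex,
       row_number() over (partition by sex order by age desc) as rn,
       rank()       over (partition by sex order by age desc) as rk,
       dense_rank() over (partition by sex order by age desc) as dr
from hive_rownumber;

With two rows tied on age, rank leaves a gap (1,1,3) while dense_rank does not (1,1,2); row_number breaks the tie arbitrarily (1,2,3).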