hive> create table access_log(
> ip string,
> other string
> )
> row format delimited fields terminated by ' '
> stored as textfile;
OK
Time taken: 0.106 seconds
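The delimiter here is a single space, matching the space-separated layout of the access log. If you want to double-check the table definition before loading any data, an optional sanity check could be as simple as:

hive> describe access_log;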
hive> load data local inpath '/data/software/access_log.txt' overwrite into table access_log;
Copying data from file:/data/software/access_log.txt
Copying file: file:/data/software/access_log.txt
Loading data to table default.access_log
Moved to trash: hdfs://hdnode1:9000/user/hive/warehouse/access_log
OK
Time taken: 0.753 seconds
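Before computing any statistics, it can be worth eyeballing a few rows to confirm that the load worked and that the space delimiter split each line into ip and other as expected. Something like the following would do:

hive> select * from access_log limit 5;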
With SQL, this kind of statistic is easy: a simple GROUP BY does the job. However, given the relatively large volume of data involved this time, dumping the result set straight to the screen would be unwise, so here we save the results into a separate table instead. The SQL statement is as follows:
hive> create table access_result as select ip,count(1) ct from access_log group by ip;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304220923_0007, Tracking URL = http://hdnode1:50030/jobdetails.jsp?jobid=job_201304220923_0007
Kill Command = /usr/local/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=hdnode1:9001 -kill job_201304220923_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-05-02 17:02:12,037 Stage-1 map = 0%, reduce = 0%
2013-05-02 17:02:18,082 Stage-1 map = 100%, reduce = 0%
2013-05-02 17:02:27,128 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201304220923_0007
Moving data to: hdfs://hdnode1:9000/user/hive/warehouse/access_result
[Warning] could not update stats.
476 Rows loaded to hdfs://hdnode1:9000/tmp/hive-grid/hive_2013-05-02_17-02-05_557_8882025499109535399/-ext-10000
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 7118627 HDFS Write: 8051 SUCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 25.529 seconds
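According to the job summary above, 476 rows were written, i.e. the log contains 476 distinct IP addresses. If you want to verify the result table directly, a simple count should match that figure:

hive> select count(1) from access_result;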
hive> select * from access_result order by ct desc limit 10;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304220923_0009, Tracking URL = http://hdnode1:50030/jobdetails.jsp?jobid=job_201304220923_0009
Kill Command = /usr/local/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=hdnode1:9001 -kill job_201304220923_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-05-02 17:04:45,208 Stage-1 map = 0%, reduce = 0%
2013-05-02 17:04:48,220 Stage-1 map = 100%, reduce = 0%
2013-05-02 17:04:57,260 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201304220923_0009
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 8051 HDFS Write: 191 SUCESS
Total MapReduce CPU Time Spent: 0 msec
OK
218.20.24.203 4597
221.194.180.166 4576
119.146.220.12 1850
117.136.31.144 1647
121.28.95.48 1597
113.109.183.126 1596
182.48.112.2 870
120.84.24.200 773
61.144.125.162 750
27.115.124.75 470
Time taken: 20.608 seconds
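As a side note, the two steps could also be collapsed into a single statement, at the cost of re-running the aggregation on each execution; a query along these lines should produce the same top 10:

hive> select ip, count(1) ct from access_log group by ip order by ct desc limit 10;

Bear in mind that ORDER BY in Hive routes all data through a single reducer to obtain a total ordering. That is harmless on a small aggregated set like this one, but it can become a bottleneck when applied directly to a large raw table.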