[grid@hdnode1 ~]$ hadoop jar /usr/local/hadoop-0.20.2/hadoop-0.20.2-examples.jar wordcount jss jsscount
13/02/17 20:00:48 INFO input.FileInputFormat: Total input paths to process : 2
13/02/17 20:00:48 INFO mapred.JobClient: Running job: job_201302041636_0001
13/02/17 20:00:49 INFO mapred.JobClient: map 0% reduce 0%
13/02/17 20:00:58 INFO mapred.JobClient: map 50% reduce 0%
13/02/17 20:01:01 INFO mapred.JobClient: map 100% reduce 0%
13/02/17 20:01:10 INFO mapred.JobClient: map 100% reduce 100%
13/02/17 20:01:12 INFO mapred.JobClient: Job complete: job_201302041636_0001
13/02/17 20:01:13 INFO mapred.JobClient: Counters: 17
13/02/17 20:01:13 INFO mapred.JobClient:   Job Counters
13/02/17 20:01:13 INFO mapred.JobClient:     Launched reduce tasks=1
13/02/17 20:01:13 INFO mapred.JobClient:     Launched map tasks=2
13/02/17 20:01:13 INFO mapred.JobClient:     Data-local map tasks=2
13/02/17 20:01:13 INFO mapred.JobClient:   FileSystemCounters
13/02/17 20:01:13 INFO mapred.JobClient:     FILE_BYTES_READ=84
13/02/17 20:01:13 INFO mapred.JobClient:     HDFS_BYTES_READ=42
13/02/17 20:01:13 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=238
13/02/17 20:01:13 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=35
13/02/17 20:01:13 INFO mapred.JobClient:   Map-Reduce Framework
13/02/17 20:01:13 INFO mapred.JobClient:     Reduce input groups=4
13/02/17 20:01:13 INFO mapred.JobClient:     Combine output records=6
13/02/17 20:01:13 INFO mapred.JobClient:     Map input records=2
13/02/17 20:01:13 INFO mapred.JobClient:     Reduce shuffle bytes=90
13/02/17 20:01:13 INFO mapred.JobClient:     Reduce output records=4
13/02/17 20:01:13 INFO mapred.JobClient:     Spilled Records=12
13/02/17 20:01:13 INFO mapred.JobClient:     Map output bytes=66
13/02/17 20:01:13 INFO mapred.JobClient:     Combine input records=6
13/02/17 20:01:13 INFO mapred.JobClient:     Map output records=6
13/02/17 20:01:13 INFO mapred.JobClient:     Reduce input records=6
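The counters line up with what the bundled wordcount example does internally: the mapper emits a (word, 1) pair for every token (Map output records=6), a combiner pre-aggregates those pairs on the map side (Combine input records=6), and the reducer sums the counts per distinct word (Reduce output records=4). The following is a minimal sketch in the spirit of that example, written against the Hadoop 0.20 mapreduce API; class and field names are illustrative and not necessarily identical to the code shipped in hadoop-0.20.2-examples.jar.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every token in each input line.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sum the counts for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");      // Job(conf, name) constructor from the 0.20-era API
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // combiner accounts for the Combine counters above
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory, e.g. jss
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory, e.g. jsscount
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Once the job finishes, the aggregated counts sit in the jsscount output directory on HDFS; with a single reduce task they can typically be viewed with hadoop fs -cat jsscount/part-r-00000 (the exact part-file name can differ depending on the API used).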