
MapReduce: tokenizing and reading a file

Posted: 2019-07-25 00:53:34


MapReduce 

Example 1: tokenizing and reading a file

1.1 First, add the required dependencies

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.8.2</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.3</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.7.3</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.7.3</version>
</dependency>

1.2 Write the Mapper

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TextMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private final static LongWritable one = new LongWritable(1);
    private Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // StringTokenizer splits value on the default delimiters:
        // space, tab ('\t'), newline ('\n'), carriage return ('\r'), and form feed ('\f')
        StringTokenizer st = new StringTokenizer(value.toString());
        while (st.hasMoreTokens()) {      // true while tokens remain
            word.set(st.nextToken());     // the next token, up to the following delimiter
            context.write(word, one);     // emit (word, 1)
        }
    }
}
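As a quick standalone illustration (not part of the job itself) of how those default StringTokenizer delimiters behave, the helper below collects the tokens of a line the same way the map method walks them:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class TokenizerDemo {
    // Collect the tokens of a line using the default delimiters
    // (space, '\t', '\n', '\r', '\f'), mirroring the mapper's loop
    public static List<String> tokens(String line) {
        List<String> out = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(line);
        while (st.hasMoreTokens()) {
            out.add(st.nextToken());
        }
        return out;
    }

    public static void main(String[] args) {
        // "hello world\thello" yields the tokens hello, world, hello
        System.out.println(tokens("hello world\thello"));
    }
}
```

Note that consecutive delimiters are collapsed, so runs of spaces or tabs never produce empty tokens.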

1.3 Write the Reducer

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TextReduce extends Reducer<Text, LongWritable, Text, LongWritable> {
    private final LongWritable result = new LongWritable();

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        long sum = 0;                     // use long to match LongWritable
        for (LongWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);       // emit (word, total count)
    }
}
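To see what the framework actually hands this reduce method, here is a hypothetical in-memory sketch: the shuffle groups the mapper's (word, 1) pairs by key, and the reduce step then sums each group, exactly as the loop above does.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ReduceDemo {
    // Simulate shuffle + reduce: group (word, 1) pairs by word, then sum each group
    public static Map<String, Long> wordCount(List<String> words) {
        // Shuffle phase: each word maps to its list of emitted 1s
        Map<String, List<Long>> grouped = new LinkedHashMap<>();
        for (String w : words) {
            grouped.computeIfAbsent(w, k -> new ArrayList<>()).add(1L);
        }
        // Reduce phase: the same summation the reducer performs per key
        Map<String, Long> counts = new LinkedHashMap<>();
        for (Map.Entry<String, List<Long>> e : grouped.entrySet()) {
            long sum = 0;
            for (long v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }
}
```

Because this summation is associative, the same class can also serve as a combiner to pre-aggregate counts on the map side.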

1.4 Write the job driver

public class TextJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(TextJob.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(TextMapper.class);
//      job.setCombinerClass(TextReduce.class);  // optional: the reducer can double as a combiner
        job.setReducerClass(TextReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

1.5 Running it against HDFS:

[root@head42 ~]# hadoop jar mapreduce-1.0-SNAPSHOT.jar com.njbd.normal.text1.TextJob /text /output14

Notes: mapreduce-1.0-SNAPSHOT.jar is the packaged jar;

           com.njbd.normal.text1.TextJob is the fully qualified job class;

           /text is the input path and /output14 is the output directory (it must not exist yet, or the job fails).

 


Original post: https://www.cnblogs.com/tudousiya/p/11241441.html
