[Hadoop Test Program] Writing a MapReduce Job to Test the Hadoop Environment
Views: 5,784
Published: 2019-06-18

This article is about 4,978 characters long and takes roughly 16 minutes to read.

  • We use the Hadoop environment built earlier; see:
《【Hadoop环境搭建】Centos6.8搭建hadoop伪分布模式》
  • The example program is the maximum-temperature example from Hadoop: The Definitive Guide, 3rd Edition.

Data preparation

The input data, sample.txt:

```
0067011990999991950051507004+68750+023550FM-12+038299999V0203301N00671220001CN9999999N9+00001+99999999999
0043011990999991950051512004+68750+023550FM-12+038299999V0203201N00671220001CN9999999N9+00221+99999999999
0043011990999991950051518004+68750+023550FM-12+038299999V0203201N00261220001CN9999999N9-00111+99999999999
0043012650999991949032412004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+01111+99999999999
0043012650999991949032418004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+00781+99999999999
```

Upload sample.txt to HDFS:

```
hadoop fs -put /home/hadoop/ncdcData/sample.txt input
```

Project structure

(screenshot of the project layout omitted)

The MaxTemperatureMapper class

```java
package com.ll.maxTemperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper extends
    Mapper<LongWritable, Text, Text, IntWritable> {

  private static final int MISSING = 9999;

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String line = value.toString();
    String year = line.substring(15, 19);
    int airTemperature;
    if (line.charAt(87) == '+') { // parseInt doesn't like leading plus signs
      airTemperature = Integer.parseInt(line.substring(88, 92));
    } else {
      airTemperature = Integer.parseInt(line.substring(87, 92));
    }
    String quality = line.substring(92, 93);
    if (airTemperature != MISSING && quality.matches("[01459]")) {
      context.write(new Text(year), new IntWritable(airTemperature));
    }
  }
}
```
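The fixed-width offsets used by the mapper (year at characters 15–18, temperature at 87–91, quality flag at 92) can be checked outside Hadoop with a small standalone snippet. This is just a sketch for verification; the record below is the first line of sample.txt and no Hadoop classes are involved:

```java
public class ParseDemo {
    public static void main(String[] args) {
        // First record from sample.txt
        String line = "0067011990999991950051507004+68750+023550FM-12"
                + "+038299999V0203301N00671220001CN9999999N9+00001+99999999999";
        String year = line.substring(15, 19);               // "1950"
        int airTemperature = (line.charAt(87) == '+')
                ? Integer.parseInt(line.substring(88, 92))  // skip the '+' sign
                : Integer.parseInt(line.substring(87, 92));
        String quality = line.substring(92, 93);            // quality code
        System.out.println(year + " " + airTemperature + " " + quality);
    }
}
```

Running this prints `1950 0 1`: the year, a temperature of 0 (tenths of a degree in the raw data), and quality code 1, matching what the mapper would emit for this record.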

The MaxTemperatureReducer class

```java
package com.ll.maxTemperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer extends
    Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int maxValue = Integer.MIN_VALUE;
    for (IntWritable value : values) {
      maxValue = Math.max(maxValue, value.get());
    }
    context.write(key, new IntWritable(maxValue));
  }
}
```
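The driver below also registers this reducer as the combiner. That is valid because taking a maximum is associative and commutative: combining partial maxima on each mapper and then reducing gives the same answer as one global pass. A minimal sketch (plain Java, sample values taken from sample.txt's temperatures) demonstrating the equivalence:

```java
import java.util.Arrays;
import java.util.List;

public class MaxDemo {
    // Mirrors the reduce logic: fold max over the grouped values.
    static int max(List<Integer> values) {
        int maxValue = Integer.MIN_VALUE;
        for (int v : values) {
            maxValue = Math.max(maxValue, v);
        }
        return maxValue;
    }

    public static void main(String[] args) {
        List<Integer> all = Arrays.asList(0, 22, -11, 111, 78);
        // One global pass...
        int global = max(all);
        // ...versus combining two partial maxima (as a combiner would)
        int combined = max(Arrays.asList(
                max(all.subList(0, 3)), max(all.subList(3, 5))));
        System.out.println(global + " " + combined);
    }
}
```

Both paths print `111`, which is why MaxTemperatureReducer can safely serve double duty as the combiner.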

The MaxTemperature driver class (main entry point)

```java
package com.ll.maxTemperature;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {

  public static void main(String[] args) throws Exception {
    if (args.length != 2) {
      // Fall back to default input/output paths when none are supplied
      args = new String[] {
          "hdfs://localhost:9000/user/hadoop/input/sample.txt",
          "hdfs://localhost:9000/user/hadoop/out2" };
    }
    Job job = new Job(); // defines the job specification
    job.setJarByClass(MaxTemperature.class);
    job.setJobName("Max temperature");

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // where the reduce output is written

    job.setMapperClass(MaxTemperatureMapper.class);
    job.setCombinerClass(MaxTemperatureReducer.class);
    job.setReducerClass(MaxTemperatureReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```
Notes:
The input path, hdfs://localhost:9000/user/hadoop/input/sample.txt, consists of two parts:
  1. hdfs://localhost:9000/ — the HDFS namenode URI, configured in core-site.xml;
  2. /user/hadoop/input/sample.txt — the path where sample.txt was stored during data preparation.
The output path is:
hdfs://localhost:9000/user/hadoop/out2
Note that this output path must not already exist when the MapReduce job runs; otherwise the job fails.
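The hdfs://localhost:9000/ authority comes from the default filesystem setting in core-site.xml. On the pseudo-distributed setup referenced above it would look roughly like this (a sketch; `fs.default.name` is the Hadoop 1.x property name, renamed `fs.defaultFS` in later versions):

```xml
<!-- core-site.xml (sketch for a Hadoop 1.x pseudo-distributed setup) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```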

pom.xml

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.ll</groupId>
  <artifactId>MapReduceTest</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>MapReduceTest</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoopVersion>1.2.1</hadoopVersion>
    <junit.version>3.8.1</junit.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>${junit.version}</version>
      <scope>test</scope>
    </dependency>
    <!-- Hadoop -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>${hadoopVersion}</version>
    </dependency>
  </dependencies>
</project>
```

Testing the program

Preparing the Hadoop environment

We use the Hadoop environment built earlier; see:
《【Hadoop环境搭建】Centos6.8搭建hadoop伪分布模式》

Building the jar

The jar package is exported from the IDE as follows (export screenshots omitted).

Uploading to the server and running the test

Using the default input/output paths:

```
hadoop jar mc.jar
```

Specifying the input/output paths explicitly:

```
hadoop jar /home/hadoop/jars/mc.jar hdfs://localhost:9000/user/hadoop/input/sample.txt hdfs://localhost:9000/user/hadoop/out5
```

