Remotely debugging Spark programs on Hadoop and reading HBase from IDEA on Windows

Date: 2022-01-14 12:56:27


Environment:
Windows 7
JDK 1.8
winutils.exe for Hadoop 2.7.3
IntelliJ IDEA 2017.3 x64
Scala plugin for IDEA 2017.3
Spark 2.1.1
Scala 2.11.4

Step 0: Configure system environment variables

0.1 JDK 1.8 and Scala 2.11.4: standard configuration, not covered in detail here.
0.2 Hadoop on Windows (2.7.3 is used here):
Copy the hadoop-2.7.3 installation directory from the cluster to the root of any drive on Windows.
Download the winutils.exe tools for Hadoop 2.7.3: https://pan.baidu.com/s/1pKWAGe3 (password: zyi7)
Replace Hadoop's original bin directory with this bin directory.
Add hadoop-2.7.3 to the system environment variables (e.g. HADOOP_HOME); hadoop-2.7.3/bin does not have to be added. If you prefer not to rely on environment variables, see the in-code alternative sketched below.
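
For reference, the Hadoop home can also be set from code instead of a system environment variable. This is a minimal Scala sketch of the approach used later in the WordCount example; the path E:\hadoop-2.7.3 is the local Hadoop directory from this setup, adjust it to your own:

object HadoopHomeSetup {
  def main(args: Array[String]): Unit = {
    // Point Hadoop at the local winutils-enabled Hadoop directory
    // when HADOOP_HOME is not configured as a system environment variable.
    System.setProperty("hadoop.home.dir", "E:\\hadoop-2.7.3")
    println(System.getProperty("hadoop.home.dir"))
  }
}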

Step 1: Configure IDEA

1.1 Download and install IDEA (https://www.jetbrains.com/idea/)
Do not launch it right after installation; wait until it has been patched.
Patching (if your finances allow, please support the genuine version, given how good this tool is):
Download the patch package: https://pan.baidu.com/s/1eRSjwJ4 (password: mo6d)
Copy the patch package directly into the bin directory of the installation folder.
1.2 Configure the IDEA environment
Download the Scala plugin for IDEA 2017.3.
Address: https://pan.baidu.com/s/1mixLiPU (password: dbzu)
Install the Scala plugin for IDEA 2017.3 (this step is required).
(screenshot)

Step 2: Development

2.1 Create the project (a Maven project, which makes development easier; create one that supports both Java and Scala)
Use the maven-archetype-quickstart archetype.
(screenshot)
Please ignore the content of the group id; it was chosen arbitrarily. Also remove the -SNAPSHOT suffix from the version.
(screenshot)
2.2 Modify the pom.xml file and add the dependencies required by each framework.
Add the following content:

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>

    <scala.version>2.11.4</scala.version>
    <hbase.version>1.2.5</hbase.version>
    <spark.version>2.1.1</spark.version>
    <hadoop.version>2.7.3</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>

    <!-- spark -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>${spark.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>

    <!-- hadoop -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>

    <!-- hbase -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>${hbase.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>${hbase.version}</version>
    </dependency>
</dependencies>

<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <testSourceDirectory>src/test/java</testSourceDirectory>
    <plugins>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <mainClass></mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.3.1</version>
            <executions>
                <execution>
                    <goals>
                        <goal>exec</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <executable>java</executable>
                <includeProjectDependencies>false</includeProjectDependencies>
                <classpathScope>compile</classpathScope>
                <mainClass>com.dt.spark.SparkApps.App</mainClass>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>

2.3 Add the cluster's configuration files to our project
Copy the Hadoop and HBase configuration files (e.g. core-site.xml, hdfs-site.xml, hbase-site.xml) into the resources folder.
(screenshot)
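
As a quick sanity check, a minimal Scala sketch (the object name is just for illustration): HBaseConfiguration.create() loads hbase-site.xml from the classpath, so once the file is in resources the cluster's ZooKeeper quorum should be printed instead of the default localhost:

import org.apache.hadoop.hbase.HBaseConfiguration

object CheckClusterConfig {
  def main(args: Array[String]): Unit = {
    // hbase-site.xml is picked up from the classpath (the resources folder)
    val conf = HBaseConfiguration.create()
    println(conf.get("hbase.zookeeper.quorum"))
  }
}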

Step 3: Write the code and implement the examples

Example 1: A WordCount example implemented in Java

Prepare the test file.
After editing words.txt, I put it on HDFS.
(screenshot)
View the contents of words.txt:
(screenshot)
The red box shows our words.txt; as you can see, the words are separated by spaces.
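
If you want to double-check the file before running the full job, a minimal Scala sketch could look like the following (it assumes a local master just for the preview; the HDFS URL is the one used in the example below):

import org.apache.spark.sql.SparkSession

object PreviewWords {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("PreviewWords").getOrCreate()
    // print the first few lines of words.txt from HDFS
    spark.sparkContext
      .textFile("hdfs://192.168.10.82:8020/user/jzz/word/words.txt")
      .take(5)
      .foreach(println)
    spark.stop()
  }
}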

The JavaWordCount code is as follows:

package com.shanshu.demo;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.regex.Pattern;

public class JavaWordCount {

    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {

        /*if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }*/

        // Point Hadoop at the local winutils-enabled Hadoop directory (see Step 0).
        System.setProperty("hadoop.home.dir", "E:\\hadoop-2.7.3");

        // Connect to the remote standalone Spark master.
        SparkSession spark = SparkSession
                .builder().master("spark://192.168.10.84:7077")
                .appName("JavaWordCount")
                .getOrCreate();

        // Register the jar built from this project so the executors can load our classes.
        spark.sparkContext()
                .addJar("E:\\myIDEA\\sparkDemo\\out\\artifacts\\sparkDemo_jar\\sparkDemo.jar");

        // Read the test file from HDFS.
        JavaRDD<String> lines = spark.read().textFile("hdfs://192.168.10.82:8020/user/jzz/word/words.txt").javaRDD();

        // Split each line into words on spaces.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String s) {
                return Arrays.asList(SPACE.split(s)).iterator();
            }
        });

        // Map each word to a (word, 1) pair.
        JavaPairRDD<String, Integer> ones = words.mapToPair(
                new PairFunction<String, String, Integer>() {
                    @Override
                    public Tuple2<String, Integer> call(String s) {
                        return new Tuple2<String, Integer>(s, 1);
                    }
                });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(
                new Function2<Integer, Integer, Integer>() {
                    @Override
                    public Integer call(Integer i1, Integer i2) {
                        return i1 + i2;
                    }
                });

        // Collect the results to the driver and print them.
        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?,?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        spark.stop();
    }

}

Please note: because the code uses @Override, the project's Java language level needs to be adjusted, otherwise the compiler reports errors.
(screenshot)

Change the Project's Java version:
(screenshot)

Change the Module's Java version:
(screenshot)

Note: to run the code locally against the remote master, the built jar has to be registered in the code (via addJar).
The steps to build the jar are as follows:

i. Add the artifact path
(screenshots)

Note: sometimes the generated jar needs to be copied to the cluster and run there; to keep the built jar from becoming too large, remove these dependency jars from the artifact.
(screenshot)

ii. Build
(screenshots)

iii. The build output
(screenshot)

iv. Copy the path of the generated jar on disk
(screenshot)

Write this path into the code (this must be done, otherwise the job cannot be run from the local machine).
As follows:
(screenshot: the sparkContext().addJar(...) line in the code)

v. Run the code; on success the results look like the following:
(screenshot)
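
Before moving on to the HBase example, here is, for comparison, a minimal Scala sketch of the same word count (not part of the original walkthrough; the master URL, jar path and HDFS path simply mirror the Java example above):

import org.apache.spark.sql.SparkSession

object ScalaWordCount {
  def main(args: Array[String]): Unit = {
    System.setProperty("hadoop.home.dir", "E:\\hadoop-2.7.3")
    val spark = SparkSession.builder()
      .master("spark://192.168.10.84:7077")
      .appName("ScalaWordCount")
      .getOrCreate()
    // register the project's jar so the executors can load our classes
    spark.sparkContext.addJar("E:\\myIDEA\\sparkDemo\\out\\artifacts\\sparkDemo_jar\\sparkDemo.jar")

    val counts = spark.sparkContext
      .textFile("hdfs://192.168.10.82:8020/user/jzz/word/words.txt")
      .flatMap(_.split(" "))   // split each line on spaces
      .map((_, 1))             // pair each word with a count of 1
      .reduceByKey(_ + _)      // sum the counts per word

    counts.collect().foreach { case (word, n) => println(word + ": " + n) }
    spark.stop()
  }
}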

Example 2: Reading data from HBase with Scala
Preparation: create a table named fruit in HBase with a column family info, and insert some data (already done here).
View it:
(screenshot)
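
For completeness, here is a sketch of how such a table could be created and populated programmatically with the hbase-client API already declared in the pom (the row key, qualifiers and values are only illustrative; in this article the table was prepared beforehand, e.g. from the HBase shell):

import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HTableDescriptor, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object CreateFruitTable {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "192.168.10.82")
    conf.set("hbase.zookeeper.property.clientPort", "2181")

    val conn = ConnectionFactory.createConnection(conf)
    val admin = conn.getAdmin
    val tableName = TableName.valueOf("fruit")

    // create the table with a single column family 'info' if it does not exist yet
    if (!admin.tableExists(tableName)) {
      val desc = new HTableDescriptor(tableName)
      desc.addFamily(new HColumnDescriptor("info"))
      admin.createTable(desc)
    }

    // insert one sample row (row key and values are only for illustration)
    val put = new Put(Bytes.toBytes("1001"))
    put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("apple"))
    conn.getTable(tableName).put(put)

    admin.close()
    conn.close()
  }
}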

i. Copy the HBase configuration files into IDEA's resources directory
(screenshot)

ii. Add the Scala jars
(screenshot)

iii. Create a scala folder under the main directory and mark it as a source root
(screenshot)

iv. Copy the META-INF directory from the java directory into the scala directory, delete the MANIFEST.MF file, and create the package
(screenshot)

v. Write the Scala code
(screenshot)

The code is as follows:

package com.shanshu.scala

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

object ReadHbase {

  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()

    // ZooKeeper connection info for the HBase cluster
    // (note: the property keys use dots, not underscores).
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    conf.set("hbase.zookeeper.quorum", "192.168.10.82")

    val sparkConf = new SparkConf().setMaster("local[3]").setAppName("readHbase")

    val sc = new SparkContext(sparkConf)
    // set the table to query
    conf.set(TableInputFormat.INPUT_TABLE, "fruit")
    val stuRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
      classOf[org.apache.hadoop.hbase.client.Result])

    // iterate over the results and print each row
    stuRDD.foreach({ case (_, result) =>
      val key = Bytes.toString(result.getRow)
      val name = Bytes.toString(result.getValue("info".getBytes, "name".getBytes))
      val color = Bytes.toString(result.getValue("info".getBytes, "color".getBytes))
      val num = Bytes.toString(result.getValue("info".getBytes, "num".getBytes))
      val people = Bytes.toString(result.getValue("info".getBytes, "people".getBytes))
      println("Row key:" + key + " Name:" + name + " color:" + color + " num:" + num + " people:" + people)
    })
    sc.stop()
  }
}

vi. Likewise, build the jar (optional; only needed when running on the cluster).
Delete the previous jar artifact, because this time the Scala main class has to be selected.
(screenshot)

vii. The run results are as follows:
(screenshot)

QQ: 2816942401 (no ads, please)