Why does Spark fail with "NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V"?

Time: 2021-08-29 20:53:13

I use Spark 2.1.0, Kafka_2.11-0.10.0.0, and spark-streaming-kafka-0-8_2.11-2.1.0.

spark-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.7.0_80
Branch 
Compiled by user jenkins on 2016-12-16T02:04:48Z

I use Maven to build the project. The following are the dependencies:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.11</artifactId>
  <version>2.1.0</version>
  <scope>provided</scope>
</dependency> 
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
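
(A hedged aside, not from the original post: if the application is packaged as an uber jar, bundling spark-core can pull Spark's own Netty classes into the jar and onto the cluster. A minimal sketch of the safer entry, mirroring the provided scope already used for spark-streaming:)

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
  <scope>provided</scope>
</dependency>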

I run the application in Eclipse with the following steps, and it works correctly:

right click -> run as -> maven build -> clean install -> run

However, when I spark-submit the application as follows:

spark-submit \
  --jars=/opt/ibudata/binlogSparkStreaming/kafka_2.11-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/kafka-clients-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/metrics-core-2.2.0.jar,/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar,/opt/ibudata/binlogSparkStreaming/zkclient-0.3.jar \
  --class com.br.sparkStreaming.wordcount \
  --master spark://m20p183:7077 \
  --executor-memory 2g \
  --num-executors 3 \
  /opt/ibudata/binlogSparkStreaming/jars/wordcounttest8-0.0.1-SNAPSHOT.jar

...it fails with the following error:

io.netty.handler.codec.EncoderException: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:651)
    at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:706)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:895)
    at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:240)
    at org.apache.spark.network.server.TransportRequestHandler.respond(TransportRequestHandler.java:194)
    at org.apache.spark.network.server.TransportRequestHandler.processStreamRequest(TransportRequestHandler.java:150)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:54)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:33)
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
    ... 36 more

Any suggestion will be appreciated.

After all kinds of trying with no good result, I then tried the example given by the Spark Streaming + Kafka Integration Guide (Kafka broker version 0.8.2.1 or higher), running it with the command bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 \test,

but it reported the same error: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V.

So I suspect that some jars in the classpath caused this problem. My spark-env.sh:

export JAVA_HOME=//opt/jdk1.7
export SCALA_HOME=/opt/scala
export SPARK_HOME=/opt/spark
export HADOOP_HOME=/opt/hadoop2.7.3
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop  
#export SPARK_MASTER_IP=master1  
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=m20p180:2181,m20p181:2181,m20p182:2181 -Dspark.deploy.zookeeper.dir=/spark"  
export SPARK_WORKER_MEMORY=1g  
export SPARK_EXECUTOR_MEMORY=1g  
export SPARK_DRIVER_MEMORY=1g  
export SPARK_WORKDER_CORES=4
export HIVE_CONF_DIR=/opt/hadoop2.7.3/hive/conf
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar
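
(As the launch log below also warns, SPARK_CLASSPATH has been deprecated since Spark 1.0; a sketch of the equivalent modern settings in conf/spark-defaults.conf, reusing the same netty jar path, would be:)

spark.driver.extraClassPath   /opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar
spark.executor.extraClassPath /opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar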

As you can see, I added netty-all-4.1.12.Final.jar to the classpath, but it didn't work.
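
To check whether a given Netty jar actually contains the constructor the error complains about, one diagnostic sketch (using the jar path above) is to dump DefaultFileRegion's public members with javap:

javap -classpath /opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar io.netty.channel.DefaultFileRegion

If DefaultFileRegion(java.io.File, long, long) shows up in the output but the error persists, an older Netty earlier on the classpath is probably shadowing this jar.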

I also started the example with the command: SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 \test

The output:

Spark Command: //opt/jdk1.7/bin/java -cp /opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/:/opt/hadoop2.7.3/share/hadoop/yarn/lib/:/opt/hadoop2.7.3/share/hadoop/common/:/opt/hadoop2.7.3/share/hadoop/common/lib/:/opt/hadoop2.7.3/share/hadoop/hdfs/:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/:/opt/hadoop2.7.3/share/hadoop/mapreduce/:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/:/opt/hadoop2.7.3/share/hadoop/tools/lib/:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/:/opt/hadoop2.7.3/share/hadoop/yarn/lib/:/opt/hadoop2.7.3/share/hadoop/common/:/opt/hadoop2.7.3/share/hadoop/common/lib/:/opt/hadoop2.7.3/share/hadoop/hdfs/:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/:/opt/hadoop2.7.3/share/hadoop/mapreduce/:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/:/opt/hadoop2.7.3/share/hadoop/tools/lib/:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar:/opt/spark/conf/:/opt/spark/jars/*:/opt/hadoop2.7.3/etc/hadoop/ -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --jars /opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar,/opt/spark/examples/jars/scopt_2.11-3.3.0.jar --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount spark-internal 172.18.30.22:9092 test
========================================
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2017-07-04 18:37:01,123 INFO  [main] spark.SparkContext (Logging.scala:logInfo(54)) - Running Spark version 2.1.0
2017-07-04 18:37:01,129 WARN  [main] spark.SparkContext (Logging.scala:logWarning(66)) - Support for Java 7 is deprecated as of Spark 2.0.0
2017-07-04 18:37:02,304 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - 
SPARK_CLASSPATH was detected (set to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

2017-07-04 18:37:02,308 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - Setting 'spark.executor.extraClassPath' to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar' as a work-around.
2017-07-04 18:37:02,309 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - Setting 'spark.driver.extraClassPath' to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar' as a work-around.
2017-07-04 18:37:02,524 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing view acls to: ibudata
2017-07-04 18:37:02,526 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing modify acls to: ibudata
2017-07-04 18:37:02,528 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing view acls groups to: 
2017-07-04 18:37:02,530 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing modify acls groups to: 
2017-07-04 18:37:02,532 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(ibudata); groups with view permissions: Set(); users  with modify permissions: Set(ibudata); groups with modify permissions: Set()
2017-07-04 18:37:03,091 INFO  [main] util.Utils (Logging.scala:logInfo(54)) - Successfully started service 'sparkDriver' on port 35480.
2017-07-04 18:37:03,127 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering MapOutputTracker
2017-07-04 18:37:03,162 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering BlockManagerMaster
2017-07-04 18:37:03,166 INFO  [main] storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(54)) - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2017-07-04 18:37:03,167 INFO  [main] storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(54)) - BlockManagerMasterEndpoint up
2017-07-04 18:37:03,185 INFO  [main] storage.DiskBlockManager (Logging.scala:logInfo(54)) - Created local directory at /tmp/blockmgr-20f80f78-0e27-462d-b5a4-1e0067308861
2017-07-04 18:37:03,214 INFO  [main] memory.MemoryStore (Logging.scala:logInfo(54)) - MemoryStore started with capacity 408.9 MB
2017-07-04 18:37:03,330 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering OutputCommitCoordinator
2017-07-04 18:37:03,458 INFO  [main] util.log (Log.java:initialized(186)) - Logging initialized @4162ms
2017-07-04 18:37:03,623 INFO  [main] server.Server (Server.java:doStart(327)) - jetty-9.2.z-SNAPSHOT
2017-07-04 18:37:03,652 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@ef93f0{/jobs,null,AVAILABLE}
2017-07-04 18:37:03,653 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@70d9720a{/jobs/json,null,AVAILABLE}
2017-07-04 18:37:03,653 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@53ce2867{/jobs/job,null,AVAILABLE}
2017-07-04 18:37:03,654 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@3bead2d{/jobs/job/json,null,AVAILABLE}
2017-07-04 18:37:03,654 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@5b5b6746{/stages,null,AVAILABLE}
.....................
client.TransportClientFactory (TransportClientFactory.java:createClient(250)) - Successfully created connection to /192.168.22.197:35480 after 64 ms (0 ms spent in bootstraps)
2017-07-04 18:37:07,261 INFO  [Executor task launch worker-0] util.Utils (Logging.scala:logInfo(54)) - Fetching spark://192.168.22.197:35480/jars/spark-examples_2.11-2.1.0.jar to /tmp/spark-5110e687-a732-4762-8d74-a7c13a035681/userFiles-69ba7cd2-0014-40d7-8ac8-6b73cd07ce41/fetchFileTemp1907532202323588721.tmp
2017-07-04 18:37:07,367 ERROR [shuffle-server-3-2] server.TransportRequestHandler (TransportRequestHandler.java:operationComplete(201)) - Error sending result StreamResponse{streamId=/jars/spark-examples_2.11-2.1.0.jar, byteCount=1950712, body=FileSegmentManagedBuffer{file=/opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar, offset=0, length=1950712}} to /192.168.22.197:41069; closing connection
io.netty.handler.codec.EncoderException: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:651)
    at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:706)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:895)
    at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:240)
    at org.apache.spark.network.server.TransportRequestHandler.respond(TransportRequestHandler.java:194)
    at org.apache.spark.network.server.TransportRequestHandler.processStreamRequest(TransportRequestHandler.java:150)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:54)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:33)
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
    ... 36 more
2017-07-04 18:37:07,379 ERROR [shuffle-client-6-1] client.TransportResponseHandler (TransportResponseHandler.java:channelInactive(126)) - Still have 1 requests outstanding when connection from /192.168.22.197:35480 is closed
2017-07-04 18:37:08,054 INFO  [JobGenerator] scheduler.JobScheduler (Logging.scala:logInfo(54)) - Added jobs for time 1499164628000 ms

2 Answers

#1


1  

After all kinds of trying, it was finally resolved by adding netty-4.0.42.Final to the SPARK_CLASSPATH. You must remember that your Spark is a cluster: make the change not only on the master but also on the slaves; that is what blocked me for such a long time. Finally, many thanks to Jacek Laskowski, you're very kind.
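
A sketch of what "change the slaves too" can mean in practice; the worker host names below are placeholders, while the jar path follows the answer:

for host in worker1 worker2 worker3; do
  scp /opt/ibudata/binlogSparkStreaming/netty-4.0.42.Final.jar $host:/opt/ibudata/binlogSparkStreaming/
  scp $SPARK_HOME/conf/spark-env.sh $host:$SPARK_HOME/conf/
done

Then restart the master and the workers so every JVM picks up the new SPARK_CLASSPATH.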

#2


0  

You definitely don't want to spark-submit with the following jars:

--jars=/opt/ibudata/binlogSparkStreaming/kafka_2.11-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/kafka-clients-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/metrics-core-2.2.0.jar,/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar,/opt/ibudata/binlogSparkStreaming/zkclient-0.3.jar

You only want to include spark-streaming-kafka-0-8_2.11-2.1.0.jar, and even its 2.1.0 version could be too high compared to your deployment environment.

--jars=/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar

You should remove --jars from spark-submit.

I'd start with a local deployment environment first, and only once it runs there spark-submit the Spark application to Hadoop YARN.

Try the following first and get it working:

spark-submit \
  --jars /opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar \
  --class com.br.sparkStreaming.wordcount \
  /opt/ibudata/binlogSparkStreaming/jars/wordcounttest8-0.0.1-SNAPSHOT.jar
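
Before re-submitting, it may also be worth confirming what the cluster actually runs versus what the uber jar bundles; a small diagnostic sketch (paths as in the question):

# Spark version installed on the node you submit from
spark-submit --version

# Netty classes, if any, bundled into the uber jar
unzip -l /opt/ibudata/binlogSparkStreaming/jars/wordcounttest8-0.0.1-SNAPSHOT.jar | grep -i netty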

Note that --jars does not use = to specify the parameters (I didn't know it'd be accepted).

My guess is that you spark-submit to an environment with a different Spark version, one below 2.1.0, which is incompatible with what you bundled in the uber jar (I suspect that you assembled an uber jar that you eventually spark-submit).

As you can see in the stack trace, the error is due to:

java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
  at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)

That particular line 133 was changed quite recently in [SPARK-15178][CORE] Remove LazyFileRegion and use netty's DefaultFileRegion instead, and is only available in Spark 2.1.0 and higher, which you happen to use:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
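
To make the mismatch concrete, here is a minimal, hypothetical reproduction in Java (FileRegionCheck is my own name, not Spark code). It compiles against any recent Netty, but at run time it links only if the Netty that wins the classpath race has the (File, long, long) constructor that SPARK-15178 introduced a call to; on an older Netty it fails with exactly the NoSuchMethodError from the trace:

import io.netty.channel.DefaultFileRegion;
import java.io.File;

public class FileRegionCheck {
  public static void main(String[] args) {
    File f = new File(args[0]);
    // Spark 2.1.0's FileSegmentManagedBuffer.convertToNetty builds the
    // region from a File; older Netty 4.0.x only offers the FileChannel
    // variant, so this constructor call throws NoSuchMethodError there.
    DefaultFileRegion region = new DefaultFileRegion(f, 0, f.length());
    System.out.println("DefaultFileRegion(File, long, long) resolved; count=" + region.count());
  }
}

Run it with the same classpath the executors use (java -cp "<executor classpath>:." FileRegionCheck some.jar) to see which Netty actually gets loaded.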
