Spark Streaming Integration with Kafka
Two ways for Spark Streaming to connect to Kafka
Receiver-based Approach
- KafkaUtils.createStream consumes Kafka data through a receiver; this approach is deprecated (a sketch follows this list)
- The Receiver runs as a long-lived task on an Executor waiting for data; a single Receiver has limited throughput, so you have to start several and union their streams by hand, which is cumbersome
- If a Receiver dies, data can be lost; enabling the WAL (write-ahead log) protects the data but lowers throughput
- Kafka is reached through ZooKeeper, and offsets are stored in ZooKeeper
- To guarantee no data loss, Spark also keeps its own copy of the offsets, so the two copies can become inconsistent
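For reference, here is a minimal sketch of the deprecated receiver-based API (KafkaUtils.createStream from the spark-streaming-kafka-0-8 artifact). It assumes the same conf/ssc setup and topic as the demos below, plus a ZooKeeper quorum at hadoop102:2181; these names are assumptions, and the snippet is shown only for comparison, not for new code.

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils // the 0-8 package, not kafka010

// Set on the SparkConf before the contexts are created: enable the WAL so received
// data survives a Receiver/Executor failure (at the cost of throughput)
conf.set("spark.streaming.receiver.writeAheadLog.enable", "true")

// A single Receiver has limited throughput, so start several and union them manually
val receiverStreams = (1 to 3).map { _ =>
  KafkaUtils.createStream(
    ssc,
    "hadoop102:2181",        // ZooKeeper quorum; offsets are tracked in ZooKeeper
    "sparkdemo",             // consumer group id
    Map("spark_kafka" -> 1), // topic -> number of receiver threads
    StorageLevel.MEMORY_AND_DISK_SER
  )
}
val lines = ssc.union(receiverStreams).map(_._2) // records arrive as (key, value) pairs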
Direct Approach
- KafkaUtils.createDirectStream connects to Kafka directly: each streaming batch's job calls Kafka's low-level consumer API itself to fetch the data for the corresponding topic
- Because the Direct approach reads from the Kafka partitions directly (one RDD partition per Kafka partition), it improves parallelism
- The Direct approach uses Kafka's low-level API, so offsets are stored and maintained by the application itself; by default Spark keeps them in the checkpoint directory
- Offsets can also be maintained by hand and saved in MySQL or Redis (see the sketch after the snippet below)
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

// Kafka consumer configuration
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "hadoop102:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "sparkdemo",
  "auto.offset.reset" -> "latest",
  "auto.commit.interval.ms" -> "1000",
  "enable.auto.commit" -> (true: java.lang.Boolean)
)
val topics = Array("spark_kafka")
// Direct connection: one RDD partition per Kafka topic partition
val kafkaDS: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
)
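To maintain offsets yourself instead of relying on checkpoints, pass the previously saved offsets to the consumer strategy at startup and persist the new ranges after every batch. The sketch below reuses ssc, topics and kafkaParams from the snippet above (assuming enable.auto.commit is set to false); loadOffsets and saveOffsets are hypothetical helpers backed by your own MySQL or Redis store.

import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}

// Hypothetical helper: read the last saved offset per partition for this group, e.g.
// SELECT topic, partition, offset FROM kafka_offsets WHERE group_id = 'sparkdemo'
val savedOffsets: Map[TopicPartition, Long] = loadOffsets("sparkdemo")

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  // start every partition from the externally stored offset
  ConsumerStrategies.Subscribe[String, String](topics, kafkaParams, savedOffsets)
)

stream.foreachRDD { rdd =>
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // ... process the batch here ...
  // Hypothetical helper: write each range's untilOffset back to MySQL/Redis,
  // ideally in the same transaction as the processed results
  saveOffsets("sparkdemo", offsetRanges)
}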
Code examples
Auto-committing offsets
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object kafka_Demo01 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("kafka_Demo01")
    val sc = new SparkContext(conf)
    // 5-second micro-batches
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint("data/ckp")
    // Kafka consumer configuration: offsets are auto-committed every second
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "hadoop102:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "sparkdemo",
      "auto.offset.reset" -> "latest",
      "auto.commit.interval.ms" -> "1000",
      "enable.auto.commit" -> (true: java.lang.Boolean)
    )
    val topics = Array("spark_kafka")
    val kafkaDS: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
    )
    // Format each record's metadata and payload as a readable string
    val infoDS = kafkaDS.map(record => {
      val topic = record.topic()
      val partition = record.partition()
      val offset = record.offset()
      val key = record.key()
      val value = record.value()
      val info: String = s"topic:${topic}, partition:${partition}, offset:${offset}, key:${key}, value:${value}"
      info
    })
    infoDS.print()
    ssc.start()
    ssc.awaitTermination()
    ssc.stop(true, true)
  }
}
Manually committing offsets
kafkaDS.foreachRDD(rdd => {
  rdd.foreach(record => {
    val topic = record.topic()
    val partition = record.partition()
    val offset = record.offset()
    val key = record.key()
    val value = record.value()
    val info: String = s"topic:${topic}, partition:${partition}, offset:${offset}, key:${key}, value:${value}"
    println("consumed " + info)
  })
  // Read this batch's offset ranges from the underlying KafkaRDD and commit them
  // back to Kafka only after the records have been processed
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  kafkaDS.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  println("This batch has been consumed and its offsets committed manually")
})
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, ConsumerStrategies, HasOffsetRanges, KafkaUtils, LocationStrategies, OffsetRange}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object kafka_Demo02 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("kafka_Demo02")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint("data/ckp")
    // Kafka consumer configuration: auto-commit is disabled, offsets are committed manually
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "hadoop102:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "sparkdemo",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Array("spark_kafka")
    val kafkaDS: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
    )
    kafkaDS.foreachRDD(rdd => {
      rdd.foreach(record => {
        val topic = record.topic()
        val partition = record.partition()
        val offset = record.offset()
        val key = record.key()
        val value = record.value()
        val info: String = s"topic:${topic}, partition:${partition}, offset:${offset}, key:${key}, value:${value}"
        println("consumed " + info)
      })
      // Commit this batch's offsets back to Kafka only after the records have been processed
      val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      kafkaDS.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      println("This batch has been consumed and its offsets committed manually")
    })
    // Note: calling print() directly on kafkaDS would collect ConsumerRecord objects,
    // which are not serializable, so all output is done inside foreachRDD above
    ssc.start()
    ssc.awaitTermination()
    ssc.stop(true, true)
  }
}