Kafka: ZK + Kafka + Spark Streaming cluster setup (Part 29): pushing Avro-format data to a topic, and receiving and parsing the Avro data with Spark Structured Streaming

Date: 2022-08-28 10:02:23

Pushing Avro-format data to a topic

Source code: https://github.com/Neuw84/structured-streaming-avro-demo/blob/master/src/main/java/es/aconde/structured/GeneratorDemo.java

package es.aconde.structured;

import com.twitter.bijection.Injection;
import com.twitter.bijection.avro.GenericAvroCodecs;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;
import java.util.SplittableRandom;

/**
 * Fake data generator for Kafka
 *
 * @author Angel Conde
 */
public class GeneratorDemo {

    /**
     * Avro schema definition
     */
    public static final String USER_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"alarm\","
            + "\"fields\":["
            + "  { \"name\":\"str1\", \"type\":\"string\" },"
            + "  { \"name\":\"str2\", \"type\":\"string\" },"
            + "  { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        Schema.Parser parser = new Schema.Parser();
        Schema schema = parser.parse(USER_SCHEMA);
        // Bijection gives a two-way Avro <-> byte[] codec for this schema
        Injection<GenericRecord, byte[]> recordInjection = GenericAvroCodecs.toBinary(schema);

        KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props);
        SplittableRandom random = new SplittableRandom();

        while (true) {
            GenericData.Record avroRecord = new GenericData.Record(schema);
            avroRecord.put("str1", "Str 1-" + random.nextInt(10));
            avroRecord.put("str2", "Str 2-" + random.nextInt(1000));
            avroRecord.put("int1", random.nextInt(10000));

            // serialize the record to Avro binary and send it to the topic
            byte[] bytes = recordInjection.apply(avroRecord);
            ProducerRecord<String, byte[]> record = new ProducerRecord<>("mytopic", bytes);
            producer.send(record);
            Thread.sleep(100);
        }
    }
}
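Before wiring up Spark, it can help to confirm that the bytes landing on mytopic really decode back into records. The following is a minimal sketch, not part of the original repo: a plain Kafka consumer that reuses the same Bijection codec in the invert direction. The class name VerifyConsumer and the group id avro-verify are made up for illustration, and poll(Duration) assumes kafka-clients 2.0 or newer (older clients use poll(long) instead).

package es.aconde.structured;

import com.twitter.bijection.Injection;
import com.twitter.bijection.avro.GenericAvroCodecs;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * Verification sketch (hypothetical, not in the original demo): reads a few
 * messages from "mytopic" and decodes them with the same Bijection codec,
 * to confirm the producer is emitting valid Avro binary.
 */
public class VerifyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "avro-verify"); // hypothetical group id
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // reuse the generator's schema so both sides agree on the layout
        Schema schema = new Schema.Parser().parse(GeneratorDemo.USER_SCHEMA);
        Injection<GenericRecord, byte[]> recordInjection = GenericAvroCodecs.toBinary(schema);

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("mytopic"));
            ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, byte[]> r : records) {
                // invert() is the decode direction of the Bijection codec
                GenericRecord avro = recordInjection.invert(r.value()).get();
                System.out.printf("str1=%s str2=%s int1=%s%n",
                        avro.get("str1"), avro.get("str2"), avro.get("int1"));
            }
        }
    }
}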

Receiving and parsing the Avro data from the topic with Spark Structured Streaming

Source code: https://github.com/Neuw84/structured-streaming-avro-demo/blob/master/src/main/java/es/aconde/structured/StructuredDemo.java

package es.aconde.structured;

import com.databricks.spark.avro.SchemaConverters;
import com.twitter.bijection.Injection;
import com.twitter.bijection.avro.GenericAvroCodecs;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

/**
 * Structured streaming demo using an Avro'ed Kafka topic as input
 *
 * @author Angel Conde
 */
public class StructuredDemo {

    private static Injection<GenericRecord, byte[]> recordInjection;
    private static StructType type;
    private static final String USER_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"myrecord\","
            + "\"fields\":["
            + "  { \"name\":\"str1\", \"type\":\"string\" },"
            + "  { \"name\":\"str2\", \"type\":\"string\" },"
            + "  { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";
    private static Schema.Parser parser = new Schema.Parser();
    private static Schema schema = parser.parse(USER_SCHEMA);

    static { // once per VM, lazily
        recordInjection = GenericAvroCodecs.toBinary(schema);
        // derive the equivalent Spark SQL StructType from the Avro schema
        type = (StructType) SchemaConverters.toSqlType(schema).dataType();
    }

    public static void main(String[] args) throws StreamingQueryException {
        // set log4j levels programmatically
        LogManager.getLogger("org.apache.spark").setLevel(Level.WARN);
        LogManager.getLogger("akka").setLevel(Level.ERROR);

        // configure Spark
        SparkConf conf = new SparkConf()
                .setAppName("kafka-structured")
                .setMaster("local[*]");

        // initialize the Spark session
        SparkSession sparkSession = SparkSession
                .builder()
                .config(conf)
                .getOrCreate();

        // reduce the shuffle task number
        sparkSession.sqlContext().setConf("spark.sql.shuffle.partitions", "3");

        // data stream from Kafka
        Dataset<Row> ds1 = sparkSession
                .readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "mytopic")
                .option("startingOffsets", "earliest")
                .load();

        // UDF that decodes the Avro payload in the "value" column back into a Row
        sparkSession.udf().register("deserialize", (byte[] data) -> {
            GenericRecord record = recordInjection.invert(data).get();
            return RowFactory.create(record.get("str1").toString(), record.get("str2").toString(), record.get("int1"));
        }, DataTypes.createStructType(type.fields()));

        ds1.printSchema();

        Dataset<Row> ds2 = ds1
                .select("value").as(Encoders.BINARY())
                .selectExpr("deserialize(value) as rows")
                .select("rows.*");
        ds2.printSchema();

        // start the streaming query
        StreamingQuery query1 = ds2
                .groupBy("str1")
                .count()
                .writeStream()
                .queryName("Test query")
                .outputMode("complete")
                .format("console")
                .start();

        query1.awaitTermination();
    }
}
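A note on the design: the UDF above leans on two external libraries, Twitter Bijection and the Databricks spark-avro package. Starting with Spark 2.4 Avro support moved into Spark itself as the built-in spark-avro module, whose from_avro function decodes a binary column against a JSON-format Avro schema without any UDF. The sketch below is an assumed alternative for Spark 3.x (where org.apache.spark.sql.avro.functions exposes from_avro to Java callers), not the tested code of this article; it needs the org.apache.spark:spark-avro artifact matching your Spark and Scala versions on the classpath, and the class name StructuredFromAvroDemo is hypothetical.

package es.aconde.structured;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.avro.functions.from_avro;
import static org.apache.spark.sql.functions.col;

/**
 * Sketch: same pipeline on Spark 3.x using the built-in spark-avro module
 * instead of Bijection plus a hand-written UDF.
 */
public class StructuredFromAvroDemo {
    // same schema as the generator; from_avro expects it as a JSON string
    private static final String USER_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"myrecord\","
            + "\"fields\":["
            + "  { \"name\":\"str1\", \"type\":\"string\" },"
            + "  { \"name\":\"str2\", \"type\":\"string\" },"
            + "  { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";

    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-structured-from-avro")
                .master("local[*]")
                .getOrCreate();

        Dataset<Row> raw = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "mytopic")
                .option("startingOffsets", "earliest")
                .load();

        // from_avro decodes the raw Avro binary in "value" into a struct column
        Dataset<Row> decoded = raw
                .select(from_avro(col("value"), USER_SCHEMA).as("rows"))
                .select("rows.*");

        decoded.writeStream()
                .outputMode("append")
                .format("console")
                .start()
                .awaitTermination();
    }
}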
