Kafka installation, configuration, and usage

Date: 2021-11-30 21:34:44

I. Edit the configuration (server.properties inside the Kafka directory)

1. Unique broker ID:

broker.id=1

2. Listener address

listeners=PLAINTEXT://192.168.2.10:9092            # listening address and port; use this machine's own IP

3. Data log directory

log.dirs=/tmp/kafka-logs            # default location; point it at a persistent path in production

4. ZooKeeper nodes

zookeeper.connect=192.168.2.10:2181,192.168.2.11:2181,192.168.2.12:2181
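Putting the four settings above together, a minimal server.properties for one broker might look like the sketch below. The log.dirs path is an assumption (Kafka's default); each broker in the cluster needs its own unique broker.id and its own IP in listeners:

```properties
# Unique ID for this broker; must differ on every node
broker.id=1
# Advertise this host's own IP, not localhost
listeners=PLAINTEXT://192.168.2.10:9092
# Where Kafka stores partition data (not application logs); assumed default path
log.dirs=/tmp/kafka-logs
# All ZooKeeper nodes, with the client port
zookeeper.connect=192.168.2.10:2181,192.168.2.11:2181,192.168.2.12:2181
```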

5. Edit environment variables

vi /etc/profile
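A typical addition to /etc/profile is a KAFKA_HOME variable plus a PATH entry for Kafka's scripts. The install path below is an assumption inferred from the shell prompts later in this post; adjust it to wherever Kafka was actually unpacked:

```shell
# Assumed install path (matches the kafka_2.11-0.10.1.0 directory seen in the prompts below)
export KAFKA_HOME=/usr/tools/kafka_2.11-0.10.1.0
# Put Kafka's scripts (kafka-topics.sh, kafka-server-start.sh, ...) on the PATH
export PATH=$PATH:$KAFKA_HOME/bin
```

After saving, run `source /etc/profile` so the current shell picks up the change.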

 

Start ZooKeeper first, then Hadoop (HDFS) and YARN.


6. Start Kafka

[root@localmaster kafka_2.11-0.10.1.0]# ./bin/kafka-server-start.sh -daemon config/server.properties

 

7. Create a topic

[root@localmaster kafka_2.11-0.10.1.0]# ./bin/kafka-topics.sh --create --zookeeper 192.168.2.10:2181 --replication-factor 1 --partitions 1 --topic my-test603

 

8. List topics:

[root@localmaster kafka_2.11-0.10.1.0]# ./bin/kafka-topics.sh --list --zookeeper 192.168.2.11:2181

  

9. Start the console producer

[root@localmaster kafka_2.11-0.10.1.0]# ./bin/kafka-console-producer.sh --broker-list 192.168.2.10:9092 --topic my-test603

 

10. Start the console consumer

[root@localuser1 kafka_2.11-0.10.1.0]# ./bin/kafka-console-consumer.sh  --zookeeper 192.168.2.11:2181 --from-beginning --topic my-test603

 

A. Prepare the source file: test.log
B. Write the config file ftok1.conf in Flume's conf directory (see the reference config at the end)
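For a quick test, the source file only needs a couple of plain lines. The sketch below writes it to the current directory; step 12 assumes it lives at /usr/tools/test.log, so create it there or adjust the path:

```shell
# Write a small sample file to ship through Flume into Kafka
printf 'hello kafka\nhello flume\n' > test.log
```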

 

11. Start the Flume agent with this config

[root@localmaster flume]# ./bin/flume-ng agent -c conf -f conf/ftok1.conf -n a1 -Dflume.root.logger=INFO,console

 

12. Run the Flume avro client to send the file

[root@localmaster flume]# ./bin/flume-ng avro-client -c conf -H 0.0.0.0 -p 44444 -F /usr/tools/test.log 


Reference config file: ftok1.conf

a1.sources=s

a1.channels=c

a1.sinks=k

 

a1.sources.s.type=avro

a1.sources.s.bind=0.0.0.0

a1.sources.s.port=44444

a1.sources.s.channels=c

 

a1.sinks.k.type=org.apache.flume.sink.kafka.KafkaSink

a1.sinks.k.kafka.topic=my-test603

a1.sinks.k.kafka.bootstrap.servers=192.168.2.10:9092

a1.sinks.k.kafka.flumeBatchSize=20

a1.sinks.k.kafka.producer.acks=1

a1.sinks.k.kafka.producer.linger.ms=1

a1.sinks.k.kafka.producer.compression.type=snappy

a1.sinks.k.channel=c

 

a1.channels.c.type=memory

a1.channels.c.capacity=1000

a1.channels.c.transactionCapacity=100