Kafka Single-Node, Multi-Broker Configuration
1 . Start ZooKeeper # zkServer.sh start
2 . Configure multiple brokers
1. In the config directory of the Kafka installation, copy server.properties to server-1.properties, server-2.properties, and server-3.properties (see the copy commands after the example files below).
2. In each file, change the default log directory, the listener port, and broker.id; every broker.id must be different. The following settings are for reference:
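Before starting the brokers, you may want to confirm that ZooKeeper is actually running. A quick check, assuming the same standalone ZooKeeper install used above:
# zkServer.sh status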
server-1.properties
log.dirs=/home/hadoop/app/tmp/kafka-logs-1
listeners=PLAINTEXT://:9093
broker.id=1
server-2.properties
log.dirs=/home/hadoop/app/tmp/kafka-logs-2
listeners=PLAINTEXT://:9094
broker.id=2
server-3.properties
log.dirs=/home/hadoop/app/tmp/kafka-logs-3
listeners=PLAINTEXT://:9095
broker.id=3
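A minimal sketch of the copy step from 2.1, assuming $KAFKA_HOME points at the Kafka installation directory:
# cd $KAFKA_HOME/config
# cp server.properties server-1.properties
# cp server.properties server-2.properties
# cp server.properties server-3.properties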
3 . Start each broker
# kafka-server-start.sh -daemon $KAFKA_HOME/config/server-1.properties
# kafka-server-start.sh -daemon $KAFKA_HOME/config/server-2.properties
# kafka-server-start.sh -daemon $KAFKA_HOME/config/server-3.properties
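To confirm that all three brokers came up, one option (assuming a JDK is installed so jps is available) is to list the running Java processes with their arguments and look for the three properties files:
# jps -m | grep server-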
4 . Create a topic # kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
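You can verify the replication layout of the new topic (leader, replicas, and in-sync replicas per partition) with the describe option:
# kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic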
5 . Start a console producer # kafka-console-producer.sh --broker-list localhost:9093,localhost:9094,localhost:9095 --topic my-replicated-topic
6 . Start a console consumer # kafka-console-consumer.sh --zookeeper localhost:2181 --topic my-replicated-topic
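Messages typed into the producer terminal should appear in the consumer terminal (add --from-beginning to the consumer command if you also want messages sent before it started). As a simple fault-tolerance check beyond the steps above, and assuming jps and kill behave as usual, you can stop one broker and confirm the topic still works because the replication factor is 3:
# jps -m | grep server-1.properties    (find the PID of broker 1)
# kill <pid>
# kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic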