I. Introduction to kafka-manager
To simplify the work of developers and service engineers who maintain Kafka clusters, Yahoo built a web-based tool called Kafka Manager. It makes it easy to spot topics that are unevenly distributed across the cluster, or partitions that are spread unevenly across brokers. It supports managing multiple clusters, preferred replica election, replica reassignment, and topic creation. It is also a convenient way to get a quick overview of a cluster, with the following features:
- Manage multiple clusters
- Easy inspection of cluster state (topics, consumers, offsets, brokers, replica distribution, partition distribution)
- Run preferred replica election
- Generate partition assignments with the option to select which brokers to use
- Run reassignment of partitions (based on generated assignments)
- Create topics with optional topic configs (0.8.1.1 has different configs from 0.8.2+)
- Delete topics (only supported on 0.8.2+, and remember to set delete.topic.enable=true in the broker configs)
- The topic list now indicates topics marked for deletion (only supported on 0.8.2+)
- Batch generate partition assignments for multiple topics, with the option to select which brokers to use
- Batch run reassignment of partitions for multiple topics
- Add partitions to an existing topic
- Update the config for an existing topic
- kafka-manager project page: https://github.com/yahoo/kafka-manager/
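As the feature list notes, topic deletion only works when each broker opts in. A minimal sketch of turning the flag on (the file name here is a stand-in; point this at the real server.properties of each broker in your Kafka install and restart it):

```shell
# Sketch: enable topic deletion in a broker config.
# CONF is a stand-in path; use your broker's actual server.properties.
CONF=server.properties.example
echo "delete.topic.enable=true" >> "$CONF"
# Confirm the flag is present.
grep "delete.topic.enable" "$CONF"
```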
II. Installing kafka-manager
1. Download the package
Download with Git or directly from the Releases page; here we use version 1.3.3.18: https://github.com/yahoo/kafka-manager/releases
wget https://github.com/yahoo/kafka-manager/archive/1.3.3.18.zip
2. Extract the package
[spark@master ~]$ cd /opt/
[spark@master opt]$ wget https://github.com/yahoo/kafka-manager/archive/1.3.3.18.zip
[spark@master opt]$ ll
total
drwxr-xr-x. root   root   apache-maven-3.5.
drwxr-xr-x. spark  spark  elasticsearch-6.2.
drwxr-xr-x. spark  spark  elasticsearch-head-master
drwxr-xr-x. spark  spark  hadoop-2.9.
drwxr-xr-x. root   root   hdfs-over-ftp-master
drwxr-xr-x.               jdk1..0_171
drwxrwxrwx. root   root   kafka_2.-1.1.
-rw-r--r--. root   root   kafka-manager-1.3.3.18.zip
drwxrwxr-x. spark  spark  kibana-6.2.-linux-x86_64
-rw-r--r--. root   root   mysql57-community-release-el7-.noarch.rpm
drwxrwxr-x. spark  spark  nifi-1.7.
drwxr-xr-x. spark  spark  node-8.9.
drwxr-xr-x. spark  spark  node-v8.9.1
drwxrwxr-x.               scala-2.11.
drwxrwxrwx. hadoop hadoop spark-2.2.-bin-hadoop2.
drwxrwxrwx. spark  spark  zookeeper-3.4.
[spark@master opt]$ su root
Password:
[root@master opt]# unzip kafka-manager-1.3.3.18.zip
Archive: kafka-manager-1.3.3.18.zip
8dcdbf8fabb0001691c9b52b447b656f498b4d7b
creating: kafka-manager-1.3.3.18/
...
inflating: kafka-manager-1.3.3.18/test/kafka/test/SeededBroker.scala
[root@master opt]# ls
apache-maven-3.5. jdk1..0_171 mysql57-community-release-el7-.noarch.rpm spark-2.2.-bin-hadoop2.
elasticsearch-6.2. kafka_2.-1.1. nifi-1.7. zookeeper-3.4.
elasticsearch-head-master kafka-manager-1.3.3.18 node-8.9.
hadoop-2.9. kafka-manager-1.3.3.18.zip node-v8.9.1
hdfs-over-ftp-master kibana-6.2.-linux-x86_64 scala-2.11.
3. Building with sbt
1) Install sbt with yum (kafka-manager is built with sbt)
[root@master opt]# cd kafka-manager-1.3.3.18
[root@master kafka-manager-1.3.3.18]# ls
app build.sbt conf img LICENCE project public README.md sbt src test
[root@master kafka-manager-1.3.3.18]# sbt
bash: sbt: command not found
[root@master kafka-manager-1.3.3.18]# cd ..
[root@master opt]# curl https://bintray.com/sbt/rpm/rpm > bintray-sbt-rpm.repo
[root@master opt]# ll
total
drwxr-xr-x. root   root   apache-maven-3.5.
-rw-r--r--. root   root   bintray-sbt-rpm.repo
drwxr-xr-x. spark  spark  elasticsearch-6.2.
drwxr-xr-x. spark  spark  elasticsearch-head-master
drwxr-xr-x. spark  spark  hadoop-2.9.
drwxr-xr-x. root   root   hdfs-over-ftp-master
drwxr-xr-x.               jdk1..0_171
drwxrwxrwx. root   root   kafka_2.-1.1.
drwxr-xr-x. root   root   kafka-manager-1.3.3.18
-rw-r--r--. root   root   kafka-manager-1.3.3.18.zip
drwxrwxr-x. spark  spark  kibana-6.2.-linux-x86_64
-rw-r--r--. root   root   mysql57-community-release-el7-.noarch.rpm
drwxrwxr-x. spark  spark  nifi-1.7.
drwxr-xr-x. spark  spark  node-8.9.
drwxr-xr-x. spark  spark  node-v8.9.1
drwxrwxr-x.               scala-2.11.
drwxrwxrwx. hadoop hadoop spark-2.2.-bin-hadoop2.
drwxrwxrwx. spark  spark  zookeeper-3.4.
[root@master opt]# mv bintray-sbt-rpm.repo /etc/yum.repos.d/
[root@master opt]# yum install sbt
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.zju.edu.cn
 * extras: mirrors.shu.edu.cn
 * updates: mirrors.zju.edu.cn
base                                       | 3.6 kB
bintray--sbt-rpm                           | 1.3 kB
extras                                     | 3.4 kB
mysql-connectors-community                 | 2.5 kB
mysql-tools-community                      | 2.5 kB
mysql57-community                          | 2.5 kB
updates                                    | 3.4 kB
bintray--sbt-rpm/primary                   | 3.8 kB
updates/x86_64/primary_db                  | 5.2 MB
Resolving Dependencies
--> Running transaction check
---> Package sbt.noarch :1.2.- will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package              Arch              Version              Repository                    Size
====================================================================================================================================
Installing:
 sbt                  noarch            1.2.-                bintray--sbt-rpm              1.1 M

Transaction Summary
====================================================================================================================================
Install  1 Package

Total download size: 1.1 M
Installed size: 1.2 M
Is this ok [y/d/N]: y
Downloading packages:
sbt-1.2..rpm                               | 1.1 MB
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : sbt-1.2.-.noarch
  Verifying  : sbt-1.2.-.noarch

Installed:
  sbt.noarch :1.2.-

Complete!
Switch the repository addresses (downloads from sbt's default repositories are very slow and are frequently interrupted): create ~/.sbt/repositories in the user's home directory and fill in the Aliyun mirror:
# vi ~/.sbt/repositories
[root@master opt]# ls
apache-maven-3.5. jdk1..0_171 mysql57-community-release-el7-.noarch.rpm spark-2.2.-bin-hadoop2.
elasticsearch-6.2. kafka_2.-1.1. nifi-1.7. zookeeper-3.4.
elasticsearch-head-master kafka-manager-1.3.3.18 node-8.9.
hadoop-2.9. kafka-manager-1.3.3.18.zip node-v8.9.1
hdfs-over-ftp-master kibana-6.2.-linux-x86_64 scala-2.11.
[root@master opt]# sbt
Getting org.scala-sbt sbt 1.2. (this may take some time)...
^C[root@master opt]# sbt -version
Getting org.scala-sbt sbt 1.2. (this may take some time)...
^C[root@master opt]# cd kafka-manager-1.3.3.18
[root@master kafka-manager-1.3.3.18]# ./sbt clean dist
Downloading sbt launcher for 0.13.:
From http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.9/sbt-launch.jar
To /root/.sbt/launchers/0.13./sbt-launch.jar
Download failed. Obtain the jar manually and place it at /root/.sbt/launchers/0.13./sbt-launch.jar
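sbt's own error message suggests a manual workaround. A sketch of it, assuming the launcher version is the 0.13.9 shown in the URL above (the network fetch is left commented so the sketch is safe to run offline; when run as root, $HOME is the /root directory used in this transcript):

```shell
# Manual fallback for the failed launcher download
# (version 0.13.9 assumed from the URL in the error above).
LAUNCHER_DIR="$HOME/.sbt/launchers/0.13.9"
mkdir -p "$LAUNCHER_DIR"
# wget -O "$LAUNCHER_DIR/sbt-launch.jar" \
#   http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.9/sbt-launch.jar
ls -d "$LAUNCHER_DIR"
```

In this walkthrough the launcher eventually downloads once the mirror is configured, so the manual copy is only needed if the retry keeps failing.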
[root@master kafka-manager-1.3.3.18]# vi ~/.sbt/repositories
[repositories]
local
aliyun-nexus: http://maven.aliyun.com/nexus/content/groups/public/
jcenter: https://jcenter.bintray.com/
typesafe-ivy-releases: https://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
maven-central
2) Build kafka-manager
[root@master kafka-manager-1.3.3.18]# ./sbt clean dist
Downloading sbt launcher for 0.13.:
From http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.9/sbt-launch.jar
To /root/.sbt/launchers/0.13./sbt-launch.jar
[info] Loading project definition from /opt/kafka-manager-1.3.3.18/project
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to kafka-manager (in build file:/opt/kafka-manager-1.3.3.18/)
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[warn] Credentials file /root/.bintray/.credentials does not exist
[success] Total time: s, completed Sep , :: PM
...
[info] [SUCCESSFUL ] jline#jline;2.12.!jline.jar (340ms)
[info] Done updating.
[warn] Scala version was updated by one of library dependencies:
[warn] * org.scala-lang:scala-library:(2.11., 2.11., 2.11., 2.11., 2.11., 2.11.) -> 2.11.
[warn] To force scalaVersion, add the following:
[warn] ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) }
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * org.webjars:jquery:1.11. -> 2.1.
[warn] Run 'evicted' to see detailed eviction warnings
[info] Wrote /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.-1.3.3.18.pom
[info] Compiling Scala sources and Java sources to /opt/kafka-manager-1.3.3.18/target/scala-2.11/classes...
[info] 'compiler-interface' not yet compiled for Scala 2.11.. Compiling...
[info] Main Scala API documentation to /opt/kafka-manager-1.3.3.18/target/scala-2.11/api...
[info] Compilation completed in 8.323 s
model contains documentable templates
[info] Main Scala API documentation successful.
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.-1.3.3.18-javadoc.jar ...
[info] Done packaging.
[info] LESS compiling on source(s)
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.-1.3.3.18-web-assets.jar ...
[info] Done packaging.
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.-1.3.3.18.jar ...
[info] Done packaging.
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.-1.3.3.18-sans-externalized.jar ...
[info] Done packaging.
[info]
[info] Your package is ready in /opt/kafka-manager-1.3.3.18/target/universal/kafka-manager-1.3.3.18.zip
[info]
[success] Total time: s, completed Sep , :: PM
[root@master kafka-manager-1.3.3.18]# ls
app build.sbt conf img LICENCE project public README.md sbt src target test
[root@master kafka-manager-1.3.3.18]# cd target/universal
[root@master universal]# ls
kafka-manager-1.3.3.18.zip scripts
[root@master universal]# scp kafka-manager-1.3.3.18.zip /opt/kafka-manager-1.3.3.18.zip
4. Install
Extract the freshly built kafka-manager-1.3.3.18.zip and edit the config file.
[root@master universal]# cd /opt/
[root@master opt]# ls
apache-maven-3.5. hadoop-2.9. kafka_2.-1.1. kibana-6.2.-linux-x86_64 node-8.9. spark-2.2.-bin-hadoop2.
elasticsearch-6.2. hdfs-over-ftp-master kafka-manager-1.3.3.18 mysql57-community-release-el7-.noarch.rpm node-v8.9.1 zookeeper-3.4.
elasticsearch-head-master jdk1..0_171 kafka-manager-1.3.3.18.zip nifi-1.7. scala-2.11.
[root@master opt]# mv kafka-manager-1.3.3.18 kafka-manager-1.3.3.18-source
[root@master opt]# ll
total
drwxr-xr-x. root   root   apache-maven-3.5.
drwxr-xr-x. spark  spark  elasticsearch-6.2.
drwxr-xr-x. spark  spark  elasticsearch-head-master
drwxr-xr-x. spark  spark  hadoop-2.9.
drwxr-xr-x. root   root   hdfs-over-ftp-master
drwxr-xr-x.               jdk1..0_171
drwxrwxrwx. root   root   kafka_2.-1.1.
drwxr-xr-x. root   root   kafka-manager-1.3.3.18-source
-rw-r--r--. root   root   kafka-manager-1.3.3.18.zip
drwxrwxr-x. spark  spark  kibana-6.2.-linux-x86_64
-rw-r--r--. root   root   mysql57-community-release-el7-.noarch.rpm
drwxrwxr-x. spark  spark  nifi-1.7.
drwxr-xr-x. spark  spark  node-8.9.
drwxr-xr-x. spark  spark  node-v8.9.1
drwxrwxr-x.               scala-2.11.
drwxrwxrwx. hadoop hadoop spark-2.2.-bin-hadoop2.
drwxrwxrwx. spark  spark  zookeeper-3.4.
[root@master opt]# unzip kafka-manager-1.3.3.18.zip
Archive: kafka-manager-1.3.3.18.zip
inflating: kafka-manager-1.3.3.18/lib/kafka-manager.kafka-manager-1.3.3.18-sans-externalized.jar
...
inflating: kafka-manager-1.3.3.18/share/doc/api/index/index-d.html
inflating: kafka-manager-1.3.3.18/README.md
[root@master opt]# ll
total
drwxr-xr-x. root   root   apache-maven-3.5.
drwxr-xr-x. spark  spark  elasticsearch-6.2.
drwxr-xr-x. spark  spark  elasticsearch-head-master
drwxr-xr-x. spark  spark  hadoop-2.9.
drwxr-xr-x. root   root   hdfs-over-ftp-master
drwxr-xr-x.               jdk1..0_171
drwxrwxrwx. root   root   kafka_2.-1.1.
drwxr-xr-x. root   root   kafka-manager-1.3.3.18
drwxr-xr-x. root   root   kafka-manager-1.3.3.18-source
-rw-r--r--. root   root   kafka-manager-1.3.3.18.zip
drwxrwxr-x. spark  spark  kibana-6.2.-linux-x86_64
-rw-r--r--. root   root   mysql57-community-release-el7-.noarch.rpm
drwxrwxr-x. spark  spark  nifi-1.7.
drwxr-xr-x. spark  spark  node-8.9.
drwxr-xr-x. spark  spark  node-v8.9.1
drwxrwxr-x.               scala-2.11.
drwxrwxrwx. hadoop hadoop spark-2.2.-bin-hadoop2.
drwxrwxrwx. spark  spark  zookeeper-3.4.
[root@master opt]# cd kafka-manager-1.3.3.18
[root@master kafka-manager-1.3.3.18]# ls
bin conf lib README.md share
[root@master kafka-manager-1.3.3.18]# cd conf/
[root@master conf]# ls
application.conf consumer.properties logback.xml logger.xml routes
[root@master conf]# vim application.conf

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
play.crypto.secret="^<csmm5Fx4d=r2HEX8pelM3iBkFVv?k[mc;IZE<_Qoq8EkX_/7@Zt6dP05Pzea3U"
play.crypto.secret=${?APPLICATION_SECRET}

# The application languages
# ~~~~~
play.i18n.langs=["en"]

play.http.requestHandler = "play.http.DefaultHttpRequestHandler"
play.http.context = "/"
play.application.loader=loader.KafkaManagerLoader

#kafka-manager.zkhosts="kafka-manager-zookeeper:2181"
kafka-manager.zkhosts="192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181"
kafka-manager.zkhosts=${?ZK_HOSTS}
pinned-dispatcher.type="PinnedDispatcher"
pinned-dispatcher.executor="thread-pool-executor"
application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}

akka.logger-startup-timeout = 60s

basicAuthentication.enabled=false
basicAuthentication.enabled=${?KAFKA_MANAGER_AUTH_ENABLED}
basicAuthentication.username="admin"
basicAuthentication.username=${?KAFKA_MANAGER_USERNAME}
basicAuthentication.password="password"
basicAuthentication.password=${?KAFKA_MANAGER_PASSWORD}
basicAuthentication.realm="Kafka-Manager"
basicAuthentication.excluded=["/api/health"] # ping the health of your instance without authentification

kafka-manager.consumer.properties.file=${?CONSUMER_PROPERTIES_FILE}

"application.conf" 46L, 1682C written
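Note the ${?VAR} lines in the config above: every setting written that way can be overridden through an environment variable instead of editing the file. A sketch (the ZooKeeper hosts mirror the ones used in this walkthrough; the password is obviously a placeholder):

```shell
# Override application.conf settings via the environment variables it declares.
export ZK_HOSTS="192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181"
export KAFKA_MANAGER_AUTH_ENABLED=true
export KAFKA_MANAGER_USERNAME=admin
export KAFKA_MANAGER_PASSWORD=password   # placeholder; change in real deployments
echo "$ZK_HOSTS"
```

This is handy when the same build is deployed against several clusters: the zip stays untouched and only the environment differs.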
5. Start the service
Start the ZooKeeper cluster and the Kafka cluster first, then start the kafka-manager service.
bin/kafka-manager listens on port 9000 by default; pass -Dhttp.port to choose a different port, and -Dconfig.file=conf/application.conf to point at the config file:
[root@master conf]# cd ..
[root@master kafka-manager-1.3.3.18]# ls
bin conf lib README.md share
[root@master kafka-manager-1.3.3.18]#
[root@master kafka-manager-1.3.3.18]# nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port= &
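The port value in the command above was lost; since the Web UI is reached on port 19093 later in this walkthrough, the launch line presumably carried that port. A hedged sketch, with logs sent to a named file instead of nohup.out:

```shell
# Sketch: start kafka-manager in the background on port 19093
# (the port used in the Web UI check later in this walkthrough).
PORT=19093
nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=$PORT \
  > kafka-manager.log 2>&1 &
echo "launched on port $PORT"
```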
Check with jps:
[root@master spark]# jps
QuorumPeerMain
Kafka
ProdServerStart
Jps
[root@master spark]#
Check the Web UI at http://192.168.0.120:19093/ . If the page below appears, the service started successfully.
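Since the config excludes /api/health from authentication, the service can also be probed from the command line. A sketch (host and port as used in this walkthrough; the network call is left commented so it is safe to run anywhere):

```shell
# Liveness probe via the unauthenticated /api/health endpoint.
HEALTH_URL="http://192.168.0.120:19093/api/health"
echo "probing $HEALTH_URL"
# curl -fsS "$HEALTH_URL" && echo "kafka-manager is up"
```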
Everything else (creating clusters, creating topics, inspecting topics, and so on) is done from the UI.
References:
http://www.cnblogs.com/frankdeng/p/9584870.html
https://www.cnblogs.com/dadonggg/p/8205302.html