Kafka Web Console is an open-source system; its source code lives at https://github.com/claudemamo/kafka-web-console. Like KafkaOffsetMonitor, it is a Java web application written in Scala for monitoring Apache Kafka. The two systems are similar in functionality, but judging from the source code, Kafka Web Console's implementation is considerably more complex, and building and configuring it is also more troublesome than KafkaOffsetMonitor.
The prerequisites for running this system are:
- Play Framework 2.2.x
- Apache Kafka 0.8.x
- Zookeeper 3.3.3 or 3.3.4
As before, download the source code from https://github.com/claudemamo/kafka-web-console and build it with sbt. Before compiling, we need to make the following changes:

1. Kafka Web Console uses H2 as its default database. It supports the following databases:
- H2 (default)
- PostgreSql
- Oracle
- DB2
- MySQL
- Apache Derby
- Microsoft SQL Server
(Author: 过往记忆 — Date: 2014-08-08 — Blog: https://www.iteblog.com — Original post: https://www.iteblog.com/archives/1084 — a blog focused on Hadoop, Hive, Spark, Shark, and Flume; WeChat account: iteblog_hadoop)
In conf/application.conf, change this:

```
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:play"
# db.default.user=sa
# db.default.password=""
```

to:

```
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost:3306/test"
db.default.user=root
db.default.pass=123456
```

and add the MySQL connector dependency to build.sbt:

```scala
"mysql" % "mysql-connector-java" % "5.1.31"
```
The modified build.sbt (note that the jms/jmxtools/jmxri excludes belong to the kafka artifact, not to the MySQL connector):

```scala
name := "kafka-web-console"

version := "2.1.0-SNAPSHOT"

libraryDependencies ++= Seq(
  jdbc,
  cache,
  "org.squeryl" % "squeryl_2.10" % "0.9.5-6",
  "com.twitter" % "util-zk_2.10" % "6.11.0",
  "com.twitter" % "finagle-core_2.10" % "6.15.0",
  "org.quartz-scheduler" % "quartz" % "2.2.1",
  "mysql" % "mysql-connector-java" % "5.1.31",
  "org.apache.kafka" % "kafka_2.10" % "0.8.1.1"
    exclude("javax.jms", "jms")
    exclude("com.sun.jdmk", "jmxtools")
    exclude("com.sun.jmx", "jmxri")
)

play.Project.playScalaSettings
```
2. Run the three scripts 1.sql, 2.sql, and 3.sql in the conf/evolutions/default/bak directory. Note that these SQL files cannot be run as-is — they contain syntax errors and need some fixes first.
The fixed 1.sql:

```sql
CREATE TABLE zookeepers (
  name VARCHAR(100),
  host VARCHAR(100),
  port INT(100),
  statusId INT(100),
  groupId INT(100),
  PRIMARY KEY (name)
);

CREATE TABLE groups (
  id INT(100),
  name VARCHAR(100),
  PRIMARY KEY (id)
);

CREATE TABLE status (
  id INT(100),
  name VARCHAR(100),
  PRIMARY KEY (id)
);

INSERT INTO groups (id, name) VALUES (0, 'ALL');
INSERT INTO groups (id, name) VALUES (1, 'DEVELOPMENT');
INSERT INTO groups (id, name) VALUES (2, 'PRODUCTION');
INSERT INTO groups (id, name) VALUES (3, 'STAGING');
INSERT INTO groups (id, name) VALUES (4, 'TEST');

INSERT INTO status (id, name) VALUES (0, 'CONNECTING');
INSERT INTO status (id, name) VALUES (1, 'CONNECTED');
INSERT INTO status (id, name) VALUES (2, 'DISCONNECTED');
INSERT INTO status (id, name) VALUES (3, 'DELETED');
```
The fixed 2.sql:

```sql
ALTER TABLE zookeepers ADD COLUMN chroot VARCHAR(100);
```
The fixed 3.sql:

```sql
ALTER TABLE zookeepers DROP PRIMARY KEY;
ALTER TABLE zookeepers ADD COLUMN id INT(100) NOT NULL AUTO_INCREMENT PRIMARY KEY;
ALTER TABLE zookeepers MODIFY COLUMN name VARCHAR(100) NOT NULL;
ALTER TABLE zookeepers MODIFY COLUMN host VARCHAR(100) NOT NULL;
ALTER TABLE zookeepers MODIFY COLUMN port INT(100) NOT NULL;
ALTER TABLE zookeepers MODIFY COLUMN statusId INT(100) NOT NULL;
ALTER TABLE zookeepers MODIFY COLUMN groupId INT(100) NOT NULL;
ALTER TABLE zookeepers ADD UNIQUE (name);

CREATE TABLE offsetHistory (
  id INT(100) AUTO_INCREMENT PRIMARY KEY,
  zookeeperId INT(100),
  topic VARCHAR(255),
  FOREIGN KEY (zookeeperId) REFERENCES zookeepers(id),
  UNIQUE (zookeeperId, topic)
);

CREATE TABLE offsetPoints (
  id INT(100) AUTO_INCREMENT PRIMARY KEY,
  consumerGroup VARCHAR(255),
  timestamp TIMESTAMP,
  offsetHistoryId INT(100),
  partition INT(100),
  offset INT(100),
  logSize INT(100),
  FOREIGN KEY (offsetHistoryId) REFERENCES offsetHistory(id)
);

CREATE TABLE settings (
  key_ VARCHAR(255) PRIMARY KEY,
  value VARCHAR(255)
);

INSERT INTO settings (key_, value) VALUES ('PURGE_SCHEDULE', '0 0 0 ? * SUN *');
INSERT INTO settings (key_, value) VALUES ('OFFSET_FETCH_INTERVAL', '30');
```
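With MySQL configured as above, the three scripts then need to be applied, in order, to the `test` database. A small sketch that prints the commands to run (the user, database name, and paths are assumptions taken from the configuration above — adjust them for your environment, then execute the printed commands):

```shell
# Print (rather than run) the mysql invocations that would apply each
# fixed evolution script, in order. Run the printed commands once MySQL
# is reachable; -p will prompt for the password.
DB_USER=root
DB_NAME=test
for f in 1.sql 2.sql 3.sql; do
  echo "mysql -u$DB_USER -p $DB_NAME < conf/evolutions/default/bak/$f"
done
```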
In project/build.properties, change sbt.version=0.13.0 to the sbt version you actually have installed — for example, I use sbt.version=0.13.15.
Once the changes above are in place, we can compile the downloaded source:

```
# sbt package
```

Compilation is fairly slow, and some dependencies download very slowly, so please be patient. If dependency resolution fails, you may see warnings like the following:
```
[warn] module not found: com.typesafe.play#sbt-plugin;2.2.1
[warn] ==== typesafe-ivy-releases: tried
[warn]   http://repo.typesafe.com/typesafe/ivy-releases/com.typesafe.play/sbt-plugin/scala_2.9.2/sbt_0.12/2.2.1/ivys/ivy.xml
[warn] ==== sbt-plugin-releases: tried
[warn]   http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases/com.typesafe.play/sbt-plugin/scala_2.9.2/sbt_0.12/2.2.1/ivys/ivy.xml
[warn] ==== local: tried
[warn]   /home/iteblog/.ivy2/local/com.typesafe.play/sbt-plugin/scala_2.9.2/sbt_0.12/2.2.1/ivys/ivy.xml
[warn] ==== Typesafe repository: tried
[warn]   http://repo.typesafe.com/typesafe/releases/com/typesafe/play/sbt-plugin_2.9.2_0.12/2.2.1/sbt-plugin-2.2.1.pom
[warn] ==== public: tried
[warn]   http://repo1.maven.org/maven2/com/typesafe/play/sbt-plugin_2.9.2_0.12/2.2.1/sbt-plugin-2.2.1.pom
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
==== local: tried
  /home/iteblog/.ivy2/local/org.scala-sbt/collections/0.13.0/jars/collections.jar
::::::::::::::::::::::::::::::::::::::::::::::
::              FAILED DOWNLOADS            ::
:: ^ see resolution messages for details  ^ ::
::::::::::::::::::::::::::::::::::::::::::::::
:: org.scala-sbt#collections;0.13.0!collections.jar
::::::::::::::::::::::::::::::::::::::::::::::
```
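Two things are worth checking when you hit failures like these. First, the attempted paths contain scala_2.9.2/sbt_0.12, which means the build was launched by an old sbt — make sure project/build.properties matches your installed sbt as described above. Second, the default resolver chain points at old HTTP repository URLs that may no longer resolve. One possible workaround (a sketch, not part of the original article — the URLs are assumptions you should adapt to a repository that still hosts these old artifacts) is to override the launcher's resolvers in ~/.sbt/repositories:

```
[repositories]
  local
  maven-central
  typesafe-releases: https://repo.typesafe.com/typesafe/releases/
  typesafe-ivy-releases: https://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
```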
Note that when starting, you need to delete the .sql files first, otherwise errors will be reported.

Finally, we can start the Kafka Web Console monitoring system with the following command:

```
# sbt run
```

and then view it at http://localhost:9000. Below is a screenshot of the result.
Kafka Web Console
Kafka Web Console is a Java web application for monitoring Apache Kafka. With a modern web browser, you can view from the console:
- Registered brokers
- Topics, partitions, log sizes, and partition leaders
- Consumer groups, individual consumers, consumer owners, partition offsets and lag
- Graphs showing consumer offset and lag history as well as consumer/producer message throughput history.
- Latest published topic messages (requires web browser support for WebSocket)
Furthermore, the console provides a JSON API described in RAML.
The API can be tested using the embedded API Console accessible through the URL http://[hostname]:[port]/api/console.
Requirements
- Play Framework 2.2.x
- Apache Kafka 0.8.x
- Zookeeper 3.3.3 or 3.3.4
Deployment
Consult Play!'s documentation for deployment options and instructions.
Getting Started

Kafka Web Console requires a relational database. By default, the server connects to an embedded H2 database and no database installation or configuration is needed. Consult Play!'s documentation to specify a database for the console. The following databases are supported:

- H2 (default)
- PostgreSql
- Oracle
- DB2
- MySQL
- Apache Derby
- Microsoft SQL Server
Changing the database might necessitate making minor modifications to the DDL to accommodate the new database.

Before you can monitor a broker, you need to register the Zookeeper server associated with it. Filling in the form and clicking on Connect will register the Zookeeper server. Once the console has successfully established a connection with the registered Zookeeper server, it can retrieve all necessary information about brokers, topics, and consumers.
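The lag history that the console graphs (see the feature list above) is simply, per partition, the broker's log size minus the consumer's committed offset, summed across partitions. A minimal sketch with made-up numbers:

```shell
# Consumer lag per partition = logSize - committed offset.
# The "logSize:offset" pairs below are hypothetical.
total_lag=0
for pair in "1200:1150" "800:800"; do
  log_size=${pair%%:*}
  offset=${pair##*:}
  total_lag=$((total_lag + log_size - offset))
done
echo "$total_lag"   # prints 50
```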