Error: Error starting Jetty. JSON Metrics may not be available. java.net.BindException: Address already in use

Time: 2024-08-04 18:35:38

Error background:

I had just integrated the Flume plugin into CDH, and it reported this error on startup.

Error symptom:

Error: Error starting Jetty. JSON Metrics may not be available. java.net.BindException: Address already in use

Error starting Jetty. JSON Metrics may not be available.
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:)
at org.mortbay.jetty.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:)
at org.mortbay.jetty.Server.doStart(Server.java:)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:)
at org.apache.flume.instrumentation.http.HTTPMetricsServer.start(HTTPMetricsServer.java:)
at org.apache.flume.node.Application.loadMonitoring(Application.java:)
at org.apache.flume.node.Application.startAllComponents(Application.java:)
at org.apache.flume.node.Application.handleConfigurationEvent(Application.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.google.common.eventbus.EventHandler.handleEvent(EventHandler.java:)
at com.google.common.eventbus.SynchronizedEventHandler.handleEvent(SynchronizedEventHandler.java:)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:)
at com.google.common.eventbus.EventBus.post(EventBus.java:)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$(ScheduledThreadPoolExecutor.java:)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)

Error cause:

To be honest, I never worked out the exact cause; the error went away somewhere in the middle of troubleshooting. Generally speaking, though, java.net.BindException: Address already in use means the port that Flume's embedded Jetty server tries to bind for the JSON metrics endpoint is already held by another process, most often another (or a leftover) Flume agent started with the same HTTP monitoring port.
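
A quick way to confirm the conflict is to look at who already holds the metrics port before restarting the agent. A minimal check, assuming the JSON metrics server listens on 41414 (a common default for Flume HTTP monitoring under CDH); substitute whatever port your agent actually uses:

# See which process is already listening on the assumed metrics port 41414
netstat -tunlp | grep 41414
# The same check with lsof
lsof -i :41414

If the owner turns out to be a stale or duplicate Flume agent, stopping it (or giving one of the agents a different monitoring port) clears the conflict.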

Error fix:

Go into the Flume configuration directory: /etc/flume-ng/conf/

Edit the file: flume.conf

Fill in the relevant entries below according to your own needs.

Once the edit was done, the error disappeared. Note that the copy of this configuration file managed inside CDH may be out of sync with the one on disk, so check both; a rough way to compare them is sketched below, followed by the full flume.conf I used.
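
Assuming the agent is managed by Cloudera Manager, CM generates its own flume.conf under the agent process directory, so a hand edit to /etc/flume-ng/conf/flume.conf is not necessarily what the running agent loads. A rough comparison, where <newest-flume-dir> is a placeholder for the most recent Flume process directory on the host:

# Find the most recent Flume process directory generated by Cloudera Manager
ls -dt /var/run/cloudera-scm-agent/process/*flume* | head -1
# Compare the hand-edited config against the one the running agent uses
diff /etc/flume-ng/conf/flume.conf /var/run/cloudera-scm-agent/process/<newest-flume-dir>/flume.conf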

agent.sources = kafkaSource
agent.channels = memoryChannel
agent.sinks = hdfsSink

#-------- kafkaSource settings -----------------
agent.sources.kafkaSource.channels = memoryChannel
# Source type
agent.sources.kafkaSource.type=org.apache.flume.source.kafka.KafkaSource
# Note: this is the address of Kafka's ZooKeeper
agent.sources.kafkaSource.zookeeperConnect=192.168.52.26:
# Kafka topic to consume from
agent.sources.kafkaSource.topic=AlarmHis
agent.sources.kafkaSource.kafka.consumer.timeout.ms=

#------- memoryChannel settings -------------------------
# Channel type
agent.channels.memoryChannel.type=memory
# Maximum number of events the channel can hold
agent.channels.memoryChannel.capacity=
# Transaction capacity
agent.channels.memoryChannel.transactionCapacity=

#--------- hdfsSink settings ------------------
agent.sinks.hdfsSink.type=hdfs
agent.sinks.hdfsSink.channel = memoryChannel
# HDFS path to write to
agent.sinks.hdfsSink.hdfs.path=hdfs://master:8020/yk/dl/alarm_his
# File name prefix and suffix
agent.sinks.hdfsSink.hdfs.filePrefix = AlarmHis
agent.sinks.hdfsSink.hdfs.fileSuffix=.txt
## Roll to a new file once 60*60*24 seconds have passed
agent.sinks.hdfsSink.hdfs.rollInterval =
## Roll the file once it grows past 1024*1024 bytes
agent.sinks.hdfsSink.hdfs.rollSize =
## Roll after 5 events have been written
agent.sinks.hdfsSink.hdfs.rollCount =
## Number of events to batch before appending to HDFS
agent.sinks.hdfsSink.hdfs.batchSize =
## Use the local timestamp
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent.sinks.hdfsSink.hdfs.writeFormat=Text
# Output file type; the default is SequenceFile, DataStream writes plain text
agent.sinks.hdfsSink.hdfs.fileType=DataStream
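
Worth knowing, even though it is not the route I took: the port the JSON metrics server binds is normally chosen when the agent is launched, through Flume's flume.monitoring.type and flume.monitoring.port properties. A sketch, with 34545 standing in for any free port:

# Start the agent with the HTTP metrics server moved to an unused port
flume-ng agent --name agent --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/flume.conf \
  -Dflume.monitoring.type=http \
  -Dflume.monitoring.port=34545

Under CDH these startup arguments are managed by Cloudera Manager rather than typed by hand, so the equivalent change is made in the Flume service configuration there.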