Chukwa installation and configuration for cluster monitoring
Unpack the Chukwa tarball:
tar -zxvf chukwa-0.6.0.tar.gz -C /opt/modules/
Set the environment variables in /etc/profile:
vi /etc/profile
export CHUKWA_HOME=/opt/modules/chukwa-0.6.0
export CHUKWA_CONF_DIR=$CHUKWA_HOME/etc/chukwa
export PATH=$PATH:$CHUKWA_HOME/bin:$CHUKWA_HOME/sbin
source /etc/profile
Verify that the configuration took effect:
echo $CHUKWA_HOME
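You can also check the other variable and the PATH entry (a minimal sketch; adjust the expected paths if you unpacked Chukwa elsewhere):
echo $CHUKWA_CONF_DIR   # expect /opt/modules/chukwa-0.6.0/etc/chukwa
which chukwa            # should resolve to a script under $CHUKWA_HOME/bin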
Copy the Chukwa configuration files into Hadoop
Rename the existing log4j.properties and hadoop-metrics2.properties files in the Hadoop configuration directory as backups:
cd /opt/modules/hadoop-1.2.1/conf/
mv log4j.properties log4j.properties.bak
mv hadoop-metrics2.properties hadoop-metrics2.properties.bak
Copy the log4j and hadoop-metrics2 configuration files from the Chukwa configuration directory into the Hadoop configuration directory:
cp /opt/modules/chukwa-0.6.0/etc/chukwa/hadoop-log4j.properties ./log4j.properties
cp /opt/modules/chukwa-0.6.0/etc/chukwa/hadoop-metrics2.properties ./
Copy the Chukwa JARs into Hadoop
Copy the chukwa-0.6.0-client.jar and json-simple-1.1.jar files from Chukwa into the Hadoop lib directory:
cp /opt/modules/chukwa-0.6.0/share/chukwa/chukwa-0.6.0-client.jar /opt/modules/hadoop-1.2.1/lib
cp /opt/modules/chukwa-0.6.0/share/chukwa/lib/json-simple-1.1.jar /opt/modules/hadoop-1.2.1/lib
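To confirm the copies landed where Hadoop will pick them up (a quick check using the paths above), list the target locations:
ls /opt/modules/hadoop-1.2.1/conf/log4j.properties /opt/modules/hadoop-1.2.1/conf/hadoop-metrics2.properties
ls /opt/modules/hadoop-1.2.1/lib | grep -E 'chukwa-0.6.0-client|json-simple'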
Edit the $CHUKWA_HOME/libexec/chukwa-config.sh file:
export CHUKWA_HOME=/opt/modules/chukwa-0.6.0
Edit the $CHUKWA_HOME/etc/chukwa/chukwa-env.sh file:
export JAVA_HOME=/opt/modules/jdk1.7
export HADOOP_CONF_DIR=/opt/modules/hadoop-1.2.1/conf
export HBASE_CONF_DIR=/opt/modules/hbase-0.98.15-hadoop1/conf
Edit the $CHUKWA_HOME/etc/chukwa/collectors file
This file specifies which machine runs the collector process. For example, set it to http://192.168.192.129:8080 so that the Hadoop machine runs the collector.
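For reference, the collectors file is simply a list of collector URLs, one per line; with the single collector above it contains only:
http://192.168.192.129:8080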
Edit the $CHUKWA_HOME/etc/chukwa/initial_adaptors file. The default configuration can be used as-is (no changes required).
To make the test results easier to observe, add a new monitoring adaptor here that watches the testing file under /opt/modules/chukwa-0.6.0/ for changes:
add filetailer.FileTailingAdaptor FooData /opt/modules/chukwa-0.6.0/testing 0
Create the testing file to be monitored:
cd /opt/modules/chukwa-0.6.0
touch testing
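The adaptor line added above follows the agent's add-command syntax, roughly (this breakdown is my reading of the Chukwa adaptor docs, so double-check it against your version):
add <adaptor class> <data type> <adaptor-specific parameters> <initial offset>
Here filetailer.FileTailingAdaptor is the adaptor class, FooData is the data type tag attached to the collected chunks, /opt/modules/chukwa-0.6.0/testing is the file to tail, and 0 means start reading from the beginning of the file.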
Edit the $CHUKWA_HOME/etc/chukwa/chukwa-collector-conf.xml file
Enable the chukwaCollector.pipeline parameter:
<property>
<name>chukwaCollector.pipeline</name>
<value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
</property>
Point the writer at HDFS, so that the collector writes to hdfs://hadoop-master.dragon.org:9000/chukwa/logs:
<property>
<name>writer.hdfs.filesystem</name>
<value>hdfs://hadoop-master.dragon.org:9000</value>
<description>HDFS to dump to</description>
</property>
<property>
<name>chukwaCollector.outputDir</name>
<value>/chukwa/logs/</value>
<description>Chukwa data sink directory</description>
</property>
Confirm that the collector listens on port 8080 (the default):
<property>
<name>chukwaCollector.http.port</name>
<value>8080</value>
<description>The HTTP port number the collector will listen on</description>
</property>
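Once the collector is up you can probe this port directly; one quick check (a sketch based on the ping URL described in the Chukwa admin guide, so verify it against your version) is:
curl 'http://192.168.192.129:8080/chukwa?ping=true'
A short status line in the response means the collector servlet is reachable.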
Configure the agents file: open $CHUKWA_HOME/etc/chukwa/agents and list the agent host, one hostname per line:
hadoop-master.dragon.org
Verify the Chukwa deployment
cd /opt/modules/chukwa-0.6.0/sbin
./start-chukwa.sh
./start-collectors.sh
./start-data-processors.sh
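To confirm the agent actually came up, one option (a sketch; the agent control port is 9093 by default in the Chukwa builds I have seen, so verify for yours) is to connect to the agent's telnet control interface and ask it to list its active adaptors:
telnet localhost 9093
list
The FileTailingAdaptor added earlier should appear in the output; exit telnet with Ctrl+] followed by quit.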
Open http://192.168.192.129:4080/hicc/ in a browser.
The default username and password are both: admin
Start HICC
chukwa hicc
http://<server>:4080/hicc
Shut down:
stop-data-processors.sh
stop-collectors.sh
stop-chukwa.sh
3.2.2 Prepare the log data file and the data-append script
cd /opt/modules/chukwa-0.6.0
mkdir testdata
cd testdata
vi weblog
220.181.108.151 [31/Jan/2012:00:02:32] "GET /home.php?mod=space"
208.115.113.82 [31/Jan/2012:00:07:54] "GET /robots.txt"
220.181.94.221 [31/Jan/2012:00:09:24] "GET /home.php?mod=spacecp"
112.97.24.243 [31/Jan/2012:00:14:48] "GET /data/common.css?AZH HTTP/1.1"
112.97.24.243 [31/Jan/2012:00:14:48] "GET /data/auto.css?AZH HTTP/1.1"
112.97.24.243 [31/Jan/2012:00:14:48] "GET /data/display.css?AZH HTTP/1.1"
220.181.108.175 [31/Jan/2012:00:16:54] "GET /home.php"
220.181.94.221 [31/Jan/2012:00:19:15] "GET /?72 HTTP/1.1" 200 13614 "-"
218.5.72.173 [31/Jan/2012:00:21:39] "GET /forum.php?tid=89 HTTP/1.0"
65.52.109.151 [31/Jan/2012:00:24:47] "GET /robots.txt HTTP/1.1"
220.181.94.221 [31/Jan/2012:00:26:12] "GET /?67 HTTP/1.1"
218.205.245.7 [31/Jan/2012:00:27:16] "GET /forum-58-1.html HTTP/1.0"
vi weblogadd.sh
cat /opt/modules/chukwa-0.6.0/testdata/weblog >> /opt/modules/chukwa-0.6.0/testing
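A slightly fuller version of the same script (a sketch that assumes the paths used throughout this walkthrough) adds a shebang and a guard against a missing source file:
#!/bin/sh
# Append the sample web log records to the file that the FileTailingAdaptor is tailing.
SRC=/opt/modules/chukwa-0.6.0/testdata/weblog
DST=/opt/modules/chukwa-0.6.0/testing
if [ -f "$SRC" ]; then
  cat "$SRC" >> "$DST"
else
  echo "missing $SRC" >&2
  exit 1
fi
If you make it executable with chmod +x weblogadd.sh, you can run it directly instead of through sh.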
View the files on HDFS
Start the Chukwa agents and the collector, then run the weblogadd.sh script to append the weblog data to the monitored testing file, and finally check the data files generated under the /chukwa/logs directory on HDFS:
cd /opt/modules/chukwa-0.6.0/testdata
sudo sh ./weblogadd.sh
hadoop fs -ls /chukwa/logs