What are simple commands to check if Hadoop daemons are running?
For example, if I'm trying to figure out why HDFS is not set up correctly, I'll want a way to check whether the namenode/datanode/jobtracker/tasktracker are running on this machine.
Is there any way to check quickly, without looking into logs or using ps (on Linux)?
8 Solutions
#1
13
In the shell, type 'jps' (you may need a JDK to run jps). It lists all the running Java processes, including any Hadoop daemons that are running.
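For example, you can filter for specific daemons (the output below is a hypothetical single-node example, not taken from this question):
$ jps | grep -iE 'namenode|datanode'
1788 NameNode
1938 DataNode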
#2
9
If you see that no Hadoop processes are running in ps -ef | grep hadoop, run sbin/start-dfs.sh. Monitor with hdfs dfsadmin -report:
[mapr@node1 bin]$ hadoop dfsadmin -report
Configured Capacity: 105689374720 (98.43 GB)
Present Capacity: 96537456640 (89.91 GB)
DFS Remaining: 96448180224 (89.82 GB)
DFS Used: 89276416 (85.14 MB)
DFS Used%: 0.09%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)
Name: 192.168.1.16:50010
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 44638208 (42.57 MB)
Non DFS Used: 4986138624 (4.64 GB)
DFS Remaining: 47813910528(44.53 GB)
DFS Used%: 0.08%
DFS Remaining%: 90.48%
Last contact: Tue Aug 20 13:23:32 EDT 2013
Name: 192.168.1.17:50010
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 44638208 (42.57 MB)
Non DFS Used: 4165779456 (3.88 GB)
DFS Remaining: 48634269696(45.29 GB)
DFS Used%: 0.08%
DFS Remaining%: 92.03%
Last contact: Tue Aug 20 13:23:34 EDT 2013
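A compact sketch of the same check-then-start flow (assuming you run it from the Hadoop install directory; pgrep -f matches the daemon class names on the Java command line):
# Start HDFS only if no Hadoop JVM appears in the process table.
pgrep -f org.apache.hadoop > /dev/null || sbin/start-dfs.sh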
#3
5
I did not find a great solution for it, so I used
ps -ef | grep hadoop | grep -P 'namenode|datanode|tasktracker|jobtracker'
just to see if anything is running, and
./hadoop dfsadmin -report
but the latter was not helpful until the server was running.
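A minimal sketch that turns the same ps check into a pass/fail test (Hadoop 1.x daemon names assumed; note that 'namenode' also matches SecondaryNameNode):
#!/bin/sh
# Report any of the four classic Hadoop daemons missing from the process table.
status=0
for d in namenode datanode tasktracker jobtracker; do
    if ! ps -ef | grep -v grep | grep -qi "$d"; then
        echo "MISSING: $d"
        status=1
    fi
done
exit $status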
#4
4
Apart from jps, another good idea is to use the web interfaces for the NameNode and JobTracker provided by Hadoop. They not only show you the processes but also give you a lot of other useful info, like your cluster summary, ongoing jobs, etc. To reach the NN UI, point your web browser to "YOUR_NAMENODE_HOST:50070", and for the JT UI, "YOUR_JOBTRACKER_HOST:50030" (the default web UI ports in Hadoop 1.x; 9000/9001 are typically the RPC ports and will not serve a web page).
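If you would rather stay in the shell, you can probe the same UIs with curl (the hostnames are placeholders as above, and the ports are assumed to be the Hadoop 1.x defaults); an HTTP 200 means the daemon's web server is up:
# Print the HTTP status code for each web UI.
curl -s -o /dev/null -w "%{http_code}\n" http://YOUR_NAMENODE_HOST:50070/
curl -s -o /dev/null -w "%{http_code}\n" http://YOUR_JOBTRACKER_HOST:50030/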
#5
4
You can use the jps command, as vipin said, like this:
/usr/lib/java/jdk1.8.0_25/bin/jps
Of course, change the Java path to the one you have (the path you installed Java in). jps is a nifty tool for checking whether the expected Hadoop processes are running (it has been part of Sun's Java since v1.5.0). The result will be something like this:
2287 TaskTracker
2149 JobTracker
1938 DataNode
2085 SecondaryNameNode
2349 Jps
1788 NameNode
I got the answer from this tutorial: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
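If you are not sure where your JDK lives, a quick search can locate jps (the install roots below are assumptions; adjust them to your system):
# Search common JDK install roots for the jps binary.
find /usr/lib/jvm /usr/lib/java -name jps 2>/dev/null
# Or, if JAVA_HOME is set:
"$JAVA_HOME/bin/jps"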
#6
2
Try the jps command. It lists the Java processes which are up and running.
#7
0
To check whether the Hadoop nodes are running or not:
sudo -u hdfs hdfs dfsadmin -report
Configured Capacity: 28799380685 (26.82 GB)
Present Capacity: 25104842752 (23.38 GB)
DFS Remaining: 25012056064 (23.29 GB)
DFS Used: 92786688 (88.49 MB)
DFS Used%: 0.37%
Under replicated blocks: 436
Blocks with corrupt replicas: 0
Missing blocks: 0
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 127.0.0.1:50010 (localhost.localdomain)
Hostname: localhost.localdomain
Rack: /default
Decommission Status : Normal
Configured Capacity: 28799380685 (26.82 GB)
DFS Used: 92786688 (88.49 MB)
Non DFS Used: 3694537933 (3.44 GB)
DFS Remaining: 25012056064 (23.29 GB)
DFS Used%: 0.32%
DFS Remaining%: 86.85%
Last contact: Thu Mar 01 22:01:38 IST 2018
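To script on top of that report, e.g. to flag dead datanodes, here is a rough sketch (the grep pattern assumes the "N total, M dead" line format shown above):
# Extract the dead-datanode count from the dfsadmin report.
dead=$(sudo -u hdfs hdfs dfsadmin -report 2>/dev/null | grep -oP '\d+(?= dead)')
if [ "${dead:-0}" -eq 0 ]; then echo "all datanodes alive"; else echo "$dead dead datanode(s)"; fi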
#8
-1
Try running this:
for service in /etc/init.d/hadoop-hdfs-*; do $service status; done;
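Note this assumes a packaged install (e.g. CDH) that ships per-daemon init scripts under /etc/init.d/. If the matching MapReduce scripts are installed, the same loop covers them (the package glob below is an assumption based on CDH naming):
for service in /etc/init.d/hadoop-0.20-mapreduce-*; do $service status; done;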