I am not able to create a new file or directory, nor am I able to list the existing files and directories. I am using the commands below; could you please suggest what is wrong?
hduser@c:/usr/local/hadoop$ jps
8546 ResourceManager
9181 Jps
1503 NameNode
8674 NodeManager
4398 DataNode
hduser@c:/usr/local/hadoop$ bin/hadoop fs -ls /
ls: Couldn't create proxy provider null
hduser@c:/usr/local/hadoop$ bin/hadoop fs -mkdir /books
mkdir: Couldn't create proxy provider null
hduser@c:/usr/local/hadoop$
Below is the hdfs-site.xml that I am using:
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.replicaion</name>
    <value>2</value>
    <description>to specifiy replication</description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/h3iHA/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/h3iHA/data2</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>c:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>a:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>c:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>a:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>file:///mnt/filer</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.configuredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hduser/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
    shell(/bin/true)
    </value>
  </property>
</configuration>
Below is my core-site.xml, which is the same on both nodes:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
1 Solution
#1
The Java class name set for the property dfs.client.failover.proxy.provider.mycluster is incorrect. It is ConfiguredFailoverProxyProvider, not configuredFailoverProxyProvider; the class name begins with an uppercase C.
Edit the value of this property in hdfs-site.xml:
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
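Since dfs.client.failover.proxy.provider.mycluster is a client-side setting read each time an HDFS client starts, re-running the failing commands after saving the change should be enough to confirm the fix; no daemon restart should be required:
hduser@c:/usr/local/hadoop$ bin/hadoop fs -mkdir /books
hduser@c:/usr/local/hadoop$ bin/hadoop fs -ls /
If in doubt about the exact spelling, the class name can be checked against the HDFS jar shipped with the installation. A minimal sketch, assuming a Hadoop 2.x layout; the version in the jar file name below is hypothetical, so adjust it to match your installation:
hduser@c:/usr/local/hadoop$ jar tf share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar | grep ConfiguredFailoverProxyProvider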