[Kerberos] Accessing a Kerberos-secured cluster from a Java client

Date: 2024-07-11 13:03:38

To access a Kerberos-secured cluster from a Java client, the most important first step is to obtain a usable keytab file from the cluster admin for authentication. After that it is a matter of adjusting the connection configuration. The following uses an HDFS connection as the example.

Obtaining a usable keytab file

1. Request a keytab file that can be used for authentication. A keytab stores the keys of principals; principals generated on the KDC side can be exported into a keytab file.

2. Install the Kerberos client, learn its common commands, and become familiar with how Kerberos authentication works.

3. Configure /etc/krb5.conf. As a Kerberos client you need to tell it where the KDC is; a minimal sketch follows this list.
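
A minimal /etc/krb5.conf only needs to name the default realm and point the client at the KDC. The realm and KDC host below are placeholders, not values from any real cluster:

[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }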

Configuring the connection parameters

Since testing the connection directly from a Java application is inconvenient, it is easier to run a Hadoop command first to check whether the connection can succeed.

1. Authenticate with kinit

kinit -kt path-to-keytab principalName

This first verifies that principalName is valid. If it is, the KDC returns an initial TGT, which is typically valid for a few hours; the cached ticket can be checked as shown below.
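
For example, assuming the keytab path and principal used in the configuration examples later in this post:

# Obtain a TGT using the keytab (path and principal are placeholders).
kinit -kt /EAGLE-HOME/.keytab/eagle.keytab eagle@EXAMPLE.COM
# Inspect the ticket cache; the krbtgt entry and its expiry confirm the TGT was issued.
klist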

2. Run a Hadoop command

hadoop fs -ls hdfs://namenode1:8020

Running this command will raise various exceptions; follow the hints in each exception and add configuration step by step, as follows:

1) Enable Kerberos authentication

hadoop.security.authentication: kerberos

2) Failed to specify server's Kerberos principal name

dfs.namenode.kerberos.principal

3) Server has invalid Kerberos principal

If "Server has invalid Kerberos principal" is still returned after configuring 2), look at the following three things:

  • Check that the server principal is valid and configured correctly. Normally it is enough to set dfs.namenode.kerberos.principal to the same value the namenode itself is configured with.
  • Check that DNS resolution is consistent. The HDFS client makes an RPC call to the namenode to obtain the HDFS service principal, then compares the hostname in that service principal against the canonical name of the namenode hostname. A common failure is that the namenode's canonical name as resolved on the client machine differs from what is in DNS (a small check is sketched after this list).
  • If both of the above look fine and the exception does not help pinpoint the problem, try to eliminate as many uncertain factors as possible to narrow the search. For example, install the same Hadoop version as the server and keep the configuration identical; if the command then succeeds, remove libs and configuration properties step by step to isolate the real cause of the exception.
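
For the DNS case, the comparison the client performs can be reproduced with a few lines of Java, similar in spirit to the HadoopDNSVerifier tool linked in the references. The hostname below is just an example; use the host from dfs.namenode.rpc-address:

import java.net.InetAddress;

public class NamenodeDnsCheck {
    public static void main(String[] args) throws Exception {
        // Example namenode host; replace with the host from dfs.namenode.rpc-address.
        String host = "hadoopnamenode01";
        InetAddress addr = InetAddress.getByName(host);
        // The HDFS client compares the hostname in the server's Kerberos principal
        // against this canonical name, so the two must match.
        System.out.println("address        = " + addr.getHostAddress());
        System.out.println("canonical name = " + addr.getCanonicalHostName());
    }
}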

Java Kerberos authentication code

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class HadoopSecurityUtil {

    public static final String EAGLE_KEYTAB_FILE_KEY = "eagle.keytab.file";
    public static final String EAGLE_USER_NAME_KEY = "eagle.kerberos.principal";

    public static void login(Configuration kConfig) throws IOException {
        // Nothing to do if the keytab or principal is not configured.
        if (kConfig.get(EAGLE_KEYTAB_FILE_KEY) == null || kConfig.get(EAGLE_USER_NAME_KEY) == null) return;
        kConfig.setBoolean("hadoop.security.authorization", true);
        kConfig.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(kConfig);
        // Log in from the keytab; subsequent Hadoop clients run as this user.
        UserGroupInformation.loginUserFromKeytab(kConfig.get(EAGLE_USER_NAME_KEY), kConfig.get(EAGLE_KEYTAB_FILE_KEY));
    }
}
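
Typical usage of login(): build a Configuration with the security properties, log in, then open a FileSystem. The sketch below is only an illustration; the namenode URI, the principal hdfs/_HOST@EXAMPLE.COM, and the paths are placeholders, and for an HA setup the nameservice properties from the HDFS example below would be set as well:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsKerberosExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode1:8020");
        conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@EXAMPLE.COM");
        conf.set("eagle.keytab.file", "/EAGLE-HOME/.keytab/eagle.keytab");
        conf.set("eagle.kerberos.principal", "eagle@EXAMPLE.COM");

        // Kerberos login from the keytab before any HDFS call.
        HadoopSecurityUtil.login(conf);

        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}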

Configuration examples

  • HDFS
{
"fs.defaultFS":"hdfs://nameservice1",
"dfs.nameservices": "nameservice1",
"dfs.ha.namenodes.nameservice1":"namenode1,namenode2",
"dfs.namenode.rpc-address.nameservice1.namenode1": "hadoopnamenode01:8020",
"dfs.namenode.rpc-address.nameservice1.namenode2": "hadoopnamenode02:8020",
"dfs.client.failover.proxy.provider.apollo-phx-nn-ha": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"eagle.keytab.file":"/EAGLE-HOME/.keytab/b_eagle.keytab_apd",
"eagle.kerberos.principal":"eagle@EXAMPLE.COM"
}
  • HBase
 {
"hbase.zookeeper.property.clientPort":"",
"hbase.zookeeper.quorum":"localhost",
"hbase.security.authentication":"kerberos",
"hbase.master.kerberos.principal":"hadoop/_HOST@EXAMPLE.COM",
"zookeeper.znode.parent":"/hbase",
"eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab",
"eagle.kerberos.principal":"eagle@EXAMPLE.COM"
}
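
The HBase properties above (with a real ZooKeeper quorum and client port) can be fed into the same login utility before creating an HBase connection. This is only a sketch under those assumptions; the table listing at the end just proves the secured connection works:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseKerberosExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.set("hbase.security.authentication", "kerberos");
        conf.set("hbase.master.kerberos.principal", "hadoop/_HOST@EXAMPLE.COM");
        conf.set("eagle.keytab.file", "/EAGLE-HOME/.keytab/eagle.keytab");
        conf.set("eagle.kerberos.principal", "eagle@EXAMPLE.COM");

        // Kerberos login from the keytab before opening the connection.
        HadoopSecurityUtil.login(conf);

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            for (TableName table : admin.listTableNames()) {
                System.out.println(table.getNameAsString());
            }
        }
    }
}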

References

  • https://github.com/randomtask1155/HadoopDNSVerifier
  • https://support.pivotal.io/hc/en-us/articles/204391288-hdfs-ls-command-fails-with-Server-has-invalid-Kerberos-principal