Submitting a Spark on YARN job fails. The key error is "There are 1 datanode(s) running and 1 node(s) are excluded in this operation": the cluster's only DataNode was excluded from the write. The relevant log output:
1088 [main] INFO - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
1089 [main] INFO - Will allocate AM container, with 1408 MB memory including 384 MB overhead
1089 [main] INFO - Setting up container launch context for our AM
1091 [main] INFO - Setting up the launch environment for our AM container
1106 [main] INFO - Preparing resources for our AM container
1155 [main] WARN - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
3343 [main] INFO - Uploading resource file:/tmp/spark-510bf868-cc29-4e2f-b1c0-55136c98eb32/__spark_libs__2655251803993337384.zip -> hdfs://dss0:8020/user/root/.sparkStaging/application_1645757337254_0003/__spark_libs__2655251803993337384.zip
3475 [Thread-7] INFO - Exception in createBlockOutputStream blk_1073742400_1577
java.io.EOFException: Unexpected EOF while trying to read response from server
	at (:458)
	at (:1762)
	at (:1679)
	at (:716)
3477 [Thread-7] WARN - Abandoning BP-242088412-192.168.78.12-1644827784895:blk_1073742400_1577
3483 [Thread-7] WARN - Excluding datanode DatanodeInfoWithStorage[192.168.78.12:9866,DS-686c7ba7-eda5-442a-a041-cf44347b8bfc,DISK]
3503 [Thread-7] WARN - DataStreamer Exception
org.apache.hadoop.ipc.RemoteException: File /user/root/.sparkStaging/application_1645757337254_0003/__spark_libs__2655251803993337384.zip could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(:2116)
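Reading the trace: the client hit an EOF while opening a block output stream to the DataNode at 192.168.78.12:9866, excluded that node, and then had zero nodes left to satisfy minReplication. On a single-DataNode cluster it is also worth confirming that the replication factor is 1, since a higher value can never be satisfied. A minimal hdfs-site.xml fragment for that setup (assuming a pseudo-distributed cluster; adjust to your deployment):

```xml
<configuration>
  <!-- One DataNode means at most one replica can ever be placed. -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```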
Fix: run "hadoop fs -rm -r -skipTrash /user/root/.sparkStaging/*" to clear the cached job files under the staging directory; this resolved the error. If the problem persists, try restarting HDFS and YARN, or stop some services to free up cluster resources, then resubmit the job.
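The cleanup plus a couple of follow-up checks can be sketched as the script below. This is a hedged sketch, not a verified procedure: the NameNode host (dss0), DataNode address, and port 9866 are taken from the log above, and the `nc` reachability check is an assumption about how your network is laid out.

```shell
#!/bin/bash
# Sketch: clear stale Spark staging files, then sanity-check the DataNode.
# Hostnames, IPs, and ports below come from the error log and are examples.

# 1. Remove cached job files left by earlier failed submissions.
hadoop fs -rm -r -skipTrash '/user/root/.sparkStaging/*'

# 2. Confirm the DataNode is alive and registered with the NameNode.
hdfs dfsadmin -report | grep -A 1 'Live datanodes'

# 3. Verify the DataNode transfer port (9866 in the log) is reachable
#    from the machine submitting the job.
nc -z -w 3 192.168.78.12 9866 && echo "datanode port reachable"

# 4. Resubmit the Spark job only after the checks above pass.
```

If step 3 fails, the exclusion is most likely a firewall or hostname-resolution problem between the client and the DataNode rather than a staging-directory issue.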