[root@linux-node1 bin]# ./spark-submit \
> --class com.kou.List2Hive \
> --master yarn \
> --deploy-mode cluster \
> sparkTestNew-1.0.jar
18/11/27 21:17:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/11/27 21:17:58 INFO client.RMProxy: Connecting to ResourceManager at /192.168.56.11:8032
18/11/27 21:17:58 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
18/11/27 21:17:59 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
18/11/27 21:17:59 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
18/11/27 21:17:59 INFO yarn.Client: Setting up container launch context for our AM
18/11/27 21:17:59 INFO yarn.Client: Setting up the launch environment for our AM container
18/11/27 21:17:59 INFO yarn.Client: Preparing resources for our AM container
18/11/27 21:18:01 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/11/27 21:18:04 INFO yarn.Client: Uploading resource file:/tmp/spark-a254e21c-8611-4222-926b-5053afb94903/__spark_libs__6405791690239431196.zip -> hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0004/__spark_libs__6405791690239431196.zip
18/11/27 21:18:06 INFO yarn.Client: Uploading resource file:/home/koushengrui/app/spark/bin/sparkTestNew-1.0.jar -> hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0004/sparkTestNew-1.0.jar
18/11/27 21:18:06 INFO yarn.Client: Uploading resource file:/tmp/spark-a254e21c-8611-4222-926b-5053afb94903/__spark_conf__1355711675895949044.zip -> hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0004/__spark_conf__.zip
18/11/27 21:18:07 INFO spark.SecurityManager: Changing view acls to: root
18/11/27 21:18:07 INFO spark.SecurityManager: Changing modify acls to: root
18/11/27 21:18:07 INFO spark.SecurityManager: Changing view acls groups to:
18/11/27 21:18:07 INFO spark.SecurityManager: Changing modify acls groups to:
18/11/27 21:18:07 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/11/27 21:18:07 INFO yarn.Client: Submitting application application_1543322675361_0004 to ResourceManager
18/11/27 21:18:07 INFO impl.YarnClientImpl: Submitted application application_1543322675361_0004
18/11/27 21:18:08 INFO yarn.Client: Application report for application_1543322675361_0004 (state: ACCEPTED)
18/11/27 21:18:08 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1543324687092
	 final status: UNDEFINED
	 tracking URL: http://linux-node1:8088/proxy/application_1543322675361_0004/
	 user: root
18/11/27 21:18:09 INFO yarn.Client: Application report for application_1543322675361_0004 (state: ACCEPTED)
18/11/27 21:18:19 INFO yarn.Client: Application report for application_1543322675361_0004 (state: ACCEPTED)
18/11/27 21:18:20 INFO yarn.Client: Application report for application_1543322675361_0004 (state: RUNNING)
18/11/27 21:18:20 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 192.168.56.11
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1543324687092
	 final status: UNDEFINED
	 tracking URL: http://linux-node1:8088/proxy/application_1543322675361_0004/
	 user: root
18/11/27 21:18:50 INFO yarn.Client: Application report for application_1543322675361_0004 (state: RUNNING)
18/11/27 21:18:51 INFO yarn.Client: Application report for application_1543322675361_0004 (state: FINISHED)
18/11/27 21:18:51 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 192.168.56.11
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1543324687092
	 final status: SUCCEEDED
	 tracking URL: http://linux-node1:8088/proxy/application_1543322675361_0004/
	 user: root
18/11/27 21:18:51 INFO yarn.Client: Deleted staging directory hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0004
18/11/27 21:18:51 INFO util.ShutdownHookManager: Shutdown hook called
18/11/27 21:18:51 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-a254e21c-8611-4222-926b-5053afb94903
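
Note the client-side warning above (21:18:01): with neither spark.yarn.jars nor spark.yarn.archive set, the entire SPARK_HOME jar directory is zipped and re-uploaded to HDFS on every submission (the ~200 MB __spark_libs__ upload, size 209021605 in the resources list below). A common fix is to stage the Spark jars on HDFS once and point spark.yarn.jars at them; a minimal sketch, assuming /spark/jars is a path you choose:

    # one-time upload of the Spark jars
    hdfs dfs -mkdir -p /spark/jars
    hdfs dfs -put $SPARK_HOME/jars/* /spark/jars/

    # then in conf/spark-defaults.conf:
    spark.yarn.jars  hdfs://192.168.56.11:9000/spark/jars/*

After that, the client only uploads the application jar and __spark_conf__, which noticeably shortens the "Preparing resources" phase.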
YARN AM log:
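(In cluster deploy mode the driver runs inside the ApplicationMaster, so the application's own output lands here rather than in the client console above. With YARN log aggregation enabled, this log can be retrieved after the run with:

    yarn logs -applicationId application_1543322675361_0004
)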
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/filecache/18/__spark_libs__6405791690239431196.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/koushengrui/app/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/11/27 21:18:11 INFO util.SignalUtils: Registered signal handler for TERM
18/11/27 21:18:11 INFO util.SignalUtils: Registered signal handler for HUP
18/11/27 21:18:11 INFO util.SignalUtils: Registered signal handler for INT
18/11/27 21:18:13 INFO yarn.ApplicationMaster: Preparing Local resources
18/11/27 21:18:14 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1543322675361_0004_000001
18/11/27 21:18:14 INFO spark.SecurityManager: Changing view acls to: root
18/11/27 21:18:14 INFO spark.SecurityManager: Changing modify acls to: root
18/11/27 21:18:14 INFO spark.SecurityManager: Changing view acls groups to:
18/11/27 21:18:14 INFO spark.SecurityManager: Changing modify acls groups to:
18/11/27 21:18:14 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/11/27 21:18:14 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
18/11/27 21:18:14 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
18/11/27 21:18:15 INFO spark.SparkContext: Running Spark version 2.2.1
18/11/27 21:18:15 INFO spark.SparkContext: Submitted application: com.kou.List2Hive
18/11/27 21:18:15 INFO spark.SecurityManager: Changing view acls to: root
18/11/27 21:18:15 INFO spark.SecurityManager: Changing modify acls to: root
18/11/27 21:18:15 INFO spark.SecurityManager: Changing view acls groups to:
18/11/27 21:18:15 INFO spark.SecurityManager: Changing modify acls groups to:
18/11/27 21:18:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/11/27 21:18:16 INFO util.Utils: Successfully started service 'sparkDriver' on port 44062.
18/11/27 21:18:16 INFO spark.SparkEnv: Registering MapOutputTracker
18/11/27 21:18:16 INFO spark.SparkEnv: Registering BlockManagerMaster
18/11/27 21:18:16 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/11/27 21:18:16 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/11/27 21:18:16 INFO storage.DiskBlockManager: Created local directory at /home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/appcache/application_1543322675361_0004/blockmgr-d7cd186a-e7c1-4031-a4b1-5d1d47332d0b
18/11/27 21:18:16 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
18/11/27 21:18:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/11/27 21:18:17 INFO util.log: Logging initialized @7141ms
18/11/27 21:18:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/11/27 21:18:17 INFO server.Server: jetty-9.3.z-SNAPSHOT
18/11/27 21:18:17 INFO server.Server: Started @7351ms
18/11/27 21:18:17 INFO server.AbstractConnector: Started ServerConnector@46a4bcbe{HTTP/1.1,[http/1.1]}{0.0.0.0:39223}
18/11/27 21:18:17 INFO util.Utils: Successfully started service 'SparkUI' on port 39223.
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@c250ad0{/jobs,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@13f37d70{/jobs/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1f8250f{/jobs/job,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@69c80b4e{/jobs/job/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@52096ce3{/stages,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@f74409a{/stages/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7bef7536{/stages/stage,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2cadc4fe{/stages/stage/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6e120907{/stages/pool,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5a5384ce{/stages/pool/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1a13e967{/storage,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1d32a279{/storage/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@40f6c7ad{/storage/rdd,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1acd76ff{/storage/rdd/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@73f13905{/environment,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3150ed0e{/environment/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2c5545e1{/executors,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@293923e{/executors/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6125c9d3{/executors/threadDump,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@227067c9{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4531d781{/static,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3bfa1457{/,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@261b14cd{/api,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@78bdb5a6{/jobs/job/kill,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1861a117{/stages/stage/kill,null,AVAILABLE,@Spark}
18/11/27 21:18:17 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.11:39223
18/11/27 21:18:17 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
18/11/27 21:18:17 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1543322675361_0004 and attemptId Some(appattempt_1543322675361_0004_000001)
18/11/27 21:18:17 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41535.
18/11/27 21:18:17 INFO netty.NettyBlockTransferService: Server created on 192.168.56.11:41535
18/11/27 21:18:17 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/11/27 21:18:17 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.11, 41535, None)
18/11/27 21:18:17 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.56.11:41535 with 366.3 MB RAM, BlockManagerId(driver, 192.168.56.11, 41535, None)
18/11/27 21:18:17 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.11, 41535, None)
18/11/27 21:18:17 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.11, 41535, None)
18/11/27 21:18:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3a948c6b{/metrics/json,null,AVAILABLE,@Spark}
18/11/27 21:18:19 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@192.168.56.11:44062)
18/11/27 21:18:19 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_YARN_STAGING_DIR -> *********(redacted)
    SPARK_USER -> *********(redacted)
    SPARK_YARN_MODE -> true

  command:
    {{JAVA_HOME}}/bin/java \ 
      -server \ 
      -Xmx1024m \ 
      -Djava.io.tmpdir={{PWD}}/tmp \ 
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ 
      -XX:OnOutOfMemoryError='kill %p' \ 
      org.apache.spark.executor.CoarseGrainedExecutorBackend \ 
      --driver-url \ 
      spark://CoarseGrainedScheduler@192.168.56.11:44062 \ 
      --executor-id \ 
      <executorId> \ 
      --hostname \ 
      <hostname> \ 
      --cores \ 
      1 \ 
      --app-id \ 
      application_1543322675361_0004 \ 
      --user-class-path \ 
      file:$PWD/__app__.jar \ 
      1><LOG_DIR>/stdout \ 
      2><LOG_DIR>/stderr

  resources:
    __app__.jar -> resource { scheme: "hdfs" host: "192.168.56.11" port: 9000 file: "/user/root/.sparkStaging/application_1543322675361_0004/sparkTestNew-1.0.jar" } size: 6482 timestamp: 1543324686471 type: FILE visibility: PRIVATE
    __spark_libs__ -> resource { scheme: "hdfs" host: "192.168.56.11" port: 9000 file: "/user/root/.sparkStaging/application_1543322675361_0004/__spark_libs__6405791690239431196.zip" } size: 209021605 timestamp: 1543324686279 type: ARCHIVE visibility: PRIVATE
    __spark_conf__ -> resource { scheme: "hdfs" host: "192.168.56.11" port: 9000 file: "/user/root/.sparkStaging/application_1543322675361_0004/__spark_conf__.zip" } size: 83351 timestamp: 1543324686954 type: ARCHIVE visibility: PRIVATE
===============================================================================
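(A note on the numbers: both the AM container in the client log at 21:17:59 and each executor container below are sized at 1408 MB, which is the default 1 GB JVM heap, the -Xmx1024m above, plus the YARN memory overhead, computed in Spark 2.2 as max(384 MB, 10% of the heap) = 384 MB. Likewise, the 2 executors with 1 core each requested below are the Spark-on-YARN defaults. To size these explicitly, pass the standard flags to spark-submit; the values here are examples, not recommendations:

    --driver-memory 1g --executor-memory 2g --executor-cores 2 --num-executors 2
)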
18/11/27 21:18:19 INFO client.RMProxy: Connecting to ResourceManager at /192.168.56.11:8030
18/11/27 21:18:19 INFO yarn.YarnRMClient: Registering the ApplicationMaster
18/11/27 21:18:19 INFO yarn.YarnAllocator: Will request 2 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
18/11/27 21:18:19 INFO yarn.YarnAllocator: Submitted 2 unlocalized container requests.
18/11/27 21:18:19 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
18/11/27 21:18:21 INFO impl.AMRMClientImpl: Received new token for : linux-node1:46122
18/11/27 21:18:21 INFO yarn.YarnAllocator: Launching container container_1543322675361_0004_01_000002 on host linux-node1 for executor with ID 1
18/11/27 21:18:21 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
18/11/27 21:18:21 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
18/11/27 21:18:21 INFO impl.ContainerManagementProtocolProxy: Opening proxy : linux-node1:46122
18/11/27 21:18:22 INFO yarn.YarnAllocator: Launching container container_1543322675361_0004_01_000003 on host linux-node1 for executor with ID 2
18/11/27 21:18:22 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
18/11/27 21:18:22 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
18/11/27 21:18:22 INFO impl.ContainerManagementProtocolProxy: Opening proxy : linux-node1:46122
18/11/27 21:18:25 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 0 of them.
18/11/27 21:18:26 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.56.11:43818) with ID 1
18/11/27 21:18:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager linux-node1:35265 with 366.3 MB RAM, BlockManagerId(1, linux-node1, 35265, None)
18/11/27 21:18:29 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.56.11:43822) with ID 2
18/11/27 21:18:29 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/11/27 21:18:29 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
18/11/27 21:18:29 INFO kou.List2Hive: conf= [(spark.driver.port,44062), (spark.driver.host,192.168.56.11), (spark.yarn.app.id,application_1543322675361_0004), (spark.submit.deployMode,cluster), (spark.app.id,application_1543322675361_0004), (spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS,linux-node1), (spark.executor.id,driver), (spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES,http://linux-node1:8088/proxy/application_1543322675361_0004), (spark.app.name,com.kou.List2Hive), (spark.master,yarn), (spark.ui.port,0), (spark.sql.catalogImplementation,hive), (spark.yarn.app.container.log.dir,/home/koushengrui/app/hadoop/logs/userlogs/application_1543322675361_0004/container_1543322675361_0004_01_000001), (spark.ui.filters,org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter)]
18/11/27 21:18:29 INFO internal.SharedState: loading hive config file: jar:file:/home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/appcache/application_1543322675361_0004/container_1543322675361_0004_01_000001/__app__.jar!/hive-site.xml
18/11/27 21:18:29 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/appcache/application_1543322675361_0004/container_1543322675361_0004_01_000001/spark-warehouse').
18/11/27 21:18:29 INFO internal.SharedState: Warehouse path is 'file:/home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/appcache/application_1543322675361_0004/container_1543322675361_0004_01_000001/spark-warehouse'.
18/11/27 21:18:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager linux-node1:35453 with 366.3 MB RAM, BlockManagerId(2, linux-node1, 35453, None)
18/11/27 21:18:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@11d5fa61{/SQL,null,AVAILABLE,@Spark}
18/11/27 21:18:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4399e6bb{/SQL/json,null,AVAILABLE,@Spark}
18/11/27 21:18:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@cc30672{/SQL/execution,null,AVAILABLE,@Spark}
18/11/27 21:18:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1883b28c{/SQL/execution/json,null,AVAILABLE,@Spark}
18/11/27 21:18:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@16d955bc{/static/sql,null,AVAILABLE,@Spark}
18/11/27 21:18:30 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/11/27 21:18:32 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
18/11/27 21:18:32 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
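(These two warnings, repeated below, are harmless: the hive-site.xml bundled with the application carries properties introduced in Hive 2.x, while Spark talks to the metastore with its built-in Hive 1.2.1 client — see the "Initializing HiveMetastoreConnection version 1.2.1" line above — which simply ignores keys it does not know.)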
18/11/27 21:18:32 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/11/27 21:18:32 INFO metastore.ObjectStore: ObjectStore, initialize called
18/11/27 21:18:32 INFO DataNucleus.Persistence: Property datanucleus.schema.autoCreateTables unknown - will be ignored
18/11/27 21:18:32 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/11/27 21:18:32 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/11/27 21:18:34 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
18/11/27 21:18:34 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
18/11/27 21:18:34 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/11/27 21:18:37 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:18:37 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:18:38 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:18:38 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:18:38 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
18/11/27 21:18:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is OTHER
18/11/27 21:18:38 INFO metastore.ObjectStore: Initialized ObjectStore
18/11/27 21:18:38 INFO metastore.HiveMetaStore: Added admin role in metastore
18/11/27 21:18:38 INFO metastore.HiveMetaStore: Added public role in metastore
18/11/27 21:18:39 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
18/11/27 21:18:39 INFO metastore.HiveMetaStore: 0: get_all_databases
18/11/27 21:18:39 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
18/11/27 21:18:39 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
18/11/27 21:18:39 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
18/11/27 21:18:39 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:18:39 INFO session.SessionState: Created local directory: /home/hive/iotmp/9c58e9be-d8d9-4a9a-8b65-6310fdb3193e_resources
18/11/27 21:18:39 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/9c58e9be-d8d9-4a9a-8b65-6310fdb3193e
18/11/27 21:18:39 INFO session.SessionState: Created local directory: /home/hive/iotmp/root/9c58e9be-d8d9-4a9a-8b65-6310fdb3193e
18/11/27 21:18:39 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/9c58e9be-d8d9-4a9a-8b65-6310fdb3193e/_tmp_space.db
18/11/27 21:18:39 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/appcache/application_1543322675361_0004/container_1543322675361_0004_01_000001/spark-warehouse
18/11/27 21:18:39 INFO metastore.HiveMetaStore: 0: get_database: default
18/11/27 21:18:39 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
18/11/27 21:18:39 INFO metastore.HiveMetaStore: 0: get_database: global_temp
18/11/27 21:18:39 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: global_temp
18/11/27 21:18:39 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
18/11/27 21:18:39 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
18/11/27 21:18:39 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
18/11/27 21:18:39 INFO session.SessionState: Created local directory: /home/hive/iotmp/a432baa6-ce4f-42db-b41a-f8b5c53f02b5_resources
18/11/27 21:18:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/a432baa6-ce4f-42db-b41a-f8b5c53f02b5
18/11/27 21:18:40 INFO session.SessionState: Created local directory: /home/hive/iotmp/root/a432baa6-ce4f-42db-b41a-f8b5c53f02b5
18/11/27 21:18:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/a432baa6-ce4f-42db-b41a-f8b5c53f02b5/_tmp_space.db
18/11/27 21:18:40 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/appcache/application_1543322675361_0004/container_1543322675361_0004_01_000001/spark-warehouse
18/11/27 21:18:40 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/11/27 21:18:40 INFO kou.List2Hive: runtimeConfig= Map(spark.driver.host -> 192.168.56.11, spark.ui.port -> 0, spark.driver.port -> 44062, spark.yarn.app.id -> application_1543322675361_0004, spark.app.name -> com.kou.List2Hive, spark.executor.id -> driver, spark.yarn.app.container.log.dir -> /home/koushengrui/app/hadoop/logs/userlogs/application_1543322675361_0004/container_1543322675361_0004_01_000001, spark.submit.deployMode -> cluster, spark.master -> yarn, spark.ui.filters -> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, spark.sql.catalogImplementation -> hive, spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS -> linux-node1, spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES -> http://linux-node1:8088/proxy/application_1543322675361_0004, spark.app.id -> application_1543322675361_0004)
18/11/27 21:18:42 INFO execution.SparkSqlParser: Parsing command: ss
18/11/27 21:18:43 INFO execution.SparkSqlParser: Parsing command: use default
18/11/27 21:18:43 INFO metastore.HiveMetaStore: 0: get_database: default
18/11/27 21:18:43 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
18/11/27 21:18:43 INFO execution.SparkSqlParser: Parsing command: insert into table people select * from ss
18/11/27 21:18:43 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:18:43 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:18:43 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:43 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:43 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:43 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:43 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:43 INFO parser.CatalystSqlParser: Parsing command: array<string>
18/11/27 21:18:44 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://192.168.56.11:9000/user/hive/warehouse/people/.hive-staging_hive_2018-11-27_21-18-44_230_3895375029954795466-1
18/11/27 21:18:44 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/11/27 21:18:44 INFO datasources.SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
18/11/27 21:18:45 INFO codegen.CodeGenerator: Code generated in 457.3741 ms
18/11/27 21:18:46 INFO spark.SparkContext: Starting job: sql at List2Hive.java:31
18/11/27 21:18:46 INFO scheduler.DAGScheduler: Got job 0 (sql at List2Hive.java:31) with 1 output partitions
18/11/27 21:18:46 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (sql at List2Hive.java:31)
18/11/27 21:18:46 INFO scheduler.DAGScheduler: Parents of final stage: List()
18/11/27 21:18:46 INFO scheduler.DAGScheduler: Missing parents: List()
18/11/27 21:18:46 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at sql at List2Hive.java:31), which has no missing parents
18/11/27 21:18:46 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 157.2 KB, free 366.1 MB)
18/11/27 21:18:46 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 56.0 KB, free 366.1 MB)
18/11/27 21:18:46 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.56.11:41535 (size: 56.0 KB, free: 366.2 MB)
18/11/27 21:18:46 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
18/11/27 21:18:46 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at sql at List2Hive.java:31) (first 15 tasks are for partitions Vector(0))
18/11/27 21:18:46 INFO cluster.YarnClusterScheduler: Adding task set 0.0 with 1 tasks
18/11/27 21:18:46 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, linux-node1, executor 1, partition 0, PROCESS_LOCAL, 5096 bytes)
18/11/27 21:18:47 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on linux-node1:35265 (size: 56.0 KB, free: 366.2 MB)
18/11/27 21:18:50 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 3730 ms on linux-node1 (executor 1) (1/1)
18/11/27 21:18:50 INFO cluster.YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
18/11/27 21:18:50 INFO scheduler.DAGScheduler: ResultStage 0 (sql at List2Hive.java:31) finished in 3.748 s
18/11/27 21:18:50 INFO scheduler.DAGScheduler: Job 0 finished: sql at List2Hive.java:31, took 4.209767 s
18/11/27 21:18:50 INFO datasources.FileFormatWriter: Job null committed.
18/11/27 21:18:50 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:18:50 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:18:50 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:18:50 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:18:50 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
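(Despite the ERROR level, this message is benign on clusters without HDFS transparent encryption: dfs.encryption.key.provider.uri is simply not configured, so no KMS key provider is created.)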
18/11/27 21:18:50 INFO metadata.Hive: Renaming src: hdfs://192.168.56.11:9000/user/hive/warehouse/people/.hive-staging_hive_2018-11-27_21-18-44_230_3895375029954795466-1/-ext-10000/part-00000-5fd46e50-9227-49b4-b5c4-3efdc491ef55-c000, dest: hdfs://192.168.56.11:9000/user/hive/warehouse/people/part-00000-5fd46e50-9227-49b4-b5c4-3efdc491ef55-c000, Status:true
18/11/27 21:18:50 INFO metastore.HiveMetaStore: 0: alter_table: db=default tbl=people newtbl=people
18/11/27 21:18:50 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=alter_table: db=default tbl=people newtbl=people
18/11/27 21:18:50 INFO hive.log: Updating table stats fast for people
18/11/27 21:18:50 INFO hive.log: Updated size of table people to 0
18/11/27 21:18:50 INFO execution.SparkSqlParser: Parsing command: `default`.`people`
18/11/27 21:18:50 INFO metastore.HiveMetaStore: 0: get_database: default
18/11/27 21:18:50 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
18/11/27 21:18:51 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:18:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:18:51 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:18:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:18:51 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:51 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:51 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:51 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:51 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:18:51 INFO server.AbstractConnector: Stopped Spark@46a4bcbe{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
18/11/27 21:18:51 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.56.11:39223
18/11/27 21:18:51 INFO yarn.YarnAllocator: Driver requested a total number of 0 executor(s).
18/11/27 21:18:51 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
18/11/27 21:18:51 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/11/27 21:18:51 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
18/11/27 21:18:51 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/11/27 21:18:51 INFO memory.MemoryStore: MemoryStore cleared
18/11/27 21:18:51 INFO storage.BlockManager: BlockManager stopped
18/11/27 21:18:51 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/11/27 21:18:51 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/11/27 21:18:51 INFO spark.SparkContext: Successfully stopped SparkContext
18/11/27 21:18:51 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
18/11/27 21:18:51 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
18/11/27 21:18:51 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
18/11/27 21:18:51 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0004
18/11/27 21:18:51 INFO util.ShutdownHookManager: Shutdown hook called
18/11/27 21:18:51 INFO util.ShutdownHookManager: Deleting directory /home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/appcache/application_1543322675361_0004/spark-c58fd913-78ae-4a9a-a369-8278581a4762
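
For reference, the application itself is small, and although its source is not shown here, the log pins down its shape: a SparkSession with Hive support (spark.sql.catalogImplementation=hive in the logged conf), a "conf=" dump of the SparkConf, an in-memory Dataset registered as temp view ss (parsed at 21:18:42), the statements "use default" and "insert into table people select * from ss" (the sql call at List2Hive.java:31), and a "runtimeConfig=" dump. A minimal sketch along those lines — the Person bean, its fields, and the sample rows are assumptions, not the real code:

    package com.kou;

    import java.io.Serializable;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class List2Hive {

        private static final Logger log = LoggerFactory.getLogger(List2Hive.class);

        // Hypothetical bean; the real column names and types are not visible in the log.
        public static class Person implements Serializable {
            private String name;
            private String age;
            public Person() {}
            public Person(String name, String age) { this.name = name; this.age = age; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getAge() { return age; }
            public void setAge(String age) { this.age = age; }
        }

        public static void main(String[] args) {
            // enableHiveSupport() is what puts spark.sql.catalogImplementation=hive in the logged conf
            SparkSession spark = SparkSession.builder()
                    .appName(List2Hive.class.getName())
                    .enableHiveSupport()
                    .getOrCreate();

            // matches the "conf= [(...)]" line at 21:18:29
            log.info("conf= " + Arrays.toString(spark.sparkContext().getConf().getAll()));

            // build a Dataset from a local List and register it as temp view "ss"
            List<Person> data = Arrays.asList(new Person("zhangsan", "28"), new Person("lisi", "30"));
            Dataset<Row> ds = spark.createDataFrame(data, Person.class);
            ds.createOrReplaceTempView("ss");

            spark.sql("use default");
            spark.sql("insert into table people select * from ss"); // List2Hive.java:31 in the log

            // matches the "runtimeConfig= Map(...)" line at 21:18:40
            log.info("runtimeConfig= " + spark.conf().getAll());

            spark.stop();
        }
    }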