Spark SQLContext: reading JSON files


Reading a multi-line JSON file directly with read().json("path") fails with the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;
	... (stack trace omitted)
19/09/01 15:03:48 INFO SparkContext: Invoking stop() from shutdown hook
19/09/01 15:03:48 INFO SparkUI: Stopped Spark web UI at http://192.168.1.2:4040
19/09/01 15:03:48 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/09/01 15:03:48 INFO MemoryStore: MemoryStore cleared
19/09/01 15:03:48 INFO BlockManager: BlockManager stopped
19/09/01 15:03:48 INFO BlockManagerMaster: BlockManagerMaster stopped
19/09/01 15:03:48 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/09/01 15:03:48 INFO SparkContext: Successfully stopped SparkContext
19/09/01 15:03:48 INFO ShutdownHookManager: Shutdown hook called
19/09/01 15:03:48 INFO ShutdownHookManager: Deleting directory /private/var/folders/s1/d29h6xhj3r7dnw9d5m2d626w0000gn/T/spark-cbdf2489-115a-4e4b-93fb-4e5e06c24586
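
For context, a minimal sketch of the kind of read that triggers this error. The session setup, class name, and local master are assumptions; the path follows the snippet later in this post:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ReadMultiLineJson {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .master("local[*]")
                    .appName("read-multiline-json")
                    .getOrCreate();

            // The default JSON source expects one JSON object per line.
            // For a pretty-printed file, every record lands in the internal
            // _corrupt_record column, so show() trips the AnalysisException
            // above: the inferred schema contains nothing but _corrupt_record.
            Dataset<Row> df = spark.read().json("./resources/");
            df.show();

            spark.stop();
        }
    }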

After reformatting the data as single-line JSON (one complete object per line), the error goes away.

Sample data (the multi-line form that fails):

{
  "name": "芙蓉姐姐",
  "age": 12,
  "sex": "W"
}
{
  "name": "女娲",
  "age": 3008,
  "sex" : "W"
}
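
For comparison, the same records in the single-line form that the default reader accepts, one complete JSON object per line:

    {"name": "芙蓉姐姐", "age": 12, "sex": "W"}
    {"name": "女娲", "age": 3008, "sex": "W"}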

I also tested the solutions commonly suggested online, such as the following two approaches (both shown in the snippet below):

Approach 1: set the multiLine option on the reader.

Approach 2: pre-read the data with wholeTextFiles and parse the resulting strings as JSON.


        // Approach 1: enable multi-line parsing via reader options
        //Dataset<Row> json = spark.read().option("multiLine", true).option("mode", "PERMISSIVE").json("./resources/");

        // Approach 2: read each whole file as one string, then parse it as JSON
        // (assuming spark is the SparkSession and jsc the JavaSparkContext)
        Dataset<Row> json = spark.read().json(jsc.wholeTextFiles("./resources/").values());

        json.show();

But the results were poor: the multi-line JSON was still not parsed correctly.


In the end it turned out that both the multiLine option and wholeTextFiles are strict about the file format: the file must be valid, standard JSON. Two pretty-printed objects simply concatenated in one file, as in the sample above, do not form a single valid JSON document. After re-checking and fixing the data, the job ran correctly.
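
In other words, multiLine mode parses each file as one JSON document, so the file itself must be a single valid JSON value, for example a top-level array. A minimal sketch of the working read, reusing the spark session from the sketch above:

    // The file contents must form one valid JSON document, e.g.:
    // [
    //   {"name": "芙蓉姐姐", "age": 12, "sex": "W"},
    //   {"name": "女娲", "age": 3008, "sex": "W"}
    // ]
    Dataset<Row> df = spark.read()
            .option("multiLine", true)   // parse each whole file as one JSON value
            .json("./resources/");
    df.show();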