Create a pandas DataFrame from deeply nested JSON.

Date: 2022-04-08 22:44:35

I'm trying to create a single Pandas DataFrame object from a deeply nested JSON string.

The JSON schema is:

{"intervals": [
    {
        pivots: "Jane Smith",
        "series": [
            {
                "interval_id": 0,
                "p_value": 1
            },
            {
                "interval_id": 1,
                "p_value": 1.1162791357932633e-8
            },
            {
                "interval_id": 2,
                "p_value": 0.0000028675012051504467
            }
        ],
    },
    {
        "pivots": "Bob Smith",
        "series": [
            {
                "interval_id": 0,
                "p_value": 1
            },
            {
                "interval_id": 1,
                "p_value": 1.1162791357932633e-8
            },
            {
                "interval_id": 2,
                "p_value": 0.0000028675012051504467
            }
        ]
    }
]}

Desired outcome: I need to flatten this to produce a table:

Actor Interval_id Interval_id Interval_id ... 
Jane Smith      1         1.1162        0.00000 ... 
Bob Smith       1         1.1162        0.00000 ... 

The first column is the Pivots values, and the remaining columns are the values of the keys interval_id and p_value stored in the list series.

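That wide shape can be sketched with `pd.json_normalize` plus a pivot (a hedged sketch, not the asker's code; `sample` below is a hypothetical stand-in for the parsed and corrected JSON):

```python
import pandas as pd

# Hypothetical stand-in for the parsed (and corrected) JSON payload.
sample = {"intervals": [
    {"pivots": "Jane Smith", "series": [
        {"interval_id": 0, "p_value": 1},
        {"interval_id": 1, "p_value": 1.1162791357932633e-8},
        {"interval_id": 2, "p_value": 0.0000028675012051504467},
    ]},
    {"pivots": "Bob Smith", "series": [
        {"interval_id": 0, "p_value": 1},
        {"interval_id": 1, "p_value": 1.1162791357932633e-8},
        {"interval_id": 2, "p_value": 0.0000028675012051504467},
    ]},
]}

# One row per (actor, interval): series entries become rows, pivots is carried along.
long = pd.json_normalize(sample["intervals"], record_path="series", meta="pivots")

# Pivot to the desired wide shape: one row per actor, one column per interval_id.
wide = long.pivot(index="pivots", columns="interval_id", values="p_value")
print(wide)
```

Each `interval_id` becomes a distinct column label, which avoids the repeated column names in the table above.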
So far I've got:

import requests as r
import pandas as pd

actor_data = r.get("url/to/data").json()['data']['intervals']
df = pd.DataFrame(actor_data)

actor_data is a list whose length equals the number of individuals, i.e. the pivots values. The df object simply returns:

<bound method DataFrame.describe of  pivots             Series
0           Jane Smith  [{u'p_value': 1.0, u'interval_id': 0}, {u'p_va...
1           Bob Smith  [{u'p_value': 1.0, u'interval_id': 0}, {u'p_va...
.
.
.

How can I iterate through that series list to get to the dict values and create N distinct columns? Should I try to create a DataFrame for the series list, reshape it, and then column-bind it with the actor names?

UPDATE:

pvalue_list = [actor['series'] for actor in json_data['intervals']]

this gives me a list of lists. Now I need to figure out how to add each list as a row in a DataFrame.

value_list = []
for i in pvalue_list:
    pvs = [j['p_value'] for j in i]
    value_list = value_list.append(pvs)
return value_list

This returns a NoneType, because list.append mutates the list in place and returns None, so the assignment rebinds value_list to None on the first iteration.

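A corrected sketch of that loop (the sample data below is hypothetical, standing in for the list of per-actor series lists described above): drop the reassignment of append's return value, and a list of lists then maps straight onto DataFrame rows:

```python
import pandas as pd

# Hypothetical stand-in for pvalue_list: one series list per actor.
pvalue_list = [
    [{"interval_id": 0, "p_value": 1}, {"interval_id": 1, "p_value": 1.1162791357932633e-8}],
    [{"interval_id": 0, "p_value": 1}, {"interval_id": 1, "p_value": 1.1162791357932633e-8}],
]

value_list = []
for series in pvalue_list:
    pvs = [entry["p_value"] for entry in series]
    value_list.append(pvs)  # append returns None -- don't reassign value_list

# Each inner list becomes one row; columns are positional interval indices.
df = pd.DataFrame(value_list)
print(df)
```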
Solution

def get_hypothesis_data():
    raw_data = r.get("/url/to/data").json()['data']
    actor_dict = {}
    for actor_series in raw_data['intervals']:
        actor = actor_series['pivots']
        p_values = []
        for interval in actor_series['series']:
            p_values.append(interval['p_value'])
        actor_dict[actor] = p_values
    return pd.DataFrame(actor_dict).T

This returns the correct DataFrame. I transposed it so the individuals were rows and not columns.

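The same flattening can be exercised without the network call (a sketch; `raw_data` below is a hypothetical stand-in for the 'data' payload):

```python
import pandas as pd

# Hypothetical stand-in for r.get("/url/to/data").json()['data'].
raw_data = {"intervals": [
    {"pivots": "Jane Smith",
     "series": [{"interval_id": 0, "p_value": 1},
                {"interval_id": 1, "p_value": 1.1162791357932633e-8}]},
    {"pivots": "Bob Smith",
     "series": [{"interval_id": 0, "p_value": 1},
                {"interval_id": 1, "p_value": 1.1162791357932633e-8}]},
]}

actor_dict = {}
for actor_series in raw_data["intervals"]:
    actor_dict[actor_series["pivots"]] = [
        interval["p_value"] for interval in actor_series["series"]
    ]

# Transpose so the individuals are rows and interval positions are columns.
df = pd.DataFrame(actor_dict).T
print(df)
```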
1 Answer

#1


I think organizing your data in a way that yields repeating column names is only going to create headaches for you later on down the road. A better approach, IMHO, is to create a column for each of pivots, interval_id, and p_value. This will make it extremely easy to query your data after loading it into pandas.

Also, your JSON has some errors in it. I ran it through a JSON validator to find the errors.

jq helps here

import json

import sh
import pandas as pd

jq = sh.jq.bake('-M')  # -M disables colorized output
json_data = "from above"
rule = """[{pivots: .intervals[].pivots,
            interval_id: .intervals[].series[].interval_id,
            p_value: .intervals[].series[].p_value}]"""
out = jq(rule, _in=json_data).stdout
res = pd.DataFrame(json.loads(out))

This will yield output similar to

    interval_id       p_value      pivots
32            2  2.867501e-06  Jane Smith
33            2  1.000000e+00  Jane Smith
34            2  1.116279e-08  Jane Smith
35            2  2.867501e-06  Jane Smith
36            0  1.000000e+00   Bob Smith
37            0  1.116279e-08   Bob Smith
38            0  2.867501e-06   Bob Smith
39            0  1.000000e+00   Bob Smith
40            0  1.116279e-08   Bob Smith
41            0  2.867501e-06   Bob Smith
42            1  1.000000e+00   Bob Smith
43            1  1.116279e-08   Bob Smith

Adapted from this comment

Of course, you can always call res.drop_duplicates() to remove the duplicate rows. This gives

In [175]: res.drop_duplicates()
Out[175]:
    interval_id       p_value      pivots
0             0  1.000000e+00  Jane Smith
1             0  1.116279e-08  Jane Smith
2             0  2.867501e-06  Jane Smith
6             1  1.000000e+00  Jane Smith
7             1  1.116279e-08  Jane Smith
8             1  2.867501e-06  Jane Smith
12            2  1.000000e+00  Jane Smith
13            2  1.116279e-08  Jane Smith
14            2  2.867501e-06  Jane Smith
36            0  1.000000e+00   Bob Smith
37            0  1.116279e-08   Bob Smith
38            0  2.867501e-06   Bob Smith
42            1  1.000000e+00   Bob Smith
43            1  1.116279e-08   Bob Smith
44            1  2.867501e-06   Bob Smith
48            2  1.000000e+00   Bob Smith
49            2  1.116279e-08   Bob Smith
50            2  2.867501e-06   Bob Smith

[18 rows x 3 columns]
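If shelling out to jq isn't an option, a pure-pandas sketch of the same long layout (`data` below is a hypothetical stand-in for the corrected JSON) uses `pd.json_normalize`, which avoids the cross-product duplicates in the first place:

```python
import pandas as pd

# Hypothetical stand-in for the corrected JSON payload.
data = {"intervals": [
    {"pivots": "Jane Smith", "series": [
        {"interval_id": 0, "p_value": 1},
        {"interval_id": 1, "p_value": 1.1162791357932633e-8},
        {"interval_id": 2, "p_value": 0.0000028675012051504467},
    ]},
    {"pivots": "Bob Smith", "series": [
        {"interval_id": 0, "p_value": 1},
        {"interval_id": 1, "p_value": 1.1162791357932633e-8},
        {"interval_id": 2, "p_value": 0.0000028675012051504467},
    ]},
]}

# One row per series entry, with the parent's pivots value carried along
# as a meta column -- no duplicate rows to drop afterwards.
res = pd.json_normalize(data["intervals"], record_path="series", meta="pivots")
print(res)
```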
