How to access a pandas groupby dataframe by key

Time: 2021-01-01 21:40:17

How do I access the corresponding groupby dataframe in a groupby object by the key? With the following groupby:


import numpy as np
import pandas as pd

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})
gb = df.groupby(['A'])
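
For reference, the full df this builds (reassembled from the group printouts below) looks like:

     A         B   C
0  foo  1.624345   5
1  bar -0.611756  18
2  foo -0.528172  11
3  bar -1.072969  10
4  foo  0.865408  14
5  bar -2.301539  18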

I can iterate through it to get the keys and groups:


In [11]: for k, gp in gb:
             print('key=' + str(k))
             print(gp)
key=bar
     A         B   C
1  bar -0.611756  18
3  bar -1.072969  10
5  bar -2.301539  18
key=foo
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

I would like to be able to do something like


In [12]: gb['foo']
Out[12]:  
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

But when I do that (well, actually I have to do gb[('foo',)]), I get this weird pandas.core.groupby.DataFrameGroupBy thing which doesn't seem to have any methods that correspond to the DataFrame I want.


The best I can think of is


In [13]: def gb_df_key(gb, key, orig_df):
             ix = gb.indices[key]
             return orig_df.iloc[ix]

         gb_df_key(gb, 'foo', df)
Out[13]:
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14  

but this is kind of nasty, considering how nice pandas usually is at these things.
What's the built-in way of doing this?


5 Answers

#1


123  

You can use the get_group method:


In [21]: gb.get_group('foo')
Out[21]: 
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

Note: This doesn't require creating an intermediary dictionary / copy of every sub-DataFrame for every group, so it will be much more memory-efficient than creating the naive dictionary with dict(iter(gb)). This is because it uses data structures already available in the groupby object.

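To see where this comes from, here is a quick sketch of the mappings the groupby object already carries (using the question's gb; the commented values are approximate):

gb.groups    # key -> index labels, roughly {'bar': [1, 3, 5], 'foo': [0, 2, 4]}
gb.indices   # key -> row positions as arrays, roughly {'bar': array([1, 3, 5]), ...}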


You can select different columns using the groupby slicing:


In [22]: gb[["A", "B"]].get_group("foo")
Out[22]:
     A         B
0  foo  1.624345
2  foo -0.528172
4  foo  0.865408

In [23]: gb["C"].get_group("foo")
Out[23]:
0     5
2    11
4    14
Name: C, dtype: int64
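
A related sketch (not in the original answer): if you group by more than one column, get_group takes the key as a tuple. For example, with the question's data:

gb2 = df.groupby(['A', 'C'])   # gb2 is just a new groupby for illustration
gb2.get_group(('foo', 5))      # the row(s) where A == 'foo' and C == 5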

#2


53  

Wes McKinney (pandas' author) in Python for Data Analysis provides the following recipe:


groups = dict(list(gb))

which returns a dictionary whose keys are your group labels and whose values are DataFrames, i.e.


groups['foo']

will yield what you are looking for:


     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

#3


14  

Rather than


gb.get_group('foo')

I prefer using gb.groups


df.loc[gb.groups['foo']]

Because this way you can also select multiple columns. For example:


df.loc[gb.groups['foo'], ['A', 'B']]

#4


1  

I was looking for a way to sample a few members of the GroupBy obj - had to address the posted question to get this done.


Create the groupby object:

grouped = df.groupby('some_key')

Pick N groups at random and grab their keys (grouped.indices maps each key to its row positions):

import random
sampled_df_i = random.sample(list(grouped.indices), N)

Grab the groups:

df_list = [grouped.get_group(key) for key in sampled_df_i]

Optionally, turn it all back into a single dataframe object:

sampled_df = pd.concat(df_list, axis=0, join='outer')

#5


1  

gb = df.groupby(['A'])

gb_groups = gb.groups

If you only want selected groups, inspect the available keys with gb_groups.keys() and put the ones you want into the following key_list:


gb_groups.keys()

key_list = ['foo', 'bar']  # ...and so on

for key, values in gb_groups.items():
    if key in key_list:
        print(df.loc[values], "\n")
