Tag Archives: group-by

Python group by

Question: Python group by

Assume that I have a set of data pairs where index 0 is the value and index 1 is the type:

input = [
          ('11013331', 'KAT'), 
          ('9085267',  'NOT'), 
          ('5238761',  'ETH'), 
          ('5349618',  'ETH'), 
          ('11788544', 'NOT'), 
          ('962142',   'ETH'), 
          ('7795297',  'ETH'), 
          ('7341464',  'ETH'), 
          ('9843236',  'KAT'), 
          ('5594916',  'ETH'), 
          ('1550003',  'ETH')
        ]

I want to group them by their type (by the 1st indexed string) as such:

result = [ 
           { 
             type:'KAT', 
             items: ['11013331', '9843236'] 
           },
           {
             type:'NOT', 
             items: ['9085267', '11788544'] 
           },
           {
             type:'ETH', 
             items: ['5238761', '962142', '7795297', '7341464', '5594916', '1550003'] 
           }
         ] 

How can I achieve this in an efficient way?


Answer 0

Do it in 2 steps. First, create a dictionary.

>>> input = [('11013331', 'KAT'), ('9085267', 'NOT'), ('5238761', 'ETH'), ('5349618', 'ETH'), ('11788544', 'NOT'), ('962142', 'ETH'), ('7795297', 'ETH'), ('7341464', 'ETH'), ('9843236', 'KAT'), ('5594916', 'ETH'), ('1550003', 'ETH')]
>>> from collections import defaultdict
>>> res = defaultdict(list)
>>> for v, k in input: res[k].append(v)
...

Then, convert that dictionary into the expected format.

>>> [{'type':k, 'items':v} for k,v in res.items()]
[{'items': ['9085267', '11788544'], 'type': 'NOT'}, {'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 'type': 'ETH'}, {'items': ['11013331', '9843236'], 'type': 'KAT'}]

It is also possible with itertools.groupby but it requires the input to be sorted first.

>>> from operator import itemgetter
>>> from itertools import groupby
>>> sorted_input = sorted(input, key=itemgetter(1))
>>> groups = groupby(sorted_input, key=itemgetter(1))
>>> [{'type':k, 'items':[x[0] for x in v]} for k, v in groups]
[{'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 'type': 'ETH'}, {'items': ['11013331', '9843236'], 'type': 'KAT'}, {'items': ['9085267', '11788544'], 'type': 'NOT'}]

Note that neither of these preserves the original order of the keys. If you need to keep that order, use an OrderedDict.

>>> from collections import OrderedDict
>>> res = OrderedDict()
>>> for v, k in input:
...   if k in res: res[k].append(v)
...   else: res[k] = [v]
... 
>>> [{'type':k, 'items':v} for k,v in res.items()]
[{'items': ['11013331', '9843236'], 'type': 'KAT'}, {'items': ['9085267', '11788544'], 'type': 'NOT'}, {'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 'type': 'ETH'}]
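
(Side note beyond the original answer: on Python 3.7+, plain dicts preserve insertion order, so the defaultdict above already keeps the keys in first-seen order and the OrderedDict workaround is only needed on older interpreters. A minimal sketch:)

>>> res = defaultdict(list)              # insertion-ordered on Python 3.7+
>>> for v, k in input: res[k].append(v)
...
>>> [{'type':k, 'items':v} for k,v in res.items()]
[{'type': 'KAT', 'items': ['11013331', '9843236']}, {'type': 'NOT', 'items': ['9085267', '11788544']}, {'type': 'ETH', 'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003']}]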

Answer 1

Python’s built-in itertools module actually has a groupby function, but for that the elements to be grouped must first be sorted so that equal keys are contiguous in the list:

from operator import itemgetter
sortkeyfn = itemgetter(1)
input = [('11013331', 'KAT'), ('9085267', 'NOT'), ('5238761', 'ETH'), 
 ('5349618', 'ETH'), ('11788544', 'NOT'), ('962142', 'ETH'), ('7795297', 'ETH'), 
 ('7341464', 'ETH'), ('9843236', 'KAT'), ('5594916', 'ETH'), ('1550003', 'ETH')] 
input.sort(key=sortkeyfn)

Now input looks like:

[('5238761', 'ETH'), ('5349618', 'ETH'), ('962142', 'ETH'), ('7795297', 'ETH'),
 ('7341464', 'ETH'), ('5594916', 'ETH'), ('1550003', 'ETH'), ('11013331', 'KAT'),
 ('9843236', 'KAT'), ('9085267', 'NOT'), ('11788544', 'NOT')]

groupby returns a sequence of 2-tuples, of the form (key, values_iterator). What we want is to turn this into a list of dicts where the ‘type’ is the key, and ‘items’ is a list of the 0’th elements of the tuples returned by the values_iterator. Like this:

from itertools import groupby
result = []
for key,valuesiter in groupby(input, key=sortkeyfn):
    result.append(dict(type=key, items=list(v[0] for v in valuesiter)))

Now result contains the list of dicts you asked for in your question.

You might consider, though, just making a single dict out of this, keyed by type, and each value containing the list of values. In your current form, to find the values for a particular type, you’ll have to iterate over the list to find the dict containing the matching ‘type’ key, and then get the ‘items’ element from it. If you use a single dict instead of a list of 1-item dicts, you can find the items for a particular type with a single keyed lookup into the master dict. Using groupby, this would look like:

result = {}
for key,valuesiter in groupby(input, key=sortkeyfn):
    result[key] = list(v[0] for v in valuesiter)

result now contains this dict (this is similar to the intermediate res defaultdict in @KennyTM’s answer):

{'NOT': ['9085267', '11788544'], 
 'ETH': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 
 'KAT': ['11013331', '9843236']}

If you want to reduce this to a one-liner, you can:

result = dict((key, list(v[0] for v in valuesiter))
              for key, valuesiter in groupby(input, key=sortkeyfn))

or using the newfangled dict-comprehension form:

result = {key:list(v[0] for v in valuesiter)
              for key,valuesiter in groupby(input, key=sortkeyfn)}

Answer 2

I also liked pandas’ simple grouping; it’s powerful, simple, and well suited to large data sets:

import pandas
result = pandas.DataFrame(input).groupby(1).groups
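
Note that .groups maps each group key to the matching row index labels, not to the values themselves. A hedged sketch (not part of the original answer) of reshaping this into the list of dicts the question asked for:

import pandas as pd

df = pd.DataFrame(input)   # columns are 0 (the value) and 1 (the type)
result = [{'type': k, 'items': g[0].tolist()} for k, g in df.groupby(1)]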


Answer 3

This answer is similar to @PaulMcG’s answer but doesn’t require sorting the input.

For those into functional programming, groupBy can be written in one line (not including imports!), and unlike itertools.groupby it doesn’t require the input to be sorted:

from functools import reduce # import needed for python3; builtin in python2
from collections import defaultdict

def groupBy(key, seq):
    return reduce(lambda grp, val: grp[key(val)].append(val) or grp, seq, defaultdict(list))

(The reason for ... or grp in the lambda is that for this reduce() to work, the lambda needs to return its first argument; because list.append() always returns None the or will always return grp. I.e. it’s a hack to get around python’s restriction that a lambda can only evaluate a single expression.)

This returns a dict whose keys are found by evaluating the given function and whose values are a list of the original items in the original order. For the OP’s example, calling this as groupBy(lambda pair: pair[1], input) will return this dict:

{'KAT': [('11013331', 'KAT'), ('9843236', 'KAT')],
 'NOT': [('9085267', 'NOT'), ('11788544', 'NOT')],
 'ETH': [('5238761', 'ETH'), ('5349618', 'ETH'), ('962142', 'ETH'), ('7795297', 'ETH'), ('7341464', 'ETH'), ('5594916', 'ETH'), ('1550003', 'ETH')]}

And as per @PaulMcG’s answer, the OP’s requested grouping can be produced by wrapping that in a comprehension. So this will do it:

result = {key: [pair[0] for pair in values]
          for key, values in groupBy(lambda pair: pair[1], input).items()}

Answer 4

The following function will quickly (no sorting required) group tuples of any length by a key having any index:

# given a sequence of tuples like [(3,'c',6),(7,'a',2),(88,'c',4),(45,'a',0)],
# returns a dict grouping tuples by idx-th element - with idx=1 we have:
# if merge is True {'c':(3,6,88,4),     'a':(7,2,45,0)}
# if merge is False {'c':((3,6),(88,4)), 'a':((7,2),(45,0))}
def group_by(seqs,idx=0,merge=True):
    d = dict()
    for seq in seqs:
        k = seq[idx]
        v = d.get(k,tuple()) + (seq[:idx]+seq[idx+1:] if merge else (seq[:idx]+seq[idx+1:],))
        d.update({k:v})
    return d

In the case of your question, the index of key you want to group by is 1, therefore:

group_by(input,1)

gives

{'ETH': ('5238761','5349618','962142','7795297','7341464','5594916','1550003'),
 'KAT': ('11013331', '9843236'),
 'NOT': ('9085267', '11788544')}

which is not exactly the output you asked for, but might as well suit your needs.
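
If the exact shape from the question is needed, the returned dict can be converted with a short comprehension (a sketch reusing the group_by function above):

grouped = group_by(input, 1)
result = [{'type': k, 'items': list(v)} for k, v in grouped.items()]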


Answer 5

result = []
# Make a set of your "types":
input_set = set([tpl[1] for tpl in input])
# e.g. set(['ETH', 'KAT', 'NOT'])
# Iterate over the input_set:
for type_ in input_set:
    # a dict to gather things:
    D = {}
    # filter all tuples from your input with the same type as type_
    tuples = filter(lambda tpl: tpl[1] == type_, input)
    # write them into D:
    D["type"] = type_
    D["items"] = [tpl[0] for tpl in tuples]
    # append D to result:
    result.append(D)

result
>>> [{'items': ['9085267', '11788544'], 'type': 'NOT'}, {'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 'type': 'ETH'}, {'items': ['11013331', '9843236'], 'type': 'KAT'}]

Group by and count in sqlalchemy

Question: Group by and count in sqlalchemy

I want a “group by and count” command in sqlalchemy. How can I do this?


Answer 0

The documentation on counting says that for group_by queries it is better to use func.count():

from sqlalchemy import func
session.query(Table.column, func.count(Table.column)).group_by(Table.column).all()
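
For context, a minimal runnable sketch (the User model, the country column, and the in-memory SQLite engine are illustrative assumptions, not from the original answer; assumes SQLAlchemy 1.4+):

from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    country = Column(String)  # hypothetical column to group on

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(country=c) for c in ['US', 'US', 'RU']])
    session.commit()
    print(session.query(User.country, func.count(User.country))
                 .group_by(User.country)
                 .all())
    # e.g. [('RU', 1), ('US', 2)]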

Answer 1

If you are using the Table.query property:

from sqlalchemy import func
Table.query.with_entities(Table.column, func.count(Table.column)).group_by(Table.column).all()

If you are using the session.query() method (as stated in miniwark’s answer):

from sqlalchemy import func
session.query(Table.column, func.count(Table.column)).group_by(Table.column).all()

Answer 2

You can also group by multiple columns and count their combinations:

self.session.query(func.count(Table.column1),Table.column1, Table.column2).group_by(Table.column1, Table.column2).all()

The query above will return counts for all possible combinations of values from both columns.
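
Unpacking the result might look like this (a sketch; Table and self.session are assumed from the surrounding context):

rows = (self.session.query(func.count(Table.column1), Table.column1, Table.column2)
        .group_by(Table.column1, Table.column2)
        .all())
for count, value1, value2 in rows:
    print(value1, value2, count)   # one line per (column1, column2) combination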


GroupBy pandas DataFrame and select most common value

Question: GroupBy pandas DataFrame and select most common value

I have a data frame with three string columns. I know that only one value in the 3rd column is valid for every combination of the first two. To clean the data I have to group the data frame by the first two columns and select the most common value of the third column for each combination.

My code:

import pandas as pd
from scipy import stats

source = pd.DataFrame({'Country' : ['USA', 'USA', 'Russia','USA'], 
                  'City' : ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
                  'Short name' : ['NY','New','Spb','NY']})

print source.groupby(['Country','City']).agg(lambda x: stats.mode(x['Short name'])[0])

The last line of code doesn’t work; it says KeyError 'Short name', and if I try to group only by City, I get an AssertionError. How can I fix it?


Answer 0

You can use value_counts() to get a count series, and get the first row:

import pandas as pd

source = pd.DataFrame({'Country' : ['USA', 'USA', 'Russia','USA'], 
                  'City' : ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
                  'Short name' : ['NY','New','Spb','NY']})

source.groupby(['Country','City']).agg(lambda x:x.value_counts().index[0])

In case you are wondering how to perform other aggregation functions within the same .agg(), try this (named aggregation, available in pandas >= 0.25):

# Let's add a new col,  account
source['account'] = [1,2,3,3]

source.groupby(['Country','City']).agg(
    mod=('Short name', lambda x: x.value_counts().index[0]),
    avg=('account', 'mean'),
)

Answer 1

Pandas >= 0.16

pd.Series.mode is available!

Use groupby, GroupBy.agg, and apply the pd.Series.mode function to each group:

source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode)

Country  City            
Russia   Sankt-Petersburg    Spb
USA      New-York             NY
Name: Short name, dtype: object

If this is needed as a DataFrame, use

source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode).to_frame()

                         Short name
Country City                       
Russia  Sankt-Petersburg        Spb
USA     New-York                 NY

The useful thing about Series.mode is that it always returns a Series, making it very compatible with agg and apply, especially when reconstructing the groupby output. It is also faster.

# Accepted answer.
%timeit source.groupby(['Country','City']).agg(lambda x:x.value_counts().index[0])
# Proposed in this post.
%timeit source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode)

5.56 ms ± 343 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.76 ms ± 387 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Dealing with Multiple Modes

Series.mode also does a good job when there are multiple modes:

source2 = source.append(
    pd.Series({'Country': 'USA', 'City': 'New-York', 'Short name': 'New'}),
    ignore_index=True)

# Now `source2` has two modes for the 
# ("USA", "New-York") group, they are "NY" and "New".
source2

  Country              City Short name
0     USA          New-York         NY
1     USA          New-York        New
2  Russia  Sankt-Petersburg        Spb
3     USA          New-York         NY
4     USA          New-York        New

source2.groupby(['Country','City'])['Short name'].agg(pd.Series.mode)

Country  City            
Russia   Sankt-Petersburg          Spb
USA      New-York            [NY, New]
Name: Short name, dtype: object

Or, if you want a separate row for each mode, you can use GroupBy.apply:

source2.groupby(['Country','City'])['Short name'].apply(pd.Series.mode)

Country  City               
Russia   Sankt-Petersburg  0    Spb
USA      New-York          0     NY
                           1    New
Name: Short name, dtype: object

If you don’t care which mode is returned, as long as it’s one of them, you will need a lambda that calls mode and extracts the first result.

source2.groupby(['Country','City'])['Short name'].agg(
    lambda x: pd.Series.mode(x)[0])

Country  City            
Russia   Sankt-Petersburg    Spb
USA      New-York             NY
Name: Short name, dtype: object

Alternatives to (not) consider

You can also use statistics.mode from python, but…

import statistics
source.groupby(['Country','City'])['Short name'].apply(statistics.mode)

Country  City            
Russia   Sankt-Petersburg    Spb
USA      New-York             NY
Name: Short name, dtype: object

…it does not work well when having to deal with multiple modes; a StatisticsError is raised. This is mentioned in the docs:

If data is empty, or if there is not exactly one most common value, StatisticsError is raised.

But you can see for yourself…

statistics.mode([1, 2])
# ---------------------------------------------------------------------------
# StatisticsError                           Traceback (most recent call last)
# ...
# StatisticsError: no unique mode; found 2 equally common values
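
(A side note beyond the original answer: starting with Python 3.8, statistics.mode no longer raises for multimodal data; it returns the first mode encountered, and the new statistics.multimode returns all of them.)

import statistics

statistics.multimode([1, 2])   # [1, 2]  (Python 3.8+)
statistics.mode([1, 2])        # 1 on Python 3.8+; StatisticsError on 3.7 and earlier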

Answer 2

For agg, the lambda function gets a Series, which does not have a 'Short name' attribute.

stats.mode returns a tuple of two arrays, so you have to take the first element of the first array in this tuple.

With these two simple changes:

source.groupby(['Country','City']).agg(lambda x: stats.mode(x)[0][0])

returns

                         Short name
Country City                       
Russia  Sankt-Petersburg        Spb
USA     New-York                 NY

Answer 3

A little late to the game here, but I was running into some performance issues with HYRY’s solution, so I had to come up with another one.

It works by finding the frequency of each key-value, and then, for each key, only keeping the value that appears with it most often.

There’s also an additional solution that supports multiple modes.

On a scale test that’s representative of the data I’m working with, this reduced runtime from 37.4s to 0.5s!

Here’s the code for the solution, some example usage, and the scale test:

import numpy as np
import pandas as pd
import random
import time

test_input = pd.DataFrame(columns=[ 'key',          'value'],
                          data=  [[ 1,              'A'    ],
                                  [ 1,              'B'    ],
                                  [ 1,              'B'    ],
                                  [ 1,              np.nan ],
                                  [ 2,              np.nan ],
                                  [ 3,              'C'    ],
                                  [ 3,              'C'    ],
                                  [ 3,              'D'    ],
                                  [ 3,              'D'    ]])

def mode(df, key_cols, value_col, count_col):
    '''
    Pandas does not provide a `mode` aggregation function
    for its `GroupBy` objects. This function is meant to fill
    that gap, though the semantics are not exactly the same.

    The input is a DataFrame with the columns `key_cols`
    that you would like to group on, and the column
    `value_col` for which you would like to obtain the mode.

    The output is a DataFrame with a record per group that has at least one mode
    (null values are not counted). The `key_cols` are included as columns, `value_col`
    contains a mode (ties are broken arbitrarily and deterministically) for each
    group, and `count_col` indicates how many times each mode appeared in its group.
    '''
    return df.groupby(key_cols + [value_col]).size() \
             .to_frame(count_col).reset_index() \
             .sort_values(count_col, ascending=False) \
             .drop_duplicates(subset=key_cols)

def modes(df, key_cols, value_col, count_col):
    '''
    Pandas does not provide a `mode` aggregation function
    for its `GroupBy` objects. This function is meant to fill
    that gap, though the semantics are not exactly the same.

    The input is a DataFrame with the columns `key_cols`
    that you would like to group on, and the column
    `value_col` for which you would like to obtain the modes.

    The output is a DataFrame with a record per group that has at least
    one mode (null values are not counted). The `key_cols` are included as
    columns, `value_col` contains lists indicating the modes for each group,
    and `count_col` indicates how many times each mode appeared in its group.
    '''
    return df.groupby(key_cols + [value_col]).size() \
             .to_frame(count_col).reset_index() \
             .groupby(key_cols + [count_col])[value_col].unique() \
             .to_frame().reset_index() \
             .sort_values(count_col, ascending=False) \
             .drop_duplicates(subset=key_cols)

print(test_input)
print(mode(test_input, ['key'], 'value', 'count'))
print(modes(test_input, ['key'], 'value', 'count'))

scale_test_data = [[random.randint(1, 100000),
                    str(random.randint(123456789001, 123456789100))] for i in range(1000000)]
scale_test_input = pd.DataFrame(columns=['key', 'value'],
                                data=scale_test_data)

start = time.time()
mode(scale_test_input, ['key'], 'value', 'count')
print(time.time() - start)

start = time.time()
modes(scale_test_input, ['key'], 'value', 'count')
print(time.time() - start)

start = time.time()
scale_test_input.groupby(['key']).agg(lambda x: x.value_counts().index[0])
print(time.time() - start)

Running this code will print something like:

   key value
0    1     A
1    1     B
2    1     B
3    1   NaN
4    2   NaN
5    3     C
6    3     C
7    3     D
8    3     D
   key value  count
1    1     B      2
2    3     C      2
   key  count   value
1    1      2     [B]
2    3      2  [C, D]
0.489614009857
9.19386196136
37.4375009537

Hope this helps!


Answer 4

The two top answers here suggest:

df.groupby(cols).agg(lambda x:x.value_counts().index[0])

or, preferably

df.groupby(cols).agg(pd.Series.mode)

However both of these fail in simple edge cases, as demonstrated here:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'client_id':['A', 'A', 'A', 'A', 'B', 'B', 'B', 'C'],
    'date':['2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01'],
    'location':['NY', 'NY', 'LA', 'LA', 'DC', 'DC', 'LA', np.NaN]
})

The first:

df.groupby(['client_id', 'date']).agg(lambda x:x.value_counts().index[0])

yields IndexError (because of the empty Series returned by group C). The second:

df.groupby(['client_id', 'date']).agg(pd.Series.mode)

returns ValueError: Function does not reduce, since the first group returns a list of two (since there are two modes). (As documented here, if the first group returned a single mode this would work!)

Two possible solutions for this case are:

import scipy.stats
df.groupby(['client_id', 'date']).agg(lambda x: scipy.stats.mode(x)[0])

And the solution given to me by cs95 in the comments here:

def foo(x): 
    m = pd.Series.mode(x); 
    return m.values[0] if not m.empty else np.nan
df.groupby(['client_id', 'date']).agg(foo)

However, all of these are slow and not suited for large datasets. A solution I ended up using which a) can deal with these cases and b) is much, much faster, is a lightly modified version of abw33’s answer (which should be higher):

def get_mode_per_column(dataframe, group_cols, col):
    return (dataframe.fillna(-1)  # NaN placeholder to keep group 
            .groupby(group_cols + [col])
            .size()
            .to_frame('count')
            .reset_index()
            .sort_values('count', ascending=False)
            .drop_duplicates(subset=group_cols)
            .drop(columns=['count'])
            .sort_values(group_cols)
            .replace(-1, np.NaN))  # restore NaNs

group_cols = ['client_id', 'date']    
non_grp_cols = list(set(df).difference(group_cols))
output_df = get_mode_per_column(df, group_cols, non_grp_cols[0]).set_index(group_cols)
for col in non_grp_cols[1:]:
    output_df[col] = get_mode_per_column(df, group_cols, col)[col].values

Essentially, the method works on one column at a time and outputs a DataFrame, so instead of a costly concat you treat the first result as the DataFrame and then iteratively add each output array (.values) as a column of it.


Answer 5

Formally, the correct answer is @eumiro’s solution. The problem with @HYRY’s solution is that when you have a sequence of numbers like [1,2,3,4] the result is wrong, i.e., you don’t have a mode. Example:

>>> import pandas as pd
>>> df = pd.DataFrame(
        {
            'client': ['A', 'B', 'A', 'B', 'B', 'C', 'A', 'D', 'D', 'E', 'E', 'E', 'E', 'E', 'A'], 
            'total': [1, 4, 3, 2, 4, 1, 2, 3, 5, 1, 2, 2, 2, 3, 4], 
            'bla': [10, 40, 30, 20, 40, 10, 20, 30, 50, 10, 20, 20, 20, 30, 40]
        }
    )

If you compute like @HYRY you obtain:

>>> print(df.groupby(['client']).agg(lambda x: x.value_counts().index[0]))
        total  bla
client            
A           4   30
B           4   40
C           1   10
D           3   30
E           2   20

Which is clearly wrong (see the A value, which should be 1 and not 4) because it can’t handle unique values.

Thus, the other solution is correct:

>>> import scipy.stats
>>> print(df.groupby(['client']).agg(lambda x: scipy.stats.mode(x)[0][0]))
        total  bla
client            
A           1   10
B           4   40
C           1   10
D           3   30
E           2   20

Answer 6

If you want another approach that does not depend on value_counts or scipy.stats, you can use the Counter collection:

from collections import Counter
get_most_common = lambda values: max(Counter(values).items(), key = lambda x: x[1])[0]

It can be applied to the above example like this:

src = pd.DataFrame({'Country' : ['USA', 'USA', 'Russia','USA'], 
              'City' : ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
              'Short_name' : ['NY','New','Spb','NY']})

src.groupby(['Country','City']).agg(get_most_common)

Answer 7

If you don’t want to include NaN values, using Counter is much, much faster than pd.Series.mode or pd.Series.value_counts()[0]:

from collections import Counter

def get_most_common(srs):
    x = list(srs)
    my_counter = Counter(x)
    return my_counter.most_common(1)[0][0]

df.groupby(col).agg(get_most_common)

should work. Note, however, that it fails when you have NaN values, as each NaN is counted separately.
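
A hedged NaN-aware variant (not from the original answer; dropping NaN before counting is one possible behaviour choice):

import numpy as np
from collections import Counter

def get_most_common_dropna(srs):
    # Drop missing values first so each NaN is not counted as a distinct entry.
    counter = Counter(srs.dropna())
    return counter.most_common(1)[0][0] if counter else np.nan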


Answer 8

The problem here is performance; if you have a lot of rows it will be an issue.

If that is your case, try this:

import pandas as pd

source = pd.DataFrame({'Country' : ['USA', 'USA', 'Russia','USA'], 
              'City' : ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
              'Short_name' : ['NY','New','Spb','NY']})

# Slow: calls a Python lambda once per group
source.groupby(['Country','City']).agg(lambda x: x.value_counts().index[0])

# Faster: count once, then keep the top entry per group
# (the modal Short_name appears in the last index level)
counts = source.groupby(['Country','City']).Short_name.value_counts()
counts.groupby(level=['Country','City']).head(1)

Answer 9

A slightly clumsier but faster approach for larger datasets involves getting the counts for a column of interest, sorting the counts highest to lowest, and then de-duplicating on a subset to retain only the largest cases. A code example follows:

>>> import pandas as pd
>>> source = pd.DataFrame(
        {
            'Country': ['USA', 'USA', 'Russia', 'USA'], 
            'City': ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
            'Short name': ['NY', 'New', 'Spb', 'NY']
        }
    )
>>> grouped_df = source\
        .groupby(['Country','City','Short name'])[['Short name']]\
        .count()\
        .rename(columns={'Short name':'count'})\
        .reset_index()\
        .sort_values('count', ascending=False)\
        .drop_duplicates(subset=['Country', 'City'])\
        .drop('count', axis=1)
>>> print(grouped_df)
  Country              City Short name
1     USA          New-York         NY
0  Russia  Sankt-Petersburg        Spb

熊猫groupby排序

问题:熊猫groupby排序

我想按两列对数据框进行分组,然后对各组中的汇总结果进行排序。

In [167]:
df

Out[167]:
count   job source
0   2   sales   A
1   4   sales   B
2   6   sales   C
3   3   sales   D
4   7   sales   E
5   5   market  A
6   3   market  B
7   2   market  C
8   4   market  D
9   1   market  E

In [168]:
df.groupby(['job','source']).agg({'count':sum})

Out[168]:
            count
job     source  
market  A   5
        B   3
        C   2
        D   4
        E   1
sales   A   2
        B   4
        C   6
        D   3
        E   7

现在,我想在每个组中按降序对计数列进行排序。然后只取前三行。得到类似的东西:

            count
job     source  
market  A   5
        D   4
        B   3
sales   E   7
        C   6
        B   4

I want to group my dataframe by two columns and then sort the aggregated results within the groups.

In [167]:
df

Out[167]:
count   job source
0   2   sales   A
1   4   sales   B
2   6   sales   C
3   3   sales   D
4   7   sales   E
5   5   market  A
6   3   market  B
7   2   market  C
8   4   market  D
9   1   market  E

In [168]:
df.groupby(['job','source']).agg({'count':sum})

Out[168]:
            count
job     source  
market  A   5
        B   3
        C   2
        D   4
        E   1
sales   A   2
        B   4
        C   6
        D   3
        E   7

I would now like to sort the count column in descending order within each of the groups. And then take only the top three rows. To get something like:

            count
job     source  
market  A   5
        D   4
        B   3
sales   E   7
        C   6
        B   4

回答 0

您实际上想要做的是再次使用 groupby(在第一个 groupby 的结果上):对每个组排序,然后取前三个元素。

从第一个分组依据的结果开始:

In [60]: df_agg = df.groupby(['job','source']).agg({'count':sum})

我们按索引的第一级分组:

In [63]: g = df_agg['count'].groupby(level=0, group_keys=False)

然后,我们对每个组进行排序并取前三个元素:

In [64]: res = g.apply(lambda x: x.sort_values(ascending=False).head(3))

但是,为此有一个快捷函数 nlargest:

In [65]: g.nlargest(3)
Out[65]:
job     source
market  A         5
        D         4
        B         3
sales   E         7
        C         6
        B         4
dtype: int64

What you want to do is actually again a groupby (on the result of the first groupby): sort and take the first three elements per group.

Starting from the result of the first groupby:

In [60]: df_agg = df.groupby(['job','source']).agg({'count':sum})

We group by the first level of the index:

In [63]: g = df_agg['count'].groupby('job', group_keys=False)

Then we want to sort each group and take the first three elements:

In [64]: res = g.apply(lambda x: x.sort_values(ascending=False).head(3))

However, for this, there is a shortcut function to do this, nlargest:

In [65]: g.nlargest(3)
Out[65]:
job     source
market  A         5
        D         4
        B         3
sales   E         7
        C         6
        B         4
dtype: int64

So in one go, this looks like:

df_agg['count'].groupby('job', group_keys=False).nlargest(3)

回答 1

您也可以一次性完成:先排序,然后用 head 取每个组的前 3 行。

In[34]: df.sort_values(['job','count'],ascending=False).groupby('job').head(3)

Out[35]: 
   count     job source
4      7   sales      E
2      6   sales      C
1      4   sales      B
5      5  market      A
8      4  market      D
6      3  market      B

You could also just do it in one go, by doing the sort first and using head to take the first 3 of each group.

In[34]: df.sort_values(['job','count'],ascending=False).groupby('job').head(3)

Out[35]: 
   count     job source
4      7   sales      E
2      6   sales      C
1      4   sales      B
5      5  market      A
8      4  market      D
6      3  market      B

回答 2

这是另一个示例:按排序顺序取前 3 个,以及在组内排序:

In [43]: import pandas as pd                                                                                                                                                       

In [44]:  df = pd.DataFrame({"name":["Foo", "Foo", "Baar", "Foo", "Baar", "Foo", "Baar", "Baar"], "count_1":[5,10,12,15,20,25,30,35], "count_2" :[100,150,100,25,250,300,400,500]})

In [45]: df                                                                                                                                                                        
Out[45]: 
   count_1  count_2  name
0        5      100   Foo
1       10      150   Foo
2       12      100  Baar
3       15       25   Foo
4       20      250  Baar
5       25      300   Foo
6       30      400  Baar
7       35      500  Baar


### Top 3 on sorted order:
In [46]: df.groupby(["name"])["count_1"].nlargest(3)                                                                                                                               
Out[46]: 
name   
Baar  7    35
      6    30
      4    20
Foo   5    25
      3    15
      1    10
dtype: int64


### Sorting within groups based on column "count_1":
In [48]: df.groupby(["name"]).apply(lambda x: x.sort_values(["count_1"], ascending = False)).reset_index(drop=True)
Out[48]: 
   count_1  count_2  name
0       35      500  Baar
1       30      400  Baar
2       20      250  Baar
3       12      100  Baar
4       25      300   Foo
5       15       25   Foo
6       10      150   Foo
7        5      100   Foo

Here’s another example of taking the top 3 in sorted order, and of sorting within the groups:

In [43]: import pandas as pd                                                                                                                                                       

In [44]:  df = pd.DataFrame({"name":["Foo", "Foo", "Baar", "Foo", "Baar", "Foo", "Baar", "Baar"], "count_1":[5,10,12,15,20,25,30,35], "count_2" :[100,150,100,25,250,300,400,500]})

In [45]: df                                                                                                                                                                        
Out[45]: 
   count_1  count_2  name
0        5      100   Foo
1       10      150   Foo
2       12      100  Baar
3       15       25   Foo
4       20      250  Baar
5       25      300   Foo
6       30      400  Baar
7       35      500  Baar


### Top 3 on sorted order:
In [46]: df.groupby(["name"])["count_1"].nlargest(3)                                                                                                                               
Out[46]: 
name   
Baar  7    35
      6    30
      4    20
Foo   5    25
      3    15
      1    10
dtype: int64


### Sorting within groups based on column "count_1":
In [48]: df.groupby(["name"]).apply(lambda x: x.sort_values(["count_1"], ascending = False)).reset_index(drop=True)
Out[48]: 
   count_1  count_2  name
0       35      500  Baar
1       30      400  Baar
2       20      250  Baar
3       12      100  Baar
4       25      300   Foo
5       15       25   Foo
6       10      150   Foo
7        5      100   Foo

回答 3

试试这个

一种执行 groupby 并按降序排序的简单方法:

df.groupby(['companyName'])['overallRating'].sum().sort_values(ascending=False).head(20)

Try this instead

A simple way to do ‘groupby’ and sort in descending order:

df.groupby(['companyName'])['overallRating'].sum().sort_values(ascending=False).head(20)

回答 4

如果您不需要汇总一列,请使用@tvashtar的答案。如果您确实需要求和,则可以使用@joris的答案或与此非常相似的答案。

df.groupby(['job']).apply(lambda x: (x.groupby('source')
                                      .sum()
                                      .sort_values('count', ascending=False))
                                     .head(3))

If you don’t need to sum a column, then use @tvashtar’s answer. If you do need to sum, then you can use @joris’ answer or this one which is very similar to it.

df.groupby(['job']).apply(lambda x: (x.groupby('source')
                                      .sum()
                                      .sort_values('count', ascending=False))
                                     .head(3))

如何通过键访问 pandas groupby 对象中的分组

问题:如何通过键访问 pandas groupby 对象中的分组

如何通过键访问 groupby 对象中对应的分组数据帧?

通过以下groupby:

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})
gb = df.groupby(['A'])

我可以遍历它来获取密钥和组:

In [11]: for k, gp in gb:
             print 'key=' + str(k)
             print gp
key=bar
     A         B   C
1  bar -0.611756  18
3  bar -1.072969  10
5  bar -2.301539  18
key=foo
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

我希望能够通过其键访问组:

In [12]: gb['foo']
Out[12]:  
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

但是当我尝试 gb[('foo',)] 时,我得到的是一个奇怪的 pandas.core.groupby.DataFrameGroupBy 对象,它似乎没有任何方法能得到我想要的 DataFrame。

我能想到的最好的是:

In [13]: def gb_df_key(gb, key, orig_df):
             ix = gb.indices[key]
             return orig_df.ix[ix]

         gb_df_key(gb, 'foo', df)
Out[13]:
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14  

但是,考虑到这些事情上熊猫通常是多么好,这有点令人讨厌。
这样做的内置方式是什么?

How do I access the corresponding groupby dataframe in a groupby object by the key?

With the following groupby:

rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
                   'B': rand.randn(6),
                   'C': rand.randint(0, 20, 6)})
gb = df.groupby(['A'])

I can iterate through it to get the keys and groups:

In [11]: for k, gp in gb:
             print 'key=' + str(k)
             print gp
key=bar
     A         B   C
1  bar -0.611756  18
3  bar -1.072969  10
5  bar -2.301539  18
key=foo
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

I would like to be able to access a group by its key:

In [12]: gb['foo']
Out[12]:  
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

But when I try doing that with gb[('foo',)] I get this weird pandas.core.groupby.DataFrameGroupBy object thing which doesn’t seem to have any methods that correspond to the DataFrame I want.

The best I could think of is:

In [13]: def gb_df_key(gb, key, orig_df):
             ix = gb.indices[key]
             return orig_df.ix[ix]

         gb_df_key(gb, 'foo', df)
Out[13]:
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14  

but this is kind of nasty, considering how nice pandas usually is at these things.
What’s the built-in way of doing this?


回答 0

您可以使用 get_group 方法:

In [21]: gb.get_group('foo')
Out[21]: 
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

注意:这不需要为每个组创建中间字典或每个子数据帧的副本,因此比用 dict(iter(gb)) 创建朴素字典的内存效率高得多,因为它利用了 groupby 对象中已有的数据结构。


您可以使用groupby切片选择不同的列:

In [22]: gb[["A", "B"]].get_group("foo")
Out[22]:
     A         B
0  foo  1.624345
2  foo -0.528172
4  foo  0.865408

In [23]: gb["C"].get_group("foo")
Out[23]:
0     5
2    11
4    14
Name: C, dtype: int64

You can use the get_group method:

In [21]: gb.get_group('foo')
Out[21]: 
     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

Note: This doesn’t require creating an intermediary dictionary / copy of every subdataframe for every group, so it will be much more memory-efficient than creating the naive dictionary with dict(iter(gb)). This is because it uses data-structures already available in the groupby object.


You can select different columns using the groupby slicing:

In [22]: gb[["A", "B"]].get_group("foo")
Out[22]:
     A         B
0  foo  1.624345
2  foo -0.528172
4  foo  0.865408

In [23]: gb["C"].get_group("foo")
Out[23]:
0     5
2    11
4    14
Name: C, dtype: int64
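One detail worth knowing: when grouping by several columns, get_group expects the key as a tuple. A small sketch, reusing the df defined in the question (the key values come from the groups printed above):

gb2 = df.groupby(['A', 'C'])
gb2.get_group(('foo', 5))   # the row(s) where A == 'foo' and C == 5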

回答 1

Python for Data Analysis中的Wes McKinney(熊猫的作者)提供了以下配方:

groups = dict(list(gb))

它返回一个字典,其键是您的组标签,其值是DataFrames,即

groups['foo']

将产生您想要的东西:

     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

Wes McKinney (pandas’ author) in Python for Data Analysis provides the following recipe:

groups = dict(list(gb))

which returns a dictionary whose keys are your group labels and whose values are DataFrames, i.e.

groups['foo']

will yield what you are looking for:

     A         B   C
0  foo  1.624345   5
2  foo -0.528172  11
4  foo  0.865408  14

回答 2

而不是

gb.get_group('foo')

我更喜欢使用 gb.groups

df.loc[gb.groups['foo']]

因为这样您也可以选择多个列。例如:

df.loc[gb.groups['foo'],('A','B')]

Rather than

gb.get_group('foo')

I prefer using gb.groups

df.loc[gb.groups['foo']]

Because in this way you can choose multiple columns as well. for example:

df.loc[gb.groups['foo'],('A','B')]

回答 3

gb = df.groupby(['A'])

gb_groups = gb.groups

如果只想查看特定的分组,请先执行 gb_groups.keys(),再把所需的键放入下面的 key_list 中。

gb_groups.keys()

key_list = [key1, key2, key3 and so on...]

for key, values in gb_groups.items():
    if key in key_list:
        print(df.loc[values], "\n")
gb = df.groupby(['A'])

gb_groups = gb.groups

If you are looking for selective groupby objects, then do gb_groups.keys() and put the desired keys into the following key_list:

gb_groups.keys()

key_list = [key1, key2, key3 and so on...]

for key, values in gb_groups.items():
    if key in key_list:
        print(df.loc[values], "\n")

回答 4

我正在寻找对GroupBy obj的几个成员进行抽样的方法-必须解决发布的问题才能完成此任务。

创建分组对象

grouped = df.groupby('some_key')

选择N个数据框并获取其索引

sampled_df_i  = random.sample(list(grouped.indices), N)

抢团体

df_list  = [grouped.get_group(k) for k in sampled_df_i]

可选-将所有内容重新转换为单个dataframe对象

sampled_df = pd.concat(df_list, axis=0, join='outer')

I was looking for a way to sample a few members of the GroupBy obj – had to address the posted question to get this done.

create groupby object

grouped = df.groupby('some_key')

pick N dataframes and grab their indices

sampled_df_i  = random.sample(list(grouped.indices), N)

grab the groups

df_list  = [grouped.get_group(k) for k in sampled_df_i]

optionally – turn it all back into a single dataframe object

sampled_df = pd.concat(df_list, axis=0, join='outer')
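For reference, a self-contained sketch of the same idea on Python 3 and current pandas (the frame, 'some_key' and N are illustrative, not from the original answer):

import random
import pandas as pd

df = pd.DataFrame({'some_key': list('aabbcc'), 'val': range(6)})
grouped = df.groupby('some_key')

N = 2
sampled_keys = random.sample(list(grouped.groups), N)   # N random group keys
sampled_df = pd.concat(grouped.get_group(k) for k in sampled_keys)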

将多个函数应用于多个 groupby 列

问题:将多个函数应用于多个 groupby 列

文档展示了如何使用输出列名称作为键的字典一次在groupby对象上应用多个功能:

In [563]: grouped['D'].agg({'result1' : np.sum,
   .....:                   'result2' : np.mean})
   .....:
Out[563]: 
      result2   result1
A                      
bar -0.579846 -1.739537
foo -0.280588 -1.402938

但是,这仅适用于Series groupby对象。同样,当将字典类似地传递到groupby DataFrame时,它期望键是将应用该函数的列名。

我想做的是对多个列应用多个功能(但是某些列将被多次操作)。同样,某些函数将依赖于groupby对象中的其他列(如sumif函数)。我当前的解决方案是逐列进行操作,并使用类似于上面代码的代码,对依赖其他行的函数使用lambda。但这要花很长时间,(我认为花很长时间才能遍历groupby对象)。我必须对其进行更改,以便一次运行即可遍历整个groupby对象,但是我想知道熊猫中是否有内置的方法可以使此操作更加简洁。

例如,我尝试过类似

grouped.agg({'C_sum' : lambda x: x['C'].sum(),
             'C_std': lambda x: x['C'].std(),
             'D_sum' : lambda x: x['D'].sum(),
             'D_sumifC3': lambda x: x['D'][x['C'] == 3].sum(), ...})

但正如我所料,我收到一个KeyError(因为如果agg从DataFrame调用,则键必须是一列)。

是否有任何内置方法可以执行我想做的事情,或者可能添加了此功能,或者我只需要手动遍历groupby?

谢谢

The docs show how to apply multiple functions on a groupby object at a time using a dict with the output column names as the keys:

In [563]: grouped['D'].agg({'result1' : np.sum,
   .....:                   'result2' : np.mean})
   .....:
Out[563]: 
      result2   result1
A                      
bar -0.579846 -1.739537
foo -0.280588 -1.402938

However, this only works on a Series groupby object. And when a dict is similarly passed to a groupby DataFrame, it expects the keys to be the column names that the function will be applied to.

What I want to do is apply multiple functions to several columns (but certain columns will be operated on multiple times). Also, some functions will depend on other columns in the groupby object (like sumif functions). My current solution is to go column by column, and doing something like the code above, using lambdas for functions that depend on other rows. But this is taking a long time, (I think it takes a long time to iterate through a groupby object). I’ll have to change it so that I iterate through the whole groupby object in a single run, but I’m wondering if there’s a built in way in pandas to do this somewhat cleanly.

For example, I’ve tried something like

grouped.agg({'C_sum' : lambda x: x['C'].sum(),
             'C_std': lambda x: x['C'].std(),
             'D_sum' : lambda x: x['D'].sum(),
             'D_sumifC3': lambda x: x['D'][x['C'] == 3].sum(), ...})

but as expected I get a KeyError (since the keys have to be a column if agg is called from a DataFrame).

Is there any built in way to do what I’d like to do, or a possibility that this functionality may be added, or will I just need to iterate through the groupby manually?

Thanks


回答 0

当前被接受的答案的后半部分已经过时,包含两个已弃用的用法。首先也是最重要的,您无法再把字典的字典传给 groupby 的 agg 方法;其次,永远不要使用 .ix。

如果您希望同时处理两个不同的列,建议使用 apply 方法,它会把 DataFrame 隐式传给所应用的函数。让我们使用与上面类似的数据框:

df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd'))
df['group'] = [0, 0, 1, 1]
df

          a         b         c         d  group
0  0.418500  0.030955  0.874869  0.145641      0
1  0.446069  0.901153  0.095052  0.487040      0
2  0.843026  0.936169  0.926090  0.041722      1
3  0.635846  0.439175  0.828787  0.714123      1

从列名映射到聚合函数的字典仍然是执行聚合的理想方法。

df.groupby('group').agg({'a':['sum', 'max'], 
                         'b':'mean', 
                         'c':'sum', 
                         'd': lambda x: x.max() - x.min()})

              a                   b         c         d
            sum       max      mean       sum  <lambda>
group                                                  
0      0.864569  0.446069  0.466054  0.969921  0.341399
1      1.478872  0.843026  0.687672  1.754877  0.672401

如果您不喜欢该丑陋的lambda列名称,则可以使用常规函数,并为特殊__name__属性提供自定义名称,如下所示:

def max_min(x):
    return x.max() - x.min()

max_min.__name__ = 'Max minus Min'

df.groupby('group').agg({'a':['sum', 'max'], 
                         'b':'mean', 
                         'c':'sum', 
                         'd': max_min})

              a                   b         c             d
            sum       max      mean       sum Max minus Min
group                                                      
0      0.864569  0.446069  0.466054  0.969921      0.341399
1      1.478872  0.843026  0.687672  1.754877      0.672401

使用 apply 并返回 Series

现在,如果您有多个需要相互配合的列,就无法使用 agg,它只会把 Series 隐式传给聚合函数;而使用 apply 时,整个分组会作为一个 DataFrame 传入函数。

我建议编写一个自定义函数,返回包含所有聚合结果的 Series,并用 Series 的索引作为新列的标签:

def f(x):
    d = {}
    d['a_sum'] = x['a'].sum()
    d['a_max'] = x['a'].max()
    d['b_mean'] = x['b'].mean()
    d['c_d_prodsum'] = (x['c'] * x['d']).sum()
    return pd.Series(d, index=['a_sum', 'a_max', 'b_mean', 'c_d_prodsum'])

df.groupby('group').apply(f)

         a_sum     a_max    b_mean  c_d_prodsum
group                                           
0      0.864569  0.446069  0.466054     0.173711
1      1.478872  0.843026  0.687672     0.630494

如果您偏爱 MultiIndex,仍然可以像下面这样返回带多级索引的 Series:

    def f_mi(x):
        d = []
        d.append(x['a'].sum())
        d.append(x['a'].max())
        d.append(x['b'].mean())
        d.append((x['c'] * x['d']).sum())
        return pd.Series(d, index=[['a', 'a', 'b', 'c_d'], 
                                   ['sum', 'max', 'mean', 'prodsum']])

df.groupby('group').apply(f_mi)

              a                   b       c_d
            sum       max      mean   prodsum
group                                        
0      0.864569  0.446069  0.466054  0.173711
1      1.478872  0.843026  0.687672  0.630494

The second half of the currently accepted answer is outdated and has two deprecations. First and most important, you can no longer pass a dictionary of dictionaries to the agg groupby method. Second, never use .ix.

If you desire to work with two separate columns at the same time I would suggest using the apply method which implicitly passes a DataFrame to the applied function. Let’s use a similar dataframe as the one from above

df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd'))
df['group'] = [0, 0, 1, 1]
df

          a         b         c         d  group
0  0.418500  0.030955  0.874869  0.145641      0
1  0.446069  0.901153  0.095052  0.487040      0
2  0.843026  0.936169  0.926090  0.041722      1
3  0.635846  0.439175  0.828787  0.714123      1

A dictionary mapped from column names to aggregation functions is still a perfectly good way to perform an aggregation.

df.groupby('group').agg({'a':['sum', 'max'], 
                         'b':'mean', 
                         'c':'sum', 
                         'd': lambda x: x.max() - x.min()})

              a                   b         c         d
            sum       max      mean       sum  <lambda>
group                                                  
0      0.864569  0.446069  0.466054  0.969921  0.341399
1      1.478872  0.843026  0.687672  1.754877  0.672401

If you don’t like that ugly lambda column name, you can use a normal function and supply a custom name to the special __name__ attribute like this:

def max_min(x):
    return x.max() - x.min()

max_min.__name__ = 'Max minus Min'

df.groupby('group').agg({'a':['sum', 'max'], 
                         'b':'mean', 
                         'c':'sum', 
                         'd': max_min})

              a                   b         c             d
            sum       max      mean       sum Max minus Min
group                                                      
0      0.864569  0.446069  0.466054  0.969921      0.341399
1      1.478872  0.843026  0.687672  1.754877      0.672401

Using apply and returning a Series

Now, if you had multiple columns that needed to interact together then you cannot use agg, which implicitly passes a Series to the aggregating function. When using apply the entire group as a DataFrame gets passed into the function.

I recommend making a single custom function that returns a Series of all the aggregations. Use the Series index as labels for the new columns:

def f(x):
    d = {}
    d['a_sum'] = x['a'].sum()
    d['a_max'] = x['a'].max()
    d['b_mean'] = x['b'].mean()
    d['c_d_prodsum'] = (x['c'] * x['d']).sum()
    return pd.Series(d, index=['a_sum', 'a_max', 'b_mean', 'c_d_prodsum'])

df.groupby('group').apply(f)

         a_sum     a_max    b_mean  c_d_prodsum
group                                           
0      0.864569  0.446069  0.466054     0.173711
1      1.478872  0.843026  0.687672     0.630494

If you are in love with MultiIndexes, you can still return a Series with one like this:

    def f_mi(x):
        d = []
        d.append(x['a'].sum())
        d.append(x['a'].max())
        d.append(x['b'].mean())
        d.append((x['c'] * x['d']).sum())
        return pd.Series(d, index=[['a', 'a', 'b', 'c_d'], 
                                   ['sum', 'max', 'mean', 'prodsum']])

df.groupby('group').apply(f_mi)

              a                   b       c_d
            sum       max      mean   prodsum
group                                        
0      0.864569  0.446069  0.466054  0.173711
1      1.478872  0.843026  0.687672  0.630494

回答 1

对于第一部分,您可以传递一个字典,键为列名,值为函数列表:

In [28]: df
Out[28]:
          A         B         C         D         E  GRP
0  0.395670  0.219560  0.600644  0.613445  0.242893    0
1  0.323911  0.464584  0.107215  0.204072  0.927325    0
2  0.321358  0.076037  0.166946  0.439661  0.914612    1
3  0.133466  0.447946  0.014815  0.130781  0.268290    1

In [26]: f = {'A':['sum','mean'], 'B':['prod']}

In [27]: df.groupby('GRP').agg(f)
Out[27]:
            A                   B
          sum      mean      prod
GRP
0    0.719580  0.359790  0.102004
1    0.454824  0.227412  0.034060

更新1:

由于聚合函数适用于Series,因此对其他列名称的引用会丢失。为了解决这个问题,您可以引用整个数据框并使用lambda函数中的组索引对其进行索引。

这是一个骇人的解决方法:

In [67]: f = {'A':['sum','mean'], 'B':['prod'], 'D': lambda g: df.loc[g.index].E.sum()}

In [69]: df.groupby('GRP').agg(f)
Out[69]:
            A                   B         D
          sum      mean      prod  <lambda>
GRP
0    0.719580  0.359790  0.102004  1.170219
1    0.454824  0.227412  0.034060  1.182901

在此,结果“ D”列由总和“ E”值组成。

更新2:

我认为这是一种可以满足您要求的方法。首先创建一个自定义 lambda 函数。下面的 g 指代分组;聚合时,g 是一个 Series。把 g.index 传给 df.loc[],即可从 df 中选出当前组。然后我测试 C 列是否小于 0.5,返回的布尔 Series 被传给 g[],只选出满足条件的行。

In [95]: cust = lambda g: g[df.loc[g.index]['C'] < 0.5].sum()

In [96]: f = {'A':['sum','mean'], 'B':['prod'], 'D': {'my name': cust}}

In [97]: df.groupby('GRP').agg(f)
Out[97]:
            A                   B         D
          sum      mean      prod   my name
GRP
0    0.719580  0.359790  0.102004  0.204072
1    0.454824  0.227412  0.034060  0.570441

For the first part you can pass a dict of column names for keys and a list of functions for the values:

In [28]: df
Out[28]:
          A         B         C         D         E  GRP
0  0.395670  0.219560  0.600644  0.613445  0.242893    0
1  0.323911  0.464584  0.107215  0.204072  0.927325    0
2  0.321358  0.076037  0.166946  0.439661  0.914612    1
3  0.133466  0.447946  0.014815  0.130781  0.268290    1

In [26]: f = {'A':['sum','mean'], 'B':['prod']}

In [27]: df.groupby('GRP').agg(f)
Out[27]:
            A                   B
          sum      mean      prod
GRP
0    0.719580  0.359790  0.102004
1    0.454824  0.227412  0.034060

UPDATE 1:

Because the aggregate function works on Series, references to the other column names are lost. To get around this, you can reference the full dataframe and index it using the group indices within the lambda function.

Here’s a hacky workaround:

In [67]: f = {'A':['sum','mean'], 'B':['prod'], 'D': lambda g: df.loc[g.index].E.sum()}

In [69]: df.groupby('GRP').agg(f)
Out[69]:
            A                   B         D
          sum      mean      prod  <lambda>
GRP
0    0.719580  0.359790  0.102004  1.170219
1    0.454824  0.227412  0.034060  1.182901

Here, the resultant ‘D’ column is made up of the summed ‘E’ values.

UPDATE 2:

Here’s a method that I think will do everything you ask. First make a custom lambda function. Below, g references the group. When aggregating, g will be a Series. Passing g.index to df.loc[] selects the current group from df. I then test if column C is less than 0.5. The returned boolean series is passed to g[], which selects only those rows meeting the criteria.

In [95]: cust = lambda g: g[df.loc[g.index]['C'] < 0.5].sum()

In [96]: f = {'A':['sum','mean'], 'B':['prod'], 'D': {'my name': cust}}

In [97]: df.groupby('GRP').agg(f)
Out[97]:
            A                   B         D
          sum      mean      prod   my name
GRP
0    0.719580  0.359790  0.102004  0.204072
1    0.454824  0.227412  0.034060  0.570441

回答 2

作为泰德·彼得鲁(Ted Petrou)回答的替代方案(主要是美学方面),我发现我更喜欢紧凑的清单。请不要考虑接受它,它只是对Ted的答案以及代码/数据的更详细的评论。Python / pandas不是我的第一个/最好的,但是我发现它读起来不错:

df.groupby('group') \
  .apply(lambda x: pd.Series({
      'a_sum'       : x['a'].sum(),
      'a_max'       : x['a'].max(),
      'b_mean'      : x['b'].mean(),
      'c_d_prodsum' : (x['c'] * x['d']).sum()
  })
)

          a_sum     a_max    b_mean  c_d_prodsum
group                                           
0      0.530559  0.374540  0.553354     0.488525
1      1.433558  0.832443  0.460206     0.053313

我发现它更让人联想到 dplyr 管道和 data.table 的链式命令。并不是说它们更好,只是我更熟悉。(我当然认识到使用更正式的 def 函数来完成这类操作的威力,对许多人来说那也是首选。这只是一种替代方案,不一定更好。)


我以与Ted相同的方式生成数据,我将添加一个可重复性的种子。

import numpy as np
np.random.seed(42)
df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd'))
df['group'] = [0, 0, 1, 1]
df

          a         b         c         d  group
0  0.374540  0.950714  0.731994  0.598658      0
1  0.156019  0.155995  0.058084  0.866176      0
2  0.601115  0.708073  0.020584  0.969910      1
3  0.832443  0.212339  0.181825  0.183405      1

As an alternative (mostly on aesthetics) to Ted Petrou’s answer, I found I preferred a slightly more compact listing. Please don’t consider accepting it, it’s just a much-more-detailed comment on Ted’s answer, plus code/data. Python/pandas is not my first/best, but I found this to read well:

df.groupby('group') \
  .apply(lambda x: pd.Series({
      'a_sum'       : x['a'].sum(),
      'a_max'       : x['a'].max(),
      'b_mean'      : x['b'].mean(),
      'c_d_prodsum' : (x['c'] * x['d']).sum()
  })
)

          a_sum     a_max    b_mean  c_d_prodsum
group                                           
0      0.530559  0.374540  0.553354     0.488525
1      1.433558  0.832443  0.460206     0.053313

I find it more reminiscent of dplyr pipes and data.table chained commands. Not to say they’re better, just more familiar to me. (I certainly recognize the power and, for many, the preference of using more formalized def functions for these types of operations. This is just an alternative, not necessarily better.)


I generated data in the same manner as Ted, I’ll add a seed for reproducibility.

import numpy as np
np.random.seed(42)
df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd'))
df['group'] = [0, 0, 1, 1]
df

          a         b         c         d  group
0  0.374540  0.950714  0.731994  0.598658      0
1  0.156019  0.155995  0.058084  0.866176      0
2  0.601115  0.708073  0.020584  0.969910      1
3  0.832443  0.212339  0.181825  0.183405      1

回答 3

Pandas >= 0.25.0:命名聚合

从 pandas 0.25.0 起,我们正在远离基于字典的聚合与重命名,转向接受元组的命名聚合。现在可以在聚合的同时,将结果重命名为信息更丰富的列名:

范例

df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd'))
df['group'] = [0, 0, 1, 1]

          a         b         c         d  group
0  0.521279  0.914988  0.054057  0.125668      0
1  0.426058  0.828890  0.784093  0.446211      0
2  0.363136  0.843751  0.184967  0.467351      1
3  0.241012  0.470053  0.358018  0.525032      1

使用命名聚合来应用 GroupBy.agg:

df.groupby('group').agg(
             a_sum=('a', 'sum'),
             a_mean=('a', 'mean'),
             b_mean=('b', 'mean'),
             c_sum=('c', 'sum'),
             d_range=('d', lambda x: x.max() - x.min())
)

          a_sum    a_mean    b_mean     c_sum   d_range
group                                                  
0      0.947337  0.473668  0.871939  0.838150  0.320543
1      0.604149  0.302074  0.656902  0.542985  0.057681

Pandas >= 0.25.0, named aggregations

Since pandas version 0.25.0 or higher, we are moving away from the dictionary based aggregation and renaming, and moving towards named aggregations which accepts a tuple. Now we can simultaneously aggregate + rename to a more informative column name:

Example:

df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd'))
df['group'] = [0, 0, 1, 1]

          a         b         c         d  group
0  0.521279  0.914988  0.054057  0.125668      0
1  0.426058  0.828890  0.784093  0.446211      0
2  0.363136  0.843751  0.184967  0.467351      1
3  0.241012  0.470053  0.358018  0.525032      1

Apply GroupBy.agg with named aggregation:

df.groupby('group').agg(
             a_sum=('a', 'sum'),
             a_mean=('a', 'mean'),
             b_mean=('b', 'mean'),
             c_sum=('c', 'sum'),
             d_range=('d', lambda x: x.max() - x.min())
)

          a_sum    a_mean    b_mean     c_sum   d_range
group                                                  
0      0.947337  0.473668  0.871939  0.838150  0.320543
1      0.604149  0.302074  0.656902  0.542985  0.057681

回答 4

0.25.0版中的新功能。

为了通过控制输出列名来支持特定于列的聚合,pandas在GroupBy.agg()中接受特殊语法,称为“命名聚合”,其中

  • 关键字是输出列名称
  • 值是元组,其第一个元素是要选择的列,第二个元素是要应用于该列的聚合。Pandas为pandas.NamedAgg namedtuple提供了字段[‘column’,’aggfunc’],以使参数更清楚。像往常一样,聚合可以是可调用的或字符串别名。
    In [79]: animals = pd.DataFrame({'kind': ['cat', 'dog', 'cat', 'dog'],
       ....:                         'height': [9.1, 6.0, 9.5, 34.0],
       ....:                         'weight': [7.9, 7.5, 9.9, 198.0]})
       ....: 

    In [80]: animals
    Out[80]: 
      kind  height  weight
    0  cat     9.1     7.9
    1  dog     6.0     7.5
    2  cat     9.5     9.9
    3  dog    34.0   198.0

    In [81]: animals.groupby("kind").agg(
       ....:     min_height=pd.NamedAgg(column='height', aggfunc='min'),
       ....:     max_height=pd.NamedAgg(column='height', aggfunc='max'),
       ....:     average_weight=pd.NamedAgg(column='weight', aggfunc=np.mean),
       ....: )
       ....: 
    Out[81]: 
          min_height  max_height  average_weight
    kind                                        
    cat          9.1         9.5            8.90
    dog          6.0        34.0          102.75

pandas.NamedAgg只是一个namedtuple。普通元组也是允许的。

    In [82]: animals.groupby("kind").agg(
       ....:     min_height=('height', 'min'),
       ....:     max_height=('height', 'max'),
       ....:     average_weight=('weight', np.mean),
       ....: )
       ....: 
    Out[82]: 
          min_height  max_height  average_weight
    kind                                        
    cat          9.1         9.5            8.90
    dog          6.0        34.0          102.75

其他关键字参数不会传递给聚合函数。只有成对的(column,aggfunc)应该作为** kwargs传递。如果您的聚合函数需要其他参数,请通过functools.partial()部分应用它们。

命名聚合对于Series groupby聚合也有效。在这种情况下,没有列选择,因此值仅是函数。

    In [84]: animals.groupby("kind").height.agg(
       ....:     min_height='min',
       ....:     max_height='max',
       ....: )
       ....: 
    Out[84]: 
          min_height  max_height
    kind                        
    cat          9.1         9.5
    dog          6.0        34.0

New in version 0.25.0.

To support column-specific aggregation with control over the output column names, pandas accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where

  • The keywords are the output column names
  • The values are tuples whose first element is the column to select and the second element is the aggregation to apply to that column. Pandas provides the pandas.NamedAgg namedtuple with the fields [‘column’, ‘aggfunc’] to make it clearer what the arguments are. As usual, the aggregation can be a callable or a string alias.
    In [79]: animals = pd.DataFrame({'kind': ['cat', 'dog', 'cat', 'dog'],
       ....:                         'height': [9.1, 6.0, 9.5, 34.0],
       ....:                         'weight': [7.9, 7.5, 9.9, 198.0]})
       ....: 

    In [80]: animals
    Out[80]: 
      kind  height  weight
    0  cat     9.1     7.9
    1  dog     6.0     7.5
    2  cat     9.5     9.9
    3  dog    34.0   198.0

    In [81]: animals.groupby("kind").agg(
       ....:     min_height=pd.NamedAgg(column='height', aggfunc='min'),
       ....:     max_height=pd.NamedAgg(column='height', aggfunc='max'),
       ....:     average_weight=pd.NamedAgg(column='weight', aggfunc=np.mean),
       ....: )
       ....: 
    Out[81]: 
          min_height  max_height  average_weight
    kind                                        
    cat          9.1         9.5            8.90
    dog          6.0        34.0          102.75

pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.

    In [82]: animals.groupby("kind").agg(
       ....:     min_height=('height', 'min'),
       ....:     max_height=('height', 'max'),
       ....:     average_weight=('weight', np.mean),
       ....: )
       ....: 
    Out[82]: 
          min_height  max_height  average_weight
    kind                                        
    cat          9.1         9.5            8.90
    dog          6.0        34.0          102.75

Additional keyword arguments are not passed through to the aggregation functions. Only pairs of (column, aggfunc) should be passed as **kwargs. If your aggregation functions requires additional arguments, partially apply them with functools.partial().

Named aggregation is also valid for Series groupby aggregations. In this case there’s no column selection, so the values are just the functions.

    In [84]: animals.groupby("kind").height.agg(
       ....:     min_height='min',
       ....:     max_height='max',
       ....: )
       ....: 
    Out[84]: 
          min_height  max_height
    kind                        
    cat          9.1         9.5
    dog          6.0        34.0
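As a sketch of the functools.partial() remark above (height_q90 is an illustrative output name, and quantile is just an example of an aggregation that needs an extra argument):

import functools

# bind the extra q argument so agg can call the function with only the Series
q90 = functools.partial(pd.Series.quantile, q=0.9)

animals.groupby("kind").agg(height_q90=('height', q90))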

回答 5

特德的答案是惊人的。我最终使用了一个较小的版本,以防有人感兴趣。在寻找一种取决于多个列中的值的聚合时很有用:

创建一个数据框

df=pd.DataFrame({'a': [1,2,3,4,5,6], 'b': [1,1,0,1,1,0], 'c': ['x','x','y','y','z','z']})


   a  b  c
0  1  1  x
1  2  1  x
2  3  0  y
3  4  1  y
4  5  1  z
5  6  0  z

使用apply分组和聚合(使用多列)

df.groupby('c').apply(lambda x: x['a'][(x['a']>1) & (x['b']==1)].mean())

c
x    2.0
y    4.0
z    5.0

使用聚合进行分组和聚合(使用多列)

我喜欢这种方法,因为我仍然可以使用 aggregate。也许有人能告诉我,为什么在对分组做聚合、需要用到多列时必须改用 apply。

现在看起来很明显:只要不在 groupby 之后直接选择感兴趣的列,就可以在聚合函数内部访问数据框的所有列。

仅访问所选列

df.groupby('c')['a'].aggregate(lambda x: x[x>1].mean())

访问所有列,因为列选择放在了所有操作之后

df.groupby('c').aggregate(lambda x: x[(x['a']>1) & (x['b']==1)].mean())['a']

或类似

df.groupby('c').aggregate(lambda x: x['a'][(x['a']>1) & (x['b']==1)].mean())

我希望这有帮助。

Ted’s answer is amazing. I ended up using a smaller version of that in case anyone is interested. Useful when you are looking for one aggregation that depends on values from multiple columns:

create a dataframe

df=pd.DataFrame({'a': [1,2,3,4,5,6], 'b': [1,1,0,1,1,0], 'c': ['x','x','y','y','z','z']})


   a  b  c
0  1  1  x
1  2  1  x
2  3  0  y
3  4  1  y
4  5  1  z
5  6  0  z

grouping and aggregating with apply (using multiple columns)

df.groupby('c').apply(lambda x: x['a'][(x['a']>1) & (x['b']==1)].mean())

c
x    2.0
y    4.0
z    5.0

grouping and aggregating with aggregate (using multiple columns)

I like this approach since I can still use aggregate. Perhaps people will let me know why apply is needed for getting at multiple columns when doing aggregations on groups.

It seems obvious now, but as long as you don’t select the column of interest directly after the groupby, you will have access to all the columns of the dataframe from within your aggregation function.

only access to the selected column

df.groupby('c')['a'].aggregate(lambda x: x[x>1].mean())

access to all columns since selection is after all the magic

df.groupby('c').aggregate(lambda x: x[(x['a']>1) & (x['b']==1)].mean())['a']

or similarly

df.groupby('c').aggregate(lambda x: x['a'][(x['a']>1) & (x['b']==1)].mean())

I hope this helps.
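A caveat on the two “access to all columns” variants above: they rely on agg falling back to passing the whole group when the function fails column-wise, behaviour that has changed across pandas versions. The apply spelling below makes the whole-group intent explicit and is presumably the more robust equivalent (a sketch, assuming the same df as above):

df.groupby('c').apply(lambda g: g.loc[(g['a'] > 1) & (g['b'] == 1), 'a'].mean())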


熊猫 groupby 求和

问题:熊猫 groupby 求和

我正在使用此数据框:

Fruit   Date      Name  Number
Apples  10/6/2016 Bob    7
Apples  10/6/2016 Bob    8
Apples  10/6/2016 Mike   9
Apples  10/7/2016 Steve 10
Apples  10/7/2016 Bob    1
Oranges 10/7/2016 Bob    2
Oranges 10/6/2016 Tom   15
Oranges 10/6/2016 Mike  57
Oranges 10/6/2016 Bob   65
Oranges 10/7/2016 Tony   1
Grapes  10/7/2016 Bob    1
Grapes  10/7/2016 Tom   87
Grapes  10/7/2016 Bob   22
Grapes  10/7/2016 Bob   12
Grapes  10/7/2016 Tony  15

我想按名称然后按水果进行汇总,以获得每个名称的水果总数。

Bob,Apples,16 ( for example )

我尝试按名称和水果分组,但是如何获取水果总数。

I am using this data frame:

Fruit   Date      Name  Number
Apples  10/6/2016 Bob    7
Apples  10/6/2016 Bob    8
Apples  10/6/2016 Mike   9
Apples  10/7/2016 Steve 10
Apples  10/7/2016 Bob    1
Oranges 10/7/2016 Bob    2
Oranges 10/6/2016 Tom   15
Oranges 10/6/2016 Mike  57
Oranges 10/6/2016 Bob   65
Oranges 10/7/2016 Tony   1
Grapes  10/7/2016 Bob    1
Grapes  10/7/2016 Tom   87
Grapes  10/7/2016 Bob   22
Grapes  10/7/2016 Bob   12
Grapes  10/7/2016 Tony  15

I want to aggregate this by name and then by fruit to get a total number of fruit per name.

Bob,Apples,16 ( for example )

I tried grouping by Name and Fruit but how do I get the total number of fruit.


回答 0

使用 GroupBy.sum:

df.groupby(['Fruit','Name']).sum()

Out[31]: 
               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Grapes  Bob        35
        Tom        87
        Tony       15
Oranges Bob        67
        Mike       57
        Tom        15
        Tony        1

Use GroupBy.sum:

df.groupby(['Fruit','Name']).sum()

Out[31]: 
               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Grapes  Bob        35
        Tom        87
        Tony       15
Oranges Bob        67
        Mike       57
        Tom        15
        Tony        1

回答 1

你也可以使用agg函数

df.groupby(['Name', 'Fruit'])['Number'].agg('sum')

Also you can use agg function,

df.groupby(['Name', 'Fruit'])['Number'].agg('sum')

回答 2

如果要保留原始列 Fruit 和 Name,请使用 reset_index()。否则,Fruit 和 Name 将成为索引的一部分。

df.groupby(['Fruit','Name'])['Number'].sum().reset_index()

Fruit   Name       Number
Apples  Bob        16
Apples  Mike        9
Apples  Steve      10
Grapes  Bob        35
Grapes  Tom        87
Grapes  Tony       15
Oranges Bob        67
Oranges Mike       57
Oranges Tom        15
Oranges Tony        1

如其他答案所示:

df.groupby(['Fruit','Name'])['Number'].sum()

               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Grapes  Bob        35
        Tom        87
        Tony       15
Oranges Bob        67
        Mike       57
        Tom        15
        Tony        1

If you want to keep the original columns Fruit and Name, use reset_index(). Otherwise Fruit and Name will become part of the index.

df.groupby(['Fruit','Name'])['Number'].sum().reset_index()

Fruit   Name       Number
Apples  Bob        16
Apples  Mike        9
Apples  Steve      10
Grapes  Bob        35
Grapes  Tom        87
Grapes  Tony       15
Oranges Bob        67
Oranges Mike       57
Oranges Tom        15
Oranges Tony        1

As seen in the other answers:

df.groupby(['Fruit','Name'])['Number'].sum()

               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Grapes  Bob        35
        Tom        87
        Tony       15
Oranges Bob        67
        Mike       57
        Tom        15
        Tony        1

回答 3

其他两个答案都能满足您的需求。

您可以使用该pivot功能将数据排列在一个漂亮的表中

df.groupby(['Fruit','Name'],as_index = False).sum().pivot('Fruit','Name').fillna(0)



Name    Bob     Mike    Steve   Tom    Tony
Fruit                   
Apples  16.0    9.0     10.0    0.0     0.0
Grapes  35.0    0.0     0.0     87.0    15.0
Oranges 67.0    57.0    0.0     15.0    1.0

Both the other answers accomplish what you want.

You can use the pivot functionality to arrange the data in a nice table

df.groupby(['Fruit','Name'],as_index = False).sum().pivot('Fruit','Name').fillna(0)



Name    Bob     Mike    Steve   Tom    Tony
Fruit                   
Apples  16.0    9.0     10.0    0.0     0.0
Grapes  35.0    0.0     0.0     87.0    15.0
Oranges 67.0    57.0    0.0     15.0    1.0
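On newer pandas (2.0 and later), pivot only accepts keyword arguments, and the string Date column is no longer silently dropped by sum; a hedged equivalent that selects Number explicitly:

df.groupby(['Fruit', 'Name'], as_index=False)['Number'].sum().pivot(index='Fruit', columns='Name', values='Number').fillna(0)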

回答 4

df.groupby(['Fruit','Name'])['Number'].sum()

您可以选择不同的列来对数字求和。

df.groupby(['Fruit','Name'])['Number'].sum()

You can select different columns to sum numbers.


回答 5

您可以把 groupby 的列设为 index,然后在 sum 中使用 level 参数:

df.set_index(['Fruit','Name']).sum(level=[0,1])
Out[175]: 
               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Oranges Bob        67
        Tom        15
        Mike       57
        Tony        1
Grapes  Bob        35
        Tom        87
        Tony       15

You can set the groupby column to index then using sum with level

df.set_index(['Fruit','Name']).sum(level=[0,1])
Out[175]: 
               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Oranges Bob        67
        Tom        15
        Mike       57
        Tony        1
Grapes  Bob        35
        Tom        87
        Tony       15
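Note that sum(level=...) has since been deprecated and later removed; on current pandas the equivalent is presumably an explicit groupby on those index levels (selecting Number to keep the aggregation numeric):

df.set_index(['Fruit', 'Name'])['Number'].groupby(level=[0, 1]).sum()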

回答 6

.agg() 函数的一种变体;提供以下功能:(1)保持结果为 DataFrame 类型,(2)应用均值、计数、求和等聚合,以及(3)在保持易读性的同时按多列进行 groupby。

df.groupby(['att1', 'att2']).agg({'att1': "count", 'att3': "sum",'att4': 'mean'})

用你的价值观…

df.groupby(['Name', 'Fruit']).agg({'Number': "sum"})

A variation on the .agg() function; provides the ability to (1) persist type DataFrame, (2) apply averages, counts, summations, etc. and (3) enables groupby on multiple columns while maintaining legibility.

df.groupby(['att1', 'att2']).agg({'att1': "count", 'att3': "sum",'att4': 'mean'})

using your values…

df.groupby(['Name', 'Fruit']).agg({'Number': "sum"})

熊猫中等效的 count(distinct)

问题:熊猫中等效的 count(distinct)

我把 pandas 当作数据库的替代品,因为我有多个数据库(oracle、mssql 等),无法把这一系列命令写成等效的 SQL。

我在DataFrame中加载了一个带有一些列的表:

YEARMONTH, CLIENTCODE, SIZE, .... etc etc

在SQL中,每年计算不同客户端的数量将是:

SELECT count(distinct CLIENTCODE) FROM table GROUP BY YEARMONTH;

结果将是

201301    5000
201302    13245

如何在熊猫中做到这一点?

I am using pandas as a db substitute as I have multiple databases (oracle, mssql, etc) and I am unable to make a sequence of commands to a SQL equivalent.

I have a table loaded in a DataFrame with some columns:

YEARMONTH, CLIENTCODE, SIZE, .... etc etc

In SQL, to count the amount of different clients per year would be:

SELECT count(distinct CLIENTCODE) FROM table GROUP BY YEARMONTH;

And the result would be

201301    5000
201302    13245

How can I do that in pandas?


回答 0

我相信这就是您想要的:

table.groupby('YEARMONTH').CLIENTCODE.nunique()

例:

In [2]: table
Out[2]: 
   CLIENTCODE  YEARMONTH
0           1     201301
1           1     201301
2           2     201301
3           1     201302
4           2     201302
5           2     201302
6           3     201302

In [3]: table.groupby('YEARMONTH').CLIENTCODE.nunique()
Out[3]: 
YEARMONTH
201301       2
201302       3

I believe this is what you want:

table.groupby('YEARMONTH').CLIENTCODE.nunique()

Example:

In [2]: table
Out[2]: 
   CLIENTCODE  YEARMONTH
0           1     201301
1           1     201301
2           2     201301
3           1     201302
4           2     201302
5           2     201302
6           3     201302

In [3]: table.groupby('YEARMONTH').CLIENTCODE.nunique()
Out[3]: 
YEARMONTH
201301       2
201302       3
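If you would rather get the result back as a DataFrame with a descriptive column name, named aggregation (pandas >= 0.25) is one tidy spelling (n_clients is an illustrative name):

table.groupby('YEARMONTH', as_index=False).agg(n_clients=('CLIENTCODE', 'nunique'))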

回答 1

这是另一种方法,非常简单,可以说您的数据框名称为daat,列名称为YEARMONTH

daat.YEARMONTH.value_counts()

Here is another, much simpler method; let’s say your dataframe name is daat and the column name is YEARMONTH:

daat.YEARMONTH.value_counts()

回答 2

有趣的是,len(unique()) 通常比 nunique() 快几倍(3x-15x)。

Interestingly enough, very often len(unique()) is a few times (3x-15x) faster than nunique().
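The comparison presumably refers to spelling the distinct count by hand, along these lines:

table.groupby('YEARMONTH').CLIENTCODE.apply(lambda x: len(x.unique()))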


回答 3

使用 crosstab,这将返回比 groupby + nunique 更多的信息:

pd.crosstab(df.YEARMONTH,df.CLIENTCODE)
Out[196]: 
CLIENTCODE  1  2  3
YEARMONTH          
201301      2  1  0
201302      1  2  1

稍作修改后,产生结果

pd.crosstab(df.YEARMONTH,df.CLIENTCODE).ne(0).sum(1)
Out[197]: 
YEARMONTH
201301    2
201302    3
dtype: int64

Using crosstab, this will return more information than groupby nunique

pd.crosstab(df.YEARMONTH,df.CLIENTCODE)
Out[196]: 
CLIENTCODE  1  2  3
YEARMONTH          
201301      2  1  0
201302      1  2  1

After a little modification, it yields the result:

pd.crosstab(df.YEARMONTH,df.CLIENTCODE).ne(0).sum(1)
Out[197]: 
YEARMONTH
201301    2
201302    3
dtype: int64

回答 4

我也在使用 nunique,但如果您必须使用诸如 'min'、'max'、'count'、'mean' 之类的聚合函数,transform 会非常有帮助:

df.groupby('YEARMONTH')['CLIENTCODE'].transform('nunique') #count(distinct)
df.groupby('YEARMONTH')['CLIENTCODE'].transform('min')     #min
df.groupby('YEARMONTH')['CLIENTCODE'].transform('max')     #max
df.groupby('YEARMONTH')['CLIENTCODE'].transform('mean')    #average
df.groupby('YEARMONTH')['CLIENTCODE'].transform('count')   #count

I am also using nunique but it will be very helpful if you have to use an aggregate function like 'min', 'max', 'count' or 'mean' etc.

df.groupby('YEARMONTH')['CLIENTCODE'].transform('nunique') #count(distinct)
df.groupby('YEARMONTH')['CLIENTCODE'].transform('min')     #min
df.groupby('YEARMONTH')['CLIENTCODE'].transform('max')     #max
df.groupby('YEARMONTH')['CLIENTCODE'].transform('mean')    #average
df.groupby('YEARMONTH')['CLIENTCODE'].transform('count')   #count
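Unlike agg, transform broadcasts the per-group result back onto the original rows (one value per row), which is what makes it handy for adding a column; a small sketch (n_clients is an illustrative name):

# each row receives the distinct-client count of its YEARMONTH group
df['n_clients'] = df.groupby('YEARMONTH')['CLIENTCODE'].transform('nunique')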

回答 5

使用新的Pandas版本,很容易获得数据框

unique_count = df.groupby(['YEARMONTH'], as_index=False).agg(uniq_CLIENTCODE=('CLIENTCODE', pd.Series.nunique))

With new pandas version, it is easy to get as dataframe

unique_count = df.groupby(['YEARMONTH'], as_index=False).agg(uniq_CLIENTCODE=('CLIENTCODE', pd.Series.nunique))

回答 6

这是一种在多个列上计算不同值数量(count distinct)的方法。先来一些数据:

data = {'CLIENT_CODE':[1,1,2,1,2,2,3],
        'YEAR_MONTH':[201301,201301,201301,201302,201302,201302,201302],
        'PRODUCT_CODE': [100,150,220,400,50,80,100]
       }
table = pd.DataFrame(data)
table

CLIENT_CODE YEAR_MONTH  PRODUCT_CODE
0   1       201301      100
1   1       201301      150
2   2       201301      220
3   1       201302      400
4   2       201302      50
5   2       201302      80
6   3       201302      100

现在,列出感兴趣的列,并使用经过稍微修改的语法的groupby:

columns = ['YEAR_MONTH', 'PRODUCT_CODE']
table[columns].groupby(table['CLIENT_CODE']).nunique()

我们获得:

             YEAR_MONTH  PRODUCT_CODE
CLIENT_CODE                          
1                     2             3
2                     2             3
3                     1             1

Here is an approach to count distinct over multiple columns. Let’s have some data:

data = {'CLIENT_CODE':[1,1,2,1,2,2,3],
        'YEAR_MONTH':[201301,201301,201301,201302,201302,201302,201302],
        'PRODUCT_CODE': [100,150,220,400,50,80,100]
       }
table = pd.DataFrame(data)
table

CLIENT_CODE YEAR_MONTH  PRODUCT_CODE
0   1       201301      100
1   1       201301      150
2   2       201301      220
3   1       201302      400
4   2       201302      50
5   2       201302      80
6   3       201302      100

Now, list the columns of interest and use groupby in a slightly modified syntax:

columns = ['YEAR_MONTH', 'PRODUCT_CODE']
table[columns].groupby(table['CLIENT_CODE']).nunique()

We obtain:

             YEAR_MONTH  PRODUCT_CODE
CLIENT_CODE                          
1                     2             3
2                     2             3
3                     1             1

回答 7

某一列的 distinct 计数,以及其他列上的聚合

要获取任何列(在您的情况下是 CLIENTCODE)的不同值数量,可以使用 nunique。我们可以把字典传给 agg 函数,同时对其他列进行聚合:

grp_df = df.groupby('YEARMONTH').agg({'CLIENTCODE': ['nunique'],
                                      'other_col_1': ['sum', 'count']})

# to flatten the multi-level columns
grp_df.columns = ["_".join(col).strip() for col in grp_df.columns.values]

# if you wish to reset the index
grp_df.reset_index(inplace=True)

Distinct of column along with aggregations on other columns

To get the distinct number of values for any column (CLIENTCODE in your case), we can use nunique. We can pass the input as a dictionary in agg function, along with aggregations on other columns:

grp_df = df.groupby('YEARMONTH').agg({'CLIENTCODE': ['nunique'],
                                      'other_col_1': ['sum', 'count']})

# to flatten the multi-level columns
grp_df.columns = ["_".join(col).strip() for col in grp_df.columns.values]

# if you wish to reset the index
grp_df.reset_index(inplace=True)

如何旋转数据框

问题:如何旋转数据框

  • 什么是透视(pivot)?
  • 我如何进行透视?
  • 这是透视吗?
  • 长格式转宽格式?

我已经看到很多有关数据透视表的问题。即使提问者不知道自己问的是数据透视表,通常也确实如此。几乎不可能写出一个涵盖透视各个方面的规范问答……

…但是我要去尝试一下。


现有问答的问题在于,提问往往聚焦于某个细微差别,而 OP 难以将其推广,因而用不上许多现有的优质答案。然而,没有一个答案尝试给出全面的解释(因为这是一项艰巨的任务)。

从我的Google搜索中查找一些示例

  1. 如何在Pandas中透视数据框?
    • 好问题和答案。但是答案只回答了那个具体问题,几乎没有解释。
  2. 熊猫数据透视表到数据框
    • 在此问题中,OP 关心的是透视的输出,即列的外观。OP 希望它看起来像 R 的样子。这对 pandas 用户不是很有帮助。
  3. 旋转数据框的熊猫,重复的行
    • 另一个不错的问题,但答案集中在一种方法上,即 pd.DataFrame.pivot

因此,每当有人搜索 pivot 时,得到的都是零星的结果,这些结果很可能无法回答他们的具体问题。


设定

您可能会注意到,我显眼地命名了我的列和相关的列值,以与我将在下面的答案中介绍的方式相对应。

import numpy as np
import pandas as pd
from numpy.core.defchararray import add

np.random.seed([3,1415])
n = 20

cols = np.array(['key', 'row', 'item', 'col'])
arr1 = (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str)

df = pd.DataFrame(
    add(cols, arr1), columns=cols
).join(
    pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val')
)
print(df)

     key   row   item   col  val0  val1
0   key0  row3  item1  col3  0.81  0.04
1   key1  row2  item1  col2  0.44  0.07
2   key1  row0  item1  col0  0.77  0.01
3   key0  row4  item0  col2  0.15  0.59
4   key1  row0  item2  col1  0.81  0.64
5   key1  row2  item2  col4  0.13  0.88
6   key2  row4  item1  col3  0.88  0.39
7   key1  row4  item1  col1  0.10  0.07
8   key1  row0  item2  col4  0.65  0.02
9   key1  row2  item0  col2  0.35  0.61
10  key2  row0  item2  col1  0.40  0.85
11  key2  row4  item1  col2  0.64  0.25
12  key0  row2  item2  col3  0.50  0.44
13  key0  row4  item1  col4  0.24  0.46
14  key1  row3  item2  col3  0.28  0.11
15  key0  row3  item1  col1  0.31  0.23
16  key0  row0  item2  col3  0.86  0.01
17  key0  row4  item0  col3  0.64  0.21
18  key2  row2  item2  col0  0.13  0.45
19  key0  row2  item0  col4  0.37  0.70

问题

  1. 我为什么得到 ValueError: Index contains duplicate entries, cannot reshape

  2. 如何透视 df,使 col 的值成为列,row 的值成为索引,单元格为 val0 的均值?

    col   col0   col1   col2   col3  col4
    row                                  
    row0  0.77  0.605    NaN  0.860  0.65
    row2  0.13    NaN  0.395  0.500  0.25
    row3   NaN  0.310    NaN  0.545   NaN
    row4   NaN  0.100  0.395  0.760  0.24
  3. 如何透视 df,使 col 的值成为列,row 的值成为索引,单元格为 val0 的均值,且缺失值填为 0?

    col   col0   col1   col2   col3  col4
    row                                  
    row0  0.77  0.605  0.000  0.860  0.65
    row2  0.13  0.000  0.395  0.500  0.25
    row3  0.00  0.310  0.000  0.545  0.00
    row4  0.00  0.100  0.395  0.760  0.24
  4. 除了 mean,我还能得到其他聚合吗,比如 sum?

    col   col0  col1  col2  col3  col4
    row                               
    row0  0.77  1.21  0.00  0.86  0.65
    row2  0.13  0.00  0.79  0.50  0.50
    row3  0.00  0.31  0.00  1.09  0.00
    row4  0.00  0.10  0.79  1.52  0.24
  5. 我可以一次做多个聚合吗?

           sum                          mean                           
    col   col0  col1  col2  col3  col4  col0   col1   col2   col3  col4
    row                                                                
    row0  0.77  1.21  0.00  0.86  0.65  0.77  0.605  0.000  0.860  0.65
    row2  0.13  0.00  0.79  0.50  0.50  0.13  0.000  0.395  0.500  0.25
    row3  0.00  0.31  0.00  1.09  0.00  0.00  0.310  0.000  0.545  0.00
    row4  0.00  0.10  0.79  1.52  0.24  0.00  0.100  0.395  0.760  0.24
  6. 我可以汇总多个值列吗?

          val0                             val1                          
    col   col0   col1   col2   col3  col4  col0   col1  col2   col3  col4
    row                                                                  
    row0  0.77  0.605  0.000  0.860  0.65  0.01  0.745  0.00  0.010  0.02
    row2  0.13  0.000  0.395  0.500  0.25  0.45  0.000  0.34  0.440  0.79
    row3  0.00  0.310  0.000  0.545  0.00  0.00  0.230  0.00  0.075  0.00
    row4  0.00  0.100  0.395  0.760  0.24  0.00  0.070  0.42  0.300  0.46
  7. 可以细分为多列吗?

    item item0             item1                         item2                   
    col   col2  col3  col4  col0  col1  col2  col3  col4  col0   col1  col3  col4
    row                                                                          
    row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.605  0.86  0.65
    row2  0.35  0.00  0.37  0.00  0.00  0.44  0.00  0.00  0.13  0.000  0.50  0.13
    row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.000  0.28  0.00
    row4  0.15  0.64  0.00  0.00  0.10  0.64  0.88  0.24  0.00  0.000  0.00  0.00
  8. 或者

    item      item0             item1                         item2                  
    col        col2  col3  col4  col0  col1  col2  col3  col4  col0  col1  col3  col4
    key  row                                                                         
    key0 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.86  0.00
         row2  0.00  0.00  0.37  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.50  0.00
         row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.00  0.00  0.00
         row4  0.15  0.64  0.00  0.00  0.00  0.00  0.00  0.24  0.00  0.00  0.00  0.00
    key1 row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.81  0.00  0.65
         row2  0.35  0.00  0.00  0.00  0.00  0.44  0.00  0.00  0.00  0.00  0.00  0.13
         row3  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.28  0.00
         row4  0.00  0.00  0.00  0.00  0.10  0.00  0.00  0.00  0.00  0.00  0.00  0.00
    key2 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.40  0.00  0.00
         row2  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.13  0.00  0.00  0.00
         row4  0.00  0.00  0.00  0.00  0.00  0.64  0.88  0.00  0.00  0.00  0.00  0.00
  9. 我可以汇总列和行一起出现的频率,又称为“交叉表”吗?

    col   col0  col1  col2  col3  col4
    row                               
    row0     1     2     0     1     1
    row2     1     0     2     1     2
    row3     0     1     0     2     0
    row4     0     1     2     2     1
  10. 如何通过仅旋转两列将DataFrame从长转换为宽?给定

    np.random.seed([3, 1415])
    df2 = pd.DataFrame({'A': list('aaaabbbc'), 'B': np.random.choice(15, 8)})        
    df2        
       A   B
    0  a   0
    1  a  11
    2  a   2
    3  a  11
    4  b  10
    5  b  10
    6  b  14
    7  c   7

    预期结果应该看起来像

          a     b    c
    0   0.0  10.0  7.0
    1  11.0  10.0  NaN
    2   2.0  14.0  NaN
    3  11.0   NaN  NaN
  11. 在pivot之后,我如何将多重索引展平为单索引?

       1  2   
       1  1  2        
    a  2  1  1
    b  2  1  0
    c  1  0  0

       1|1  2|1  2|2               
    a    2    1    1
    b    2    1    0
    c    1    0    0
  • What is pivot?
  • How do I pivot?
  • Is this a pivot?
  • Long format to wide format?

I’ve seen a lot of questions that ask about pivot tables. Even if they don’t know that they are asking about pivot tables, they usually are. It is virtually impossible to write a canonical question and answer that encompasses all aspects of pivoting….

… But I’m going to give it a go.


The problem with existing questions and answers is that often the question is focused on a nuance that the OP has trouble generalizing in order to use one of the existing good answers. However, none of the answers attempts to give a comprehensive explanation (because it’s a daunting task).

Look at a few examples from my Google search:

  1. How to pivot a dataframe in Pandas?
    • Good question and answer. But the answer only answers the specific question with little explanation.
  2. pandas pivot table to data frame
    • In this question, the OP is concerned with the output of the pivot, namely how the columns look. The OP wanted it to look like R. This isn’t very helpful for pandas users.
  3. pandas pivoting a dataframe, duplicate rows
    • Another decent question but the answer focuses on one method, namely pd.DataFrame.pivot

So whenever someone searches for pivot they get sporadic results that are likely not going to answer their specific question.


Setup

You may notice that I conspicuously named my columns and relevant column values to correspond with how I’m going to pivot in the answers below.

import numpy as np
import pandas as pd
from numpy.core.defchararray import add

np.random.seed([3,1415])
n = 20

cols = np.array(['key', 'row', 'item', 'col'])
arr1 = (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str)

df = pd.DataFrame(
    add(cols, arr1), columns=cols
).join(
    pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val')
)
print(df)

     key   row   item   col  val0  val1
0   key0  row3  item1  col3  0.81  0.04
1   key1  row2  item1  col2  0.44  0.07
2   key1  row0  item1  col0  0.77  0.01
3   key0  row4  item0  col2  0.15  0.59
4   key1  row0  item2  col1  0.81  0.64
5   key1  row2  item2  col4  0.13  0.88
6   key2  row4  item1  col3  0.88  0.39
7   key1  row4  item1  col1  0.10  0.07
8   key1  row0  item2  col4  0.65  0.02
9   key1  row2  item0  col2  0.35  0.61
10  key2  row0  item2  col1  0.40  0.85
11  key2  row4  item1  col2  0.64  0.25
12  key0  row2  item2  col3  0.50  0.44
13  key0  row4  item1  col4  0.24  0.46
14  key1  row3  item2  col3  0.28  0.11
15  key0  row3  item1  col1  0.31  0.23
16  key0  row0  item2  col3  0.86  0.01
17  key0  row4  item0  col3  0.64  0.21
18  key2  row2  item2  col0  0.13  0.45
19  key0  row2  item0  col4  0.37  0.70

Question(s)

  1. Why do I get ValueError: Index contains duplicate entries, cannot reshape

  2. How do I pivot df such that the col values are columns, row values are the index, and mean of val0 are the values?

    col   col0   col1   col2   col3  col4
    row                                  
    row0  0.77  0.605    NaN  0.860  0.65
    row2  0.13    NaN  0.395  0.500  0.25
    row3   NaN  0.310    NaN  0.545   NaN
    row4   NaN  0.100  0.395  0.760  0.24
    
  3. How do I pivot df such that the col values are columns, row values are the index, mean of val0 are the values, and missing values are 0?

    col   col0   col1   col2   col3  col4
    row                                  
    row0  0.77  0.605  0.000  0.860  0.65
    row2  0.13  0.000  0.395  0.500  0.25
    row3  0.00  0.310  0.000  0.545  0.00
    row4  0.00  0.100  0.395  0.760  0.24
    
  4. Can I get something other than mean, like maybe sum?

    col   col0  col1  col2  col3  col4
    row                               
    row0  0.77  1.21  0.00  0.86  0.65
    row2  0.13  0.00  0.79  0.50  0.50
    row3  0.00  0.31  0.00  1.09  0.00
    row4  0.00  0.10  0.79  1.52  0.24
    
  5. Can I do more than one aggregation at a time?

           sum                          mean                           
    col   col0  col1  col2  col3  col4  col0   col1   col2   col3  col4
    row                                                                
    row0  0.77  1.21  0.00  0.86  0.65  0.77  0.605  0.000  0.860  0.65
    row2  0.13  0.00  0.79  0.50  0.50  0.13  0.000  0.395  0.500  0.25
    row3  0.00  0.31  0.00  1.09  0.00  0.00  0.310  0.000  0.545  0.00
    row4  0.00  0.10  0.79  1.52  0.24  0.00  0.100  0.395  0.760  0.24
    
  6. Can I aggregate over multiple value columns?

          val0                             val1                          
    col   col0   col1   col2   col3  col4  col0   col1  col2   col3  col4
    row                                                                  
    row0  0.77  0.605  0.000  0.860  0.65  0.01  0.745  0.00  0.010  0.02
    row2  0.13  0.000  0.395  0.500  0.25  0.45  0.000  0.34  0.440  0.79
    row3  0.00  0.310  0.000  0.545  0.00  0.00  0.230  0.00  0.075  0.00
    row4  0.00  0.100  0.395  0.760  0.24  0.00  0.070  0.42  0.300  0.46
    
  7. Can I subdivide by multiple columns?

    item item0             item1                         item2                   
    col   col2  col3  col4  col0  col1  col2  col3  col4  col0   col1  col3  col4
    row                                                                          
    row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.605  0.86  0.65
    row2  0.35  0.00  0.37  0.00  0.00  0.44  0.00  0.00  0.13  0.000  0.50  0.13
    row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.000  0.28  0.00
    row4  0.15  0.64  0.00  0.00  0.10  0.64  0.88  0.24  0.00  0.000  0.00  0.00
    
  8. Or

    item      item0             item1                         item2                  
    col        col2  col3  col4  col0  col1  col2  col3  col4  col0  col1  col3  col4
    key  row                                                                         
    key0 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.86  0.00
         row2  0.00  0.00  0.37  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.50  0.00
         row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.00  0.00  0.00
         row4  0.15  0.64  0.00  0.00  0.00  0.00  0.00  0.24  0.00  0.00  0.00  0.00
    key1 row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.81  0.00  0.65
         row2  0.35  0.00  0.00  0.00  0.00  0.44  0.00  0.00  0.00  0.00  0.00  0.13
         row3  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.28  0.00
         row4  0.00  0.00  0.00  0.00  0.10  0.00  0.00  0.00  0.00  0.00  0.00  0.00
    key2 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.40  0.00  0.00
         row2  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.13  0.00  0.00  0.00
         row4  0.00  0.00  0.00  0.00  0.00  0.64  0.88  0.00  0.00  0.00  0.00  0.00
    
  9. Can I aggregate the frequency in which the column and rows occur together, aka “cross tabulation”?

    col   col0  col1  col2  col3  col4
    row                               
    row0     1     2     0     1     1
    row2     1     0     2     1     2
    row3     0     1     0     2     0
    row4     0     1     2     2     1
    
  10. How do I convert a DataFrame from long to wide by pivoting on ONLY two columns? Given,

    np.random.seed([3, 1415])
    df2 = pd.DataFrame({'A': list('aaaabbbc'), 'B': np.random.choice(15, 8)})        
    df2        
       A   B
    0  a   0
    1  a  11
    2  a   2
    3  a  11
    4  b  10
    5  b  10
    6  b  14
    7  c   7
    

    The expected output would look something like

          a     b    c
    0   0.0  10.0  7.0
    1  11.0  10.0  NaN
    2   2.0  14.0  NaN
    3  11.0   NaN  NaN
    
  11. How do I flatten the MultiIndex to a single index after pivot?

    From

       1  2   
       1  1  2        
    a  2  1  1
    b  2  1  0
    c  1  0  0
    

    To

       1|1  2|1  2|2               
    a    2    1    1
    b    2    1    0
    c    1    0    0
    

回答 0

我们首先回答第一个问题:

问题1

我为什么得到 ValueError: Index contains duplicate entries, cannot reshape

发生这种情况是因为pandas试图对columns或index中具有重复条目的对象重新编制索引。可以执行透视的方法有很多种,但当被要求透视的键存在重复时,其中一些方法并不适用。例如,考虑pd.DataFrame.pivot。我知道存在共享row和col值的重复条目:

df.duplicated(['row', 'col']).any()

True

所以当我使用以下方式进行pivot时

df.pivot(index='row', columns='col', values='val0')

我收到上述错误。实际上,当我尝试使用以下命令执行相同的任务时,会出现相同的错误:

df.set_index(['row', 'col'])['val0'].unstack()

以下是我们可以用来进行透视的惯用方法列表:

  1. pd.DataFrame.groupby + pd.DataFrame.unstack
    • 进行几乎所有类型的数据透视的良好通用方法
    • 在一次groupby中指定将构成透视结果行级别和列级别的所有列,接着选择要聚合的其余列以及执行聚合的函数,最后unstack您希望出现在列索引中的级别。
  2. pd.DataFrame.pivot_table
    • groupby的美化版本,具有更直观的API。对于许多人来说,这是首选方法,也是开发人员预期的方法。
    • 指定行级别,列级别,要聚合的值以及执行聚合的函数。
  3. pd.DataFrame.set_index + pd.DataFrame.unstack
    • 对某些人(包括我自己)来说方便直观,但无法处理重复的分组键。
    • groupby范式类似,我们指定最终将成为行或列级别的所有列,并将其设置为索引。然后unstack,我们在各列中选择所需的级别。如果其余索引级别或列级别都不唯一,则此方法将失败。
  4. pd.DataFrame.pivot
    • 与set_index非常相似,因为它有同样的重复键限制。其API也非常有限,index、columns和values只接受标量值。
    • 与pivot_table方法类似,我们选择要透视的行、列和值。但是它无法聚合,并且如果行或列不唯一,此方法将失败。
  5. pd.crosstab
    • 这是pivot_table的一个专门版本,其最纯粹的形式是执行多项任务的最直观方法。
  6. pd.factorize + np.bincount
    • 这是一种非常先进的技术,它非常晦涩,但是速度却很快。并非在所有情况下都可以使用它,但是只要您可以使用它并且感觉舒适,就会获得性能上的回报。
  7. pd.get_dummies + pd.DataFrame.dot
    • 我用它来巧妙地执行交叉制表。

例子

对于接下来的每个问题,我都会先用pd.DataFrame.pivot_table进行回答,然后提供执行相同任务的替代方法。

问题3

如何旋转df,使col值成为列,row值成为索引,val0的均值作为值,并且缺失值为0?

  • pd.DataFrame.pivot_table

    • fill_value默认情况下未设置,我倾向于适当地设置它;在这种情况下,我将其设置为0。请注意,我跳过了问题2,因为它与此答案相同,只是没有fill_value。
    • aggfunc='mean'是默认设置,我无需设置。我将其包括在内是为了明确。

      df.pivot_table(
          values='val0', index='row', columns='col',
          fill_value=0, aggfunc='mean')
      
      col   col0   col1   col2   col3  col4
      row                                  
      row0  0.77  0.605  0.000  0.860  0.65
      row2  0.13  0.000  0.395  0.500  0.25
      row3  0.00  0.310  0.000  0.545  0.00
      row4  0.00  0.100  0.395  0.760  0.24
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].mean().unstack(fill_value=0)
  • pd.crosstab

    pd.crosstab(
        index=df['row'], columns=df['col'],
        values=df['val0'], aggfunc='mean').fillna(0)

问题4

我可以得到mean以外的聚合结果吗,比如sum?

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index='row', columns='col',
        fill_value=0, aggfunc='sum')
    
    col   col0  col1  col2  col3  col4
    row                               
    row0  0.77  1.21  0.00  0.86  0.65
    row2  0.13  0.00  0.79  0.50  0.50
    row3  0.00  0.31  0.00  1.09  0.00
    row4  0.00  0.10  0.79  1.52  0.24
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].sum().unstack(fill_value=0)
  • pd.crosstab

    pd.crosstab(
        index=df['row'], columns=df['col'],
        values=df['val0'], aggfunc='sum').fillna(0)

问题5

我可以一次做多个聚合吗?

请注意,对于pivot_table和crosstab,我需要传递可调用对象的列表。另一方面,groupby.agg可以为数量有限的特殊函数接受字符串。groupby.agg也能接受我们传给其他方法的同样的可调用对象,但使用字符串函数名称通常会更高效。

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index='row', columns='col',
        fill_value=0, aggfunc=[np.size, np.mean])
    
         size                      mean                           
    col  col0 col1 col2 col3 col4  col0   col1   col2   col3  col4
    row                                                           
    row0    1    2    0    1    1  0.77  0.605  0.000  0.860  0.65
    row2    1    0    2    1    2  0.13  0.000  0.395  0.500  0.25
    row3    0    1    0    2    0  0.00  0.310  0.000  0.545  0.00
    row4    0    1    2    2    1  0.00  0.100  0.395  0.760  0.24
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].agg(['size', 'mean']).unstack(fill_value=0)
  • pd.crosstab

    pd.crosstab(
        index=df['row'], columns=df['col'],
        values=df['val0'], aggfunc=[np.size, np.mean]).fillna(0, downcast='infer')

问题6

我可以汇总多个值列吗?

  • pd.DataFrame.pivot_table:我们传递了values=['val0', 'val1'],但其实可以完全省略它

    df.pivot_table(
        values=['val0', 'val1'], index='row', columns='col',
        fill_value=0, aggfunc='mean')
    
          val0                             val1                          
    col   col0   col1   col2   col3  col4  col0   col1  col2   col3  col4
    row                                                                  
    row0  0.77  0.605  0.000  0.860  0.65  0.01  0.745  0.00  0.010  0.02
    row2  0.13  0.000  0.395  0.500  0.25  0.45  0.000  0.34  0.440  0.79
    row3  0.00  0.310  0.000  0.545  0.00  0.00  0.230  0.00  0.075  0.00
    row4  0.00  0.100  0.395  0.760  0.24  0.00  0.070  0.42  0.300  0.46
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])[['val0', 'val1']].mean().unstack(fill_value=0)

问题7

可以细分为多列吗?

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index='row', columns=['item', 'col'],
        fill_value=0, aggfunc='mean')
    
    item item0             item1                         item2                   
    col   col2  col3  col4  col0  col1  col2  col3  col4  col0   col1  col3  col4
    row                                                                          
    row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.605  0.86  0.65
    row2  0.35  0.00  0.37  0.00  0.00  0.44  0.00  0.00  0.13  0.000  0.50  0.13
    row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.000  0.28  0.00
    row4  0.15  0.64  0.00  0.00  0.10  0.64  0.88  0.24  0.00  0.000  0.00  0.00
  • pd.DataFrame.groupby

    df.groupby(
        ['row', 'item', 'col']
    )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1)

问题8

可以同时在行和列上按多列细分吗?

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index=['key', 'row'], columns=['item', 'col'],
        fill_value=0, aggfunc='mean')
    
    item      item0             item1                         item2                  
    col        col2  col3  col4  col0  col1  col2  col3  col4  col0  col1  col3  col4
    key  row                                                                         
    key0 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.86  0.00
         row2  0.00  0.00  0.37  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.50  0.00
         row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.00  0.00  0.00
         row4  0.15  0.64  0.00  0.00  0.00  0.00  0.00  0.24  0.00  0.00  0.00  0.00
    key1 row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.81  0.00  0.65
         row2  0.35  0.00  0.00  0.00  0.00  0.44  0.00  0.00  0.00  0.00  0.00  0.13
         row3  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.28  0.00
         row4  0.00  0.00  0.00  0.00  0.10  0.00  0.00  0.00  0.00  0.00  0.00  0.00
    key2 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.40  0.00  0.00
         row2  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.13  0.00  0.00  0.00
         row4  0.00  0.00  0.00  0.00  0.00  0.64  0.88  0.00  0.00  0.00  0.00  0.00
  • pd.DataFrame.groupby

    df.groupby(
        ['key', 'row', 'item', 'col']
    )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1)
  • pd.DataFrame.set_index,因为键的组合对行和列都是唯一的

    df.set_index(
        ['key', 'row', 'item', 'col']
    )['val0'].unstack(['item', 'col']).fillna(0).sort_index(axis=1)

问题9

我可以汇总列和行一起出现的频率,又称为“交叉表”吗?

  • pd.DataFrame.pivot_table

    df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size')
    
    col   col0  col1  col2  col3  col4
    row                               
    row0     1     2     0     1     1
    row2     1     0     2     1     2
    row3     0     1     0     2     0
    row4     0     1     2     2     1
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].size().unstack(fill_value=0)
  • pd.crosstab

    pd.crosstab(df['row'], df['col'])
  • pd.factorize + np.bincount

    # get integer factorization `i` and unique values `r`
    # for column `'row'`
    i, r = pd.factorize(df['row'].values)
    # get integer factorization `j` and unique values `c`
    # for column `'col'`
    j, c = pd.factorize(df['col'].values)
    # `n` will be the number of rows
    # `m` will be the number of columns
    n, m = r.size, c.size
    # `i * m + j` is a clever way of counting the 
    # factorization bins assuming a flat array of length
    # `n * m`.  Which is why we subsequently reshape as `(n, m)`
    b = np.bincount(i * m + j, minlength=n * m).reshape(n, m)
    # BTW, whenever I read this, I think 'Bean, Rice, and Cheese'
    pd.DataFrame(b, r, c)
    
          col3  col2  col0  col1  col4
    row3     2     0     0     1     0
    row2     1     2     1     0     2
    row0     1     0     1     2     1
    row4     2     2     0     1     1
  • pd.get_dummies

    pd.get_dummies(df['row']).T.dot(pd.get_dummies(df['col']))
    
          col0  col1  col2  col3  col4
    row0     1     2     0     1     1
    row2     1     0     2     1     2
    row3     0     1     0     2     0
    row4     0     1     2     2     1

问题10

如何通过仅旋转两列将DataFrame从长转换为宽?

第一步是为每行分配一个数字,该数字将成为透视结果中该值的行索引。这可以用GroupBy.cumcount完成:

df2.insert(0, 'count', df2.groupby('A').cumcount())
df2

   count  A   B
0      0  a   0
1      1  a  11
2      2  a   2
3      3  a  11
4      0  b  10
5      1  b  10
6      2  b  14
7      0  c   7

第二步是将新创建的列用作索引,然后调用DataFrame.pivot:

df2.pivot(*df2)
# 等价于:df2.pivot(index='count', columns='A', values='B')

A         a     b    c
count                 
0       0.0  10.0  7.0
1      11.0  10.0  NaN
2       2.0  14.0  NaN
3      11.0   NaN  NaN

问题11

在pivot之后,我如何将多重索引展平为单索引?

如果columns是object类型且元素为字符串,用join:

df.columns = df.columns.map('|'.join)

否则用format:

df.columns = df.columns.map('{0[0]}|{0[1]}'.format) 

We start by answering the first question:

Question 1

Why do I get ValueError: Index contains duplicate entries, cannot reshape

This occurs because pandas is attempting to reindex either a columns or index object with duplicate entries. There are various methods that can perform a pivot, and some of them are not well suited to cases where the keys being pivoted on contain duplicates. For example, consider pd.DataFrame.pivot. I know there are duplicate entries that share the row and col values:

df.duplicated(['row', 'col']).any()

True

So when I pivot using

df.pivot(index='row', columns='col', values='val0')

I get the error mentioned above. In fact, I get the same error when I try to perform the same task with:

df.set_index(['row', 'col'])['val0'].unstack()
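
As a quick illustration of the fix (a minimal sketch; the idioms listed below cover this in detail), the duplicate (row, col) pairs can be collapsed with an aggregation before reshaping:

# aggregate the duplicates first, then reshape; this is the
# groupby + unstack idiom described below
df.groupby(['row', 'col'])['val0'].mean().unstack()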

Here is a list of idioms we can use to pivot

  1. pd.DataFrame.groupby + pd.DataFrame.unstack
    • Good general approach for doing just about any type of pivot
    • You specify all columns that will constitute the pivoted row levels and column levels in one group by. You follow that by selecting the remaining columns you want to aggregate and the function(s) you want to use to perform the aggregation. Finally, you unstack the levels that you want to be in the column index.
  2. pd.DataFrame.pivot_table
    • A glorified version of groupby with a more intuitive API. For many people, this is the preferred approach, and it is the approach intended by the developers.
    • Specify row level, column levels, values to be aggregated, and function(s) to perform aggregations.
  3. pd.DataFrame.set_index + pd.DataFrame.unstack
    • Convenient and intuitive for some (myself included). Cannot handle duplicate grouped keys.
    • Similar to the groupby paradigm, we specify all columns that will eventually be either row or column levels and set those to be the index. We then unstack the levels we want in the columns. If either the remaining index levels or column levels are not unique, this method will fail.
  4. pd.DataFrame.pivot
    • Very similar to set_index in that it shares the duplicate key limitation. The API is very limited as well. It only takes scalar values for index, columns, values.
    • Similar to the pivot_table method in that we select rows, columns, and values on which to pivot. However, we cannot aggregate and if either rows or columns are not unique, this method will fail.
  5. pd.crosstab
    • This is a specialized version of pivot_table, and in its purest form it is the most intuitive way to perform several tasks.
  6. pd.factorize + np.bincount
    • This is a highly advanced technique that is very obscure but is very fast. It cannot be used in all circumstances, but when it can be used and you are comfortable using it, you will reap the performance rewards.
  7. pd.get_dummies + pd.DataFrame.dot
    • I use this for cleverly performing cross tabulation.

Examples

For each subsequent question, I am going to answer it first using pd.DataFrame.pivot_table. Then I’ll provide alternatives to perform the same task.

Question 3

How do I pivot df such that the col values are columns, row values are the index, mean of val0 are the values, and missing values are 0?

  • pd.DataFrame.pivot_table

    • fill_value is not set by default. I tend to set it appropriately. In this case I set it to 0. Notice I skipped question 2 as it’s the same as this answer without the fill_value.
    • aggfunc='mean' is the default and I didn’t have to set it. I included it to be explicit.

      df.pivot_table(
          values='val0', index='row', columns='col',
          fill_value=0, aggfunc='mean')
      
      col   col0   col1   col2   col3  col4
      row                                  
      row0  0.77  0.605  0.000  0.860  0.65
      row2  0.13  0.000  0.395  0.500  0.25
      row3  0.00  0.310  0.000  0.545  0.00
      row4  0.00  0.100  0.395  0.760  0.24
      
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].mean().unstack(fill_value=0)
    
  • pd.crosstab

    pd.crosstab(
        index=df['row'], columns=df['col'],
        values=df['val0'], aggfunc='mean').fillna(0)
    

Question 4

Can I get something other than mean, like maybe sum?

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index='row', columns='col',
        fill_value=0, aggfunc='sum')
    
    col   col0  col1  col2  col3  col4
    row                               
    row0  0.77  1.21  0.00  0.86  0.65
    row2  0.13  0.00  0.79  0.50  0.50
    row3  0.00  0.31  0.00  1.09  0.00
    row4  0.00  0.10  0.79  1.52  0.24
    
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].sum().unstack(fill_value=0)
    
  • pd.crosstab

    pd.crosstab(
        index=df['row'], columns=df['col'],
        values=df['val0'], aggfunc='sum').fillna(0)
    

Question 5

Can I do more than one aggregation at a time?

Notice that for pivot_table and crosstab I needed to pass a list of callables. On the other hand, groupby.agg is able to take strings for a limited number of special functions. groupby.agg would also have taken the same callables we passed to the others, but it is often more efficient to use the string function names.

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index='row', columns='col',
        fill_value=0, aggfunc=[np.size, np.mean])
    
         size                      mean                           
    col  col0 col1 col2 col3 col4  col0   col1   col2   col3  col4
    row                                                           
    row0    1    2    0    1    1  0.77  0.605  0.000  0.860  0.65
    row2    1    0    2    1    2  0.13  0.000  0.395  0.500  0.25
    row3    0    1    0    2    0  0.00  0.310  0.000  0.545  0.00
    row4    0    1    2    2    1  0.00  0.100  0.395  0.760  0.24
    
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].agg(['size', 'mean']).unstack(fill_value=0)
    
  • pd.crosstab

    pd.crosstab(
        index=df['row'], columns=df['col'],
        values=df['val0'], aggfunc=[np.size, np.mean]).fillna(0, downcast='infer')
    

Question 6

Can I aggregate over multiple value columns?

  • pd.DataFrame.pivot_table: we pass values=['val0', 'val1'], but we could’ve left that off completely

    df.pivot_table(
        values=['val0', 'val1'], index='row', columns='col',
        fill_value=0, aggfunc='mean')
    
          val0                             val1                          
    col   col0   col1   col2   col3  col4  col0   col1  col2   col3  col4
    row                                                                  
    row0  0.77  0.605  0.000  0.860  0.65  0.01  0.745  0.00  0.010  0.02
    row2  0.13  0.000  0.395  0.500  0.25  0.45  0.000  0.34  0.440  0.79
    row3  0.00  0.310  0.000  0.545  0.00  0.00  0.230  0.00  0.075  0.00
    row4  0.00  0.100  0.395  0.760  0.24  0.00  0.070  0.42  0.300  0.46
    
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])[['val0', 'val1']].mean().unstack(fill_value=0)
    

Question 7

Can I subdivide by multiple columns?

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index='row', columns=['item', 'col'],
        fill_value=0, aggfunc='mean')
    
    item item0             item1                         item2                   
    col   col2  col3  col4  col0  col1  col2  col3  col4  col0   col1  col3  col4
    row                                                                          
    row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.605  0.86  0.65
    row2  0.35  0.00  0.37  0.00  0.00  0.44  0.00  0.00  0.13  0.000  0.50  0.13
    row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.000  0.28  0.00
    row4  0.15  0.64  0.00  0.00  0.10  0.64  0.88  0.24  0.00  0.000  0.00  0.00
    
  • pd.DataFrame.groupby

    df.groupby(
        ['row', 'item', 'col']
    )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1)
    

Question 8

Can I subdivide by multiple columns on both the rows and the columns?

  • pd.DataFrame.pivot_table

    df.pivot_table(
        values='val0', index=['key', 'row'], columns=['item', 'col'],
        fill_value=0, aggfunc='mean')
    
    item      item0             item1                         item2                  
    col        col2  col3  col4  col0  col1  col2  col3  col4  col0  col1  col3  col4
    key  row                                                                         
    key0 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.86  0.00
         row2  0.00  0.00  0.37  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.50  0.00
         row3  0.00  0.00  0.00  0.00  0.31  0.00  0.81  0.00  0.00  0.00  0.00  0.00
         row4  0.15  0.64  0.00  0.00  0.00  0.00  0.00  0.24  0.00  0.00  0.00  0.00
    key1 row0  0.00  0.00  0.00  0.77  0.00  0.00  0.00  0.00  0.00  0.81  0.00  0.65
         row2  0.35  0.00  0.00  0.00  0.00  0.44  0.00  0.00  0.00  0.00  0.00  0.13
         row3  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.28  0.00
         row4  0.00  0.00  0.00  0.00  0.10  0.00  0.00  0.00  0.00  0.00  0.00  0.00
    key2 row0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.40  0.00  0.00
         row2  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.13  0.00  0.00  0.00
         row4  0.00  0.00  0.00  0.00  0.00  0.64  0.88  0.00  0.00  0.00  0.00  0.00
    
  • pd.DataFrame.groupby

    df.groupby(
        ['key', 'row', 'item', 'col']
    )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1)
    
  • pd.DataFrame.set_index because the set of keys is unique for both rows and columns

    df.set_index(
        ['key', 'row', 'item', 'col']
    )['val0'].unstack(['item', 'col']).fillna(0).sort_index(axis=1)
    

Question 9

Can I aggregate the frequency in which the column and rows occur together, aka “cross tabulation”?

  • pd.DataFrame.pivot_table

    df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size')
    
    col   col0  col1  col2  col3  col4
    row                               
    row0     1     2     0     1     1
    row2     1     0     2     1     2
    row3     0     1     0     2     0
    row4     0     1     2     2     1
    
  • pd.DataFrame.groupby

    df.groupby(['row', 'col'])['val0'].size().unstack(fill_value=0)
    
  • pd.crosstab

    pd.crosstab(df['row'], df['col'])
    
  • pd.factorize + np.bincount

    # get integer factorization `i` and unique values `r`
    # for column `'row'`
    i, r = pd.factorize(df['row'].values)
    # get integer factorization `j` and unique values `c`
    # for column `'col'`
    j, c = pd.factorize(df['col'].values)
    # `n` will be the number of rows
    # `m` will be the number of columns
    n, m = r.size, c.size
    # `i * m + j` is a clever way of counting the 
    # factorization bins assuming a flat array of length
    # `n * m`.  Which is why we subsequently reshape as `(n, m)`
    b = np.bincount(i * m + j, minlength=n * m).reshape(n, m)
    # BTW, whenever I read this, I think 'Bean, Rice, and Cheese'
    pd.DataFrame(b, r, c)
    
          col3  col2  col0  col1  col4
    row3     2     0     0     1     0
    row2     1     2     1     0     2
    row0     1     0     1     2     1
    row4     2     2     0     1     1
    
  • pd.get_dummies

    pd.get_dummies(df['row']).T.dot(pd.get_dummies(df['col']))
    
          col0  col1  col2  col3  col4
    row0     1     2     0     1     1
    row2     1     0     2     1     2
    row3     0     1     0     2     0
    row4     0     1     2     2     1
    

Question 10

How do I convert a DataFrame from long to wide by pivoting on ONLY two columns?

The first step is to assign a number to each row; this number will be the row index of that value in the pivoted result. This is done using GroupBy.cumcount:

df2.insert(0, 'count', df2.groupby('A').cumcount())
df2

   count  A   B
0      0  a   0
1      1  a  11
2      2  a   2
3      3  a  11
4      0  b  10
5      1  b  10
6      2  b  14
7      0  c   7

The second step is to use the newly created column as the index and then call DataFrame.pivot:

df2.pivot(*df2)
# equivalent to: df2.pivot(index='count', columns='A', values='B')

A         a     b    c
count                 
0       0.0  10.0  7.0
1      11.0  10.0  NaN
2       2.0  14.0  NaN
3      11.0   NaN  NaN
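
As a side note, the same reshape can be written as a single expression with set_index and a computed key, avoiding the helper column (a sketch of my own, not from the original answer):

# compute each row's position within its 'A' group on the fly,
# use it together with 'A' as the index, then unstack 'A'
df2.set_index([df2.groupby('A').cumcount(), 'A'])['B'].unstack()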

Question 11

How do I flatten the MultiIndex to a single index after pivot?

If the columns are of type object with string entries, join them:

df.columns = df.columns.map('|'.join)

Otherwise, use format:

df.columns = df.columns.map('{0[0]}|{0[1]}'.format) 
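
For a self-contained illustration, here is a minimal sketch with hypothetical data df3 whose cross tabulation reproduces the small table above. Note that when the column levels are not strings, they need to be converted before joining:

import pandas as pd

# hypothetical data whose (col, sub) cross tabulation matches the example
df3 = pd.DataFrame({'row': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'c'],
                    'col': [1, 1, 2, 2, 1, 1, 2, 1],
                    'sub': [1, 1, 1, 2, 1, 1, 1, 1]})

wide = pd.crosstab(df3['row'], [df3['col'], df3['sub']])

# the levels here are integers, so str() them before joining
wide.columns = wide.columns.map(lambda t: '|'.join(map(str, t)))
print(wide)

     1|1  2|1  2|2
row               
a      2    1    1
b      2    1    0
c      1    0    0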

回答 1

扩展@piRSquared的答案,这是问题10的另一个版本

问题10.1

数据框:

d = data = {'A': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 5},
 'B': {0: 'a', 1: 'b', 2: 'c', 3: 'a', 4: 'b', 5: 'a', 6: 'c'}}
df = pd.DataFrame(d)

   A  B
0  1  a
1  1  b
2  1  c
3  2  a
4  2  b
5  3  a
6  5  c

输出:

   0     1     2
A
1  a     b     c
2  a     b  None
3  a  None  None
5  c  None  None

使用df.groupbypd.Series.tolist

t = df.groupby('A')['B'].apply(list)
out = pd.DataFrame(t.tolist(),index=t.index)
out
   0     1     2
A
1  a     b     c
2  a     b  None
3  a  None  None
5  c  None  None

或者使用更好的替代方法:pd.pivot_table搭配df.squeeze。

t = df.pivot_table(index='A',values='B',aggfunc=list).squeeze()
out = pd.DataFrame(t.tolist(),index=t.index)

To extend @piRSquared’s answer, here is another version of Question 10.

Question 10.1

DataFrame:

d = data = {'A': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 5},
 'B': {0: 'a', 1: 'b', 2: 'c', 3: 'a', 4: 'b', 5: 'a', 6: 'c'}}
df = pd.DataFrame(d)

   A  B
0  1  a
1  1  b
2  1  c
3  2  a
4  2  b
5  3  a
6  5  c

Output:

   0     1     2
A
1  a     b     c
2  a     b  None
3  a  None  None
5  c  None  None

Using df.groupby and pd.Series.tolist

t = df.groupby('A')['B'].apply(list)
out = pd.DataFrame(t.tolist(),index=t.index)
out
   0     1     2
A
1  a     b     c
2  a     b  None
3  a  None  None
5  c  None  None

Or, a much better alternative: using pd.pivot_table with df.squeeze.

t = df.pivot_table(index='A',values='B',aggfunc=list).squeeze()
out = pd.DataFrame(t.tolist(),index=t.index)

使用pandas GroupBy获取每个组的统计信息(例如计数,均值等)?

问题:使用pandas GroupBy获取每个组的统计信息(例如计数,均值等)?

我有一个数据框df,并且使用其中的几列进行groupby:

df['col1','col2','col3','col4'].groupby(['col1','col2']).mean()

通过以上方法,我几乎得到了所需的表(数据框)。缺少的是另外一列,其中包含每个组中的行数。换句话说,我得到了均值,但我还想知道这些均值是由多少个数字计算出来的。例如,第一组中有8个值,第二组中有10个值,依此类推。

简而言之:如何获取数据框的分组统计信息?

I have a data frame df and I use several columns from it to groupby:

df['col1','col2','col3','col4'].groupby(['col1','col2']).mean()

In the above way I almost get the table (data frame) that I need. What is missing is an additional column that contains the number of rows in each group. In other words, I have the means, but I also would like to know how many numbers were used to get these means. For example, in the first group there are 8 values and in the second one 10, and so on.

In short: How do I get group-wise statistics for a dataframe?


回答 0

在groupby对象上,agg函数可以接受一个列表,以便一次应用多种聚合方法。这应该能给您所需的结果:

df[['col1', 'col2', 'col3', 'col4']].groupby(['col1', 'col2']).agg(['mean', 'count'])

On a groupby object, the agg function can take a list to apply several aggregation methods at once. This should give you the result you need:

df[['col1', 'col2', 'col3', 'col4']].groupby(['col1', 'col2']).agg(['mean', 'count'])
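
If you prefer flat column names instead of the nested labels that a list of aggregations produces, a minimal sketch using named aggregation (assuming pandas 0.25 or later; the output labels col3_mean and col3_count are my own) would be:

df.groupby(['col1', 'col2']).agg(
    col3_mean=('col3', 'mean'),    # mean of col3 per group
    col3_count=('col3', 'count'),  # non-null count of col3 per group
).reset_index()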

回答 1

快速回答:

获取每个组的行数的最简单方法是调用.size(),它返回一个Series

df.groupby(['col1','col2']).size()


通常,您希望此结果为DataFrame(而不是Series),因此您可以执行以下操作:

df.groupby(['col1', 'col2']).size().reset_index(name='counts')


如果您想了解如何计算每组的行数和其他统计信息,请继续阅读下面的内容。


详细的例子:

考虑以下示例数据框:

In [2]: df
Out[2]: 
  col1 col2  col3  col4  col5  col6
0    A    B  0.20 -0.61 -0.49  1.49
1    A    B -1.53 -1.01 -0.39  1.82
2    A    B -0.44  0.27  0.72  0.11
3    A    B  0.28 -1.32  0.38  0.18
4    C    D  0.12  0.59  0.81  0.66
5    C    D -0.13 -1.65 -1.64  0.50
6    C    D -1.42 -0.11 -0.18 -0.44
7    E    F -0.00  1.42 -0.26  1.17
8    E    F  0.91 -0.47  1.35 -0.34
9    G    H  1.48 -0.63 -1.14  0.17

首先让我们.size()用来获取行数:

In [3]: df.groupby(['col1', 'col2']).size()
Out[3]: 
col1  col2
A     B       4
C     D       3
E     F       2
G     H       1
dtype: int64

然后让我们使用.size().reset_index(name='counts')来获取行数:

In [4]: df.groupby(['col1', 'col2']).size().reset_index(name='counts')
Out[4]: 
  col1 col2  counts
0    A    B       4
1    C    D       3
2    E    F       2
3    G    H       1


包括结果以获取更多统计信息

当您要计算分组数据的统计信息时,通常如下所示:

In [5]: (df
   ...: .groupby(['col1', 'col2'])
   ...: .agg({
   ...:     'col3': ['mean', 'count'], 
   ...:     'col4': ['median', 'min', 'count']
   ...: }))
Out[5]: 
            col4                  col3      
          median   min count      mean count
col1 col2                                   
A    B    -0.810 -1.32     4 -0.372500     4
C    D    -0.110 -1.65     3 -0.476667     3
E    F     0.475 -0.47     2  0.455000     2
G    H    -0.630 -0.63     1  1.480000     1

由于列标签是嵌套的,而且行计数是按列给出的,上面的结果处理起来有点麻烦。

为了获得对输出的更多控制,我通常将统计信息拆分为单独的聚合,然后用join将它们合并。看起来像这样:

In [6]: gb = df.groupby(['col1', 'col2'])
   ...: counts = gb.size().to_frame(name='counts')
   ...: (counts
   ...:  .join(gb.agg({'col3': 'mean'}).rename(columns={'col3': 'col3_mean'}))
   ...:  .join(gb.agg({'col4': 'median'}).rename(columns={'col4': 'col4_median'}))
   ...:  .join(gb.agg({'col4': 'min'}).rename(columns={'col4': 'col4_min'}))
   ...:  .reset_index()
   ...: )
   ...: 
Out[6]: 
  col1 col2  counts  col3_mean  col4_median  col4_min
0    A    B       4  -0.372500       -0.810     -1.32
1    C    D       3  -0.476667       -0.110     -1.65
2    E    F       2   0.455000        0.475     -0.47
3    G    H       1   1.480000       -0.630     -0.63



脚注

下面显示了用于生成测试数据的代码:

In [1]: import numpy as np
   ...: import pandas as pd 
   ...: 
   ...: keys = np.array([
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['C', 'D'],
   ...:         ['C', 'D'],
   ...:         ['C', 'D'],
   ...:         ['E', 'F'],
   ...:         ['E', 'F'],
   ...:         ['G', 'H'] 
   ...:         ])
   ...: 
   ...: df = pd.DataFrame(
   ...:     np.hstack([keys,np.random.randn(10,4).round(2)]), 
   ...:     columns = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6']
   ...: )
   ...: 
   ...: df[['col3', 'col4', 'col5', 'col6']] = \
   ...:     df[['col3', 'col4', 'col5', 'col6']].astype(float)
   ...: 


免责声明:

如果您要聚合的某些列中含有空值,那么您确实应该把每组的行数视为针对每一列的独立聚合。否则,您可能会误解实际用于计算均值等统计量的记录数,因为pandas在计算均值时会丢弃NaN条目而不会告诉您。

Quick Answer:

The simplest way to get row counts per group is by calling .size(), which returns a Series:

df.groupby(['col1','col2']).size()


Usually you want this result as a DataFrame (instead of a Series) so you can do:

df.groupby(['col1', 'col2']).size().reset_index(name='counts')


If you want to find out how to calculate the row counts and other statistics for each group continue reading below.


Detailed example:

Consider the following example dataframe:

In [2]: df
Out[2]: 
  col1 col2  col3  col4  col5  col6
0    A    B  0.20 -0.61 -0.49  1.49
1    A    B -1.53 -1.01 -0.39  1.82
2    A    B -0.44  0.27  0.72  0.11
3    A    B  0.28 -1.32  0.38  0.18
4    C    D  0.12  0.59  0.81  0.66
5    C    D -0.13 -1.65 -1.64  0.50
6    C    D -1.42 -0.11 -0.18 -0.44
7    E    F -0.00  1.42 -0.26  1.17
8    E    F  0.91 -0.47  1.35 -0.34
9    G    H  1.48 -0.63 -1.14  0.17

First let’s use .size() to get the row counts:

In [3]: df.groupby(['col1', 'col2']).size()
Out[3]: 
col1  col2
A     B       4
C     D       3
E     F       2
G     H       1
dtype: int64

Then let’s use .size().reset_index(name='counts') to get the row counts:

In [4]: df.groupby(['col1', 'col2']).size().reset_index(name='counts')
Out[4]: 
  col1 col2  counts
0    A    B       4
1    C    D       3
2    E    F       2
3    G    H       1


Including results for more statistics

When you want to calculate statistics on grouped data, it usually looks like this:

In [5]: (df
   ...: .groupby(['col1', 'col2'])
   ...: .agg({
   ...:     'col3': ['mean', 'count'], 
   ...:     'col4': ['median', 'min', 'count']
   ...: }))
Out[5]: 
            col4                  col3      
          median   min count      mean count
col1 col2                                   
A    B    -0.810 -1.32     4 -0.372500     4
C    D    -0.110 -1.65     3 -0.476667     3
E    F     0.475 -0.47     2  0.455000     2
G    H    -0.630 -0.63     1  1.480000     1

The result above is a little annoying to deal with because of the nested column labels, and also because row counts are on a per column basis.

To gain more control over the output I usually split the statistics into individual aggregations that I then combine using join. It looks like this:

In [6]: gb = df.groupby(['col1', 'col2'])
   ...: counts = gb.size().to_frame(name='counts')
   ...: (counts
   ...:  .join(gb.agg({'col3': 'mean'}).rename(columns={'col3': 'col3_mean'}))
   ...:  .join(gb.agg({'col4': 'median'}).rename(columns={'col4': 'col4_median'}))
   ...:  .join(gb.agg({'col4': 'min'}).rename(columns={'col4': 'col4_min'}))
   ...:  .reset_index()
   ...: )
   ...: 
Out[6]: 
  col1 col2  counts  col3_mean  col4_median  col4_min
0    A    B       4  -0.372500       -0.810     -1.32
1    C    D       3  -0.476667       -0.110     -1.65
2    E    F       2   0.455000        0.475     -0.47
3    G    H       1   1.480000       -0.630     -0.63



Footnotes

The code used to generate the test data is shown below:

In [1]: import numpy as np
   ...: import pandas as pd 
   ...: 
   ...: keys = np.array([
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['C', 'D'],
   ...:         ['C', 'D'],
   ...:         ['C', 'D'],
   ...:         ['E', 'F'],
   ...:         ['E', 'F'],
   ...:         ['G', 'H'] 
   ...:         ])
   ...: 
   ...: df = pd.DataFrame(
   ...:     np.hstack([keys,np.random.randn(10,4).round(2)]), 
   ...:     columns = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6']
   ...: )
   ...: 
   ...: df[['col3', 'col4', 'col5', 'col6']] = \
   ...:     df[['col3', 'col4', 'col5', 'col6']].astype(float)
   ...: 


Disclaimer:

If some of the columns that you are aggregating have null values, then you really want to be looking at the group row counts as an independent aggregation for each column. Otherwise you may be misled as to how many records are actually being used to calculate things like the mean because pandas will drop NaN entries in the mean calculation without telling you about it.
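
To make that distinction concrete, here is a minimal sketch (with a small hypothetical frame d) showing how size and count diverge when a value column contains NaN:

import numpy as np
import pandas as pd

d = pd.DataFrame({'col1': ['A', 'A', 'A'], 'col2': ['B', 'B', 'B'],
                  'col3': [1.0, np.nan, 3.0]})

# 'size' counts rows, 'count' counts non-null values, so they differ;
# the mean here is computed from only the 2 non-null entries
print(d.groupby(['col1', 'col2'])['col3'].agg(['size', 'count', 'mean']))

           size  count  mean
col1 col2                   
A    B        3      2   2.0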


回答 2

一种功能统治一切: GroupBy.describe

按组返回count、mean、std以及其他有用的统计信息。

df.groupby(['col1', 'col2'])[['col3', 'col4']].describe()

# Setup
np.random.seed(0)
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
                          'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three',
                          'two', 'two', 'one', 'three'],
                   'C' : np.random.randn(8),
                   'D' : np.random.randn(8)})

from IPython.display import display

with pd.option_context('display.precision', 2):
    display(df.groupby(['A', 'B'])['C'].describe())

           count  mean   std   min   25%   50%   75%   max
A   B                                                     
bar one      1.0  0.40   NaN  0.40  0.40  0.40  0.40  0.40
    three    1.0  2.24   NaN  2.24  2.24  2.24  2.24  2.24
    two      1.0 -0.98   NaN -0.98 -0.98 -0.98 -0.98 -0.98
foo one      2.0  1.36  0.58  0.95  1.15  1.36  1.56  1.76
    three    1.0 -0.15   NaN -0.15 -0.15 -0.15 -0.15 -0.15
    two      2.0  1.42  0.63  0.98  1.20  1.42  1.65  1.87

要获取特定的统计信息,只需选择它们,

df.groupby(['A', 'B'])['C'].describe()[['count', 'mean']]

           count      mean
A   B                     
bar one      1.0  0.400157
    three    1.0  2.240893
    two      1.0 -0.977278
foo one      2.0  1.357070
    three    1.0 -0.151357
    two      2.0  1.423148

describe适用于多列(将['C']改为['C', 'D'],或者完全删除它,看看会发生什么;结果是一个具有多级列的DataFrame)。

对于字符串数据,您还会得到不同的统计信息。下面是一个例子:

df2 = df.assign(D=list('aaabbccc')).sample(n=100, replace=True)

with pd.option_context('display.precision', 2):
    display(df2.groupby(['A', 'B'])
               .describe(include='all')
               .dropna(how='all', axis=1))

              C                                                   D                
          count  mean       std   min   25%   50%   75%   max count unique top freq
A   B                                                                              
bar one    14.0  0.40  5.76e-17  0.40  0.40  0.40  0.40  0.40    14      1   a   14
    three  14.0  2.24  4.61e-16  2.24  2.24  2.24  2.24  2.24    14      1   b   14
    two     9.0 -0.98  0.00e+00 -0.98 -0.98 -0.98 -0.98 -0.98     9      1   c    9
foo one    22.0  1.43  4.10e-01  0.95  0.95  1.76  1.76  1.76    22      2   a   13
    three  15.0 -0.15  0.00e+00 -0.15 -0.15 -0.15 -0.15 -0.15    15      1   c   15
    two    26.0  1.49  4.48e-01  0.98  0.98  1.87  1.87  1.87    26      2   b   15

有关更多信息,请参见文档

One Function to Rule Them All: GroupBy.describe

Returns count, mean, std, and other useful statistics per-group.

df.groupby(['col1', 'col2'])[['col3', 'col4']].describe()

# Setup
np.random.seed(0)
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
                          'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three',
                          'two', 'two', 'one', 'three'],
                   'C' : np.random.randn(8),
                   'D' : np.random.randn(8)})

from IPython.display import display

with pd.option_context('display.precision', 2):
    display(df.groupby(['A', 'B'])['C'].describe())

           count  mean   std   min   25%   50%   75%   max
A   B                                                     
bar one      1.0  0.40   NaN  0.40  0.40  0.40  0.40  0.40
    three    1.0  2.24   NaN  2.24  2.24  2.24  2.24  2.24
    two      1.0 -0.98   NaN -0.98 -0.98 -0.98 -0.98 -0.98
foo one      2.0  1.36  0.58  0.95  1.15  1.36  1.56  1.76
    three    1.0 -0.15   NaN -0.15 -0.15 -0.15 -0.15 -0.15
    two      2.0  1.42  0.63  0.98  1.20  1.42  1.65  1.87

To get specific statistics, just select them,

df.groupby(['A', 'B'])['C'].describe()[['count', 'mean']]

           count      mean
A   B                     
bar one      1.0  0.400157
    three    1.0  2.240893
    two      1.0 -0.977278
foo one      2.0  1.357070
    three    1.0 -0.151357
    two      2.0  1.423148

describe works for multiple columns (change ['C'] to ['C', 'D'], or remove it altogether, and see what happens; the result is a DataFrame with MultiIndexed columns).

You also get different statistics for string data. Here’s an example:

df2 = df.assign(D=list('aaabbccc')).sample(n=100, replace=True)

with pd.option_context('display.precision', 2):
    display(df2.groupby(['A', 'B'])
               .describe(include='all')
               .dropna(how='all', axis=1))

              C                                                   D                
          count  mean       std   min   25%   50%   75%   max count unique top freq
A   B                                                                              
bar one    14.0  0.40  5.76e-17  0.40  0.40  0.40  0.40  0.40    14      1   a   14
    three  14.0  2.24  4.61e-16  2.24  2.24  2.24  2.24  2.24    14      1   b   14
    two     9.0 -0.98  0.00e+00 -0.98 -0.98 -0.98 -0.98 -0.98     9      1   c    9
foo one    22.0  1.43  4.10e-01  0.95  0.95  1.76  1.76  1.76    22      2   a   13
    three  15.0 -0.15  0.00e+00 -0.15 -0.15 -0.15 -0.15 -0.15    15      1   c   15
    two    26.0  1.49  4.48e-01  0.98  0.98  1.87  1.87  1.87    26      2   b   15

For more information, see the documentation.


回答 3

我们可以使用groupby和count轻松地做到这一点。但是,我们应该记住使用reset_index()。

df[['col1','col2','col3','col4']].groupby(['col1','col2']).count().\
reset_index()

We can easily do it by using groupby and count. But, we should remember to use reset_index().

df[['col1','col2','col3','col4']].groupby(['col1','col2']).count().\
reset_index()

回答 4

要获取多个统计信息,请折叠索引并保留列名:

df = df.groupby(['col1','col2']).agg(['mean', 'count'])
df.columns = [ ' '.join(str(i) for i in col) for col in df.columns]
df.reset_index(inplace=True)
df

生成一个扁平的表格,列名类似col1、col2、'col3 mean'、'col3 count'等。

To get multiple stats, collapse the index, and retain column names:

df = df.groupby(['col1','col2']).agg(['mean', 'count'])
df.columns = [ ' '.join(str(i) for i in col) for col in df.columns]
df.reset_index(inplace=True)
df

Produces a flat table with columns such as col1, col2, 'col3 mean', and 'col3 count'.


回答 5

创建一个组对象并调用如下示例所示的方法:

grp = df.groupby(['col1',  'col2',  'col3']) 

grp.max() 
grp.mean() 
grp.describe() 

Create a group object and call methods like below example:

grp = df.groupby(['col1',  'col2',  'col3']) 

grp.max() 
grp.mean() 
grp.describe() 

回答 6

请尝试此代码

# transform('count')返回与df索引对齐的Series,因此可以直接赋值为新列
new_column = df.groupby(['col1', 'col2'])['col3'].transform('count')
df['count_it'] = new_column
df

这段代码会添加一个名为'count_it'的列,其中包含每行所属组的计数。

Please try this code

# transform('count') returns a Series aligned with df's index,
# so it can be assigned directly as a new column
new_column = df.groupby(['col1', 'col2'])['col3'].transform('count')
df['count_it'] = new_column
df

This code will add a column called 'count_it' containing the count of each row's group.