Question: Efficient way to apply multiple filters to a pandas DataFrame or Series

I have a scenario where a user wants to apply several filters to a Pandas DataFrame or Series object. Essentially, I want to efficiently chain a bunch of filtering (comparison operations) together that are specified at run-time by the user.

The filters should be additive (aka each one applied should narrow results).

I’m currently using reindex() but this creates a new object each time and copies the underlying data (if I understand the documentation correctly). So, this could be really inefficient when filtering a big Series or DataFrame.

I’m thinking that using apply(), map(), or something similar might be better. I’m pretty new to Pandas though, so I’m still trying to wrap my head around everything.

TL;DR

I want to take a dictionary of the following form and apply each operation to a given Series object and return a ‘filtered’ Series object.

relops = {'>=': [1], '<=': [1]}

Long Example

I’ll start with an example of what I have currently and just filtering a single Series object. Below is the function I’m currently using:

def apply_relops(series, relops):
    """
    Apply a dictionary of relational operators to the given Series object.
    """
    for op, vals in relops.items():
        # `ops` maps operator strings to comparison functions (defined below)
        op_func = ops[op]
        for val in vals:
            filtered = op_func(series, val)
            series = series.reindex(series[filtered].index)
    return series

The user provides a dictionary with the operations they want to perform:

>>> df = pandas.DataFrame({'col1': [0, 1, 2], 'col2': [10, 11, 12]})
>>> print(df)
   col1  col2
0     0    10
1     1    11
2     2    12

>>> from operator import le, ge
>>> ops = {'>=': ge, '<=': le}
>>> apply_relops(df['col1'], {'>=': [1]})
col1
1       1
2       2
Name: col1
>>> apply_relops(df['col1'], relops = {'>=': [1], '<=': [1]})
col1
1       1
Name: col1

Again, the ‘problem’ with my approach above is that I think there is a lot of possibly unnecessary copying of the data in the intermediate steps.

Also, I would like to expand this so that the dictionary passed in can include the columns to operate on and filter an entire DataFrame based on the input dictionary. However, I’m assuming whatever works for the Series can be easily expanded to a DataFrame.
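
One possible shape for that DataFrame version, sketched here purely for illustration (the nested-dictionary format and the apply_df_relops name are assumptions I have not settled on), would be:

import operator
import pandas

ops = {'>=': operator.ge, '<=': operator.le}

def apply_df_relops(df, col_relops):
    """Filter a DataFrame with a nested dict like {'col1': {'>=': [1]}}."""
    mask = pandas.Series(True, index=df.index)        # start with everything selected
    for col, relops in col_relops.items():
        for op, vals in relops.items():
            for val in vals:
                mask &= ops[op](df[col], val)         # each comparison narrows the mask
    return df[mask]

# e.g. apply_df_relops(df, {'col1': {'>=': [1]}, 'col2': {'<=': [11]}})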


Answer 0

Pandas (and numpy) allow for boolean indexing, which will be much more efficient:

In [11]: df.loc[df['col1'] >= 1, 'col1']
Out[11]: 
1    1
2    2
Name: col1

In [12]: df[df['col1'] >= 1]
Out[12]: 
   col1  col2
1     1    11
2     2    12

In [13]: df[(df['col1'] >= 1) & (df['col1'] <=1 )]
Out[13]: 
   col1  col2
1     1    11

If you want to write helper functions for this, consider something along these lines:

In [14]: def b(x, col, op, n): 
             return op(x[col],n)

In [15]: def f(x, *b):
             return x[(np.logical_and(*b))]

In [16]: b1 = b(df, 'col1', ge, 1)

In [17]: b2 = b(df, 'col1', le, 1)

In [18]: f(df, b1, b2)
Out[18]: 
   col1  col2
1     1    11

Update: pandas 0.13 has a query method for these kinds of use cases; assuming the column names are valid identifiers, the following works (and can be more efficient for large frames, as it uses numexpr behind the scenes):

In [21]: df.query('col1 <= 1 & 1 <= col1')
Out[21]:
   col1  col2
1     1    11
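
Since the filters in the question are specified at run-time, a hedged follow-up sketch (the relops dictionary and column name below are just illustrative) is to build the query string from that dictionary and hand it to df.query:

relops = {'>=': [1], '<=': [1]}
col = 'col1'

# join each (operator, value) pair into one boolean expression for query()
# (numeric values assumed; string values would need quoting)
expr = ' & '.join('{} {} {}'.format(col, op, val)
                  for op, vals in relops.items()
                  for val in vals)

df.query(expr)   # e.g. 'col1 >= 1 & col1 <= 1'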

Answer 1

Chaining conditions creates long lines, which are discouraged by PEP 8. Using the .query method forces you to use strings, which is powerful but unpythonic and not very dynamic.

Once each of the filters is in place, one approach is

import numpy as np
import functools
def conjunction(*conditions):
    return functools.reduce(np.logical_and, conditions)

c_1 = data.col1 == True
c_2 = data.col2 < 64
c_3 = data.col3 != 4

data_filtered = data[conjunction(c_1, c_2, c_3)]

np.logical_and is vectorized and fast, but it does not take more than two arguments; that limitation is handled by functools.reduce.

Note that this still has some redundancies: a) short-circuiting does not happen on a global level, and b) each of the individual conditions runs on the whole initial data. Still, I expect this to be efficient enough for many applications, and it is very readable.
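
If point b) ever becomes a real cost, a hedged variant (reusing the column names from the example above) is to apply the conditions one at a time, so that each later condition only sees the already-narrowed frame:

conditions = [
    lambda d: d.col1 == True,
    lambda d: d.col2 < 64,
    lambda d: d.col3 != 4,
]

filtered = data
for cond in conditions:
    filtered = filtered[cond(filtered)]   # each step narrows the data further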

You can also make a disjunction (wherein only one of the conditions needs to be true) by using np.logical_or instead:

import numpy as np
import functools
def disjunction(*conditions):
    return functools.reduce(np.logical_or, conditions)

c_1 = data.col1 == True
c_2 = data.col2 < 64
c_3 = data.col3 != 4

data_filtered = data[disjunction(c_1, c_2, c_3)]

Answer 2

Simplest of All Solutions:

Use:

filtered_df = df[(df['col1'] >= 1) & (df['col1'] <= 5)]

Another example: to filter the DataFrame for values belonging to Feb-2018, use the code below:

filtered_df = df[(df['year'] == 2018) & (df['month'] == 2)]
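
If the frame instead stored a single datetime column (say a hypothetical 'date' column, which is an assumption rather than part of the example above), the same Feb-2018 filter could be written with the .dt accessor:

filtered_df = df[(df['date'].dt.year == 2018) & (df['date'].dt.month == 2)]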

Answer 3

Since the pandas 0.22 update, comparison methods are available, such as:

  • gt (greater than)
  • lt (less than)
  • eq (equal to)
  • ne (not equal to)
  • ge (greater than or equal to)

and many more. These functions return a boolean Series. Let’s see how we can use them:

# sample data
df = pd.DataFrame({'col1': [0, 1, 2, 3, 4, 5], 'col2': [10, 11, 12, 13, 14, 15]})

# get values from col1 greater than or equals to 1
df.loc[df['col1'].ge(1),'col1']

1    1
2    2
3    3
4    4
5    5

# where col1 values are between 0 and 2
df.loc[df['col1'].between(0,2)]

 col1 col2
0   0   10
1   1   11
2   2   12

# where col1 > 1
df.loc[df['col1'].gt(1)]

 col1 col2
2   2   12
3   3   13
4   4   14
5   5   15
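
Because these comparisons are ordinary methods, they can also be looked up by name at run-time, which maps nicely onto the dictionary-driven filtering the question asks about. A small sketch (the filters list here is illustrative, not from the original answer):

# build a boolean mask from (method name, value) pairs chosen at run-time
filters = [('ge', 1), ('lt', 4)]

mask = pd.Series(True, index=df.index)
for name, val in filters:
    mask &= getattr(df['col1'], name)(val)   # e.g. df['col1'].ge(1)

df.loc[mask]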

Answer 4

Why not do this?

import pandas as pd

def filt_spec(df, col, val, op):
    import operator
    ops = {'eq': operator.eq, 'neq': operator.ne, 'gt': operator.gt,
           'ge': operator.ge, 'lt': operator.lt, 'le': operator.le}
    return df[ops[op](df[col], val)]

# attach it as a DataFrame method (monkey-patching)
pd.DataFrame.filt_spec = filt_spec

Demo:

df = pd.DataFrame({'a': [1,2,3,4,5], 'b':[5,4,3,2,1]})
df.filt_spec('a', 2, 'ge')

Result:

   a  b
1  2  4
2  3  3
3  4  2
4  5  1

You can see that column ‘a’ has been filtered to rows where a >= 2.

This is slightly faster (typing time, not performance) than operator chaining. You could of course put the import at the top of the file.
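
If you would rather not attach the function to pd.DataFrame itself, DataFrame.pipe gives a similar call-chaining feel without monkey-patching:

df.pipe(filt_spec, 'a', 2, 'ge')                                  # same as filt_spec(df, 'a', 2, 'ge')
df.pipe(filt_spec, 'a', 2, 'ge').pipe(filt_spec, 'b', 3, 'le')    # filters chain and stay additive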


Answer 5

We can also select rows based on values of a column that are not in a list or any other iterable. We build the boolean mask just like before, but now we negate it by placing ~ in front.

For example:

values = [1, 0]
df[df.col1.isin(values)]     # rows where col1 is in the list
df[~df.col1.isin(values)]    # rows where col1 is NOT in the list
