Question: List comprehension vs. lambda + filter
I happened to find myself having a basic filtering need: I have a list and I have to filter it by an attribute of the items.
My code looked like this:
my_list = [x for x in my_list if x.attribute == value]
But then I thought, wouldn’t it be better to write it like this?
my_list = filter(lambda x: x.attribute == value, my_list)
It’s more readable, and if needed for performance the lambda could be taken out to gain something.
Question is: are there any caveats in using the second way? Any performance difference? Am I missing the Pythonic Way™ entirely and should do it in yet another way (such as using itemgetter instead of the lambda)?
Answer 0
It is strange how much beauty varies for different people. I find the list comprehension much clearer than filter + lambda, but use whichever you find easier.
There are two things that may slow down your use of filter.
The first is the function call overhead: as soon as you use a Python function (whether created by def or lambda) it is likely that filter will be slower than the list comprehension. It almost certainly is not enough to matter, and you shouldn’t think much about performance until you’ve timed your code and found it to be a bottleneck, but the difference will be there.
The other overhead that might apply is that the lambda is being forced to access a scoped variable (value). That is slower than accessing a local variable, and in Python 2.x the list comprehension only accesses local variables. If you are using Python 3.x, the list comprehension runs in a separate function, so it will also be accessing value through a closure and this difference won’t apply.
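If you want to see whether any of that overhead matters for your data, a quick timing comparison is easy to do. This is only a sketch with a hypothetical Item type standing in for your own objects; the numbers will depend on your Python version and list size:
import timeit
from collections import namedtuple

# Hypothetical stand-in for the real objects; only .attribute matters here.
Item = namedtuple('Item', 'attribute')
my_list = [Item(attribute=i % 5) for i in range(10000)]
value = 3

def with_comprehension():
    return [x for x in my_list if x.attribute == value]

def with_filter():
    return list(filter(lambda x: x.attribute == value, my_list))

# Both return the same list; the timings show whether the extra function
# call per element (and the closure lookup of value) is noticeable.
print(timeit.timeit(with_comprehension, number=100))
print(timeit.timeit(with_filter, number=100))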
The other option to consider is to use a generator instead of a list comprehension:
def filterbyvalue(seq, value):
    for el in seq:
        if el.attribute == value:
            yield el
Then in your main code (which is where readability really matters) you’ve replaced both list comprehension and filter with a hopefully meaningful function name.
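Calling it might look like the following sketch, assuming my_list holds objects with an .attribute as in the question; the generator is materialized with list() only when a real list is needed.
# Uses filterbyvalue() from above; my_list and value are as in the question.
matching = list(filterbyvalue(my_list, value))   # build a list eagerly
for item in filterbyvalue(my_list, value):       # or consume it lazily
    print(item.attribute)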
Answer 1
This is a somewhat religious issue in Python. Even though there were plans to drop filter, map and reduce from Python 3, there was enough of a backlash that in the end only reduce was moved from built-ins to functools.reduce.
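For reference, that relocation just means one extra import nowadays; a tiny sketch:
from functools import reduce   # reduce is no longer a built-in in Python 3

total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)
print(total)   # 10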
Personally I find list comprehensions easier to read. It is more explicit what is happening from the expression [i for i in list if i.attribute == value], as all the behaviour is on the surface and not inside the filter function.
I would not worry too much about the performance difference between the two approaches as it is marginal. I would really only optimise this if it proved to be the bottleneck in your application which is unlikely.
Also, since the BDFL wanted filter gone from the language, surely that automatically makes list comprehensions more Pythonic ;-)
Answer 2
Since any speed difference is bound to be minuscule, whether to use filters or list comprehensions comes down to a matter of taste. In general I’m inclined to use comprehensions (which seems to agree with most other answers here), but there is one case where I prefer filter.
A very frequent use case is pulling out the values of some iterable X subject to a predicate P(x):
[x for x in X if P(x)]
but sometimes you want to apply some function to the values first:
[f(x) for x in X if P(f(x))]
As a specific example, consider
primes_cubed = [x*x*x for x in range(1000) if prime(x)]
I think this looks slightly better than using filter. But now consider
prime_cubes = [x*x*x for x in range(1000) if prime(x*x*x)]
In this case we want to filter against the post-computed value. Besides the issue of computing the cube twice (imagine a more expensive calculation), there is the issue of writing the expression twice, violating the DRY aesthetic. In this case I’d be apt to use
prime_cubes = filter(prime, [x*x*x for x in range(1000)])
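If you would rather stay with a comprehension, one common way to get the same effect without the double computation or the repeated expression is to feed a generator expression into it. A sketch, assuming the same hypothetical prime() predicate used in the examples above:
# Each cube is computed once by the inner generator, then filtered by value.
prime_cubes = [c for c in (x*x*x for x in range(1000)) if prime(c)]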
Answer 3
Although filter may be the “faster way”, the “Pythonic way” would be not to care about such things unless performance is absolutely critical (in which case you wouldn’t be using Python!).
Answer 4
I thought I’d just add that in Python 3, filter() actually returns an iterator, so you’d have to pass the filter call to list() in order to build the filtered list. So in Python 2:
lst_a = range(25) #arbitrary list
lst_b = [num for num in lst_a if num % 2 == 0]
lst_c = filter(lambda num: num % 2 == 0, lst_a)
Lists b and c have the same values, and were completed in about the same time, as filter() was equivalent to [x for x in y if z]. However, in Python 3, this same code would leave list c containing a filter object, not a filtered list. To produce the same values in Python 3:
lst_a = range(25) #arbitrary list
lst_b = [num for num in lst_a if num % 2 == 0]
lst_c = list(filter(lambda num: num %2 == 0, lst_a))
The problem is that list() takes an iterable as its argument and creates a new list from that argument. The result is that using filter this way in Python 3 takes up to twice as long as the [x for x in y if z] method, because you have to iterate over the output from filter() as well as the original list.
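If you want to verify that claim on your own machine, something like this sketch will do; the exact ratio varies with interpreter version and input size:
import timeit

setup = "lst_a = list(range(10000))"
comprehension = "[num for num in lst_a if num % 2 == 0]"
filtered = "list(filter(lambda num: num % 2 == 0, lst_a))"

print(timeit.timeit(comprehension, setup=setup, number=1000))
print(timeit.timeit(filtered, setup=setup, number=1000))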
Answer 5
An important difference is that a list comprehension will return a list while filter returns a filter object, which you cannot manipulate like a list (i.e. call len on it, which does not work with the return of filter).
My own self-learning brought me to a similar issue.
That being said, if there is a way to have the resulting list from a filter, a bit like you would do in .NET with lst.Where(i => i.something()).ToList(), I am curious to know it.
EDIT: This is the case for Python 3, not 2 (see discussion in comments).
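For what it’s worth, wrapping the call in list() gives exactly that ToList()-style behaviour; a minimal sketch with made-up data:
lst = [1, 2, 3, 4, 5]
result = list(filter(lambda i: i % 2 == 0, lst))   # materialize the filter object
print(len(result))   # 2 -- len() works because result is a real list now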
Answer 6
I find the second way more readable. It tells you exactly what the intention is: filter the list.
PS: do not use ‘list’ as a variable name
Answer 7
Generally filter is slightly faster if you are using a built-in function.
I would expect the list comprehension to be slightly faster in your case.
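The “built-in function” case means passing filter something implemented in C rather than a Python-level lambda; filter(None, ...) is the classic example. A sketch, with the usual caveat that the speed difference depends on your interpreter:
words = ['a', '', 'b', '', 'c']
print(list(filter(None, words)))   # ['a', 'b', 'c'] -- keeps truthy items, no Python-level call per element
print([w for w in words if w])     # the equivalent comprehension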
Answer 8
Filter is just that. It filters out the elements of a list. You can see that the definition mentions the same (in the official docs link I mentioned before). A list comprehension, on the other hand, is something that produces a new list after acting on something from the previous list. (Both filter and list comprehension create a new list and do not perform the operation in place of the older list. A new list here can be something like a list of an entirely new data type, like converting integers to strings, etc.)
In your example, going by that definition, it is better to use filter than a list comprehension. However, if you want, say, other_attribute from the list elements to be retrieved as a new list, then you can use a list comprehension.
return [item.other_attribute for item in my_list if item.attribute==value]
This is how I actually remember the difference between filter and list comprehension. To remove a few things within a list and keep the other elements intact, use filter. To apply some logic of your own to the elements and create a watered-down list suitable for some purpose, use a list comprehension.
Answer 9
Here’s a short piece I use when I need to filter on something after the list comprehension. Just a combination of filter, lambda, and lists (otherwise known as the loyalty of a cat and the cleanliness of a dog).
In this case I’m reading a file, stripping out blank lines, commented out lines, and anything after a comment on a line:
# Throw out blank lines and comments
with open('file.txt', 'r') as lines:
    # From the inside out:
    # [s.partition('#')[0].strip() for s in lines]... Throws out comments
    # filter(lambda x: x != '', [s.part...             Filters out blank lines
    # y for y in filter...                             Converts filter object to list
    file_contents = [y for y in filter(lambda x: x != '', [s.partition('#')[0].strip() for s in lines])]
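The same work can also be done with a single comprehension over a generator expression, which drops the filter/lambda wrapping entirely; a sketch that should be equivalent for this input:
# Strip everything after '#', trim whitespace, then keep only non-blank lines
with open('file.txt', 'r') as lines:
    file_contents = [line for line in (s.partition('#')[0].strip() for s in lines) if line]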
Answer 10
In addition to the accepted answer, there is a corner case when you should use filter instead of a list comprehension: if the list is unhashable, you cannot directly process it with a list comprehension. A real-world example is if you use pyodbc to read results from a database. The fetchall() results from the cursor are an unhashable list. In this situation, to directly manipulate the returned results, filter should be used:
cursor.execute("SELECT * FROM TABLE1;")
data_from_db = cursor.fetchall()
processed_data = filter(lambda s: 'abc' in s.field1 or s.StartTime >= start_date_time, data_from_db)
If you use list comprehension here you will get the error:
TypeError: unhashable type: ‘list’
Answer 11
It took me some time to get familiarized with the higher order functions filter and map. So I got used to them and I actually liked filter, as it was explicit that it filters by keeping whatever is truthy, and I felt cool that I knew some functional programming terms.
Then I read this passage (Fluent Python Book):
The map and filter functions are still built-ins in Python 3, but since the introduction of list comprehensions and generator expressions, they are not as important. A listcomp or a genexp does the job of map and filter combined, but is more readable.
And now I think: why bother with the concept of filter / map if you can achieve the same thing with already widely spread idioms like list comprehensions? Furthermore, maps and filters are kind of functions, and in this case I prefer using anonymous functions (lambdas).
Finally, just for the sake of having it tested, I’ve timed both methods (map and listComp) and I didn’t see any relevant speed difference that would justify making arguments about it.
from timeit import Timer
timeMap = Timer(lambda: list(map(lambda x: x*x, range(10**7))))
print(timeMap.timeit(number=100))
timeListComp = Timer(lambda:[(lambda x: x*x) for x in range(10**7)])
print(timeListComp.timeit(number=100))
#Map: 166.95695265199174
#List Comprehension 177.97208347299602
Answer 12
Curiously on Python 3, I see filter performing faster than list comprehensions.
I always thought that list comprehensions would be more performant. Something like [name for name in brand_names_db if name is not None]: the bytecode generated is a bit better.
>>> from dis import disassemble
>>> from timeit import timeit
>>> def f1(seq):
...     return list(filter(None, seq))
>>> def f2(seq):
...     return [i for i in seq if i is not None]
>>> disassemble(f1.__code__)
2 0 LOAD_GLOBAL 0 (list)
2 LOAD_GLOBAL 1 (filter)
4 LOAD_CONST 0 (None)
6 LOAD_FAST 0 (seq)
8 CALL_FUNCTION 2
10 CALL_FUNCTION 1
12 RETURN_VALUE
>>> disassemble(f2.__code__)
2 0 LOAD_CONST 1 (<code object <listcomp> at 0x10cfcaa50, file "<stdin>", line 2>)
2 LOAD_CONST 2 ('f2.<locals>.<listcomp>')
4 MAKE_FUNCTION 0
6 LOAD_FAST 0 (seq)
8 GET_ITER
10 CALL_FUNCTION 1
12 RETURN_VALUE
But they are actually slower:
>>> timeit(stmt="f1(range(1000))", setup="from __main__ import f1,f2")
21.177661532000116
>>> timeit(stmt="f2(range(1000))", setup="from __main__ import f1,f2")
42.233950221000214
Answer 13
My take
def filter_list(list, key, value, limit=None):
    return [i for i in list if i[key] == value][:limit]
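Used on a list of dictionaries it might look like this (a quick sketch with made-up data):
people = [
    {'name': 'Ann', 'role': 'dev'},
    {'name': 'Bob', 'role': 'ops'},
    {'name': 'Cid', 'role': 'dev'},
]
print(filter_list(people, 'role', 'dev'))             # both 'dev' entries
print(filter_list(people, 'role', 'dev', limit=1))    # only the first match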