Question: Generator expressions vs. list comprehensions

When should you use generator expressions and when should you use list comprehensions in Python?

# Generator expression
(x*2 for x in range(256))

# List comprehension
[x*2 for x in range(256)]

Answer 0

John’s answer is good (that list comprehensions are better when you want to iterate over something multiple times). However, it’s also worth noting that you should use a list if you want to use any of the list methods. For example, the following code won’t work:

def gen():
    return (something for something in get_some_stuff())

print(gen()[:2])      # TypeError: generators don't support indexing or slicing
print([5, 6] + gen()) # TypeError: generators can't be added to lists

Basically, use a generator expression if all you’re doing is iterating once. If you want to store and use the generated results, then you’re probably better off with a list comprehension.

Since performance is the most common reason to choose one over the other, my advice is to not worry about it and just pick one; if you find that your program is running too slowly, then and only then should you go back and worry about tuning your code.
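
If you do reach that point, the standard library’s timeit module is a quick way to compare the two forms. A minimal sketch (the iteration count here is arbitrary):

import timeit

# Time building-and-summing 256 doubled values each way.
list_time = timeit.timeit('sum([x*2 for x in range(256)])', number=10000)
gen_time = timeit.timeit('sum(x*2 for x in range(256))', number=10000)

print('list comprehension:   %.3fs' % list_time)
print('generator expression: %.3fs' % gen_time)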


Answer 1

Iterating over the generator expression or the list comprehension will do the same thing. However, the list comprehension will create the entire list in memory first while the generator expression will create the items on the fly, so you are able to use it for very large (and also infinite!) sequences.
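
For instance, a generator expression over itertools.count describes an infinite sequence, from which you can take just the slice you need. A small sketch:

from itertools import count, islice

# An endless stream of doubled integers; nothing is computed until consumed.
doubled = (x * 2 for x in count())

print(list(islice(doubled, 5)))  # [0, 2, 4, 6, 8]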


Answer 2

Use list comprehensions when the result needs to be iterated over multiple times, or where speed is paramount. Use generator expressions where the range is large or infinite.

See Generator expressions and list comprehensions for more info.
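
One quick way to see the difference in footprint is sys.getsizeof, which measures only the container object itself. A rough sketch (exact numbers vary by platform and Python version):

import sys

# The list holds all 100,000 results; the generator is a small fixed-size object.
print(sys.getsizeof([x * 2 for x in range(100000)]))  # hundreds of kilobytes
print(sys.getsizeof(x * 2 for x in range(100000)))    # on the order of 100-200 bytes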


Answer 3

The important point is that the list comprehension creates a new list. The generator creates an iterable object that will “filter” the source material on the fly as you consume it.

Imagine you have a 2TB log file called “hugefile.txt”, and you want the content and length for all the lines that start with the word “ENTRY”.

So you try starting out by writing a list comprehension:

logfile = open("hugefile.txt","r")
entry_lines = [(line,len(line)) for line in logfile if line.startswith("ENTRY")]

This slurps up the whole file, processes each line, and stores the matching lines in your array. This array could therefore contain up to 2TB of content. That’s a lot of RAM, and probably not practical for your purposes.

So instead we can use a generator to apply a “filter” to our content. No data is actually read until we start iterating over the result.

logfile = open("hugefile.txt","r")
entry_lines = ((line,len(line)) for line in logfile if line.startswith("ENTRY"))

Not even a single line has been read from our file yet. In fact, say we want to filter our result even further:

long_entries = ((line,length) for (line,length) in entry_lines if length > 80)

Still nothing has been read, but we have now specified two generators that will act on our data as we wish.

Let’s write our filtered lines out to another file:

outfile = open("filtered.txt","a")
for entry,length in long_entries:
    outfile.write(entry)

Now we read the input file. As our for loop requests additional lines, the long_entries generator pulls lines from the entry_lines generator, passing through only those whose length is greater than 80 characters. The entry_lines generator, in turn, requests lines (filtered as shown) from the logfile iterator, which finally reads the file.

So instead of “pushing” data to your output function in the form of a fully populated list, you’re giving the output function a way to “pull” data only when it’s needed. In our case this is much more efficient, but not quite as flexible. Generators are one-way, one-pass; the data we’ve read from the log file is immediately discarded, so we can’t go back to a previous line. On the other hand, we don’t have to worry about keeping data around once we’re done with it.
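
As a side note, the same lazy pipeline can be wrapped in context managers so both files are closed reliably. A sketch, reusing the file names from above:

# Same pipeline, with the file handles managed by `with` blocks.
with open("hugefile.txt", "r") as logfile, open("filtered.txt", "a") as outfile:
    entry_lines = ((line, len(line)) for line in logfile if line.startswith("ENTRY"))
    long_entries = ((line, length) for (line, length) in entry_lines if length > 80)
    for entry, length in long_entries:
        outfile.write(entry)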


Answer 4

The benefit of a generator expression is that it uses less memory since it doesn’t build the whole list at once. Generator expressions are best used when the list is an intermediary, such as summing the results, or creating a dict out of the results.

For example:

sum(x*2 for x in range(256))

dict( (k, some_func(k)) for k in some_list_of_keys )

The advantage there is that the list isn’t completely generated, and thus little memory is used (and it should also be faster).
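
As an aside, since Python 2.7 the second example can also be written as a dict comprehension, which expresses the same intent directly:

# Equivalent dict comprehension (same some_func and some_list_of_keys as above).
{k: some_func(k) for k in some_list_of_keys}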

You should, though, use list comprehensions when the desired final product is a list. You are not going to save any memory using generator expressions, since you want the generated list anyway. You also get the benefit of being able to use any of the list functions; reversed, in particular, requires a real sequence and will not accept a generator (sorted, by contrast, happily consumes any iterable).

For example:

reversed([x*2 for x in range(256)])
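
A quick illustration of that distinction:

# sorted() accepts any iterable, including a generator expression...
sorted(x * 2 for x in range(256))

# ...but reversed() needs a real sequence, so this raises a TypeError:
# reversed(x * 2 for x in range(256))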

Answer 5

When creating a generator from a mutable object (like a list), be aware that the generator is evaluated against the state of the list at the time the generator is consumed, not at the time it is created:

>>> mylist = ["a", "b", "c"]
>>> gen = (elem + "1" for elem in mylist)
>>> mylist.clear()
>>> for x in gen: print(x)
# nothing is printed: the list was cleared before the generator ran

If there is any chance of your list being modified (or of a mutable object inside that list changing), but you need the state at the time the generator was created, use a list comprehension instead.
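
For contrast, the equivalent list comprehension is evaluated immediately, so later mutations of the source don’t affect it:

>>> mylist = ["a", "b", "c"]
>>> snapshot = [elem + "1" for elem in mylist]  # evaluated right here
>>> mylist.clear()
>>> snapshot
['a1', 'b1', 'c1']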


Answer 6

Sometimes you can get away with the tee function from itertools; it returns multiple independent iterators over the same generator.
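
A minimal sketch (note that after calling tee you should iterate the returned copies, not the original generator):

from itertools import tee

gen = (x * 2 for x in range(5))
a, b = tee(gen)  # two independent iterators over the same underlying generator

print(list(a))  # [0, 2, 4, 6, 8]
print(list(b))  # [0, 2, 4, 6, 8]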


Answer 7

I’m using the Hadoop Mincemeat module. I think this is a great example worth taking note of:

import mincemeat

def mapfn(k, v):
    # Emit each number under the 'sum' key; the reducer sees them as one list.
    for w in v:
        yield 'sum', w
        # yield 'count', 1

def reducefn(k, v):
    # Compute the sum, count, and standard deviation of the collected values.
    r1 = sum(v)
    r2 = len(v)
    print(r2)
    m = r1 / r2
    std = 0
    for i in range(r2):
        std += pow(abs(v[i] - m), 2)
    res = pow(std / r2, 0.5)
    return r1, r2, res

Here the generator gets numbers out of a text file (as big as 15 GB) and applies simple math to those numbers using Hadoop’s map-reduce. If I had not used yield, but instead a list comprehension, calculating the sums and averages would have taken much longer (not to mention the space complexity).

Hadoop is a great example of putting all the advantages of generators to use.

