Question: Useful code which uses reduce()? [closed]

Does anyone here have any useful code that uses the reduce() function in Python? Is there any code other than the usual + and * that we see in the examples?

See Fate of reduce() in Python 3000 by GvR.


Answer 0

The other uses I've found for it besides + and * were with "and" and "or", but now we have any() and all() to replace those cases.

foldl and foldr do come up in Scheme a lot…

Here are some cute usages:

Flatten a list

Goal: turn [[1, 2, 3], [4, 5], [6, 7, 8]] into [1, 2, 3, 4, 5, 6, 7, 8].

reduce(list.__add__, [[1, 2, 3], [4, 5], [6, 7, 8]], [])

List of digits to a number

Goal: turn [1, 2, 3, 4, 5, 6, 7, 8] into 12345678.

Ugly, slow way:

int("".join(map(str, [1,2,3,4,5,6,7,8])))

Pretty reduce way:

reduce(lambda a,d: 10*a+d, [1,2,3,4,5,6,7,8], 0)
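
To see what the reduce is doing, here is a minimal sketch (the trace helper is just for illustration, not part of the original answer); it assumes Python 2.6+ or Python 3 with the functools import:

from functools import reduce  # a no-op on Python 2.6+, required on Python 3

def trace(a, d):
    # fold step: shift the accumulator left one decimal place and add the next digit
    print('%d * 10 + %d -> %d' % (a, d, 10 * a + d))
    return 10 * a + d

reduce(trace, [1, 2, 3, 4, 5, 6, 7, 8], 0)  # 0 -> 1 -> 12 -> 123 -> ... -> 12345678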

Answer 1

reduce() can be used to find the least common multiple of 3 or more numbers:

#!/usr/bin/env python
from fractions import gcd
from functools import reduce

def lcm(*args):
    return reduce(lambda a,b: a * b // gcd(a, b), args)

Example:

>>> lcm(100, 23, 98)
112700
>>> lcm(*range(1, 20))
232792560

Answer 2

reduce() could be used to resolve dotted names (where eval() is too unsafe to use):

>>> import __main__
>>> reduce(getattr, "os.path.abspath".split('.'), __main__)
<function abspath at 0x009AB530>

Answer 3

Find the intersection of N given lists:

input_list = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]]

result = reduce(set.intersection, map(set, input_list))

returns:

result = set([3, 4, 5])

via: Python – Intersection of two lists


Answer 4

I think reduce is a silly command. Hence:

reduce(lambda hold,next:hold+chr(((ord(next.upper())-65)+13)%26+65),'znlorabggbbhfrshy','')
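
(If you run it, the lambda ROT13s each uppercased character, and the string decodes to MAYBENOTTOOUSEFUL.)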

Answer 5

The usage of reduce that I found in my code involved the situation where I had some class structure for logic expressions and I needed to convert a list of these expression objects into a conjunction of the expressions. I already had a function make_and to create a conjunction given two expressions, so I wrote reduce(make_and, l). (I knew the list wasn't empty; otherwise it would have been something like reduce(make_and, l, make_true).)

This is exactly the reason that (some) functional programmers like reduce (or fold functions, as such functions are typically called). There are often already many binary functions like +, *, min, max, concatenation and, in my case, make_and and make_or. Having a reduce makes it trivial to lift these operations to lists (or trees or whatever you got, for fold functions in general).

Of course, if certain instantiations (such as sum) are often used, then you don’t want to keep writing reduce. However, instead of defining the sum with some for-loop, you can just as easily define it with reduce.

Readability, as mentioned by others, is indeed an issue. You could argue, however, that the only reason people find reduce less "clear" is that it is not a function that many people know and/or use.
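
As a rough sketch of the pattern described above (the And class and make_and helper here are hypothetical stand-ins, not the author's actual code):

from functools import reduce

class And(object):
    # a conjunction node over two sub-expressions
    def __init__(self, left, right):
        self.left, self.right = left, right

def make_and(a, b):
    # the binary constructor that reduce lifts over a whole (non-empty) list
    return And(a, b)

exprs = ['p', 'q', 'r']                # stand-ins for expression objects
conjunction = reduce(make_and, exprs)  # And(And('p', 'q'), 'r')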


Answer 6

Function composition: If you already have a list of functions that you’d like to apply in succession, such as:

color = lambda x: x.replace('brown', 'blue')
speed = lambda x: x.replace('quick', 'slow')
work = lambda x: x.replace('lazy', 'industrious')
fs = [str.lower, color, speed, work, str.title]

Then you can apply them all consecutively with:

>>> call = lambda s, func: func(s)
>>> s = "The Quick Brown Fox Jumps Over the Lazy Dog"
>>> reduce(call, fs, s)
'The Slow Blue Fox Jumps Over The Industrious Dog'

In this case, method chaining may be more readable. But sometimes it isn’t possible, and this kind of composition may be more readable and maintainable than a f1(f2(f3(f4(x)))) kind of syntax.


Answer 7

You could replace value = json_obj['a']['b']['c']['d']['e'] with:

value = reduce(dict.__getitem__, 'abcde', json_obj)

This is handy if you already have the path a/b/c/.. as a list. For an example, see Change values in dict of nested dicts using items in a list.
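
A minimal, self-contained demonstration of the same idea (the json_obj literal is invented sample data):

from functools import reduce

json_obj = {'a': {'b': {'c': {'d': {'e': 42}}}}}
path = 'abcde'  # or a list like ['a', 'b', 'c', 'd', 'e']
value = reduce(dict.__getitem__, path, json_obj)
print(value)  # 42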


Answer 8

@Blair Conrad: You could also implement your glob/reduce using sum, like so:

files = sum([glob.glob(f) for f in args], [])

This is less verbose than either of your two examples, is perfectly Pythonic, and is still only one line of code.

So to answer the original question, I personally try to avoid using reduce because it’s never really necessary and I find it to be less clear than other approaches. However, some people get used to reduce and come to prefer it to list comprehensions (especially Haskell programmers). But if you’re not already thinking about a problem in terms of reduce, you probably don’t need to worry about using it.


Answer 9

reduce can be used to support chained attribute lookups:

reduce(getattr, ('request', 'user', 'email'), self)

Of course, this is equivalent to

self.request.user.email

but it’s useful when your code needs to accept an arbitrary list of attributes.

(Chained attributes of arbitrary length are common when dealing with Django models.)
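
One way to package this up, as a sketch (the deep_getattr name and the dotted-string interface are hypothetical, not from the original answer):

from functools import reduce

def deep_getattr(obj, dotted_path):
    # e.g. deep_getattr(view, 'request.user.email')
    return reduce(getattr, dotted_path.split('.'), obj)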


Answer 10

reduce is useful when you need to find the union or intersection of a sequence of set-like objects.

>>> import operator
>>> reduce(operator.or_, ({1}, {1, 2}, {1, 3}))  # union
{1, 2, 3}
>>> reduce(operator.and_, ({1}, {1, 2}, {1, 3}))  # intersection
{1}

(Apart from actual sets, Django's Q objects are one example of such set-like objects.)
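
As a rough sketch of that Django case (the Article model and title field are invented for illustration, and this assumes a configured Django project):

import operator
from functools import reduce
from django.db.models import Q

terms = ['foo', 'bar', 'baz']
# OR together one Q object per search term
query = reduce(operator.or_, (Q(title__icontains=term) for term in terms))
# results = Article.objects.filter(query)  # Article is a hypothetical model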

On the other hand, if you’re dealing with bools, you should use any and all:

>>> any((True, False, True))
True

Answer 11

After grepping my code, it seems the only thing I’ve used reduce for is calculating the factorial:

reduce(operator.mul, xrange(1, x+1) or (1,))
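
On Python 3, where xrange is gone and reduce lives in functools, roughly the same thing can be written with an explicit initial value instead of the or (1,) trick (math.factorial is, of course, the standard-library way):

import operator
from functools import reduce

def factorial(x):
    # the initial value 1 also covers x == 0, so no 'or (1,)' fallback is needed
    return reduce(operator.mul, range(1, x + 1), 1)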

Answer 12

I’m writing a compose function for a language, so I construct the composed function using reduce along with my apply operator.

In a nutshell, compose takes a list of functions to compose into a single function. If I have a complex operation that is applied in stages, I want to put it all together like so:

complexop = compose(stage4, stage3, stage2, stage1)

This way, I can then apply it to an expression like so:

complexop(expression)

And I want it to be equivalent to:

stage4(stage3(stage2(stage1(expression))))

Now, to build my internal objects, I want it to say:

Lambda([Symbol('x')], Apply(stage4, Apply(stage3, Apply(stage2, Apply(stage1, Symbol('x'))))))

(The Lambda class builds a user-defined function, and Apply builds a function application.)

Now, reduce, unfortunately, folds the wrong way, so I wound up using, roughly:

reduce(lambda x,y: Apply(y, x), reversed(args + [Symbol('x')]))

To figure out what reduce produces, try these in the REPL:

reduce(lambda x, y: (x, y), range(1, 11))
reduce(lambda x, y: (y, x), reversed(range(1, 11)))
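
For comparison, here is a plain-Python sketch of a compose built the same way with reduce, independent of the Lambda/Apply classes above (which belong to the author's own language):

from functools import reduce

def compose(*funcs):
    # compose(f, g, h)(x) == f(g(h(x)))
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

inc = lambda n: n + 1
double = lambda n: n * 2
print(compose(inc, double)(10))  # inc(double(10)) == 21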

Answer 13

reduce can be used to get the list with the maximum nth element

reduce(lambda x,y: x if x[2] > y[2] else y,[[1,2,3,4],[5,2,5,7],[1,6,0,2]])

would return [5, 2, 5, 7], as it is the list with the largest 3rd element.
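
For comparison, the builtin max with a key function expresses the same thing directly:

max([[1, 2, 3, 4], [5, 2, 5, 7], [1, 6, 0, 2]], key=lambda x: x[2])  # [5, 2, 5, 7]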


Answer 14

Reduce isn’t limited to scalar operations; it can also be used to sort things into buckets. (This is what I use reduce for most often).

Imagine a case in which you have a list of objects, and you want to re-organize it hierarchically based on properties stored flatly in the object. In the following example, I produce a list of metadata objects related to articles in an XML-encoded newspaper with the articles function. articles generates a list of XML elements, and then maps through them one by one, producing objects that hold some interesting info about them. On the front end, I’m going to want to let the user browse the articles by section/subsection/headline. So I use reduce to take the list of articles and return a single dictionary that reflects the section/subsection/article hierarchy.

from lxml import etree
from Reader import Reader

class IssueReader(Reader):
    def articles(self):
        arts = self.q('//div3')  # inherited ... runs an xpath query against the issue
        subsection = etree.XPath('./ancestor::div2/@type')
        section = etree.XPath('./ancestor::div1/@type')
        header_text = etree.XPath('./head//text()')
        return map(lambda art: {
            'text_id': self.id,
            'path': self.getpath(art)[0],
            'subsection': (subsection(art)[0] or '[none]'),
            'section': (section(art)[0] or '[none]'),
            'headline': (''.join(header_text(art)) or '[none]')
        }, arts)

    def by_section(self):
        arts = self.articles()

        def extract(acc, art):  # acc for accumulator
            section = acc.get(art['section'], False)
            if section:
                subsection = section.get(art['subsection'], False)  # look inside this section's dict, not the top-level accumulator
                if subsection:
                    subsection.append(art)
                else:
                    section[art['subsection']] = [art]
            else:
                acc[art['section']] = {art['subsection']: [art]}
            return acc

        return reduce(extract, arts, {})

I give both functions here because I think it shows how map and reduce can complement each other nicely when dealing with objects. The same thing could have been accomplished with a for loop, … but spending some serious time with a functional language has tended to make me think in terms of map and reduce.

By the way, if anybody has a better way to set properties like I’m doing in extract, where the parents of the property you want to set might not exist yet, please let me know.


Answer 15

Not sure if this is what you are after but you can search source code on Google.

Follow the link for a search on ‘function:reduce() lang:python’ on Google Code search

At first glance the following projects use reduce()

  • MoinMoin
  • Zope
  • Numeric
  • ScientificPython

etc. etc. but then these are hardly surprising since they are huge projects.

The functionality of reduce can be achieved with function recursion, which I guess Guido thought was more explicit.

Update:

Since Google's Code Search was discontinued on 15-Jan-2012, besides reverting to regular Google searches, there's something called the Code Snippets Collection that looks promising. A number of other resources are mentioned in answers to this (closed) question: Replacement for Google Code Search?

Update 2 (29-May-2017):

A good source for Python examples (in open-source code) is the Nullege search engine.


Answer 16
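
The snippet below folds a flat list of file paths into a tree of (subfolder-dict, file-list) tuples: for each path, reduce walks the path components, using setdefault to create any missing subfolder node, and the filename is then appended to the file list of the node it lands on.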

import os

files = [
    # full filenames
    "var/log/apache/errors.log",
    "home/kane/images/avatars/crusader.png",
    "home/jane/documents/diary.txt",
    "home/kane/images/selfie.jpg",
    "var/log/abc.txt",
    "home/kane/.vimrc",
    "home/kane/images/avatars/paladin.png",
]

# unfolding of the plain filename list into a file tree
fs_tree = ({}, # dict of folders
           []) # list of files
for full_name in files:
    path, fn = os.path.split(full_name)
    reduce(
        # this function walks deep into the path
        # and creates placeholders for subfolders
        lambda d, k: d[0].setdefault(k,         # walk deep
                                     ({}, [])), # or create subfolder storage
        path.split(os.path.sep),
        fs_tree
    )[1].append(fn)

print fs_tree
#({'home': (
#    {'jane': (
#        {'documents': (
#           {},
#           ['diary.txt']
#        )},
#        []
#    ),
#    'kane': (
#       {'images': (
#          {'avatars': (
#             {},
#             ['crusader.png',
#             'paladin.png']
#          )},
#          ['selfie.jpg']
#       )},
#       ['.vimrc']
#    )},
#    []
#  ),
#  'var': (
#     {'log': (
#         {'apache': (
#            {},
#            ['errors.log']
#         )},
#         ['abc.txt']
#     )},
#     [])
#},
#[])

Answer 17
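
Dumping an iterable to a file, using reduce only for its write side effect: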

def dump(fname, iterable):
  with open(fname, 'w') as f:
    # reduce is used purely for its side effect; the initial value None
    # keeps the first item from being swallowed as the accumulator
    reduce(lambda x, y: f.write(unicode(y, 'utf-8')), iterable, None)

Answer 18

I used reduce to concatenate a list of PostgreSQL search vectors with the || operator in sqlalchemy-searchable:

vectors = (self.column_vector(getattr(self.table.c, column_name))
           for column_name in self.indexed_columns)
concatenated = reduce(lambda x, y: x.op('||')(y), vectors)
compiled = concatenated.compile(self.conn)

Answer 19

I have an old Python implementation of pipegrep that uses reduce and the glob module to build a list of files to process:

files = []
files.extend(reduce(lambda x, y: x + y, map(glob.glob, args)))

I found it handy at the time, but it's really not necessary, as something similar is just as good, and probably more readable:

files = []
for f in args:
    files.extend(glob.glob(f))

Answer 20

Let's say that some yearly statistics are stored in a list of Counters. We want to find the MIN/MAX value for each month across the different years. For example, for January it would be 10, and for February it would be 15. We need to store the results in a new Counter.

from collections import Counter

stat2011 = Counter({"January": 12, "February": 20, "March": 50, "April": 70, "May": 15,
           "June": 35, "July": 30, "August": 15, "September": 20, "October": 60,
           "November": 13, "December": 50})

stat2012 = Counter({"January": 36, "February": 15, "March": 50, "April": 10, "May": 90,
           "June": 25, "July": 35, "August": 15, "September": 20, "October": 30,
           "November": 10, "December": 25})

stat2013 = Counter({"January": 10, "February": 60, "March": 90, "April": 10, "May": 80,
           "June": 50, "July": 30, "August": 15, "September": 20, "October": 75,
           "November": 60, "December": 15})

stat_list = [stat2011, stat2012, stat2013]

print reduce(lambda x, y: x & y, stat_list)     # MIN
print reduce(lambda x, y: x | y, stat_list)     # MAX

Answer 21

I have objects representing some kind of overlapping intervals (genomic exons), and redefined their intersection using __and__:

class Exon:
    def __init__(self):
        ...
    def __and__(self,other):
        ...
        length = self.length + other.length  # (e.g.)
        return self.__class__(...length,...)

Then when I have a collection of them (for instance, in the same gene), I use

intersection = reduce(lambda x,y: x&y, exons)

Answer 22

I just found a useful usage of reduce: splitting a string without removing the delimiter. The code is entirely from the Programatically Speaking blog. Here's the code:

import re
reduce(lambda acc, elem: acc[:-1] + [acc[-1] + elem] if elem == "\n" else acc + [elem], re.split("(\n)", "a\nb\nc\n"), [])

Here’s the result:

['a\n', 'b\n', 'c\n', '']

Note that it handles edge cases that the popular answer on SO doesn't. For a more in-depth explanation, see the original blog post.


Answer 23

Using reduce() to find out whether a list of dates is consecutive:

from datetime import date, timedelta


def checked(d1, d2):
    """
    We assume the date list is sorted.
    If d2 & d1 are different by 1, everything up to d2 is consecutive, so d2
    can advance to the next reduction.
    If d2 & d1 are not different by 1, returning d1 - 1 for the next reduction
    will guarantee the result produced by reduce() to be something other than
    the last date in the sorted date list.

    Definition 1: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is considered consecutive
    Definition 2: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is considered not consecutive

    """
    #if (d2 - d1).days == 1 or (d2 - d1).days == 0:  # for Definition 1
    if (d2 - d1).days == 1:                          # for Definition 2
        return d2
    else:
        return d1 + timedelta(days=-1)

# datelist = [date(2014, 1, 1), date(2014, 1, 3),
#             date(2013, 12, 31), date(2013, 12, 30)]

# datelist = [date(2014, 2, 19), date(2014, 2, 19), date(2014, 2, 20),
#             date(2014, 2, 21), date(2014, 2, 22)]

datelist = [date(2014, 2, 19), date(2014, 2, 21),
            date(2014, 2, 22), date(2014, 2, 20)]

datelist.sort()

if datelist[-1] == reduce(checked, datelist):
    print "dates are consecutive"
else:
    print "dates are not consecutive"
