标签归档:performance

我何时应该在代码中使用pandas apply()?

问题:我何时应该在代码中使用pandas apply()?

我已经看到许多涉及使用 Pandas 方法 apply 的 Stack Overflow 问题的答案。我还看到有用户在这些答案下面评论说“apply 很慢,应该避免使用”。

我已经阅读了许多关于性能的文章,这些文章都解释说 apply 很慢。我还在文档中看到过一条免责声明,说 apply 只是一个用于传递 UDF 的便捷函数(现在似乎找不到了)。因此,普遍的共识是应尽可能避免使用 apply。但这引出了以下问题:

  1. 如果apply太糟糕了,那为什么在API中呢?
  2. 我应该如何以及何时让我的代码不再使用 apply?
  3. 在任何情况下apply都有良好的情况(比其他可能的解决方案更好)吗?

I have seen many answers posted to questions on Stack Overflow involving the use of the Pandas method apply. I have also seen users commenting under them saying that “apply is slow, and should be avoided”.

I have read many articles on the topic of performance that explain apply is slow. I have also seen a disclaimer in the docs about how apply is simply a convenience function for passing UDFs (can’t seem to find that now). So, the general consensus is that apply should be avoided if possible. However, this raises the following questions:

  1. If apply is so bad, then why is it in the API?
  2. How and when should I make my code apply-free?
  3. Are there ever any situations where apply is good (better than other possible solutions)?

回答 0

apply,您不需要的便利功能

我们首先在OP中逐一解决问题。

如果应用是如此糟糕,那么为什么要在API中使用它呢?

DataFrame.apply 和 Series.apply 是分别在 DataFrame 和 Series 对象上定义的便捷函数。apply 接受任何在 DataFrame 上执行转换/聚合的用户自定义函数。apply 实际上是一把“万能钥匙”,可以完成任何现有 pandas 函数无法完成的事情。

一些事情apply可以做:

  • 在DataFrame或Series上运行任何用户定义的函数
  • 在DataFrame上按行(axis=1)或按列(axis=0)应用函数
  • 应用函数时执行索引对齐
  • 使用用户自定义函数执行汇总(但是,在这种情况下我们通常更喜欢 agg 或 transform)
  • 执行逐元素转换
  • 将汇总结果广播到原始行(请参阅result_type参数)。
  • 接受位置/关键字参数以传递给用户定义的函数。

…等等。有关更多信息,请参见文档中的“Row or Column-wise Function Application”(按行或按列的函数应用)一节。

那么,具有所有这些功能,为什么 apply 还是不好?这是因为 apply 很慢。Pandas 对您的函数的性质不做任何假设,因此在必要时会将您的函数迭代地应用于每一行/列。此外,处理上述所有情况意味着 apply 在每次迭代时都会产生一些重大开销。另外,apply 会消耗更多的内存,这对受内存限制的应用程序是一个挑战。

在极少数情况下,apply适合使用(以下更多内容)。如果不确定是否应该使用apply,则可能不应该使用。


让我们解决下一个问题。

我应该如何以及何时让我的代码不再使用 apply?

重新表述一下,以下是一些常见的情况,在这些情况下您会希望摆脱对 apply 的任何调用。

数值数据

如果您正在使用数字数据,则可能已经有一个矢量化的cython函数可以完全实现您要执行的操作(如果没有,请在Stack Overflow上提问或在GitHub上打开功能请求)。

对比一下apply简单加法运算的性能。

df = pd.DataFrame({"A": [9, 4, 2, 1], "B": [12, 7, 5, 4]})
df

   A   B
0  9  12
1  4   7
2  2   5
3  1   4

df.apply(np.sum)

A    16
B    28
dtype: int64

df.sum()

A    16
B    28
dtype: int64

在性能方面,两者没有可比性:cython 化的等效方法要快得多。不需要画图,因为即使对于玩具数据,差异也很明显。

%timeit df.apply(np.sum)
%timeit df.sum()
2.22 ms ± 41.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
471 µs ± 8.16 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

即使您通过 raw 参数启用传递原始数组,它仍然比直接调用 df.sum() 慢大约一倍。

%timeit df.apply(np.sum, raw=True)
840 µs ± 691 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

另一个例子:

df.apply(lambda x: x.max() - x.min())

A    8
B    8
dtype: int64

df.max() - df.min()

A    8
B    8
dtype: int64

%timeit df.apply(lambda x: x.max() - x.min())
%timeit df.max() - df.min()

2.43 ms ± 450 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.23 ms ± 14.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

通常,如果可能的话,请寻找向量化的替代方案。

字符串/正则表达式

在大多数情况下,Pandas提供“矢量化”字符串函数,但是在极少数情况下,这些函数不会…“应用”,可以这么说。

一个常见的问题是检查同一行的另一列中是否存在一列中的值。

df = pd.DataFrame({
    'Name': ['mickey', 'donald', 'minnie'],
    'Title': ['wonderland', "welcome to donald's castle", 'Minnie mouse clubhouse'],
    'Value': [20, 10, 86]})
df

     Name  Value                       Title
0  mickey     20                  wonderland
1  donald     10  welcome to donald's castle
2  minnie     86      Minnie mouse clubhouse

这应该返回第二行和第三行,因为“唐纳德”和“米妮”出现在它们各自的“标题”列中。

使用 apply,可以这样实现:

df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)

0    False
1     True
2     True
dtype: bool

df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]

     Name                       Title  Value
1  donald  welcome to donald's castle     10
2  minnie      Minnie mouse clubhouse     86

但是,使用列表推导存在更好的解决方案。

df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]

     Name                       Title  Value
1  donald  welcome to donald's castle     10
2  minnie      Minnie mouse clubhouse     86

%timeit df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]
%timeit df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]

2.85 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
788 µs ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

这里要注意的是,由于开销较低,迭代例程恰好比 apply 运行得更快。如果您需要处理 NaN 和无效的 dtype,可以在此基础上构建一个自定义函数,然后在列表推导中带参数调用它。

有关何时应将列表推导视为一个不错的选择的更多信息,请参见我的文章:《Are for-loops in pandas really bad? When should I care?》。

注意
日期和 datetime 操作也有向量化版本。因此,例如,您应该优先使用 pd.to_datetime(df['date']),而不是 df['date'].apply(pd.to_datetime)。

在文档中阅读更多内容。

一个常见的陷阱:展开(explode)由列表组成的列

s = pd.Series([[1, 2]] * 3)
s

0    [1, 2]
1    [1, 2]
2    [1, 2]
dtype: object

人们很想使用apply(pd.Series)。就性能而言,这太可怕了。

s.apply(pd.Series)

   0  1
0  1  2
1  1  2
2  1  2

更好的选择是先把该列转换为列表,再将其传递给 pd.DataFrame。

pd.DataFrame(s.tolist())

   0  1
0  1  2
1  1  2
2  1  2

%timeit s.apply(pd.Series)
%timeit pd.DataFrame(s.tolist())

2.65 ms ± 294 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
816 µs ± 40.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

最后,

有什么情况 apply 是好的吗?

Apply是一项便利功能,因此在某些情况下开销可以忽略不计,可以原谅。它实际上取决于函数被调用多少次。

为系列矢量化的函数,但不是数据帧的函数
如果要对多列应用字符串操作该怎么办?如果要将多列转换为日期时间怎么办?这些函数仅针对系列进行矢量化处理,因此必须将它们应用于要转换/操作的每一列。

df = pd.DataFrame(
         pd.date_range('2018-12-31','2019-01-31', freq='2D').date.astype(str).reshape(-1, 2), 
         columns=['date1', 'date2'])
df

       date1      date2
0 2018-12-31 2019-01-02
1 2019-01-04 2019-01-06
2 2019-01-08 2019-01-10
3 2019-01-12 2019-01-14
4 2019-01-16 2019-01-18
5 2019-01-20 2019-01-22
6 2019-01-24 2019-01-26
7 2019-01-28 2019-01-30

df.dtypes

date1    object
date2    object
dtype: object

这是一个可以接受使用 apply 的情况:

df.apply(pd.to_datetime, errors='coerce').dtypes

date1    datetime64[ns]
date2    datetime64[ns]
dtype: object

请注意,使用 stack 或者直接用显式循环也是合理的。所有这些选项都比使用 apply 稍快一点,但差异小到可以原谅。

%timeit df.apply(pd.to_datetime, errors='coerce')
%timeit pd.to_datetime(df.stack(), errors='coerce').unstack()
%timeit pd.concat([pd.to_datetime(df[c], errors='coerce') for c in df], axis=1)
%timeit for c in df.columns: df[c] = pd.to_datetime(df[c], errors='coerce')

5.49 ms ± 247 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.94 ms ± 48.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.16 ms ± 216 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.41 ms ± 1.71 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

您可以对其他操作(例如字符串操作或转换为类别)进行类似的设置。

u = df.apply(lambda x: x.str.contains(...))
v = df.apply(lambda x: x.astype('category'))

对比:

u = pd.concat([df[c].str.contains(...) for c in df], axis=1)
v = df.copy()
for c in df:
    v[c] = df[c].astype('category')

等等…

将 Series 转换为 str:astype 与 apply 的对比

这似乎是 API 的一个特例。与使用 astype 相比,用 apply 将 Series 中的整数转换为字符串的速度相当(有时甚至更快)。

(性能对比图,此处省略。)该图使用 perfplot 库绘制:

import perfplot

perfplot.show(
    setup=lambda n: pd.Series(np.random.randint(0, n, n)),
    kernels=[
        lambda s: s.astype(str),
        lambda s: s.apply(str)
    ],
    labels=['astype', 'apply'],
    n_range=[2**k for k in range(1, 20)],
    xlabel='N',
    logx=True,
    logy=True,
    equality_check=lambda x, y: (x == y).all())

使用浮点数时,我看到 astype 的速度始终与 apply 一样快,或略快于 apply。因此,这与测试中的数据是整数类型有关。

GroupBy 链式转换操作

到目前为止还没有讨论 GroupBy.apply,但它同样是一个迭代式的便捷函数,用于处理现有 GroupBy 函数无法处理的任何事情。

一个常见的需求是先执行 GroupBy,然后串联执行两个操作,例如“滞后的累积和”(lagged cumsum):

df = pd.DataFrame({"A": list('aabcccddee'), "B": [12, 7, 5, 4, 5, 4, 3, 2, 1, 10]})
df

   A   B
0  a  12
1  a   7
2  b   5
3  c   4
4  c   5
5  c   4
6  d   3
7  d   2
8  e   1
9  e  10

您需要在此处进行两个连续的groupby调用:

df.groupby('A').B.cumsum().groupby(df.A).shift()

0     NaN
1    12.0
2     NaN
3     NaN
4     4.0
5     9.0
6     NaN
7     3.0
8     NaN
9     1.0
Name: B, dtype: float64

使用 apply,您可以将其缩短为一次调用。

df.groupby('A').B.apply(lambda x: x.cumsum().shift())

0     NaN
1    12.0
2     NaN
3     NaN
4     4.0
5     9.0
6     NaN
7     3.0
8     NaN
9     1.0
Name: B, dtype: float64

量化性能非常困难,因为它取决于数据。但总的来说,如果目标是减少一次 groupby 调用,apply 是可以接受的解决方案(因为 groupby 本身也相当昂贵)。


其他注意事项

除了上述注意事项外,还值得一提的是,apply 会在第一行(或第一列)上执行两次。这样做是为了确定该函数是否有副作用。如果没有,apply 可能可以使用快速路径来求值结果,否则会退回到较慢的实现。

df = pd.DataFrame({
    'A': [1, 2],
    'B': ['x', 'y']
})

def func(x):
    print(x['A'])
    return x

df.apply(func, axis=1)

# 1
# 1
# 2
   A  B
0  1  x
1  2  y

在低于 0.25 的 pandas 版本中,GroupBy.apply 也有此行为(该问题已在 0.25 中修复,更多信息请参见此处)。

apply, the Convenience Function you Never Needed

We start by addressing the questions in the OP, one by one.

“If apply is so bad, then why is it in the API?”

DataFrame.apply and Series.apply are convenience functions defined on DataFrame and Series object respectively. apply accepts any user defined function that applies a transformation/aggregation on a DataFrame. apply is effectively a silver bullet that does whatever any existing pandas function cannot do.

Some of the things apply can do:

  • Run any user-defined function on a DataFrame or Series
  • Apply a function either row-wise (axis=1) or column-wise (axis=0) on a DataFrame
  • Perform index alignment while applying the function
  • Perform aggregation with user-defined functions (however, we usually prefer agg or transform in these cases)
  • Perform element-wise transformations
  • Broadcast aggregated results to original rows (see the result_type argument).
  • Accept positional/keyword arguments to pass to the user-defined functions.

…Among others. For more information, see Row or Column-wise Function Application in the documentation.

So, with all these features, why is apply bad? It is because apply is slow. Pandas makes no assumptions about the nature of your function, and so iteratively applies your function to each row/column as necessary. Additionally, handling all of the situations above means apply incurs some major overhead at each iteration. Further, apply consumes a lot more memory, which is a challenge for memory bounded applications.

There are very few situations where apply is appropriate to use (more on that below). If you’re not sure whether you should be using apply, you probably shouldn’t.



Let’s address the next question.

“How and when should I make my code apply-free?”

To rephrase, here are some common situations where you will want to get rid of any calls to apply.

Numeric Data

If you’re working with numeric data, there is likely already a vectorized cython function that does exactly what you’re trying to do (if not, please either ask a question on Stack Overflow or open a feature request on GitHub).

Contrast the performance of apply for a simple addition operation.

df = pd.DataFrame({"A": [9, 4, 2, 1], "B": [12, 7, 5, 4]})
df

   A   B
0  9  12
1  4   7
2  2   5
3  1   4


df.apply(np.sum)

A    16
B    28
dtype: int64

df.sum()

A    16
B    28
dtype: int64

Performance wise, there’s no comparison, the cythonized equivalent is much faster. There’s no need for a graph, because the difference is obvious even for toy data.

%timeit df.apply(np.sum)
%timeit df.sum()
2.22 ms ± 41.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
471 µs ± 8.16 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Even if you enable passing raw arrays with the raw argument, it’s still twice as slow.

%timeit df.apply(np.sum, raw=True)
840 µs ± 691 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Another example:

df.apply(lambda x: x.max() - x.min())

A    8
B    8
dtype: int64

df.max() - df.min()

A    8
B    8
dtype: int64

%timeit df.apply(lambda x: x.max() - x.min())
%timeit df.max() - df.min()

2.43 ms ± 450 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.23 ms ± 14.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In general, seek out vectorized alternatives if possible.


String/Regex

Pandas provides “vectorized” string functions in most situations, but there are rare cases where those functions do not… “apply”, so to speak.

A common problem is to check whether a value in a column is present in another column of the same row.

df = pd.DataFrame({
    'Name': ['mickey', 'donald', 'minnie'],
    'Title': ['wonderland', "welcome to donald's castle", 'Minnie mouse clubhouse'],
    'Value': [20, 10, 86]})
df

     Name  Value                       Title
0  mickey     20                  wonderland
1  donald     10  welcome to donald's castle
2  minnie     86      Minnie mouse clubhouse

This should return the second and third rows, since “donald” and “minnie” are present in their respective “Title” columns.

Using apply, this would be done using

df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)

0    False
1     True
2     True
dtype: bool
 
df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]

     Name                       Title  Value
1  donald  welcome to donald's castle     10
2  minnie      Minnie mouse clubhouse     86

However, a better solution exists using list comprehensions.

df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]

     Name                       Title  Value
1  donald  welcome to donald's castle     10
2  minnie      Minnie mouse clubhouse     86


%timeit df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]
%timeit df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]

2.85 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
788 µs ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

The thing to note here is that iterative routines happen to be faster than apply, because of the lower overhead. If you need to handle NaNs and invalid dtypes, you can build on this using a custom function you can then call with arguments inside the list comprehension.
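
As a rough sketch of that idea (my own addition, not from the original answer; the helper name is illustrative), a small wrapper can swallow NaNs and non-string values instead of raising:

def try_contains(title, name):
    # Treat NaN / non-string cells as "no match" rather than erroring out.
    try:
        return name.lower() in title.lower()
    except (TypeError, AttributeError):
        return False

df[[try_contains(title, name) for title, name in zip(df['Title'], df['Name'])]]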

For more information on when list comprehensions should be considered a good option, see my writeup: Are for-loops in pandas really bad? When should I care?.

Note
Date and datetime operations also have vectorized versions. So, for example, you should prefer pd.to_datetime(df['date']), over, say, df['date'].apply(pd.to_datetime).

Read more at the docs.


A Common Pitfall: Exploding Columns of Lists

s = pd.Series([[1, 2]] * 3)
s

0    [1, 2]
1    [1, 2]
2    [1, 2]
dtype: object

People are tempted to use apply(pd.Series). This is horrible in terms of performance.

s.apply(pd.Series)

   0  1
0  1  2
1  1  2
2  1  2

A better option is to listify the column and pass it to pd.DataFrame.

pd.DataFrame(s.tolist())

   0  1
0  1  2
1  1  2
2  1  2


%timeit s.apply(pd.Series)
%timeit pd.DataFrame(s.tolist())

2.65 ms ± 294 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
816 µs ± 40.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)


Lastly,

“Are there any situations where apply is good?”

Apply is a convenience function, so there are situations where the overhead is negligible enough to forgive. It really depends on how many times the function is called.

Functions that are Vectorized for Series, but not DataFrames
What if you want to apply a string operation on multiple columns? What if you want to convert multiple columns to datetime? These functions are vectorized for Series only, so they must be applied over each column that you want to convert/operate on.

df = pd.DataFrame(
         pd.date_range('2018-12-31','2019-01-31', freq='2D').date.astype(str).reshape(-1, 2), 
         columns=['date1', 'date2'])
df

       date1      date2
0 2018-12-31 2019-01-02
1 2019-01-04 2019-01-06
2 2019-01-08 2019-01-10
3 2019-01-12 2019-01-14
4 2019-01-16 2019-01-18
5 2019-01-20 2019-01-22
6 2019-01-24 2019-01-26
7 2019-01-28 2019-01-30

df.dtypes

date1    object
date2    object
dtype: object
    

This is an admissible case for apply:

df.apply(pd.to_datetime, errors='coerce').dtypes

date1    datetime64[ns]
date2    datetime64[ns]
dtype: object

Note that it would also make sense to stack, or just use an explicit loop. All these options are slightly faster than using apply, but the difference is small enough to forgive.

%timeit df.apply(pd.to_datetime, errors='coerce')
%timeit pd.to_datetime(df.stack(), errors='coerce').unstack()
%timeit pd.concat([pd.to_datetime(df[c], errors='coerce') for c in df], axis=1)
%timeit for c in df.columns: df[c] = pd.to_datetime(df[c], errors='coerce')

5.49 ms ± 247 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.94 ms ± 48.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.16 ms ± 216 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.41 ms ± 1.71 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

You can make a similar case for other operations such as string operations, or conversion to category.

u = df.apply(lambda x: x.str.contains(...))
v = df.apply(lambda x: x.astype('category'))

v/s

u = pd.concat([df[c].str.contains(...) for c in df], axis=1)
v = df.copy()
for c in df:
    v[c] = df[c].astype('category')

And so on…


Converting Series to str: astype versus apply

This seems like an idiosyncrasy of the API. Using apply to convert integers in a Series to string is comparable (and sometimes faster) than using astype.

(Performance chart omitted.) The graph was plotted using the perfplot library:

import perfplot

perfplot.show(
    setup=lambda n: pd.Series(np.random.randint(0, n, n)),
    kernels=[
        lambda s: s.astype(str),
        lambda s: s.apply(str)
    ],
    labels=['astype', 'apply'],
    n_range=[2**k for k in range(1, 20)],
    xlabel='N',
    logx=True,
    logy=True,
    equality_check=lambda x, y: (x == y).all())

With floats, I see the astype is consistently as fast as, or slightly faster than apply. So this has to do with the fact that the data in the test is integer type.


GroupBy operations with chained transformations

GroupBy.apply has not been discussed until now, but GroupBy.apply is also an iterative convenience function to handle anything that the existing GroupBy functions do not.

One common requirement is to perform a GroupBy and then two prime operations such as a “lagged cumsum”:

df = pd.DataFrame({"A": list('aabcccddee'), "B": [12, 7, 5, 4, 5, 4, 3, 2, 1, 10]})
df

   A   B
0  a  12
1  a   7
2  b   5
3  c   4
4  c   5
5  c   4
6  d   3
7  d   2
8  e   1
9  e  10


You’d need two successive groupby calls here:

df.groupby('A').B.cumsum().groupby(df.A).shift()
 
0     NaN
1    12.0
2     NaN
3     NaN
4     4.0
5     9.0
6     NaN
7     3.0
8     NaN
9     1.0
Name: B, dtype: float64

Using apply, you can shorten this to a single call.

df.groupby('A').B.apply(lambda x: x.cumsum().shift())

0     NaN
1    12.0
2     NaN
3     NaN
4     4.0
5     9.0
6     NaN
7     3.0
8     NaN
9     1.0
Name: B, dtype: float64

It is very hard to quantify the performance because it depends on the data. But in general, apply is an acceptable solution if the goal is to reduce a groupby call (because groupby is also quite expensive).



Other Caveats

Aside from the caveats mentioned above, it is also worth mentioning that apply operates on the first row (or column) twice. This is done to determine whether the function has any side effects. If not, apply may be able to use a fast-path for evaluating the result, else it falls back to a slow implementation.

df = pd.DataFrame({
    'A': [1, 2],
    'B': ['x', 'y']
})

def func(x):
    print(x['A'])
    return x

df.apply(func, axis=1)

# 1
# 1
# 2
   A  B
0  1  x
1  2  y

This behaviour is also seen in GroupBy.apply on pandas versions <0.25 (it was fixed for 0.25, see here for more information.)


回答 1

并非所有人apply都一样

下面的图给出了何时可以考虑使用 apply 的建议¹。绿色表示可能比较高效;红色表示应避免。

(图略:各种 apply 用法的效率示意图。)

其中一些是很直观的:pd.Series.apply 是 Python 级的逐行循环,逐行模式的 pd.DataFrame.apply(axis=1)也一样。对它们的误用很多而且很普遍,另一篇回答对此有更深入的讨论。流行的解决方案是使用向量化方法、列表推导(假定数据干净),或者使用高效的工具,例如 pd.DataFrame 构造函数(比如用它来避免 apply(pd.Series))。

如果您正以逐行方式使用 pd.DataFrame.apply,那么在可能的情况下指定 raw=True 通常是有益的。到了这个阶段,numba 通常是更好的选择。

GroupBy.apply:普遍偏爱

为了避免 apply 而重复执行 groupby 操作会损害性能。只要您在自定义函数中使用的方法本身是向量化的,GroupBy.apply 在这里通常就没问题。有时,您想要按组执行的聚合没有对应的原生 Pandas 方法。在这种情况下,对少量分组使用带自定义函数的 apply 仍可能获得合理的性能。

pd.DataFrame.apply 按列使用:好坏参半

按列(axis=0)使用 pd.DataFrame.apply 是一个有趣的情况。当行数很少而列数很多时,它几乎总是很昂贵。而当行数远多于列数(更常见的情况)时,使用 apply 有时可能会看到显著的性能提升:

# Python 3.7, Pandas 0.23.4
np.random.seed(0)
df = pd.DataFrame(np.random.random((10**7, 3)))     # Scenario_1, many rows
df = pd.DataFrame(np.random.random((10**4, 10**3))) # Scenario_2, many columns

                                               # Scenario_1  | Scenario_2
%timeit df.sum()                               # 800 ms      | 109 ms
%timeit df.apply(pd.Series.sum)                # 568 ms      | 325 ms

%timeit df.max() - df.min()                    # 1.63 s      | 314 ms
%timeit df.apply(lambda x: x.max() - x.min())  # 838 ms      | 473 ms

%timeit df.mean()                              # 108 ms      | 94.4 ms
%timeit df.apply(pd.Series.mean)               # 276 ms      | 233 ms

1 也有例外,但这些例外通常影响很小或很少见。几个例子:

  1. df['col'].apply(str) 的性能可能略优于 df['col'].astype(str)。
  2. 与普通的 for 循环相比,df.apply(pd.to_datetime) 处理字符串时随行数增长的扩展性不佳。

Not all applys are alike

The below chart suggests when to consider apply1. Green means possibly efficient; red avoid.

(Chart omitted: efficiency of the various forms of apply.)

Some of this is intuitive: pd.Series.apply is a Python-level row-wise loop, ditto pd.DataFrame.apply row-wise (axis=1). The misuses of these are many and wide-ranging. The other post deals with them in more depth. Popular solutions are to use vectorised methods, list comprehensions (assumes clean data), or efficient tools such as the pd.DataFrame constructor (e.g. to avoid apply(pd.Series)).

If you are using pd.DataFrame.apply row-wise, specifying raw=True (where possible) is often beneficial. At this stage, numba is usually a better choice.
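
For illustration only (this sketch is mine, not from the answer; it assumes numba is installed and the column names are unimportant), the numba route typically means compiling a loop over the raw NumPy array instead of calling apply at all:

import numba
import numpy as np
import pandas as pd

@numba.njit
def row_range(values):
    # values is the raw 2D ndarray; the loop is compiled, so the per-row
    # Python overhead of apply(axis=1) disappears.
    out = np.empty(values.shape[0])
    for i in range(values.shape[0]):
        out[i] = values[i].max() - values[i].min()
    return out

df = pd.DataFrame(np.random.random((10**5, 3)))
result = pd.Series(row_range(df.values), index=df.index)
# roughly equivalent to df.apply(lambda x: x.max() - x.min(), axis=1)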

GroupBy.apply: generally favoured

Repeating groupby operations to avoid apply will hurt performance. GroupBy.apply is usually fine here, provided the methods you use in your custom function are themselves vectorised. Sometimes there is no native Pandas method for a groupwise aggregation you wish to apply. In this case, for a small number of groups apply with a custom function may still offer reasonable performance.

pd.DataFrame.apply column-wise: a mixed bag

pd.DataFrame.apply column-wise (axis=0) is an interesting case. For a small number of rows versus a large number of columns, it’s almost always expensive. For a large number of rows relative to columns, the more common case, you may sometimes see significant performance improvements using apply:

# Python 3.7, Pandas 0.23.4
np.random.seed(0)
df = pd.DataFrame(np.random.random((10**7, 3)))     # Scenario_1, many rows
df = pd.DataFrame(np.random.random((10**4, 10**3))) # Scenario_2, many columns

                                               # Scenario_1  | Scenario_2
%timeit df.sum()                               # 800 ms      | 109 ms
%timeit df.apply(pd.Series.sum)                # 568 ms      | 325 ms

%timeit df.max() - df.min()                    # 1.63 s      | 314 ms
%timeit df.apply(lambda x: x.max() - x.min())  # 838 ms      | 473 ms

%timeit df.mean()                              # 108 ms      | 94.4 ms
%timeit df.apply(pd.Series.mean)               # 276 ms      | 233 ms

1 There are exceptions, but these are usually marginal or uncommon. A couple of examples:

  1. df['col'].apply(str) may slightly outperform df['col'].astype(str).
  2. df.apply(pd.to_datetime) working on strings doesn’t scale well with rows versus a regular for loop.

回答 2

对于 axis=1(即按行的函数),您可以使用下面的函数来代替 apply。我想知道为什么这不是 pandas 的默认行为。(未在复合索引上测试过,但它看起来确实比 apply 快得多。)

def faster_df_apply(df, func):
    cols = list(df.columns)
    data, index = [], []
    for row in df.itertuples(index=True):
        row_dict = {f:v for f,v in zip(cols, row[1:])}
        data.append(func(row_dict))
        index.append(row[0])
    return pd.Series(data, index=index)

For axis=1 (i.e. row-wise functions) you can just use the following function in lieu of apply. I wonder why this isn’t the pandas behavior. (Untested with compound indexes, but it does appear to be much faster than apply.)

def faster_df_apply(df, func):
    cols = list(df.columns)
    data, index = [], []
    for row in df.itertuples(index=True):
        row_dict = {f:v for f,v in zip(cols, row[1:])}
        data.append(func(row_dict))
        index.append(row[0])
    return pd.Series(data, index=index)
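
A hypothetical usage sketch (my addition, not part of the original answer); the example DataFrame and lambda are illustrative only:

import pandas as pd

df = pd.DataFrame({'Name': ['mickey', 'donald'],
                   'Title': ['wonderland', "welcome to donald's castle"]})

# func receives each row as a plain dict of column -> value
mask = faster_df_apply(df, lambda row: row['Name'] in row['Title'])
# 0    False
# 1     True
# dtype: bool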

回答 3

有没有什么情况apply是好的?是的,有时。

任务:解码Unicode字符串。

import numpy as np
import pandas as pd
import unidecode

s = pd.Series(['mañana','Ceñía'])
s.head()
0    mañana
1     Ceñía


s.apply(unidecode.unidecode)
0    manana
1     Cenia

更新
我绝不是在提倡使用 apply,只是考虑到 NumPy 无法处理上述情况,它本可以是 pandas apply 的一个很好的用例。但多亏 @jpp 的提醒,我差点忘了还有普通的列表推导可用。

Are there ever any situations where apply is good? Yes, sometimes.

Task: decode Unicode strings.

import numpy as np
import pandas as pd
import unidecode

s = pd.Series(['mañana','Ceñía'])
s.head()
0    mañana
1     Ceñía


s.apply(unidecode.unidecode)
0    manana
1     Cenia

Update
I was by no means advocating for the use of apply, just thinking that since NumPy cannot deal with the above situation, it could have been a good candidate for pandas apply. But I was forgetting the plain ol’ list comprehension, thanks to the reminder by @jpp.
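
For reference, the plain list comprehension alternative mentioned above might look like this (my sketch, reusing the imports and the Series s from the example):

pd.Series([unidecode.unidecode(x) for x in s], index=s.index)
# 0    manana
# 1     Cenia
# dtype: object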


当else做得最多时,最有效的方式来执行if-elif-elif-else语句?

问题:当else做得最多时,最有效的方式来执行if-elif-elif-else语句?

我有一条if-elif-elif-else语句,其中99%的时间执行else语句:

if something == 'this':
    doThis()
elif something == 'that':
    doThat()
elif something == 'there':
    doThere()
else:
    doThisMostOfTheTime()

这种结构用得很多,但由于它在命中 else 之前要遍历每一个条件,我感觉这不是很高效,更谈不上 Pythonic。另一方面,它确实需要知道是否满足其中任何一个条件,所以无论如何都要做这些测试。

是否有人知道是否以及如何可以更有效地完成此工作,或者这仅仅是做到这一点的最佳方法?

I’ve got a in if-elif-elif-else statement in which 99% of the time, the else statement is executed:

if something == 'this':
    doThis()
elif something == 'that':
    doThat()
elif something == 'there':
    doThere()
else:
    doThisMostOfTheTime()

This construct is done a lot, but since it goes over every condition before it hits the else I have the feeling this is not very efficient, let alone Pythonic. On the other hand, it does need to know if any of those conditions are met, so it should test it anyway.

Does anybody know if and how this could be done more efficiently or is this simply the best possible way to do it?


回答 0

代码…

options.get(something, doThisMostOfTheTime)()

……看起来应该更快,但实际上比 if...elif...else 结构更慢,因为它必须调用一个函数,而在紧密循环中,函数调用会带来显著的性能开销。

考虑这些例子…

1.py

something = 'something'

for i in xrange(1000000):
    if something == 'this':
        the_thing = 1
    elif something == 'that':
        the_thing = 2
    elif something == 'there':
        the_thing = 3
    else:
        the_thing = 4

2.py

something = 'something'
options = {'this': 1, 'that': 2, 'there': 3}

for i in xrange(1000000):
    the_thing = options.get(something, 4)

3.py

something = 'something'
options = {'this': 1, 'that': 2, 'there': 3}

for i in xrange(1000000):
    if something in options:
        the_thing = options[something]
    else:
        the_thing = 4

4.py

from collections import defaultdict

something = 'something'
options = defaultdict(lambda: 4, {'this': 1, 'that': 2, 'there': 3})

for i in xrange(1000000):
    the_thing = options[something]

…并注意他们使用的CPU时间…

1.py: 160ms
2.py: 170ms
3.py: 110ms
4.py: 100ms

……这里使用的是 time(1) 给出的用户时间(user time)。

选项 #4 确实有额外的内存开销:每遇到一个未命中的不同键,都会添加一个新条目。因此,如果您预计会有无限多的不同键未命中,我会选择选项 #3,它相对于原始结构仍然是一个显著的改进。

The code…

options.get(something, doThisMostOfTheTime)()

…looks like it ought to be faster, but it’s actually slower than the if...elif...else construct, because it has to call a function, which can be a significant performance overhead in a tight loop.

Consider these examples…

1.py

something = 'something'

for i in xrange(1000000):
    if something == 'this':
        the_thing = 1
    elif something == 'that':
        the_thing = 2
    elif something == 'there':
        the_thing = 3
    else:
        the_thing = 4

2.py

something = 'something'
options = {'this': 1, 'that': 2, 'there': 3}

for i in xrange(1000000):
    the_thing = options.get(something, 4)

3.py

something = 'something'
options = {'this': 1, 'that': 2, 'there': 3}

for i in xrange(1000000):
    if something in options:
        the_thing = options[something]
    else:
        the_thing = 4

4.py

from collections import defaultdict

something = 'something'
options = defaultdict(lambda: 4, {'this': 1, 'that': 2, 'there': 3})

for i in xrange(1000000):
    the_thing = options[something]

…and note the amount of CPU time they use…

1.py: 160ms
2.py: 170ms
3.py: 110ms
4.py: 100ms

…using the user time from time(1).

Option #4 does have the additional memory overhead of adding a new item for every distinct key miss, so if you’re expecting an unbounded number of distinct key misses, I’d go with option #3, which is still a significant improvement on the original construct.


回答 1

我要创建一个字典:

options = {'this': doThis,'that' :doThat, 'there':doThere}

现在只使用:

options.get(something, doThisMostOfTheTime)()

如果在 options 这个 dict 中找不到 something,dict.get 将返回默认值 doThisMostOfTheTime。

一些时间比较:

脚本:

from random import shuffle
def doThis():pass
def doThat():pass
def doThere():pass
def doSomethingElse():pass
options = {'this':doThis, 'that':doThat, 'there':doThere}
lis = range(10**4) + options.keys()*100
shuffle(lis)

def get():
    for x in lis:
        options.get(x, doSomethingElse)()

def key_in_dic():
    for x in lis:
        if x in options:
            options[x]()
        else:
            doSomethingElse()

def if_else():
    for x in lis:
        if x == 'this':
            doThis()
        elif x == 'that':
            doThat()
        elif x == 'there':
            doThere()
        else:
            doSomethingElse()

结果:

>>> from so import *
>>> %timeit get()
100 loops, best of 3: 5.06 ms per loop
>>> %timeit key_in_dic()
100 loops, best of 3: 3.55 ms per loop
>>> %timeit if_else()
100 loops, best of 3: 6.42 ms per loop

对于10**5不存在的密钥和100个有效密钥:

>>> %timeit get()
10 loops, best of 3: 84.4 ms per loop
>>> %timeit key_in_dic()
10 loops, best of 3: 50.4 ms per loop
>>> %timeit if_else()
10 loops, best of 3: 104 ms per loop

因此,对于普通字典而言,使用 key in options 来检查键是否存在是这里最高效的方法:

if key in options:
   options[key]()
else:
   doSomethingElse()

I’d create a dictionary :

options = {'this': doThis,'that' :doThat, 'there':doThere}

Now use just:

options.get(something, doThisMostOfTheTime)()

If something is not found in the options dict then dict.get will return the default value doThisMostOfTheTime

Some timing comparisons:

Script:

from random import shuffle
def doThis():pass
def doThat():pass
def doThere():pass
def doSomethingElse():pass
options = {'this':doThis, 'that':doThat, 'there':doThere}
lis = range(10**4) + options.keys()*100
shuffle(lis)

def get():
    for x in lis:
        options.get(x, doSomethingElse)()

def key_in_dic():
    for x in lis:
        if x in options:
            options[x]()
        else:
            doSomethingElse()

def if_else():
    for x in lis:
        if x == 'this':
            doThis()
        elif x == 'that':
            doThat()
        elif x == 'there':
            doThere()
        else:
            doSomethingElse()

Results:

>>> from so import *
>>> %timeit get()
100 loops, best of 3: 5.06 ms per loop
>>> %timeit key_in_dic()
100 loops, best of 3: 3.55 ms per loop
>>> %timeit if_else()
100 loops, best of 3: 6.42 ms per loop

For 10**5 non-existent keys and 100 valid keys:

>>> %timeit get()
10 loops, best of 3: 84.4 ms per loop
>>> %timeit key_in_dic()
10 loops, best of 3: 50.4 ms per loop
>>> %timeit if_else()
10 loops, best of 3: 104 ms per loop

So, for a normal dictionary checking for the key using key in options is the most efficient way here:

if key in options:
   options[key]()
else:
   doSomethingElse()

回答 2

可以使用pypy吗?

保留原始代码,但在pypy上运行可使我的速度提高50倍。

CPython:

matt$ python
Python 2.6.8 (unknown, Nov 26 2012, 10:25:03)
[GCC 4.2.1 Compatible Apple Clang 3.0 (tags/Apple/clang-211.12)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> from timeit import timeit
>>> timeit("""
... if something == 'this': pass
... elif something == 'that': pass
... elif something == 'there': pass
... else: pass
... """, "something='foo'", number=10000000)
1.728302001953125

pypy:

matt$ pypy
Python 2.7.3 (daf4a1b651e0, Dec 07 2012, 23:00:16)
[PyPy 2.0.0-beta1 with GCC 4.2.1] on darwin
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``a 10th of forever is 1h45''
>>>>
>>>> from timeit import timeit
>>>> timeit("""
.... if something == 'this': pass
.... elif something == 'that': pass
.... elif something == 'there': pass
.... else: pass
.... """, "something='foo'", number=10000000)
0.03306388854980469

Are you able to use pypy?

Keeping your original code but running it on pypy gives a 50x speed-up for me.

CPython:

matt$ python
Python 2.6.8 (unknown, Nov 26 2012, 10:25:03)
[GCC 4.2.1 Compatible Apple Clang 3.0 (tags/Apple/clang-211.12)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> from timeit import timeit
>>> timeit("""
... if something == 'this': pass
... elif something == 'that': pass
... elif something == 'there': pass
... else: pass
... """, "something='foo'", number=10000000)
1.728302001953125

Pypy:

matt$ pypy
Python 2.7.3 (daf4a1b651e0, Dec 07 2012, 23:00:16)
[PyPy 2.0.0-beta1 with GCC 4.2.1] on darwin
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``a 10th of forever is 1h45''
>>>>
>>>> from timeit import timeit
>>>> timeit("""
.... if something == 'this': pass
.... elif something == 'that': pass
.... elif something == 'there': pass
.... else: pass
.... """, "something='foo'", number=10000000)
0.03306388854980469

回答 3

这里有一个例子,展示如何把带有动态条件的 if 转换成字典。

selector = {lambda d: datetime(2014, 12, 31) >= d : 'before2015',
            lambda d: datetime(2015, 1, 1) <= d < datetime(2016, 1, 1): 'year2015',
            lambda d: datetime(2016, 1, 1) <= d < datetime(2016, 12, 31): 'year2016'}

def select_by_date(date, selector=selector):
    selected = [selector[x] for x in selector if x(date)] or ['after2016']
    return selected[0]

这是一种做法,但可能不是最 Pythonic 的做法,因为对于不熟悉 Python 的人来说可读性较差。

Here is an example of an if with dynamic conditions translated to a dictionary.

selector = {lambda d: datetime(2014, 12, 31) >= d : 'before2015',
            lambda d: datetime(2015, 1, 1) <= d < datetime(2016, 1, 1): 'year2015',
            lambda d: datetime(2016, 1, 1) <= d < datetime(2016, 12, 31): 'year2016'}

def select_by_date(date, selector=selector):
    selected = [selector[x] for x in selector if x(date)] or ['after2016']
    return selected[0]

It is one way to do it, but it may not be the most pythonic way, because it is less readable for those who are not fluent in Python.
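
A quick usage sketch (my own addition, assuming from datetime import datetime):

from datetime import datetime

select_by_date(datetime(2015, 6, 1))   # 'year2015'
select_by_date(datetime(2020, 1, 1))   # 'after2016' (falls through to the default)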


回答 4

出于安全原因,人们常常对 exec 提出警告,但这正是它的一个理想用例。
这是一个简单的状态机。

Codes = {}
Codes[0] = compile('nextcode = 1  # blah blah 0', '<string>', 'exec')
Codes[1] = compile('nextcode = 2  # blah blah 1', '<string>', 'exec')
Codes[2] = compile('nextcode = 0  # blah blah 2', '<string>', 'exec')

nextcode = 0
while True:
    exec(Codes[nextcode])

People warn about exec for security reasons, but this is an ideal case for it.
It’s an easy state machine.

Codes = {}
Codes[0] = compile('nextcode = 1  # blah blah 0', '<string>', 'exec')
Codes[1] = compile('nextcode = 2  # blah blah 1', '<string>', 'exec')
Codes[2] = compile('nextcode = 0  # blah blah 2', '<string>', 'exec')

nextcode = 0
while True:
    exec(Codes[nextcode])

Python:查找表的列表与字典

问题:Python:查找表的列表与字典

我需要把大约 1000 万个值放进某种查找表中,所以我想知道列表和字典哪个更高效?

我知道您可以为这两种方法执行以下操作:

if something in dict_of_stuff:
    pass

if something in list_of_stuff:
    pass

我的想法是,该字典将更快,更高效。

谢谢你的帮助。

编辑1
关于我想做的事情的更多信息:欧拉计划(Project Euler)问题 92。我正在构建一个查找表,用来检查某个计算出的值是否已经计算过。

编辑2
查找效率。

编辑3
没有与这些值相关联的其他数据……那么用集合(set)会更好吗?

I have about 10million values that I need to put in some type of look up table, so I was wondering which would be more efficient a list or dict?

I know you can do something like this for both:

if something in dict_of_stuff:
    pass

and

if something in list_of_stuff:
    pass

My thought is the dict will be faster and more efficient.

Thanks for your help.

EDIT 1
Little more info on what I’m trying to do. Euler Problem 92. I’m making a look-up table to see if a value calculated has already been calculated.

EDIT 2
Efficiency for look up.

EDIT 3
There are no values associated with the value…so would a set be better?


回答 0

速度

关于数据结构中的项目数,列表中的查找为O(n),字典中的查找摊销为O(1)。如果不需要关联值,请使用集合。

记忆

字典和集合都使用哈希,它们占用的内存远多于仅存储对象本身所需的内存。根据《Beautiful Code》中 A.M. Kuchling 的说法,该实现会尽量让哈希表只保持 2/3 满,因此您可能会浪费相当多的内存。

如果您不是在运行过程中动态添加新条目(根据您更新后的问题,您确实会添加),那么对列表排序并使用二分搜索可能是值得的。二分搜索是 O(log n) 的,对字符串可能会更慢,而对没有自然顺序的对象则无法使用。

Speed

Lookups in lists are O(n), lookups in dictionaries are amortized O(1), with regard to the number of items in the data structure. If you don’t need to associate values, use sets.

Memory

Both dictionaries and sets use hashing and they use much more memory than only for object storage. According to A.M. Kuchling in Beautiful Code, the implementation tries to keep the hash 2/3 full, so you might waste quite some memory.

If you do not add new entries on the fly (which you do, based on your updated question), it might be worthwhile to sort the list and use binary search. This is O(log n), and is likely to be slower for strings, impossible for objects which do not have a natural ordering.
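
For illustration (not part of the original answer), the sorted-list plus binary-search idea could use the standard-library bisect module:

import bisect

lookup = sorted([13, 2, 89, 4, 55])          # sort once, up front

def contains(sorted_list, value):
    # bisect_left finds the insertion point; check whether the value is there.
    i = bisect.bisect_left(sorted_list, value)
    return i < len(sorted_list) and sorted_list[i] == value

contains(lookup, 89)   # True
contains(lookup, 7)    # False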


回答 1

dict 是哈希表,因此查找键非常快。所以在字典和列表之间,字典会更快。但如果您没有要关联的值,最好使用集合(set)。它就是一个哈希表,只是没有“表”(值)的部分。


编辑:对于您的新问题,是的,用 set 会更好。只需创建 2 个集合,一个存放以 1 结尾的序列,另一个存放以 89 结尾的序列。我已经成功地用集合解决了这个问题。

A dict is a hash table, so it is really fast to find the keys. So between dict and list, dict would be faster. But if you don’t have a value to associate, it is even better to use a set. It is a hash table, without the “table” part.


EDIT: for your new question, YES, a set would be better. Just create 2 sets, one for sequences ending in 1 and another for sequences ending in 89. I have successfully solved this problem using sets.
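
A rough sketch of the two-set idea for Euler Problem 92 (my own reconstruction, not the answerer's code):

ends_in_1, ends_in_89 = {1}, {89}

def next_term(n):
    # sum of the squares of the digits of n
    return sum(int(d) ** 2 for d in str(n))

def chain_ends_in_89(n):
    seen = []
    while n not in ends_in_1 and n not in ends_in_89:
        seen.append(n)
        n = next_term(n)
    target = ends_in_89 if n in ends_in_89 else ends_in_1
    target.update(seen)          # memoize every number seen along the chain
    return target is ends_in_89

sum(chain_ends_in_89(i) for i in range(1, 10**7))   # count of chains ending in 89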


回答 2

set()正是您想要的。O(1)查找,并且小于字典。

set() is exactly what you want. O(1) lookups, and smaller than a dict.


回答 3

我做了一些基准测试,结果表明对于大型数据集,dict 比 list 和 set 都快。以下是在 Linux 的 i7 CPU 上运行 Python 2.7.3 的结果:

  • python -mtimeit -s 'd=range(10**7)' '5*10**6 in d'

    10 loops, best of 3: 64.2 msec per loop

  • python -mtimeit -s 'd=dict.fromkeys(range(10**7))' '5*10**6 in d'

    10000000 loops, best of 3: 0.0759 usec per loop

  • python -mtimeit -s 'from sets import Set; d=Set(range(10**7))' '5*10**6 in d'

    1000000 loops, best of 3: 0.262 usec per loop

如您所见,dict 比 list 快得多,比 set 快约 3 倍。不过,在某些应用中,您可能仍会为了代码的优雅而选择 set。如果数据集非常小(< 1000 个元素),列表的表现也相当不错。

I did some benchmarking and it turns out that dict is faster than both list and set for large data sets, running python 2.7.3 on an i7 CPU on linux:

  • python -mtimeit -s 'd=range(10**7)' '5*10**6 in d'

    10 loops, best of 3: 64.2 msec per loop

  • python -mtimeit -s 'd=dict.fromkeys(range(10**7))' '5*10**6 in d'

    10000000 loops, best of 3: 0.0759 usec per loop

  • python -mtimeit -s 'from sets import Set; d=Set(range(10**7))' '5*10**6 in d'

    1000000 loops, best of 3: 0.262 usec per loop

As you can see, dict is considerably faster than list and about 3 times faster than set. In some applications you might still want to choose set for the beauty of it, though. And if the data sets are really small (< 1000 elements) lists perform pretty well.


回答 4

你想要一个字典。

对于 Python 中的(未排序)列表,“in” 操作需要 O(n) 的时间——当数据量很大时,这并不理想。另一方面,字典是哈希表,因此您可以期望 O(1) 的查找时间。

正如其他人指出的那样,如果您只有键而不是键/值对,则可以选择一个集合(一种特殊类型的dict)。

有关:

  • Python Wiki:有关Python容器操作时间复杂度的信息。
  • SO:Python容器操作时间和内存复杂性

You want a dict.

For (unsorted) lists in Python, the “in” operation requires O(n) time—not good when you have a large amount of data. A dict, on the other hand, is a hash table, so you can expect O(1) lookup time.

As others have noted, you might choose a set (a special type of dict) instead, if you only have keys rather than key/value pairs.

Related:

  • Python wiki: information on the time complexity of Python container operations.
  • SO: Python container operation time and memory complexities

回答 5

如果数据是唯一的,set() 将是最高效的;但在列表和字典两者之间,则是 dict 更好(它同样要求唯一性,哎呀 :)

If the data are unique, set() will be the most efficient; but of the two, dict is better (it also requires uniqueness, oops :)


回答 6

这么多年过去了,一组新的测试表明 @EriF89 仍然是正确的:

$ python -m timeit -s "l={k:k for k in xrange(5000)}"    "[i for i in xrange(10000) if i in l]"
1000 loops, best of 3: 1.84 msec per loop
$ python -m timeit -s "l=[k for k in xrange(5000)]"    "[i for i in xrange(10000) if i in l]"
10 loops, best of 3: 573 msec per loop
$ python -m timeit -s "l=tuple([k for k in xrange(5000)])"    "[i for i in xrange(10000) if i in l]"
10 loops, best of 3: 587 msec per loop
$ python -m timeit -s "l=set([k for k in xrange(5000)])"    "[i for i in xrange(10000) if i in l]"
1000 loops, best of 3: 1.88 msec per loop

在这里,我们还比较了 tuple,已知它在某些用例中比 list 更快(且占用更少的内存)。但就查找表而言,tuple 的表现并没有更好。

dict 和 set 的表现都非常出色。这引出了一个与 @SilentGhost 关于唯一性的回答相关的有趣观点:如果 OP 的数据集中有 1000 万个值,并且不确定其中是否存在重复项,那么值得在实际数据集之外并行维护一个由其元素组成的 set/dict,并在这个 set/dict 中做存在性测试。这 1000 万个数据点可能只有 10 个唯一值,那将是一个小得多的搜索空间!

SilentGhost 关于 dict 的“错误”其实很有启发性,因为我们可以用一个 dict 把重复的数据(作为值)关联到一个不重复的集合(作为键),从而用一个数据对象保存所有数据,同时仍然像查找表一样快。例如,dict 的键可以是要查找的值,而对应的值可以是该值在一个假想列表中出现位置的索引列表。

例如,如果要搜索的源数据列表是 l=[1,2,3,1,2,1,4],则可以用下面这个 dict 替换它,从而同时针对搜索和内存进行优化:

>>> from collections import defaultdict
>>> d = defaultdict(list)
>>> l=[1,2,3,1,2,1,4]
>>> for i, e in enumerate(l):
...     d[e].append(i)
>>> d
defaultdict(<class 'list'>, {1: [0, 3, 5], 2: [1, 4], 3: [2], 4: [6]})

有了这个 dict,就可以知道:

  1. 某个值是否在原始数据集中(即 2 in d 返回 True)
  2. 该值在原始数据集中的位置(即 d[2] 返回该值在原始数据列表中出现位置的索引列表:[1, 4])

As a new set of tests to show @EriF89 is still right after all these years:

$ python -m timeit -s "l={k:k for k in xrange(5000)}"    "[i for i in xrange(10000) if i in l]"
1000 loops, best of 3: 1.84 msec per loop
$ python -m timeit -s "l=[k for k in xrange(5000)]"    "[i for i in xrange(10000) if i in l]"
10 loops, best of 3: 573 msec per loop
$ python -m timeit -s "l=tuple([k for k in xrange(5000)])"    "[i for i in xrange(10000) if i in l]"
10 loops, best of 3: 587 msec per loop
$ python -m timeit -s "l=set([k for k in xrange(5000)])"    "[i for i in xrange(10000) if i in l]"
1000 loops, best of 3: 1.88 msec per loop

Here we also compare a tuple, which is known to be faster than a list (and to use less memory) in some use cases. In the case of a lookup table, the tuple fared no better.

Both the dict and set performed very well. This brings up an interesting point tying into @SilentGhost answer about uniqueness: if the OP has 10M values in a data set, and it’s unknown if there are duplicates in them, then it would be worth keeping a set/dict of its elements in parallel with the actual data set, and testing for existence in that set/dict. It’s possible the 10M data points only have 10 unique values, which is a much smaller space to search!

SilentGhost’s mistake about dicts is actually illuminating because one could use a dict to correlate duplicated data (in values) into a nonduplicated set (keys), and thus keep one data object to hold all data, yet still be fast as a lookup table. For example, a dict key could be the value being looked up, and the value could be a list of indices in an imaginary list where that value occurred.

For example, if the source data list to be searched was l=[1,2,3,1,2,1,4], it could be optimized for both searching and memory by replacing it with this dict:

>>> from collections import defaultdict
>>> d = defaultdict(list)
>>> l=[1,2,3,1,2,1,4]
>>> for i, e in enumerate(l):
...     d[e].append(i)
>>> d
defaultdict(<class 'list'>, {1: [0, 3, 5], 2: [1, 4], 3: [2], 4: [6]})

With this dict, one can know:

  1. If a value was in the original dataset (ie 2 in d returns True)
  2. Where the value was in the original dataset (ie d[2] returns list of indices where data was found in original data list: [1, 4])

回答 7

实际上,您并不需要在表中存储 1000 万个值,因此无论哪种方式都不是什么大问题。

提示:想一想第一次“各位数字平方和”运算之后结果能有多大。最大可能的结果会远小于 1000 万……

You don’t actually need to store 10 million values in the table, so it’s not a big deal either way.

Hint: think about how large your result can be after the first sum of squares operation. The largest possible result will be much smaller than 10 million…


为什么x ** 4.0比Python 3中的x ** 4快?

问题:为什么x ** 4.0比Python 3中的x ** 4快?

为什么 x**4.0 要比 x**4 快?我使用的是 CPython 3.5.2。

$ python -m timeit "for x in range(100):" " x**4.0"
  10000 loops, best of 3: 24.2 usec per loop

$ python -m timeit "for x in range(100):" " x**4"
  10000 loops, best of 3: 30.6 usec per loop

我尝试更改所用的指数,看看结果如何。例如,如果我把 x 的指数改为 10 或 16,耗时会从 30 跳到 35;但如果用浮点数 10.0 作指数,耗时只在 24.1~4 左右浮动。

我想这可能与浮点转换和2的幂有关,但我真的不知道。

我注意到在这两种情况下,2 的幂都更快,我猜是因为这些计算对解释器/计算机而言更原生/更容易。但即便如此,用浮点数时耗时几乎不变:2.0 => 24.1~4 & 128.0 => 24.1~4,而 2 => 29 & 128 => 62。


TigerhawkT3 指出,这种情况在循环之外不会发生。我检查了一下,(就我所见)这种情况只在对变量作底数求幂时发生。对此有什么想法吗?

Why is x**4.0 faster than x**4? I am using CPython 3.5.2.

$ python -m timeit "for x in range(100):" " x**4.0"
  10000 loops, best of 3: 24.2 usec per loop

$ python -m timeit "for x in range(100):" " x**4"
  10000 loops, best of 3: 30.6 usec per loop

I tried changing the power I raised by to see how it acts, and for example if I raise x to the power of 10 or 16 it’s jumping from 30 to 35, but if I’m raising by 10.0 as a float, it’s just moving around 24.1~4.

I guess it has something to do with float conversion and powers of 2 maybe, but I don’t really know.

I noticed that in both cases powers of 2 are faster, I guess since those calculations are more native/easy for the interpreter/computer. But still, with floats it’s almost not moving. 2.0 => 24.1~4 & 128.0 => 24.1~4 but 2 => 29 & 128 => 62


TigerhawkT3 pointed out that it doesn’t happen outside of the loop. I checked and the situation only occurs (from what I’ve seen) when the base is getting raised. Any idea about that?

回答 0

为什么在 Python 3 中 x**4.0 比 x**4 快?*

Python 3 的 int 对象是一个完备的对象,旨在支持任意大小的整数;正因如此,它们在 C 层面也被如此处理(参见 long_pow 中所有变量都被声明为 PyLongObject * 类型)。这也使得它们的幂运算变得更加棘手和繁琐,因为您需要操作它用来表示其值的 ob_digit 数组。(源码留给勇敢的人。——有关 PyLongObject 的更多信息,请参见《了解 Python 中大整数的内存分配》。)

相反,Python 的 float 对象可以被转换为 C 的 double 类型(通过 PyFloat_AsDouble),然后用这些原生类型执行运算。这很棒,因为在检查了相关边界情况之后,Python 就可以使用平台提供的 pow(即 C 的 pow)来处理实际的幂运算:

/* Now iv and iw are finite, iw is nonzero, and iv is
 * positive and not equal to 1.0.  We finally allow
 * the platform pow to step in and do the rest.
 */
errno = 0;
PyFPE_START_PROTECT("pow", return NULL)
ix = pow(iv, iw); 

其中,iviw是我们的原PyFloatObjectS作为Ç double秒。

顺便一提:对我而言,Python 2.7.13 快了 2~3 倍,并且表现出相反的行为。

先前的事实也解释了Python 2和3之间的差异,因此,我认为我也将解决此评论,因为它很有趣。

在 Python 2 中,您使用的是旧式 int 对象,它与 Python 3 中的 int 对象不同(3.x 中的所有 int 对象都是 PyLongObject 类型)。在 Python 2 中,有一个取决于对象的值(或是否使用后缀 L/l)的区分:

# Python 2
type(30)  # <type 'int'>
type(30L) # <type 'long'>

您在这里看到的 <type 'int'> 做的事情和 float 一样:在对它执行幂运算时,它会被安全地转换成 C 的 long(int_pow 还提示编译器在可能的情况下把它们放入寄存器,这可能带来差异):

static PyObject *
int_pow(PyIntObject *v, PyIntObject *w, PyIntObject *z)
{
    register long iv, iw, iz=0, ix, temp, prev;
/* Snipped for brevity */    

这样可以提高速度。

要看看 <type 'long'> 与 <type 'int'> 相比有多迟缓,只要在 Python 2 中把名称 x 包在一个 long 调用里(实质上是强制它像 Python 3 那样使用 long_pow),速度优势就会消失:

# <type 'int'>
(python2)  python -m timeit "for x in range(1000):" " x**2"       
10000 loops, best of 3: 116 usec per loop
# <type 'long'> 
(python2)  python -m timeit "for x in range(1000):" " long(x)**2"
100 loops, best of 3: 2.12 msec per loop

请注意,虽然其中一个代码段把 int 转换成了 long 而另一个没有(如 @pydsinger 所指出的),但这种类型转换并不是变慢的原因;long_pow 的实现才是。(可以单独对 long(x) 语句计时来验证这一点。)

[…]它不会在循环之外发生。[…]有什么想法吗?

这是CPython的窥孔优化器,可以为您折叠常量。无论哪种情况,您都会获得相同的精确计时,因为没有实际的计算来找到幂运算的结果,只加载值:

dis.dis(compile('4 ** 4', '', 'exec'))
  1           0 LOAD_CONST               2 (256)
              3 POP_TOP
              4 LOAD_CONST               1 (None)
              7 RETURN_VALUE

'4 ** 4.' 生成的字节码完全相同,唯一的区别是 LOAD_CONST 加载的是浮点数 256.0 而不是整数 256:

dis.dis(compile('4 ** 4.', '', 'exec'))
  1           0 LOAD_CONST               3 (256.0)
              2 POP_TOP
              4 LOAD_CONST               2 (None)
              6 RETURN_VALUE

所以耗时是相同的。


*以上所有内容仅适用于CPython(Python的参考实现)。其他实现可能会有所不同。

Why is x**4.0 faster than x**4 in Python 3*?

Python 3 int objects are a full fledged object designed to support an arbitrary size; due to that fact, they are handled as such on the C level (see how all variables are declared as PyLongObject * type in long_pow). This also makes their exponentiation a lot more trickier and tedious since you need to play around with the ob_digit array it uses to represent its value to perform it. (Source for the brave. — See: Understanding memory allocation for large integers in Python for more on PyLongObjects.)

Python float objects, on the contrary, can be transformed to a C double type (by using PyFloat_AsDouble) and operations can be performed using those native types. This is great because, after checking for relevant edge-cases, it allows Python to use the platforms’ pow (C’s pow, that is) to handle the actual exponentiation:

/* Now iv and iw are finite, iw is nonzero, and iv is
 * positive and not equal to 1.0.  We finally allow
 * the platform pow to step in and do the rest.
 */
errno = 0;
PyFPE_START_PROTECT("pow", return NULL)
ix = pow(iv, iw); 

where iv and iw are our original PyFloatObjects as C doubles.

For what it’s worth: Python 2.7.13 for me is a factor 2~3 faster, and shows the inverse behaviour.

The previous fact also explains the discrepancy between Python 2 and 3 so, I thought I’d address this comment too because it is interesting.

In Python 2, you’re using the old int object that differs from the int object in Python 3 (all int objects in 3.x are of PyLongObject type). In Python 2, there’s a distinction that depends on the value of the object (or, if you use the suffix L/l):

# Python 2
type(30)  # <type 'int'>
type(30L) # <type 'long'>

The <type 'int'> you see here does the same thing floats do, it gets safely converted into a C long when exponentiation is performed on it (The int_pow also hints the compiler to put ’em in a register if it can do so, so that could make a difference):

static PyObject *
int_pow(PyIntObject *v, PyIntObject *w, PyIntObject *z)
{
    register long iv, iw, iz=0, ix, temp, prev;
/* Snipped for brevity */    

this allows for a good speed gain.

To see how sluggish <type 'long'>s are in comparison to <type 'int'>s, if you wrapped the x name in a long call in Python 2 (essentially forcing it to use long_pow as in Python 3), the speed gain disappears:

# <type 'int'>
(python2) ➜ python -m timeit "for x in range(1000):" " x**2"       
10000 loops, best of 3: 116 usec per loop
# <type 'long'> 
(python2) ➜ python -m timeit "for x in range(1000):" " long(x)**2"
100 loops, best of 3: 2.12 msec per loop

Take note that, though the one snippet transforms the int to long while the other does not (as pointed out by @pydsinger), this cast is not the contributing force behind the slowdown. The implementation of long_pow is. (Time the statements solely with long(x) to see).

[…] it doesn’t happen outside of the loop. […] Any idea about that?

This is CPython’s peephole optimizer folding the constants for you. You get the same exact timings either case since there’s no actual computation to find the result of the exponentiation, only loading of values:

dis.dis(compile('4 ** 4', '', 'exec'))
  1           0 LOAD_CONST               2 (256)
              3 POP_TOP
              4 LOAD_CONST               1 (None)
              7 RETURN_VALUE

Identical byte-code is generated for '4 ** 4.' with the only difference being that the LOAD_CONST loads the float 256.0 instead of the int 256:

dis.dis(compile('4 ** 4.', '', 'exec'))
  1           0 LOAD_CONST               3 (256.0)
              2 POP_TOP
              4 LOAD_CONST               2 (None)
              6 RETURN_VALUE

So the times are identical.


*All of the above apply solely for CPython, the reference implementation of Python. Other implementations might perform differently.


回答 1

如果我们查看字节码,我们可以看到表达式是完全相同的。唯一的区别是常量的类型,它将是的自变量BINARY_POWER。因此,最肯定的是由于它int被转换为浮点数。

>>> def func(n):
...    return n**4
... 
>>> def func1(n):
...    return n**4.0
... 
>>> from dis import dis
>>> dis(func)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (4)
              6 BINARY_POWER
              7 RETURN_VALUE
>>> dis(func1)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (4.0)
              6 BINARY_POWER
              7 RETURN_VALUE

更新:让我们看一下CPython源代码中的Objects / abstract.c

PyObject *
PyNumber_Power(PyObject *v, PyObject *w, PyObject *z)
{
    return ternary_op(v, w, z, NB_SLOT(nb_power), "** or pow()");
}

PyNumber_Power呼叫ternary_op,这太长了,无法粘贴到这里,因此这是链接

它调用的nb_power广告位xy作为参数传递。

最后,在Objects / floatobject.c的float_pow()第686行,我们看到在实际操作之前将参数转换为C :double

static PyObject *
float_pow(PyObject *v, PyObject *w, PyObject *z)
{
    double iv, iw, ix;
    int negate_result = 0;

    if ((PyObject *)z != Py_None) {
        PyErr_SetString(PyExc_TypeError, "pow() 3rd argument not "
            "allowed unless all arguments are integers");
        return NULL;
    }

    CONVERT_TO_DOUBLE(v, iv);
    CONVERT_TO_DOUBLE(w, iw);
    ...

If we look at the bytecode, we can see that the expressions are purely identical. The only difference is a type of a constant that will be an argument of BINARY_POWER. So it’s most certainly due to an int being converted to a floating point number down the line.

>>> def func(n):
...    return n**4
... 
>>> def func1(n):
...    return n**4.0
... 
>>> from dis import dis
>>> dis(func)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (4)
              6 BINARY_POWER
              7 RETURN_VALUE
>>> dis(func1)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (4.0)
              6 BINARY_POWER
              7 RETURN_VALUE

Update: let’s take a look at Objects/abstract.c in the CPython source code:

PyObject *
PyNumber_Power(PyObject *v, PyObject *w, PyObject *z)
{
    return ternary_op(v, w, z, NB_SLOT(nb_power), "** or pow()");
}

PyNumber_Power calls ternary_op, which is too long to paste here, so here’s the link.

It calls the nb_power slot of x, passing y as an argument.

Finally, in float_pow() at line 686 of Objects/floatobject.c we see that arguments are converted to a C double right before the actual operation:

static PyObject *
float_pow(PyObject *v, PyObject *w, PyObject *z)
{
    double iv, iw, ix;
    int negate_result = 0;

    if ((PyObject *)z != Py_None) {
        PyErr_SetString(PyExc_TypeError, "pow() 3rd argument not "
            "allowed unless all arguments are integers");
        return NULL;
    }

    CONVERT_TO_DOUBLE(v, iv);
    CONVERT_TO_DOUBLE(w, iw);
    ...

回答 2

因为一个是精确值,而另一个是近似值。

>>> 334453647687345435634784453567231654765 ** 4.0
1.2512490121794596e+154
>>> 334453647687345435634784453567231654765 ** 4
125124901217945966595797084130108863452053981325370920366144
719991392270482919860036990488994139314813986665699000071678
41534843695972182197917378267300625

Because one is exact, the other is an approximation.

>>> 334453647687345435634784453567231654765 ** 4.0
1.2512490121794596e+154
>>> 334453647687345435634784453567231654765 ** 4
125124901217945966595797084130108863452053981325370920366144
719991392270482919860036990488994139314813986665699000071678
41534843695972182197917378267300625

列表推导式和函数式方法比 for 循环快吗?

问题:列表推导式和函数式方法比 for 循环快吗?

就 Python 的性能而言,列表推导式或者像 map()、filter() 和 reduce() 这样的函数,比 for 循环更快吗?从技术上讲,为什么说它们以 C 的速度运行,而 for 循环以 Python 虚拟机的速度运行?

假设在我正在开发的游戏中,我需要使用for循环绘制复杂而庞大的地图。这个问题肯定是相关的,例如,如果列表理解确实确实更快,那么它将是避免滞后的更好选择(尽管代码具有视觉上的复杂性)。

In terms of performance in Python, is a list-comprehension, or functions like map(), filter() and reduce() faster than a for loop? Why, technically, they run in a C speed, while the for loop runs in the python virtual machine speed?.

Suppose that in a game that I’m developing I need to draw complex and huge maps using for loops. This question would be definitely relevant, for if a list-comprehension, for example, is indeed faster, it would be a much better option in order to avoid lags (Despite the visual complexity of the code).


回答 0

以下是粗略的准则和基于经验的有根据的猜测。您应该timeit或配置您的具体用例以获得确切的数字,并且这些数字有时可能与以下内容不一致。

列表推导通常比与之完全等价、确实会构建列表的 for 循环快一点点,这很可能是因为它不必在每次迭代时都查找列表及其 append 方法。不过,列表推导仍然会执行字节码层面的循环:

>>> dis.dis(<the code object for `[x for x in range(10)]`>)
 1           0 BUILD_LIST               0
             3 LOAD_FAST                0 (.0)
       >>    6 FOR_ITER                12 (to 21)
             9 STORE_FAST               1 (x)
            12 LOAD_FAST                1 (x)
            15 LIST_APPEND              2
            18 JUMP_ABSOLUTE            6
       >>   21 RETURN_VALUE

用列表推导去替换一个本来并不构建列表的循环——毫无意义地累积一个无用值的列表然后再把它扔掉——通常反而更慢,因为创建和扩展列表本身就有开销。列表推导并不是天生就比朴实的老式循环更快的魔法。

至于函数式的列表处理函数(map、filter 等):虽然它们是用 C 编写的,可能胜过用 Python 编写的等价函数,但它们不一定是最快的选择。如果传入的函数也是用 C 编写的,确实可以期待一些加速。但在大多数使用 lambda(或其他 Python 函数)的情况下,反复建立 Python 栈帧等开销会吃掉所有节省下来的时间。在不进行函数调用的情况下直接内联完成同样的工作(例如用列表推导代替 map 或 filter)通常会稍快一些。

假设在我正在开发的游戏中,我需要使用for循环绘制复杂而庞大的地图。这个问题肯定是相关的,例如,如果列表理解确实确实更快,那么它将是避免滞后的更好选择(尽管代码具有视觉上的复杂性)。

很有可能,如果这样的代码在用良好的、未经“优化”的 Python 编写时还不够快,那么再多的 Python 层面的微优化也不会让它足够快,您应该开始考虑下沉到 C。虽然大量的微优化通常可以显著加速 Python 代码,但这种加速有一个(绝对意义上)不高的上限。而且,甚至在达到这个上限之前,咬咬牙写一些 C 代码就已经更划算了(同样的精力,15% 的加速对 300% 的加速)。

The following are rough guidelines and educated guesses based on experience. You should timeit or profile your concrete use case to get hard numbers, and those numbers may occasionally disagree with the below.

A list comprehension is usually a tiny bit faster than the precisely equivalent for loop (that actually builds a list), most likely because it doesn’t have to look up the list and its append method on every iteration. However, a list comprehension still does a bytecode-level loop:

>>> dis.dis(<the code object for `[x for x in range(10)]`>)
 1           0 BUILD_LIST               0
             3 LOAD_FAST                0 (.0)
       >>    6 FOR_ITER                12 (to 21)
             9 STORE_FAST               1 (x)
            12 LOAD_FAST                1 (x)
            15 LIST_APPEND              2
            18 JUMP_ABSOLUTE            6
       >>   21 RETURN_VALUE

Using a list comprehension in place of a loop that doesn’t build a list, nonsensically accumulating a list of meaningless values and then throwing the list away, is often slower because of the overhead of creating and extending the list. List comprehensions aren’t magic that is inherently faster than a good old loop.

As for functional list processing functions: While these are written in C and probably outperform equivalent functions written in Python, they are not necessarily the fastest option. Some speed up is expected if the function is written in C too. But most cases using a lambda (or other Python function), the overhead of repeatedly setting up Python stack frames etc. eats up any savings. Simply doing the same work in-line, without function calls (e.g. a list comprehension instead of map or filter) is often slightly faster.

Suppose that in a game that I’m developing I need to draw complex and huge maps using for loops. This question would be definitely relevant, for if a list-comprehension, for example, is indeed faster, it would be a much better option in order to avoid lags (Despite the visual complexity of the code).

Chances are, if code like this isn’t already fast enough when written in good non-“optimized” Python, no amount of Python level micro optimization is going to make it fast enough and you should start thinking about dropping to C. While extensive micro optimizations can often speed up Python code considerably, there is a low (in absolute terms) limit to this. Moreover, even before you hit that ceiling, it becomes simply more cost efficient (15% speedup vs. 300% speed up with the same effort) to bite the bullet and write some C.
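A minimal sketch (my own, not part of the original answer) that puts rough numbers on the two claims above; absolute timings depend on the machine and Python version, so only the relative ordering is of interest:

import timeit

data = list(range(1000))

def with_loop():
    out = []
    for x in data:
        out.append(x * 2)        # looks up .append on every iteration
    return out

def with_comprehension():
    return [x * 2 for x in data]     # no .append lookup, but still a bytecode-level loop

def with_map_lambda():
    return list(map(lambda x: x * 2, data))   # C-level loop, but one Python frame per element

for fn in (with_loop, with_comprehension, with_map_lambda):
    print(fn.__name__, timeit.timeit(fn, number=10000))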


回答 1

如果查看python.org上信息,则可以看到以下摘要:

Version Time (seconds)
Basic loop 3.47
Eliminate dots 2.45
Local variable & no dots 1.79
Using map function 0.54

但是您确实应该详细阅读以上文章,以了解造成性能差异的原因。

我也强烈建议您使用timeit对代码计时。说到底,可能会出现这样的情况:例如,需要在满足某个条件时跳出 for 循环,这时它可能比调用 map 再检查结果更快。

If you check the info on python.org, you can see this summary:

Version Time (seconds)
Basic loop 3.47
Eliminate dots 2.45
Local variable & no dots 1.79
Using map function 0.54

But you really should read the above article in details to understand the cause of the performance difference.

I also strongly suggest that you time your code using timeit. At the end of the day, there can be a situation where, for example, you may need to break out of a for loop when a condition is met. That could potentially be faster than finding out the result by calling map.
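A small sketch (mine, not part of the answer) of that early-exit situation: the plain loop can stop at the first match, while a map-based version does all the work before the result is even inspected.

def first_over_loop(values, threshold):
    for v in values:
        if v > threshold:
            return v              # bail out as soon as a match is found
    return None

def first_over_map(values, threshold):
    flags = list(map(lambda v: v > threshold, values))   # processes every element
    for v, flag in zip(values, flags):
        if flag:
            return v
    return None

values = list(range(1000000))
print(first_over_loop(values, 10))   # 11, found after a dozen iterations
print(first_over_map(values, 10))    # 11 as well, but only after mapping the whole list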


回答 2

你具体问的是 map()、filter() 和 reduce(),但我想你是想了解一般的函数式编程。我在“计算一组点中所有点两两之间的距离”这个问题上自己做了测试,结果表明,函数式编程(使用内置 itertools 模块中的 starmap 函数)比 for 循环稍慢(实际上用了 1.25 倍的时间)。这是我使用的示例代码:

import itertools, time, math, random

class Point:
    def __init__(self,x,y):
        self.x, self.y = x, y

point_set = (Point(0, 0), Point(0, 1), Point(0, 2), Point(0, 3))
n_points = 100
pick_val = lambda : 10 * random.random() - 5
large_set = [Point(pick_val(), pick_val()) for _ in range(n_points)]
    # the distance function
f_dist = lambda x0, x1, y0, y1: math.sqrt((x0 - x1) ** 2 + (y0 - y1) ** 2)
    # go through each point, get its distance from all remaining points 
f_pos = lambda p1, p2: (p1.x, p2.x, p1.y, p2.y)

extract_dists = lambda x: itertools.starmap(f_dist, 
                          itertools.starmap(f_pos, 
                          itertools.combinations(x, 2)))

print('Distances:', list(extract_dists(point_set)))

t0_f = time.time()
list(extract_dists(large_set))
dt_f = time.time() - t0_f

功能版本是否比程序版本更快?

def extract_dists_procedural(pts):
    n_pts = len(pts)
    l = []    
    for k_p1 in range(n_pts - 1):
        for k_p2 in range(k_p1, n_pts):
            l.append((pts[k_p1].x - pts[k_p2].x) ** 2 +
                     (pts[k_p1].y - pts[k_p2].y) ** 2)
    return l

t0_p = time.time()
list(extract_dists_procedural(large_set)) 
    # using list() on the assumption that
    # it eats up as much time as in the functional version

dt_p = time.time() - t0_p

f_vs_p = dt_p / dt_f
if f_vs_p >= 1.0:
    print('Time benefit of functional progamming:', f_vs_p, 
          'times as fast for', n_points, 'points')
else:
    print('Time penalty of functional programming:', 1 / f_vs_p, 
          'times as slow for', n_points, 'points')

You ask specifically about map(), filter() and reduce(), but I assume you want to know about functional programming in general. Having tested this myself on the problem of computing distances between all points within a set of points, functional programming (using the starmap function from the built-in itertools module) turned out to be slightly slower than for-loops (taking 1.25 times as long, in fact). Here is the sample code I used:

import itertools, time, math, random

class Point:
    def __init__(self,x,y):
        self.x, self.y = x, y

point_set = (Point(0, 0), Point(0, 1), Point(0, 2), Point(0, 3))
n_points = 100
pick_val = lambda : 10 * random.random() - 5
large_set = [Point(pick_val(), pick_val()) for _ in range(n_points)]
    # the distance function
f_dist = lambda x0, x1, y0, y1: math.sqrt((x0 - x1) ** 2 + (y0 - y1) ** 2)
    # go through each point, get its distance from all remaining points 
f_pos = lambda p1, p2: (p1.x, p2.x, p1.y, p2.y)

extract_dists = lambda x: itertools.starmap(f_dist, 
                          itertools.starmap(f_pos, 
                          itertools.combinations(x, 2)))

print('Distances:', list(extract_dists(point_set)))

t0_f = time.time()
list(extract_dists(large_set))
dt_f = time.time() - t0_f

Is the functional version faster than the procedural version?

def extract_dists_procedural(pts):
    n_pts = len(pts)
    l = []    
    for k_p1 in range(n_pts - 1):
        for k_p2 in range(k_p1, n_pts):
            l.append((pts[k_p1].x - pts[k_p2].x) ** 2 +
                     (pts[k_p1].y - pts[k_p2].y) ** 2)
    return l

t0_p = time.time()
list(extract_dists_procedural(large_set)) 
    # using list() on the assumption that
    # it eats up as much time as in the functional version

dt_p = time.time() - t0_p

f_vs_p = dt_p / dt_f
if f_vs_p >= 1.0:
    print('Time benefit of functional progamming:', f_vs_p, 
          'times as fast for', n_points, 'points')
else:
    print('Time penalty of functional programming:', 1 / f_vs_p, 
          'times as slow for', n_points, 'points')

回答 3

我写了一个简单的脚本来测试速度,下面是我的发现。实际上在我这里 for 循环是最快的,这真的让我很惊讶,请看下面(计算的是平方和)。

from functools import reduce
import datetime


def time_it(func, numbers, *args):
    start_t = datetime.datetime.now()
    for i in range(numbers):
        func(args[0])
    print (datetime.datetime.now()-start_t)

def square_sum1(numbers):
    return reduce(lambda sum, next: sum+next**2, numbers, 0)


def square_sum2(numbers):
    a = 0
    for i in numbers:
        i = i**2
        a += i
    return a

def square_sum3(numbers):
    sqrt = lambda x: x**2
    return sum(map(sqrt, numbers))

def square_sum4(numbers):
    return(sum([int(i)**2 for i in numbers]))


time_it(square_sum1, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum2, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum3, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum4, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
0:00:00.302000 #Reduce
0:00:00.144000 #For loop
0:00:00.318000 #Map
0:00:00.390000 #List comprehension

I wrote a simple script that tests the speed, and this is what I found out. Actually, the for loop was fastest in my case. That really surprised me; check out below (it was calculating the sum of squares).

from functools import reduce
import datetime


def time_it(func, numbers, *args):
    start_t = datetime.datetime.now()
    for i in range(numbers):
        func(args[0])
    print (datetime.datetime.now()-start_t)

def square_sum1(numbers):
    return reduce(lambda sum, next: sum+next**2, numbers, 0)


def square_sum2(numbers):
    a = 0
    for i in numbers:
        i = i**2
        a += i
    return a

def square_sum3(numbers):
    sqrt = lambda x: x**2
    return sum(map(sqrt, numbers))

def square_sum4(numbers):
    return(sum([int(i)**2 for i in numbers]))


time_it(square_sum1, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum2, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum3, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum4, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
0:00:00.302000 #Reduce
0:00:00.144000 #For loop
0:00:00.318000 #Map
0:00:00.390000 #List comprehension

回答 4

我修改了 @Alisa 的代码,并用 cProfile 来说明为什么列表推导式更快:

from functools import reduce
import datetime

def reduce_(numbers):
    return reduce(lambda sum, next: sum + next * next, numbers, 0)

def for_loop(numbers):
    a = []
    for i in numbers:
        a.append(i*2)
    a = sum(a)
    return a

def map_(numbers):
    sqrt = lambda x: x*x
    return sum(map(sqrt, numbers))

def list_comp(numbers):
    return(sum([i*i for i in numbers]))

funcs = [
        reduce_,
        for_loop,
        map_,
        list_comp
        ]

if __name__ == "__main__":
    # [1, 2, 5, 3, 1, 2, 5, 3]
    import cProfile
    for f in funcs:
        print('=' * 25)
        print("Profiling:", f.__name__)
        print('=' * 25)
        pr = cProfile.Profile()
        for i in range(10**6):
            pr.runcall(f, [1, 2, 5, 3, 1, 2, 5, 3])
        pr.create_stats()
        pr.print_stats()

结果如下:

=========================
Profiling: reduce_
=========================
         11000000 function calls in 1.501 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.162    0.000    1.473    0.000 profiling.py:4(reduce_)
  8000000    0.461    0.000    0.461    0.000 profiling.py:5(<lambda>)
  1000000    0.850    0.000    1.311    0.000 {built-in method _functools.reduce}
  1000000    0.028    0.000    0.028    0.000 {method 'disable' of '_lsprof.Profiler' objects}


=========================
Profiling: for_loop
=========================
         11000000 function calls in 1.372 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.879    0.000    1.344    0.000 profiling.py:7(for_loop)
  1000000    0.145    0.000    0.145    0.000 {built-in method builtins.sum}
  8000000    0.320    0.000    0.320    0.000 {method 'append' of 'list' objects}
  1000000    0.027    0.000    0.027    0.000 {method 'disable' of '_lsprof.Profiler' objects}


=========================
Profiling: map_
=========================
         11000000 function calls in 1.470 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.264    0.000    1.442    0.000 profiling.py:14(map_)
  8000000    0.387    0.000    0.387    0.000 profiling.py:15(<lambda>)
  1000000    0.791    0.000    1.178    0.000 {built-in method builtins.sum}
  1000000    0.028    0.000    0.028    0.000 {method 'disable' of '_lsprof.Profiler' objects}


=========================
Profiling: list_comp
=========================
         4000000 function calls in 0.737 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.318    0.000    0.709    0.000 profiling.py:18(list_comp)
  1000000    0.261    0.000    0.261    0.000 profiling.py:19(<listcomp>)
  1000000    0.131    0.000    0.131    0.000 {built-in method builtins.sum}
  1000000    0.027    0.000    0.027    0.000 {method 'disable' of '_lsprof.Profiler' objects}

恕我直言:

  • reduce 和 map 总体上都相当慢。不仅如此,与直接对列表求和相比,对 map 返回的迭代器使用 sum 也更慢
  • for_loop 使用了 append,这当然在一定程度上拖慢了速度
  • 列表推导式不仅构建列表花的时间最少,与 map 相比,它还让 sum 更快

I modified @Alisa’s code and used cProfile to show why list comprehension is faster:

from functools import reduce
import datetime

def reduce_(numbers):
    return reduce(lambda sum, next: sum + next * next, numbers, 0)

def for_loop(numbers):
    a = []
    for i in numbers:
        a.append(i*2)
    a = sum(a)
    return a

def map_(numbers):
    sqrt = lambda x: x*x
    return sum(map(sqrt, numbers))

def list_comp(numbers):
    return(sum([i*i for i in numbers]))

funcs = [
        reduce_,
        for_loop,
        map_,
        list_comp
        ]

if __name__ == "__main__":
    # [1, 2, 5, 3, 1, 2, 5, 3]
    import cProfile
    for f in funcs:
        print('=' * 25)
        print("Profiling:", f.__name__)
        print('=' * 25)
        pr = cProfile.Profile()
        for i in range(10**6):
            pr.runcall(f, [1, 2, 5, 3, 1, 2, 5, 3])
        pr.create_stats()
        pr.print_stats()

Here’s the results:

=========================
Profiling: reduce_
=========================
         11000000 function calls in 1.501 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.162    0.000    1.473    0.000 profiling.py:4(reduce_)
  8000000    0.461    0.000    0.461    0.000 profiling.py:5(<lambda>)
  1000000    0.850    0.000    1.311    0.000 {built-in method _functools.reduce}
  1000000    0.028    0.000    0.028    0.000 {method 'disable' of '_lsprof.Profiler' objects}


=========================
Profiling: for_loop
=========================
         11000000 function calls in 1.372 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.879    0.000    1.344    0.000 profiling.py:7(for_loop)
  1000000    0.145    0.000    0.145    0.000 {built-in method builtins.sum}
  8000000    0.320    0.000    0.320    0.000 {method 'append' of 'list' objects}
  1000000    0.027    0.000    0.027    0.000 {method 'disable' of '_lsprof.Profiler' objects}


=========================
Profiling: map_
=========================
         11000000 function calls in 1.470 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.264    0.000    1.442    0.000 profiling.py:14(map_)
  8000000    0.387    0.000    0.387    0.000 profiling.py:15(<lambda>)
  1000000    0.791    0.000    1.178    0.000 {built-in method builtins.sum}
  1000000    0.028    0.000    0.028    0.000 {method 'disable' of '_lsprof.Profiler' objects}


=========================
Profiling: list_comp
=========================
         4000000 function calls in 0.737 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1000000    0.318    0.000    0.709    0.000 profiling.py:18(list_comp)
  1000000    0.261    0.000    0.261    0.000 profiling.py:19(<listcomp>)
  1000000    0.131    0.000    0.131    0.000 {built-in method builtins.sum}
  1000000    0.027    0.000    0.027    0.000 {method 'disable' of '_lsprof.Profiler' objects}

IMHO:

  • reduce and map are in general pretty slow. Not only that, using sum on the iterators that map returned is slow, compared to summing a list
  • for_loop uses append, which is of course slow to some extent
  • list-comprehension not only spent the least time building the list, it also makes sum much quicker, in contrast to map (a generator-expression variant is sketched just below)
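For completeness, a sketch of the generator-expression variant mentioned above (my addition, not part of the profiled code). It avoids materialising the temporary list that list_comp builds, although for inputs this small the generator overhead may cancel that out, so it is not automatically faster:

import timeit

numbers = [1, 2, 5, 3, 1, 2, 5, 3]

def list_comp(nums):
    return sum([i * i for i in nums])    # builds a temporary list, then sums it

def gen_expr(nums):
    return sum(i * i for i in nums)      # feeds sum() lazily, no temporary list

print("list comp:", timeit.timeit(lambda: list_comp(numbers), number=1000000))
print("gen expr: ", timeit.timeit(lambda: gen_expr(numbers), number=1000000))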

回答 5

给 Alphii 的答案加一点变化:实际上 for 循环只能排第二,大约比 map 慢 6 倍

from functools import reduce
import datetime


def time_it(func, numbers, *args):
    start_t = datetime.datetime.now()
    for i in range(numbers):
        func(args[0])
    print (datetime.datetime.now()-start_t)

def square_sum1(numbers):
    return reduce(lambda sum, next: sum+next**2, numbers, 0)


def square_sum2(numbers):
    a = 0
    for i in numbers:
        a += i**2
    return a

def square_sum3(numbers):
    a = 0
    map(lambda x: a+x**2, numbers)
    return a

def square_sum4(numbers):
    a = 0
    return [a+i**2 for i in numbers]

time_it(square_sum1, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum2, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum3, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum4, 100000, [1, 2, 5, 3, 1, 2, 5, 3])

主要的改动是去掉了缓慢的 sum 调用,以及最后一种情况中可能没有必要的 int() 调用。把 for 循环和 map 放在同样的条件下比较,差距其实就很明显了。请记住,lambda 是函数式概念,理论上不应该有副作用,但它们确实可以有副作用,比如向 a 累加。本例在 Python 3.6.1、Ubuntu 14.04、Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz 上的结果:

0:00:00.257703 #Reduce
0:00:00.184898 #For loop
0:00:00.031718 #Map
0:00:00.212699 #List comprehension

Adding a twist to Alphii answer, actually the for loop would be second best and about 6 times slower than map

from functools import reduce
import datetime


def time_it(func, numbers, *args):
    start_t = datetime.datetime.now()
    for i in range(numbers):
        func(args[0])
    print (datetime.datetime.now()-start_t)

def square_sum1(numbers):
    return reduce(lambda sum, next: sum+next**2, numbers, 0)


def square_sum2(numbers):
    a = 0
    for i in numbers:
        a += i**2
    return a

def square_sum3(numbers):
    a = 0
    map(lambda x: a+x**2, numbers)
    return a

def square_sum4(numbers):
    a = 0
    return [a+i**2 for i in numbers]

time_it(square_sum1, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum2, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum3, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum4, 100000, [1, 2, 5, 3, 1, 2, 5, 3])

Main changes have been to eliminate the slow sum calls, as well as the probably unnecessary int() in the last case. Putting the for loop and map in the same terms makes it quite fact, actually. Remember that lambdas are functional concepts and theoretically shouldn’t have side effects, but, well, they can have side effects like adding to a. Results in this case with Python 3.6.1, Ubuntu 14.04, Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz

0:00:00.257703 #Reduce
0:00:00.184898 #For loop
0:00:00.031718 #Map
0:00:00.212699 #List comprehension

回答 6

我对 @alpiii 的部分代码做了些修改,发现列表推导式比 for 循环略快一点。之前的结果可能是 int() 造成的:带着它,在列表推导式和 for 循环之间这样比并不公平。

from functools import reduce
import datetime

def time_it(func, numbers, *args):
    start_t = datetime.datetime.now()
    for i in range(numbers):
        func(args[0])
    print (datetime.datetime.now()-start_t)

def square_sum1(numbers):
    return reduce(lambda sum, next: sum+next*next, numbers, 0)

def square_sum2(numbers):
    a = []
    for i in numbers:
        a.append(i*2)
    a = sum(a)
    return a

def square_sum3(numbers):
    sqrt = lambda x: x*x
    return sum(map(sqrt, numbers))

def square_sum4(numbers):
    return(sum([i*i for i in numbers]))

time_it(square_sum1, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum2, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum3, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum4, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
0:00:00.101122 #Reduce

0:00:00.089216 #For loop

0:00:00.101532 #Map

0:00:00.068916 #List comprehension

I have managed to modify some of @alpiii’s code and discovered that the list comprehension is a little faster than the for loop. It might have been caused by int(), which made the comparison between the list comprehension and the for loop unfair.

from functools import reduce
import datetime

def time_it(func, numbers, *args):
    start_t = datetime.datetime.now()
    for i in range(numbers):
        func(args[0])
    print (datetime.datetime.now()-start_t)

def square_sum1(numbers):
    return reduce(lambda sum, next: sum+next*next, numbers, 0)

def square_sum2(numbers):
    a = []
    for i in numbers:
        a.append(i*2)
    a = sum(a)
    return a

def square_sum3(numbers):
    sqrt = lambda x: x*x
    return sum(map(sqrt, numbers))

def square_sum4(numbers):
    return(sum([i*i for i in numbers]))

time_it(square_sum1, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum2, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum3, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
time_it(square_sum4, 100000, [1, 2, 5, 3, 1, 2, 5, 3])
0:00:00.101122 #Reduce

0:00:00.089216 #For loop

0:00:00.101532 #Map

0:00:00.068916 #List comprehension

为什么Python的数组变慢?

问题:为什么Python的数组变慢?

我期望array.array比列表更快,因为数组似乎已拆箱。

但是,我得到以下结果:

In [1]: import array

In [2]: L = list(range(100000000))

In [3]: A = array.array('l', range(100000000))

In [4]: %timeit sum(L)
1 loop, best of 3: 667 ms per loop

In [5]: %timeit sum(A)
1 loop, best of 3: 1.41 s per loop

In [6]: %timeit sum(L)
1 loop, best of 3: 627 ms per loop

In [7]: %timeit sum(A)
1 loop, best of 3: 1.39 s per loop

造成这种差异的原因是什么?

I expected array.array to be faster than lists, as arrays seem to be unboxed.

However, I get the following result:

In [1]: import array

In [2]: L = list(range(100000000))

In [3]: A = array.array('l', range(100000000))

In [4]: %timeit sum(L)
1 loop, best of 3: 667 ms per loop

In [5]: %timeit sum(A)
1 loop, best of 3: 1.41 s per loop

In [6]: %timeit sum(L)
1 loop, best of 3: 627 ms per loop

In [7]: %timeit sum(A)
1 loop, best of 3: 1.39 s per loop

What could be the cause of such a difference?


回答 0

存储确实是“未装箱”的,但是每次访问一个元素时,Python 都必须把它“装箱”(嵌入到常规的 Python 对象中)才能对它做任何操作。例如,sum(A) 会遍历数组,并把每个整数逐一装箱成常规的 Python int 对象,这是要花时间的。而在 sum(L) 中,所有装箱都在创建列表时就已经完成了。

因此,最后,阵列通常较慢,但所需的内存却少得多。


这是最新版本的Python 3的相关代码,但是自Python首次发布以来,相同的基本思想也适用于所有CPython实现。

这是访问列表项的代码:

PyObject *
PyList_GetItem(PyObject *op, Py_ssize_t i)
{
    /* error checking omitted */
    return ((PyListObject *)op) -> ob_item[i];
}

它几乎没做什么:somelist[i] 只是返回列表中第 i 个对象(CPython 中所有的 Python 对象都是指向结构体的指针,该结构体的起始部分符合 struct PyObject 的布局)。

下面是类型代码为 l 的 array 的 __getitem__ 实现:

static PyObject *
l_getitem(arrayobject *ap, Py_ssize_t i)
{
    return PyLong_FromLong(((long *)ap->ob_item)[i]);
}

原始内存被当作平台原生 C long 整数的向量;先读出第 i 个 C long;然后调用 PyLong_FromLong() 把这个原生 C long 包装(“装箱”)成 Python 的 long 对象(在 Python 3 中,int 和 long 的区别已被消除,它实际显示为 int 类型)。

这次装箱必须为新的 Python int 对象分配内存,并把原生 C long 的位写进去。在原始示例的上下文中,这个对象的生存期非常短(只够 sum() 把它的值累加到总和里),之后还需要更多时间来释放这个新的 int 对象。

在CPython实现中,这就是速度差异的来源,永远都是这样,永远都是这样。

The storage is “unboxed”, but every time you access an element Python has to “box” it (embed it in a regular Python object) in order to do anything with it. For example, your sum(A) iterates over the array, and boxes each integer, one at a time, in a regular Python int object. That costs time. In your sum(L), all the boxing was done at the time the list was created.

So, in the end, an array is generally slower, but requires substantially less memory.


Here’s the relevant code from a recent version of Python 3, but the same basic ideas apply to all CPython implementations since Python was first released.

Here’s the code to access a list item:

PyObject *
PyList_GetItem(PyObject *op, Py_ssize_t i)
{
    /* error checking omitted */
    return ((PyListObject *)op) -> ob_item[i];
}

There’s very little to it: somelist[i] just returns the i‘th object in the list (and all Python objects in CPython are pointers to a struct whose initial segment conforms to the layout of a struct PyObject).

And here’s the __getitem__ implementation for an array with type code l:

static PyObject *
l_getitem(arrayobject *ap, Py_ssize_t i)
{
    return PyLong_FromLong(((long *)ap->ob_item)[i]);
}

The raw memory is treated as a vector of platform-native C long integers; the i‘th C long is read up; and then PyLong_FromLong() is called to wrap (“box”) the native C long in a Python long object (which, in Python 3, which eliminates Python 2’s distinction between int and long, is actually shown as type int).

This boxing has to allocate new memory for a Python int object, and spray the native C long‘s bits into it. In the context of the original example, this object’s lifetime is very brief (just long enough for sum() to add the contents into a running total), and then more time is required to deallocate the new int object.

This is where the speed difference comes from, always has come from, and always will come from in the CPython implementation.
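A tiny sketch (mine, not Tim Peters') that makes the boxing visible from Python: the list hands back the very object it stores, while the array only stored the raw C long and has to box a fresh int object on every access (CPython behaviour assumed):

import array

x = 100000                      # outside CPython's small-int cache
L = [x, x]
A = array.array('l', [x, x])

print(L[0] is x)    # True: the list stores references to the existing int object
print(A[0] is x)    # False: the array stored raw bits, so A[0] is a newly boxed int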


回答 1

作为对 Tim Peters 出色回答的补充:数组实现了缓冲协议,而列表没有。这意味着,如果你在编写 C 扩展(或与之相当的东西,例如 Cython 模块),那么访问和使用数组元素可以比 Python 自身能做到的任何方式都快得多。这会带来可观的速度提升,可能远超一个数量级。但它也有不少缺点:

  1. 您现在正从事编写C而不是Python的工作。Cython是改善这种情况的一种方法,但是并不能消除语言之间的许多基本差异;您需要熟悉C语义并了解它在做什么。
  2. PyPy的C API 在某种程度上可以正常工作,但是速度不是很快。如果您以PyPy为目标,则可能应该只编写带有常规列表的简单代码,然后让JITter为您优化它。
  3. C扩展比纯Python代码更难分发,因为它们需要编译。编译通常取决于体系结构和操作系统,因此您需要确保要针对目标平台进行编译。

直接转到C扩展程序可能会使用大锤拍打苍蝇,具体取决于您的用例。您首先应该研究NumPy,看看它是否足够强大,可以执行您想做的任何数学运算。如果正确使用,它将比本地Python快得多。

To add to Tim Peters’ excellent answer, arrays implement the buffer protocol, while lists do not. This means that, if you are writing a C extension (or the moral equivalent, such as writing a Cython module), then you can access and work with the elements of an array much faster than anything Python can do. This will give you considerable speed improvements, possibly well over an order of magnitude. However, it has a number of downsides:

  1. You are now in the business of writing C instead of Python. Cython is one way to ameliorate this, but it does not eliminate many fundamental differences between the languages; you need to be familiar with C semantics and understand what it is doing.
  2. PyPy’s C API works to some extent, but isn’t very fast. If you are targeting PyPy, you should probably just write simple code with regular lists, and then let the JITter optimize it for you.
  3. C extensions are harder to distribute than pure Python code because they need to be compiled. Compilation tends to be architecture and operating-system dependent, so you will need to ensure you are compiling for your target platform.

Going straight to C extensions may be using a sledgehammer to swat a fly, depending on your use case. You should first investigate NumPy and see if it is powerful enough to do whatever math you’re trying to do. It will also be much faster than native Python, if used correctly.
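A quick sketch (mine, not from the answer) of the buffer-protocol point, staying in pure Python: memoryview() requires an object that exposes a buffer, which array.array does and list does not:

import array

A = array.array('l', [1, 2, 3])
view = memoryview(A)        # works: the array exposes its raw memory
print(view[1])              # 2, read straight from the underlying buffer

try:
    memoryview([1, 2, 3])   # lists do not implement the buffer protocol
except TypeError as exc:
    print("list:", exc)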


回答 2

蒂姆·彼得斯(Tim Peters)回答了为什么这样做很慢,但让我们看看如何改善它。

沿用您的 sum(range(...)) 示例(规模比您的示例小 10 倍,以便放进这里的内存):

import numpy
import array
L = list(range(10**7))
A = array.array('l', L)
N = numpy.array(L)

%timeit sum(L)
10 loops, best of 3: 101 ms per loop

%timeit sum(A)
1 loop, best of 3: 237 ms per loop

%timeit sum(N)
1 loop, best of 3: 743 ms per loop

这样numpy也需要装箱/拆箱,这有额外的开销。要使其快速运行,必须停留在numpy C代码内:

%timeit N.sum()
100 loops, best of 3: 6.27 ms per loop

因此,从列表方案到 numpy 版本,运行时间相差约 16 倍。

我们还要检查创建这些数据结构需要花费多长时间

%timeit list(range(10**7))
1 loop, best of 3: 283 ms per loop

%timeit array.array('l', range(10**7))
1 loop, best of 3: 884 ms per loop

%timeit numpy.array(range(10**7))
1 loop, best of 3: 1.49 s per loop

%timeit numpy.arange(10**7)
10 loops, best of 3: 21.7 ms per loop

明确的获胜者:Numpy

还要注意,创建数据结构所花费的时间与求和所花费的时间差不多,甚至更多。分配内存很慢。

那些的内存使用情况:

sys.getsizeof(L)
90000112
sys.getsizeof(A)
81940352
sys.getsizeof(N)
80000096

因此,每个数字占用 8 个字节,各方案的额外开销略有不同。对于这个取值范围,32 位整数就足够了,所以我们可以节省一些内存。

N=numpy.arange(10**7, dtype=numpy.int32)

sys.getsizeof(N)
40000096

%timeit N.sum()
100 loops, best of 3: 8.35 ms per loop

但是事实证明,在我的计算机上添加64位整数要快于32位整数,因此只有在受内存/带宽限制的情况下才值得这样做。

Tim Peters answered why this is slow, but let’s see how to improve it.

Sticking to your example of sum(range(...)) (factor 10 smaller than your example to fit into memory here):

import numpy
import array
L = list(range(10**7))
A = array.array('l', L)
N = numpy.array(L)

%timeit sum(L)
10 loops, best of 3: 101 ms per loop

%timeit sum(A)
1 loop, best of 3: 237 ms per loop

%timeit sum(N)
1 loop, best of 3: 743 ms per loop

This way also numpy needs to box/unbox, which has additional overhead. To make it fast one has to stay within the numpy c code:

%timeit N.sum()
100 loops, best of 3: 6.27 ms per loop

So from the list solution to the numpy version this is a factor 16 in runtime.

Let’s also check how long creating those data structures takes

%timeit list(range(10**7))
1 loop, best of 3: 283 ms per loop

%timeit array.array('l', range(10**7))
1 loop, best of 3: 884 ms per loop

%timeit numpy.array(range(10**7))
1 loop, best of 3: 1.49 s per loop

%timeit numpy.arange(10**7)
10 loops, best of 3: 21.7 ms per loop

Clear winner: Numpy

Also note that creating the data structure takes about as much time as summing, if not more. Allocating memory is slow.

Memory usage of those:

sys.getsizeof(L)
90000112
sys.getsizeof(A)
81940352
sys.getsizeof(N)
80000096

So these take 8 bytes per number, with varying overhead. For the range we use, 32-bit ints are sufficient, so we can save some memory.

N=numpy.arange(10**7, dtype=numpy.int32)

sys.getsizeof(N)
40000096

%timeit N.sum()
100 loops, best of 3: 8.35 ms per loop

But it turns out that adding 64bit ints is faster than 32bit ints on my machine, so this is only worth it if you are limited by memory/bandwidth.


回答 3

请注意,100000000等于10^8不等于10^7,我的结果如下所示:

100000000 == 10**8

# my test results on a Linux virtual machine:
#<L = list(range(100000000))> Time: 0:00:03.263585
#<A = array.array('l', range(100000000))> Time: 0:00:16.728709
#<L = list(range(10**8))> Time: 0:00:03.119379
#<A = array.array('l', range(10**8))> Time: 0:00:18.042187
#<A = array.array('l', L)> Time: 0:00:07.524478
#<sum(L)> Time: 0:00:01.640671
#<np.sum(L)> Time: 0:00:20.762153

Please note that 100000000 equals 10^8, not 10^7, and my results are as follows:

100000000 == 10**8

# my test results on a Linux virtual machine:
#<L = list(range(100000000))> Time: 0:00:03.263585
#<A = array.array('l', range(100000000))> Time: 0:00:16.728709
#<L = list(range(10**8))> Time: 0:00:03.119379
#<A = array.array('l', range(10**8))> Time: 0:00:18.042187
#<A = array.array('l', L)> Time: 0:00:07.524478
#<sum(L)> Time: 0:00:01.640671
#<np.sum(L)> Time: 0:00:20.762153

多处理-管道与队列

问题:多处理-管道与队列

Python的多处理程序包中的队列和管道之间的根本区别是什么?

在什么情况下应该选择其中一种?什么时候使用 Pipe() 比较有利?什么时候使用 Queue() 比较有利?

What are the fundamental differences between queues and pipes in Python’s multiprocessing package?

In what scenarios should one choose one over the other? When is it advantageous to use Pipe()? When is it advantageous to use Queue()?


回答 0

  • A Pipe()只能有两个端点。

  • 一个Queue()可以有多个生产者和消费者。

何时使用它们

如果您需要两个以上的交流点,请使用Queue()

如果您需要极致的性能,那么 Pipe() 会快得多,因为 Queue() 是建立在 Pipe() 之上的。

绩效基准

假设您要生成两个进程,并在它们之间尽可能快地发送消息。下面是分别使用 Pipe() 和 Queue() 做类似测试的“直线加速赛”计时结果。测试是在运行 Ubuntu 11.10 和 Python 2.7.2 的 ThinkPad T61 上进行的。

仅供参考,我附上了 JoinableQueue() 的结果作为额外内容;JoinableQueue() 会在调用 queue.task_done() 时为任务计数(它并不知道具体是哪个任务,只是统计队列中未完成的任务数),以便 queue.join() 知道工作已经全部完成。

每个答案底部的代码…

mpenning@mpenning-T61:~$ python multi_pipe.py 
Sending 10000 numbers to Pipe() took 0.0369849205017 seconds
Sending 100000 numbers to Pipe() took 0.328398942947 seconds
Sending 1000000 numbers to Pipe() took 3.17266988754 seconds
mpenning@mpenning-T61:~$ python multi_queue.py 
Sending 10000 numbers to Queue() took 0.105256080627 seconds
Sending 100000 numbers to Queue() took 0.980564117432 seconds
Sending 1000000 numbers to Queue() took 10.1611330509 seconds
mpnening@mpenning-T61:~$ python multi_joinablequeue.py 
Sending 10000 numbers to JoinableQueue() took 0.172781944275 seconds
Sending 100000 numbers to JoinableQueue() took 1.5714070797 seconds
Sending 1000000 numbers to JoinableQueue() took 15.8527247906 seconds
mpenning@mpenning-T61:~$

概括而言,Pipe() 比 Queue() 快大约三倍。除非您确实需要 JoinableQueue() 带来的那些好处,否则根本不用考虑它。

奖励材料2

除非您知道一些捷径,否则多进程会给信息流带来微妙的变化,使调试变得困难。例如,您的脚本在许多情况下通过字典索引取值都运行良好,但偶尔会在某些特定输入上失败。

通常,当整个python进程崩溃时,我们会获得有关失败的线索;但是,如果多处理功能崩溃,则不会在控制台上打印未经请求的崩溃回溯。很难找到未知的多处理崩溃,而又不知道导致进程崩溃的线索。

我发现追踪多进程崩溃信息最简单的方法,是把整个多进程函数包在 try/except 中,并使用 traceback.print_exc():

import traceback
def run(self, args):
    try:
        # Insert stuff to be multiprocessed here
        return args[0]['that']
    except:
        print "FATAL: reader({0}) exited while multiprocessing".format(args) 
        traceback.print_exc()

现在,当您发现崩溃时,您会看到类似以下内容的信息:

FATAL: reader([{'crash': 'this'}]) exited while multiprocessing
Traceback (most recent call last):
  File "foo.py", line 19, in __init__
    self.run(args)
  File "foo.py", line 46, in run
    KeyError: 'that'

源代码:


"""
multi_pipe.py
"""
from multiprocessing import Process, Pipe
import time

def reader_proc(pipe):
    ## Read from the pipe; this will be spawned as a separate Process
    p_output, p_input = pipe
    p_input.close()    # We are only reading
    while True:
        msg = p_output.recv()    # Read from the output pipe and do nothing
        if msg=='DONE':
            break

def writer(count, p_input):
    for ii in xrange(0, count):
        p_input.send(ii)             # Write 'count' numbers into the input pipe
    p_input.send('DONE')

if __name__=='__main__':
    for count in [10**4, 10**5, 10**6]:
        # Pipes are unidirectional with two endpoints:  p_input ------> p_output
        p_output, p_input = Pipe()  # writer() writes to p_input from _this_ process
        reader_p = Process(target=reader_proc, args=((p_output, p_input),))
        reader_p.daemon = True
        reader_p.start()     # Launch the reader process

        p_output.close()       # We no longer need this part of the Pipe()
        _start = time.time()
        writer(count, p_input) # Send a lot of stuff to reader_proc()
        p_input.close()
        reader_p.join()
        print("Sending {0} numbers to Pipe() took {1} seconds".format(count,
            (time.time() - _start)))

"""
multi_queue.py
"""

from multiprocessing import Process, Queue
import time
import sys

def reader_proc(queue):
    ## Read from the queue; this will be spawned as a separate Process
    while True:
        msg = queue.get()         # Read from the queue and do nothing
        if (msg == 'DONE'):
            break

def writer(count, queue):
    ## Write to the queue
    for ii in range(0, count):
        queue.put(ii)             # Write 'count' numbers into the queue
    queue.put('DONE')

if __name__=='__main__':
    pqueue = Queue() # writer() writes to pqueue from _this_ process
    for count in [10**4, 10**5, 10**6]:             
        ### reader_proc() reads from pqueue as a separate process
        reader_p = Process(target=reader_proc, args=((pqueue),))
        reader_p.daemon = True
        reader_p.start()        # Launch reader_proc() as a separate python process

        _start = time.time()
        writer(count, pqueue)    # Send a lot of stuff to reader()
        reader_p.join()         # Wait for the reader to finish
        print("Sending {0} numbers to Queue() took {1} seconds".format(count, 
            (time.time() - _start)))

"""
multi_joinablequeue.py
"""
from multiprocessing import Process, JoinableQueue
import time

def reader_proc(queue):
    ## Read from the queue; this will be spawned as a separate Process
    while True:
        msg = queue.get()         # Read from the queue and do nothing
        queue.task_done()

def writer(count, queue):
    for ii in xrange(0, count):
        queue.put(ii)             # Write 'count' numbers into the queue

if __name__=='__main__':
    for count in [10**4, 10**5, 10**6]:
        jqueue = JoinableQueue() # writer() writes to jqueue from _this_ process
        # reader_proc() reads from jqueue as a different process...
        reader_p = Process(target=reader_proc, args=((jqueue),))
        reader_p.daemon = True
        reader_p.start()     # Launch the reader process
        _start = time.time()
        writer(count, jqueue) # Send a lot of stuff to reader_proc() (in different process)
        jqueue.join()         # Wait for the reader to finish
        print("Sending {0} numbers to JoinableQueue() took {1} seconds".format(count, 
            (time.time() - _start)))
  • A Pipe() can only have two endpoints.

  • A Queue() can have multiple producers and consumers.

When to use them

If you need more than two points to communicate, use a Queue().

If you need absolute performance, a Pipe() is much faster because Queue() is built on top of Pipe().

Performance Benchmarking

Let’s assume you want to spawn two processes and send messages between them as quickly as possible. These are the timing results of a drag race between similar tests using Pipe() and Queue()… This is on a ThinkpadT61 running Ubuntu 11.10, and Python 2.7.2.

FYI, I threw in results for JoinableQueue() as a bonus; JoinableQueue() accounts for tasks when queue.task_done() is called (it doesn’t even know about the specific task, it just counts unfinished tasks in the queue), so that queue.join() knows the work is finished.

The code for each at bottom of this answer…

mpenning@mpenning-T61:~$ python multi_pipe.py 
Sending 10000 numbers to Pipe() took 0.0369849205017 seconds
Sending 100000 numbers to Pipe() took 0.328398942947 seconds
Sending 1000000 numbers to Pipe() took 3.17266988754 seconds
mpenning@mpenning-T61:~$ python multi_queue.py 
Sending 10000 numbers to Queue() took 0.105256080627 seconds
Sending 100000 numbers to Queue() took 0.980564117432 seconds
Sending 1000000 numbers to Queue() took 10.1611330509 seconds
mpnening@mpenning-T61:~$ python multi_joinablequeue.py 
Sending 10000 numbers to JoinableQueue() took 0.172781944275 seconds
Sending 100000 numbers to JoinableQueue() took 1.5714070797 seconds
Sending 1000000 numbers to JoinableQueue() took 15.8527247906 seconds
mpenning@mpenning-T61:~$

In summary Pipe() is about three times faster than a Queue(). Don’t even think about the JoinableQueue() unless you really must have the benefits.

BONUS MATERIAL 2

Multiprocessing introduces subtle changes in information flow that make debugging hard unless you know some shortcuts. For instance, you might have a script that works fine when indexing through a dictionary under many conditions, but infrequently fails with certain inputs.

Normally we get clues to the failure when the entire python process crashes; however, you don’t get unsolicited crash tracebacks printed to the console if the multiprocessing function crashes. Tracking down unknown multiprocessing crashes is hard without a clue to what crashed the process.

The simplest way I have found to track down multiprocessing crash information is to wrap the entire multiprocessing function in a try / except and use traceback.print_exc():

import traceback
def run(self, args):
    try:
        # Insert stuff to be multiprocessed here
        return args[0]['that']
    except:
        print "FATAL: reader({0}) exited while multiprocessing".format(args) 
        traceback.print_exc()

Now, when you find a crash you see something like:

FATAL: reader([{'crash': 'this'}]) exited while multiprocessing
Traceback (most recent call last):
  File "foo.py", line 19, in __init__
    self.run(args)
  File "foo.py", line 46, in run
    KeyError: 'that'

Source Code:


"""
multi_pipe.py
"""
from multiprocessing import Process, Pipe
import time

def reader_proc(pipe):
    ## Read from the pipe; this will be spawned as a separate Process
    p_output, p_input = pipe
    p_input.close()    # We are only reading
    while True:
        msg = p_output.recv()    # Read from the output pipe and do nothing
        if msg=='DONE':
            break

def writer(count, p_input):
    for ii in xrange(0, count):
        p_input.send(ii)             # Write 'count' numbers into the input pipe
    p_input.send('DONE')

if __name__=='__main__':
    for count in [10**4, 10**5, 10**6]:
        # Pipes are unidirectional with two endpoints:  p_input ------> p_output
        p_output, p_input = Pipe()  # writer() writes to p_input from _this_ process
        reader_p = Process(target=reader_proc, args=((p_output, p_input),))
        reader_p.daemon = True
        reader_p.start()     # Launch the reader process

        p_output.close()       # We no longer need this part of the Pipe()
        _start = time.time()
        writer(count, p_input) # Send a lot of stuff to reader_proc()
        p_input.close()
        reader_p.join()
        print("Sending {0} numbers to Pipe() took {1} seconds".format(count,
            (time.time() - _start)))

"""
multi_queue.py
"""

from multiprocessing import Process, Queue
import time
import sys

def reader_proc(queue):
    ## Read from the queue; this will be spawned as a separate Process
    while True:
        msg = queue.get()         # Read from the queue and do nothing
        if (msg == 'DONE'):
            break

def writer(count, queue):
    ## Write to the queue
    for ii in range(0, count):
        queue.put(ii)             # Write 'count' numbers into the queue
    queue.put('DONE')

if __name__=='__main__':
    pqueue = Queue() # writer() writes to pqueue from _this_ process
    for count in [10**4, 10**5, 10**6]:             
        ### reader_proc() reads from pqueue as a separate process
        reader_p = Process(target=reader_proc, args=((pqueue),))
        reader_p.daemon = True
        reader_p.start()        # Launch reader_proc() as a separate python process

        _start = time.time()
        writer(count, pqueue)    # Send a lot of stuff to reader()
        reader_p.join()         # Wait for the reader to finish
        print("Sending {0} numbers to Queue() took {1} seconds".format(count, 
            (time.time() - _start)))

"""
multi_joinablequeue.py
"""
from multiprocessing import Process, JoinableQueue
import time

def reader_proc(queue):
    ## Read from the queue; this will be spawned as a separate Process
    while True:
        msg = queue.get()         # Read from the queue and do nothing
        queue.task_done()

def writer(count, queue):
    for ii in xrange(0, count):
        queue.put(ii)             # Write 'count' numbers into the queue

if __name__=='__main__':
    for count in [10**4, 10**5, 10**6]:
        jqueue = JoinableQueue() # writer() writes to jqueue from _this_ process
        # reader_proc() reads from jqueue as a different process...
        reader_p = Process(target=reader_proc, args=((jqueue),))
        reader_p.daemon = True
        reader_p.start()     # Launch the reader process
        _start = time.time()
        writer(count, jqueue) # Send a lot of stuff to reader_proc() (in different process)
        jqueue.join()         # Wait for the reader to finish
        print("Sending {0} numbers to JoinableQueue() took {1} seconds".format(count, 
            (time.time() - _start)))

回答 1

Queue() 另一个值得注意的特性是进给(feeder)线程。文档的这一节指出:“当进程第一次向队列放入条目时,会启动一个进给线程,它负责把对象从缓冲区转移到管道中。”可以往 Queue() 中放入任意多(或 maxsize 个)条目,而任何 queue.put() 调用都不会阻塞。这让您可以把多个条目先存放在 Queue() 中,直到程序准备好处理它们。

另一方面,对于已经发送到一端但尚未被另一端接收的条目,Pipe() 的存储空间是有限的。这部分空间用完之后,connection.send() 调用就会阻塞,直到有空间写入整个条目为止。这会让执行写入的线程停下来,直到其他线程从管道中读取数据。Connection 对象允许您访问底层的文件描述符。在 *nix 系统上,您可以用 os.set_blocking() 函数防止 connection.send() 调用阻塞。但是,如果您尝试发送单个无法放进管道文件的条目,这会带来问题。较新版本的 Linux 允许增大该文件的大小,但允许的最大值因系统配置而异。因此,您永远不应依赖 Pipe() 来缓冲数据。对 connection.send 的调用可能会阻塞,直到数据在别处被从管道中读走为止。

总之,当您需要缓冲数据时,队列是比管道更好的选择。即使您只需要在两点之间进行交流。

One additional feature of Queue() that is worth noting is the feeder thread. This section notes “When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.” An infinite number of (or maxsize) items can be inserted into Queue() without any calls to queue.put() blocking. This allows you to store multiple items in a Queue(), until your program is ready to process them.

Pipe(), on the other hand, has a finite amount of storage for items that have been sent to one connection, but have not been received from the other connection. After this storage is used up, calls to connection.send() will block until there is space to write the entire item. This will stall the thread doing the writing until some other thread reads from the pipe. Connection objects give you access to the underlying file descriptor. On *nix systems, you can prevent connection.send() calls from blocking using the os.set_blocking() function. However, this will cause problems if you try to send a single item that does not fit in the pipe’s file. Recent versions of Linux allow you to increase the size of a file, but the maximum size allowed varies based on system configurations. You should therefore never rely on Pipe() to buffer data. Calls to connection.send could block until data gets read from the pipe somewhere else.

In conclusion, Queue is a better choice than pipe when you need to buffer data. Even when you only need to communicate between two points.
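A minimal sketch (mine, not from the answer) of the buffering behaviour described above: every put() returns immediately even though no consumer exists yet, because the feeder thread keeps moving items from the internal buffer into the pipe in the background.

from multiprocessing import Queue

q = Queue()                  # no maxsize, so put() never blocks
for i in range(10000):
    q.put(i)                 # returns immediately, regardless of pipe capacity

for i in range(10000):
    assert q.get() == i      # everything is still there, in order, when we read it back

print("10000 items buffered and recovered")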


对于 Python 3.x 的整数,乘以 2 比位移还快?

问题:对于 Python 3.x 的整数,乘以 2 比位移还快?

我正在查看sorted_containers的来源,很惊讶地看到这一行

self._load, self._twice, self._half = load, load * 2, load >> 1

这里 load 是整数。为什么一处使用位移,另一处却使用乘法?位移可能比整除 2 快,这看起来合理,但为什么不把乘法也换成位移呢?我对以下几种组合做了基准测试:

  1. (乘法,除法)
  2. (位移,位移)
  3. (乘法,位移)
  4. (位移,除法)

并发现#3始终比其他替代方法快:

# self._load, self._twice, self._half = load, load * 2, load >> 1

import random
import timeit
import pandas as pd

x = random.randint(10 ** 3, 10 ** 6)

def test_naive():
    a, b, c = x, 2 * x, x // 2

def test_shift():
    a, b, c = x, x << 1, x >> 1    

def test_mixed():
    a, b, c = x, x * 2, x >> 1    

def test_mixed_swapped():
    a, b, c = x, x << 1, x // 2

def observe(k):
    print(k)
    return {
        'naive': timeit.timeit(test_naive),
        'shift': timeit.timeit(test_shift),
        'mixed': timeit.timeit(test_mixed),
        'mixed_swapped': timeit.timeit(test_mixed_swapped),
    }

def get_observations():
    return pd.DataFrame([observe(k) for k in range(100)])

(基准测试结果图,此处从略)

问题:

我的测试有效吗?如果有效,为什么(乘法,位移)比(位移,位移)快?

我在Ubuntu 14.04上运行Python 3.5。

编辑

以上是问题的原始陈述。Dan Getz在回答中提供了出色的解释。

为了完整起见,下面是当 x 较大、乘法优化不再适用时的示例图。

(较大 x 的基准测试结果图,此处从略)

I was looking at the source of sorted_containers and was surprised to see this line:

self._load, self._twice, self._half = load, load * 2, load >> 1

Here load is an integer. Why use bit shift in one place, and multiplication in another? It seems reasonable that bit shifting may be faster than integral division by 2, but why not replace the multiplication by a shift as well? I benchmarked the following cases:

  1. (times, divide)
  2. (shift, shift)
  3. (times, shift)
  4. (shift, divide)

and found that #3 is consistently faster than other alternatives:

# self._load, self._twice, self._half = load, load * 2, load >> 1

import random
import timeit
import pandas as pd

x = random.randint(10 ** 3, 10 ** 6)

def test_naive():
    a, b, c = x, 2 * x, x // 2

def test_shift():
    a, b, c = x, x << 1, x >> 1    

def test_mixed():
    a, b, c = x, x * 2, x >> 1    

def test_mixed_swapped():
    a, b, c = x, x << 1, x // 2

def observe(k):
    print(k)
    return {
        'naive': timeit.timeit(test_naive),
        'shift': timeit.timeit(test_shift),
        'mixed': timeit.timeit(test_mixed),
        'mixed_swapped': timeit.timeit(test_mixed_swapped),
    }

def get_observations():
    return pd.DataFrame([observe(k) for k in range(100)])

(benchmark plots omitted)

The question:

Is my test valid? If so, why is (multiply, shift) faster than (shift, shift)?

I run Python 3.5 on Ubuntu 14.04.

Edit

Above is the original statement of the question. Dan Getz provides an excellent explanation in his answer.

For the sake of completeness, here are sample illustrations for larger x when multiplication optimizations do not apply.

(benchmark plots for larger x omitted)


回答 0

这似乎是因为在 CPython 3.5 中,较小整数的乘法得到了优化,而较小的左移没有。正的左移在计算过程中总是会创建一个更大的整数对象来存放结果;而对于您在测试中用到的那类乘法,一项特殊优化避免了这一点,会直接创建大小合适的整数对象。这可以在 Python 整数实现的源代码中看到。

由于 Python 中的整数是任意精度的,它们被存储为由整数“数字”组成的数组,每个“数字”所占的位数有上限。在 pyport.h 中,这个位数上限在 64 位平台上定义为 30 位,否则为 15 位。(为简化说明,下文一律按 30 位来讲。但请注意,如果您使用的是按 32 位编译的 Python,基准测试的结果会取决于 x 是否小于 32,768。)

当操作的输入和输出保持在此30位限制内时,可以以优化的方式而不是一般的方式来处理操作。整数乘法实现的开始如下:

static PyObject *
long_mul(PyLongObject *a, PyLongObject *b)
{
    PyLongObject *z;

    CHECK_BINOP(a, b);

    /* fast path for single-digit multiplication */
    if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
        stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b);
#ifdef HAVE_LONG_LONG
        return PyLong_FromLongLong((PY_LONG_LONG)v);
#else
        /* if we don't have long long then we're almost certainly
           using 15-bit digits, so v will fit in a long.  In the
           unlikely event that we're using 30-bit digits on a platform
           without long long, a large v will just cause us to fall
           through to the general multiplication code below. */
        if (v >= LONG_MIN && v <= LONG_MAX)
            return PyLong_FromLong((long)v);
#endif
    }

因此,当相乘的两个整数都能放进一个 30 位的“数字”时,CPython 解释器会直接做乘法,而不是把整数当作数组来处理。(在正整数对象上调用 MEDIUM_VALUE() 只是取出它的第一个 30 位“数字”。)如果结果能放进单个 30 位“数字”,PyLong_FromLongLong() 会用相对很少的操作注意到这一点,并创建一个单“数字”的整数对象来存放它。

相反,左移没有做这种优化,每次左移都会把被移位的整数当作数组来处理。特别是,如果您查看 long_lshift() 的源代码,就会发现在左移量很小但为正的情况下,总是会先创建一个 2 位“数字”的整数对象,哪怕随后会把它的长度截断为 1:(/*** ***/ 中是我的注释)

static PyObject *
long_lshift(PyObject *v, PyObject *w)
{
    /*** ... ***/

    wordshift = shiftby / PyLong_SHIFT;   /*** zero for small w ***/
    remshift  = shiftby - wordshift * PyLong_SHIFT;   /*** w for small w ***/

    oldsize = Py_ABS(Py_SIZE(a));   /*** 1 for small v > 0 ***/
    newsize = oldsize + wordshift;
    if (remshift)
        ++newsize;   /*** here newsize becomes at least 2 for w > 0, v > 0 ***/
    z = _PyLong_New(newsize);

    /*** ... ***/
}

整数除法

您没有问为什么整数向下取整除法比右移性能更差,因为这符合您(和我)的预期。不过,把一个小正数除以另一个小正数,同样没有像小整数乘法那样被优化。每个 // 都会调用 long_divrem() 函数同时计算商和余数。对于小除数,这个余数是通过一次乘法算出来的,并存放在新分配的整数对象中,而在这种情况下,该对象随即就被丢弃了。

This seems to be because multiplication of small numbers is optimized in CPython 3.5, in a way that left shifts by small numbers are not. Positive left shifts always create a larger integer object to store the result, as part of the calculation, while for multiplications of the sort you used in your test, a special optimization avoids this and creates an integer object of the correct size. This can be seen in the source code of Python’s integer implementation.

Because integers in Python are arbitrary-precision, they are stored as arrays of integer “digits”, with a limit on the number of bits per integer digit. So in the general case, operations involving integers are not single operations, but instead need to handle the case of multiple “digits”. In pyport.h, this bit limit is defined as 30 bits on 64-bit platform, or 15 bits otherwise. (I’ll just call this 30 from here on to keep the explanation simple. But note that if you were using Python compiled for 32-bit, your benchmark’s result would depend on if x were less than 32,768 or not.)

When an operation’s inputs and outputs stay within this 30-bit limit, the operation can be handled in an optimized way instead of the general way. The beginning of the integer multiplication implementation is as follows:

static PyObject *
long_mul(PyLongObject *a, PyLongObject *b)
{
    PyLongObject *z;

    CHECK_BINOP(a, b);

    /* fast path for single-digit multiplication */
    if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
        stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b);
#ifdef HAVE_LONG_LONG
        return PyLong_FromLongLong((PY_LONG_LONG)v);
#else
        /* if we don't have long long then we're almost certainly
           using 15-bit digits, so v will fit in a long.  In the
           unlikely event that we're using 30-bit digits on a platform
           without long long, a large v will just cause us to fall
           through to the general multiplication code below. */
        if (v >= LONG_MIN && v <= LONG_MAX)
            return PyLong_FromLong((long)v);
#endif
    }

So when multiplying two integers where each fits in a 30-bit digit, this is done as a direct multiplication by the CPython interpreter, instead of working with the integers as arrays. (MEDIUM_VALUE() called on a positive integer object simply gets its first 30-bit digit.) If the result fits in a single 30-bit digit, PyLong_FromLongLong() will notice this in a relatively small number of operations, and create a single-digit integer object to store it.

In contrast, left shifts are not optimized this way, and every left shift deals with the integer being shifted as an array. In particular, if you look at the source code for long_lshift(), in the case of a small but positive left shift, a 2-digit integer object is always created, if only to have its length truncated to 1 later: (my comments in /*** ***/)

static PyObject *
long_lshift(PyObject *v, PyObject *w)
{
    /*** ... ***/

    wordshift = shiftby / PyLong_SHIFT;   /*** zero for small w ***/
    remshift  = shiftby - wordshift * PyLong_SHIFT;   /*** w for small w ***/

    oldsize = Py_ABS(Py_SIZE(a));   /*** 1 for small v > 0 ***/
    newsize = oldsize + wordshift;
    if (remshift)
        ++newsize;   /*** here newsize becomes at least 2 for w > 0, v > 0 ***/
    z = _PyLong_New(newsize);

    /*** ... ***/
}

Integer division

You didn’t ask about the worse performance of integer floor division compared to right shifts, because that fit your (and my) expectations. But dividing a small positive number by another small positive number is not as optimized as small multiplications, either. Every // computes both the quotient and the remainder using the function long_divrem(). This remainder is computed for a small divisor with a multiplication, and is stored in a newly-allocated integer object, which in this situation is immediately discarded.
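A quick way to see where the single-digit fast path stops applying (my sketch, not from the answer): time x * 2 against x << 1 once for an x that fits in one 30-bit digit and once for a multi-digit x. On CPython builds of this era the small case typically favours the multiplication, while the big case should not.

import timeit

cases = {"small (one digit)": 12345, "big (many digits)": 10 ** 30}

for name, x in cases.items():
    mul = timeit.timeit("x * 2", globals={"x": x}, number=1000000)
    shl = timeit.timeit("x << 1", globals={"x": x}, number=1000000)
    print(name, "multiply:", round(mul, 3), "shift:", round(shl, 3))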


在Python中查找数字的所有因子的最有效方法是什么?

问题:在Python中查找数字的所有因子的最有效方法是什么?

有人可以向我解释一种在Python(2.7)中找到一个数字的所有因子的有效方法吗?

我可以自己写一个算法来做这件事,但我觉得写得很差,而且对于很大的数,要花太长时间才能得出结果。

Can someone explain to me an efficient way of finding all the factors of a number in Python (2.7)?

I can create an algorithm to do this, but I think it is poorly coded and takes too long to produce a result for a large number.


回答 0

from functools import reduce

def factors(n):    
    return set(reduce(list.__add__, 
                ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0)))

这会非常快速地返回 n 的所有因子。

为什么以平方根为上限?

sqrt(x) * sqrt(x) = x。因此,如果两个因子相同,它们就都是平方根。如果让一个因子变大,就必须让另一个因子变小。这意味着两者之一总是小于或等于 sqrt(x),所以您只需要搜索到那个点,就能找到两个配对因子中的一个,然后用 x / fac1 就能得到 fac2。

reduce(list.__add__, ...) 把一个个形如 [fac1, fac2] 的小列表拼接成一个长列表。

[i, n/i] for i in range(1, int(sqrt(n)) + 1) if n % i == 0 会在 n 除以较小因子 i 的余数为零时返回这一对因子(不需要再检查较大的那个因子,用 n 除以较小的那个就能得到它)。

外层的 set(...) 用来去掉重复项,这只在 n 是完全平方数时才会出现。对于 n = 4,它会返回两次 2,set 会去掉其中一个。

from functools import reduce

def factors(n):    
    return set(reduce(list.__add__, 
                ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0)))

This will return all of the factors, very quickly, of a number n.

Why square root as the upper limit?

sqrt(x) * sqrt(x) = x. So if the two factors are the same, they’re both the square root. If you make one factor bigger, you have to make the other factor smaller. This means that one of the two will always be less than or equal to sqrt(x), so you only have to search up to that point to find one of the two matching factors. You can then use x / fac1 to get fac2.

The reduce(list.__add__, ...) is taking the little lists of [fac1, fac2] and joining them together in one long list.

The [i, n/i] for i in range(1, int(sqrt(n)) + 1) if n % i == 0 returns a pair of factors if the remainder when you divide n by the smaller one is zero (it doesn’t need to check the larger one too; it just gets that by dividing n by the smaller one.)

The set(...) on the outside is getting rid of duplicates, which only happens for perfect squares. For n = 4, this will return 2 twice, so set gets rid of one of them.
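A quick sanity check (my addition) that restates agf's definition and runs it on values that are easy to verify by hand:

from functools import reduce

def factors(n):
    return set(reduce(list.__add__,
                ([i, n // i] for i in range(1, int(n**0.5) + 1) if n % i == 0)))

print(sorted(factors(36)))   # [1, 2, 3, 4, 6, 9, 12, 18, 36]
print(sorted(factors(16)))   # [1, 2, 4, 8, 16]  (the duplicate 4 from 4*4 collapses in the set)
print(sorted(factors(97)))   # [1, 97] -- 97 is prime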


回答 1

@agf 给出的解决方案很棒,但对于任意奇数,通过检查奇偶性还能把运行时间再缩短约 50%。由于奇数的因子本身一定也是奇数,所以处理奇数时不必去检查偶数。

我自己刚开始做欧拉计划(Project Euler)的题目。在某些题目中,除数检查要在两层嵌套的 for 循环里调用,因此这个函数的性能至关重要。

把这一点与 agf 的出色方案结合起来,我最终得到了下面这个函数:

from math import sqrt
def factors(n):
        step = 2 if n%2 else 1
        return set(reduce(list.__add__,
                    ([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0)))

但是,对于较小的数字(约小于 100),此改动带来的额外开销可能反而让这个函数花更长时间。

我做了一些测试来检查速度,下面是所用的代码。为了生成不同的图,我相应地修改了 X = range(1,100,1)。

import timeit
from math import sqrt
from matplotlib.pyplot import plot, legend, show

def factors_1(n):
    step = 2 if n%2 else 1
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0)))

def factors_2(n):
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(sqrt(n)) + 1) if n % i == 0)))

X = range(1,100000,1000)
Y = []
for i in X:
    f_1 = timeit.timeit('factors_1({})'.format(i), setup='from __main__ import factors_1', number=10000)
    f_2 = timeit.timeit('factors_2({})'.format(i), setup='from __main__ import factors_2', number=10000)
    Y.append(f_1/f_2)
plot(X,Y, label='Running time with/without parity check')
legend()
show()

(图:X = range(1,100,1))

这里没有显着差异,但是数量更大时,优势显而易见:

(图:X = range(1,100000,1000),仅奇数)

(图:X = range(2,100000,100),仅偶数)

(图:X = range(1,100000,1001),奇偶交替)

The solution presented by @agf is great, but one can achieve ~50% faster run time for an arbitrary odd number by checking for parity. As the factors of an odd number always are odd themselves, it is not necessary to check these when dealing with odd numbers.

I’ve just started solving Project Euler puzzles myself. In some problems, a divisor check is called inside two nested for loops, and the performance of this function is thus essential.

Combining this fact with agf’s excellent solution, I’ve ended up with this function:

from math import sqrt
def factors(n):
        step = 2 if n%2 else 1
        return set(reduce(list.__add__,
                    ([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0)))

However, on small numbers (~ < 100), the extra overhead from this alteration may cause the function to take longer.

I ran some tests in order to check the speed. Below is the code used. To produce the different plots, I altered the X = range(1,100,1) accordingly.

import timeit
from math import sqrt
from matplotlib.pyplot import plot, legend, show

def factors_1(n):
    step = 2 if n%2 else 1
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0)))

def factors_2(n):
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(sqrt(n)) + 1) if n % i == 0)))

X = range(1,100000,1000)
Y = []
for i in X:
    f_1 = timeit.timeit('factors_1({})'.format(i), setup='from __main__ import factors_1', number=10000)
    f_2 = timeit.timeit('factors_2({})'.format(i), setup='from __main__ import factors_2', number=10000)
    Y.append(f_1/f_2)
plot(X,Y, label='Running time with/without parity check')
legend()
show()

X = range(1,100,1)

No significant difference here, but with bigger numbers, the advantage is obvious:

X = range(1,100000,1000) (only odd numbers)

X = range(2,100000,100) (only even numbers)

X = range(1,100000,1001) (alternating parity)


回答 2

agf 的答案确实很酷。我想看看能不能改写它,避免使用 reduce()。下面是我想出来的:

import itertools
flatten_iter = itertools.chain.from_iterable
def factors(n):
    return set(flatten_iter((i, n//i) 
                for i in range(1, int(n**0.5)+1) if n % i == 0))

我还尝试了一个使用比较取巧的生成器表达式的版本:

def factors(n):
    return set(x for tup in ([i, n//i] 
                for i in range(1, int(n**0.5)+1) if n % i == 0) for x in tup)

我用下面的计算来计时:

start = 10000000
end = start + 40000
for n in range(start, end):
    factors(n)

我先运行一次让 Python 完成编译,然后在 time(1) 命令下运行三次,取最好的成绩。

  • reduce 版本:11.58 秒
  • itertools 版本:11.49 秒
  • 取巧的生成器版本:11.12 秒

请注意,itertools 版本构建的是元组再传给 flatten_iter()。如果把代码改成构建列表,速度会稍微变慢:

  • itertools(列表)版本:11.62 秒

我相信取巧的生成器版本是 Python 中最快的写法,但它并没有比 reduce 版本快多少,按我的测量大约只快 4%。

agf’s answer is really quite cool. I wanted to see if I could rewrite it to avoid using reduce(). This is what I came up with:

import itertools
flatten_iter = itertools.chain.from_iterable
def factors(n):
    return set(flatten_iter((i, n//i) 
                for i in range(1, int(n**0.5)+1) if n % i == 0))

I also tried a version that uses tricky generator functions:

def factors(n):
    return set(x for tup in ([i, n//i] 
                for i in range(1, int(n**0.5)+1) if n % i == 0) for x in tup)

I timed it by computing:

start = 10000000
end = start + 40000
for n in range(start, end):
    factors(n)

I ran it once to let Python compile it, then ran it under the time(1) command three times and kept the best time.

  • reduce version: 11.58 seconds
  • itertools version: 11.49 seconds
  • tricky version: 11.12 seconds

Note that the itertools version is building a tuple and passing it to flatten_iter(). If I change the code to build a list instead, it slows down slightly:

  • itertools (list) version: 11.62 seconds

I believe that the tricky generator functions version is the fastest possible in Python. But it’s not really much faster than the reduce version, roughly 4% faster based on my measurements.
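In case the flattening step itself is unclear, here is a tiny illustration of what itertools.chain.from_iterable does with the (i, n//i) pairs (a sketch, separate from the timing harness above):

from itertools import chain

pairs = [(1, 36), (2, 18), (3, 12), (4, 9), (6, 6)]  # the (i, n//i) pairs for n = 36
flat = chain.from_iterable(pairs)                    # lazily yields 1, 36, 2, 18, 3, 12, 4, 9, 6, 6
print(sorted(set(flat)))                             # [1, 2, 3, 4, 6, 9, 12, 18, 36]; the duplicate 6 collapses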


回答 3

agf 回答的另一种写法:

def factors(n):    
    result = set()
    for i in range(1, int(n ** 0.5) + 1):
        div, mod = divmod(n, i)
        if mod == 0:
            result |= {i, div}
    return result

An alternative approach to agf’s answer:

def factors(n):    
    result = set()
    for i in range(1, int(n ** 0.5) + 1):
        div, mod = divmod(n, i)
        if mod == 0:
            result |= {i, div}
    return result

回答 4

这是 @agf 方案的一个替代写法,用更 Pythonic 的风格实现同样的算法:

def factors(n):
    return set(
        factor for i in range(1, int(n**0.5) + 1) if n % i == 0
        for factor in (i, n//i)
    )

这个方案无需任何导入,在 Python 2 和 Python 3 中都能用,而且可读性更好。我没有测试这种写法的性能,但渐近复杂度应该是一样的;如果性能真的非常关键,那么这两种方案都不是最优的。

Here’s an alternative to @agf’s solution which implements the same algorithm in a more pythonic style:

def factors(n):
    return set(
        factor for i in range(1, int(n**0.5) + 1) if n % i == 0
        for factor in (i, n//i)
    )

This solution works in both Python 2 and Python 3 with no imports and is much more readable. I haven’t tested the performance of this approach, but asymptotically it should be the same, and if performance is a serious concern, neither solution is optimal.


回答 5

SymPy 中有一个工业级强度的算法,叫做 factorint:

>>> from sympy import factorint
>>> factorint(2**70 + 3**80) 
{5: 2,
 41: 1,
 101: 1,
 181: 1,
 821: 1,
 1597: 1,
 5393: 1,
 27188665321L: 1,
 41030818561L: 1}

这花了不到一分钟。它会在多种方法之间切换,请参阅上面链接的文档。

有了全部质因数之后,其余所有因子都可以很容易地构造出来。


请注意,即使让被采纳的答案运行足够长的时间(也就是无限长)来分解上面这个数,对某些大数它仍然会失败,比如下面这个例子。原因在于不精确的 int(n**0.5)。例如,当 n = 10000000000000079**2 时,我们有

>>> int(n**0.5)
10000000000000078L

由于 10000000000000079 是质数,被采纳答案的算法永远找不到这个因子。注意,这不只是差一的问题;数字更大时,偏差会更大。因此这类算法最好避免使用浮点数。

There is an industry-strength algorithm in SymPy called factorint:

>>> from sympy import factorint
>>> factorint(2**70 + 3**80) 
{5: 2,
 41: 1,
 101: 1,
 181: 1,
 821: 1,
 1597: 1,
 5393: 1,
 27188665321L: 1,
 41030818561L: 1}

This took under a minute. It switches among a cocktail of methods. See the documentation linked above.

Given all the prime factors, all other factors can be built easily.


Note that even if the accepted answer was allowed to run for long enough (i.e. an eternity) to factor the above number, for some large numbers it will fail, such the following example. This is due to the sloppy int(n**0.5). For example, when n = 10000000000000079**2, we have

>>> int(n**0.5)
10000000000000078L

Since 10000000000000079 is a prime, the accepted answer’s algorithm will never find this factor. Note that it’s not just an off-by-one; for larger numbers it will be off by more. For this reason it’s better to avoid floating-point numbers in algorithms of this sort.
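To make "all other factors can be built easily" concrete, here is a minimal sketch (assuming sympy is installed) that reconstructs every divisor from factorint's prime/exponent map; sympy also ships a ready-made sympy.divisors(n). Note too that on Python 3.8+, math.isqrt(n) gives an exact integer square root and avoids the int(n**0.5) pitfall described above.

from itertools import product
from sympy import factorint

def all_divisors(n):
    """Build every divisor of n as a product of prime powers p**k, 0 <= k <= e."""
    pf = factorint(n)  # e.g. 360 -> {2: 3, 3: 2, 5: 1}
    divisors = []
    for exponents in product(*(range(e + 1) for e in pf.values())):
        d = 1
        for p, k in zip(pf, exponents):  # dict order matches pf.values() on Python 3.7+
            d *= p ** k
        divisors.append(d)
    return sorted(divisors)

print(all_divisors(360))
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 18, 20, 24, 30, 36, 40, 45, 60, 72, 90, 120, 180, 360]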


回答 6

对于 n 最大到 10**16(甚至再大一些)的情况,下面是一个快速的纯 Python 3.6 解决方案:

from itertools import compress

def primes(n):
    """ Returns  a list of primes < n for n > 2 """
    sieve = bytearray([True]) * (n//2)
    for i in range(3,int(n**0.5)+1,2):
        if sieve[i//2]:
            sieve[i*i//2::i] = bytearray((n-i*i-1)//(2*i)+1)
    return [2,*compress(range(3,n,2), sieve[1:])]

def factorization(n):
    """ Returns a list of the prime factorization of n """
    pf = []
    for p in primeslist:
      if p*p > n : break
      count = 0
      while not n % p:
        n //= p
        count += 1
      if count > 0: pf.append((p, count))
    if n > 1: pf.append((n, 1))
    return pf

def divisors(n):
    """ Returns an unsorted list of the divisors of n """
    divs = [1]
    for p, e in factorization(n):
        divs += [x*p**k for k in range(1,e+1) for x in divs]
    return divs

n = 600851475143
primeslist = primes(int(n**0.5)+1) 
print(divisors(n))

For n up to 10**16 (maybe even a bit more), here is a fast pure Python 3.6 solution,

from itertools import compress

def primes(n):
    """ Returns  a list of primes < n for n > 2 """
    sieve = bytearray([True]) * (n//2)
    for i in range(3,int(n**0.5)+1,2):
        if sieve[i//2]:
            sieve[i*i//2::i] = bytearray((n-i*i-1)//(2*i)+1)
    return [2,*compress(range(3,n,2), sieve[1:])]

def factorization(n):
    """ Returns a list of the prime factorization of n """
    pf = []
    for p in primeslist:
      if p*p > n : break
      count = 0
      while not n % p:
        n //= p
        count += 1
      if count > 0: pf.append((p, count))
    if n > 1: pf.append((n, 1))
    return pf

def divisors(n):
    """ Returns an unsorted list of the divisors of n """
    divs = [1]
    for p, e in factorization(n):
        divs += [x*p**k for k in range(1,e+1) for x in divs]
    return divs

n = 600851475143
primeslist = primes(int(n**0.5)+1) 
print(divisors(n))

回答 7

对 agf 和 eryksun 方案的进一步改进。下面的代码返回所有因子的有序列表,且不改变运行时间的渐近复杂度:

    def factors(n):    
        l1, l2 = [], []
        for i in range(1, int(n ** 0.5) + 1):
            q,r = n//i, n%i     # Alter: divmod() fn can be used.
            if r == 0:
                l1.append(i) 
                l2.append(q)    # q's obtained are decreasing.
        if l1[-1] == l2[-1]:    # To avoid duplication of the possible factor sqrt(n)
            l1.pop()
        l2.reverse()
        return l1 + l2

思路:不要用 list.sort() 去得到有序列表,那会带来 O(n log n) 的开销;在 l2 上调用 list.reverse() 要快得多,只需 O(n)(这是由 Python 的实现方式决定的)。l2.reverse() 之后,把 l2 接到 l1 后面,就得到了排好序的因子列表。

注意,l1 中的 i 是递增的,l2 中的 q 是递减的,这正是上面这个思路可行的原因。

Further improvement to agf & eryksun’s solution. The following piece of code returns a sorted list of all the factors without changing run time asymptotic complexity:

    def factors(n):    
        l1, l2 = [], []
        for i in range(1, int(n ** 0.5) + 1):
            q,r = n//i, n%i     # Alter: divmod() fn can be used.
            if r == 0:
                l1.append(i) 
                l2.append(q)    # q's obtained are decreasing.
        if l1[-1] == l2[-1]:    # To avoid duplication of the possible factor sqrt(n)
            l1.pop()
        l2.reverse()
        return l1 + l2

Idea: Instead of using the list.sort() function to get a sorted list, which costs O(n log n), it is much faster to call list.reverse() on l2, which is O(n). (That’s how Python implements it.) After l2.reverse(), l2 can be appended to l1 to get the sorted list of factors.

Notice that l1 contains the i’s, which are increasing, and l2 contains the q’s, which are decreasing. That’s the reason behind the idea above.
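A quick usage sketch of the function above, showing the already-sorted output:

print(factors(36))   # [1, 2, 3, 4, 6, 9, 12, 18, 36]
print(factors(10))   # [1, 2, 5, 10]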


回答 8

我用 timeit 试了这里大多数精彩的答案,拿它们和我这个简单函数比较效率,却发现我的版本一直胜过这里列出的那些。我想分享出来,看看大家怎么看。

def factors(n):
    results = set()
    for i in xrange(1, int(math.sqrt(n)) + 1):
        if n % i == 0:
            results.add(i)
            results.add(int(n/i))
    return results

按现在的写法,测试时需要先 import math,不过把 math.sqrt(n) 换成 n**.5 也一样可行。我没有花时间去检查重复项,因为集合里本来就不会有重复。

I’ve tried most of these wonderful answers with timeit to compare their efficiency versus my simple function and yet I constantly see mine outperform those listed here. I figured I’d share it and see what you all think.

def factors(n):
    results = set()
    for i in xrange(1, int(math.sqrt(n)) + 1):
        if n % i == 0:
            results.add(i)
            results.add(int(n/i))
    return results

As it’s written you’ll have to import math to test, but replacing math.sqrt(n) with n**.5 should work just as well. I don’t bother wasting time checking for duplicates because duplicates can’t exist in a set regardless.
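For anyone on Python 3, a minimal port of the same idea is sketched below (my adaptation, not part of the original answer): range replaces xrange, and // keeps the cofactor an int.

import math

def factors(n):
    results = set()
    for i in range(1, int(math.sqrt(n)) + 1):  # xrange in the Python 2 original
        if n % i == 0:
            results.add(i)
            results.add(n // i)  # floor division so the cofactor stays an int
    return results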


回答 9

这是另一个不用 reduce 的替代方案,处理大数时表现不错。它用 sum 来把列表拍平。

def factors(n):
    return set(sum([[i, n//i] for i in xrange(1, int(n**0.5)+1) if not n%i], []))

Here is another alternate without reduce that performs well with large numbers. It uses sum to flatten the list.

def factors(n):
    return set(sum([[i, n//i] for i in xrange(1, int(n**0.5)+1) if not n%i], []))

回答 10

对于像 99 这种“不寻常”的数字(99 = 3 * 3 * 11,而 floor(sqrt(99)) + 1 == 10),要确保把大于 sqrt(number_to_factor) 的那个因子也收进来。

import math

def factor(x):
  if x == 0 or x == 1:
    return None
  res = []
  for i in range(2,int(math.floor(math.sqrt(x)+1))):
    while x % i == 0:
      x /= i
      res.append(i)
  if x != 1: # Unusual numbers
    res.append(x)
  return res

Be sure to grab the number larger than sqrt(number_to_factor) for unusual numbers like 99 which has 3*3*11 and floor sqrt(99)+1 == 10.

import math

def factor(x):
  if x == 0 or x == 1:
    return None
  res = []
  for i in range(2,int(math.floor(math.sqrt(x)+1))):
    while x % i == 0:
      x /= i
      res.append(i)
  if x != 1: # Unusual numbers
    res.append(x)
  return res

回答 11

查找一个数的因子的最简单方法:

def factors(x):
    return [i for i in range(1,x+1) if x%i==0]

The simplest way of finding factors of a number:

def factors(x):
    return [i for i in range(1,x+1) if x%i==0]

回答 12

如果想借助质数表来大幅提速,下面是一个示例。这类质数表在网上很容易找到。我在代码里加了注释。

# http://primes.utm.edu/lists/small/10000.txt
# First 10000 primes

_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 
        31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 
        73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 
        127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 
        179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 
        233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 
        283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 
        353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 
        419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 
        467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 
        547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 
        607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 
        661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 
        739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 
        811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 
        877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 
        947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, 
# Missing a lot of primes for the purpose of the example
)


from bisect import bisect_left as _bisect_left
from math import sqrt as _sqrt


def get_factors(n):
    assert isinstance(n, int), "n must be an integer."
    assert n > 0, "n must be greater than zero."
    limit = pow(_PRIMES[-1], 2)
    assert n <= limit, "n is greater than the limit of {0}".format(limit)
    result = set((1, n))
    root = int(_sqrt(n))
    primes = [t for t in get_primes_smaller_than(root + 1) if not n % t]
    result.update(primes)  # Add all the prime factors less than or equal to the square root
    for t in primes:
        result.update(get_factors(n // t))  # Add the factors of the cofactor by the same process (// keeps n an int)
    return sorted(result)


def get_primes_smaller_than(n):
    return _PRIMES[:_bisect_left(_PRIMES, n)]

Here is an example if you want to use the primes number to go a lot faster. These lists are easy to find on the internet. I added comments in the code.

# http://primes.utm.edu/lists/small/10000.txt
# First 10000 primes

_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 
        31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 
        73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 
        127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 
        179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 
        233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 
        283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 
        353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 
        419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 
        467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 
        547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 
        607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 
        661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 
        739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 
        811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 
        877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 
        947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, 
# Missing a lot of primes for the purpose of the example
)


from bisect import bisect_left as _bisect_left
from math import sqrt as _sqrt


def get_factors(n):
    assert isinstance(n, int), "n must be an integer."
    assert n > 0, "n must be greater than zero."
    limit = pow(_PRIMES[-1], 2)
    assert n <= limit, "n is greater than the limit of {0}".format(limit)
    result = set((1, n))
    root = int(_sqrt(n))
    primes = [t for t in get_primes_smaller_than(root + 1) if not n % t]
    result.update(primes)  # Add all the prime factors less than or equal to the square root
    for t in primes:
        result.update(get_factors(n // t))  # Add the factors of the cofactor by the same process (// keeps n an int)
    return sorted(result)


def get_primes_smaller_than(n):
    return _PRIMES[:_bisect_left(_PRIMES, n)]

回答 13

这个算法可能比这里已有的算法更高效(尤其当 n 含有较小的质因数时)。诀窍在于:每找到一个质因数,就相应下调试除所需的上限:

from math import sqrt

def factors(n):
    '''
    return prime factors and multiplicity of n
    n = p0^e0 * p1^e1 * ... * pk^ek encoded as
    res = [(p0, e0), (p1, e1), ..., (pk, ek)]
    '''

    res = []

    # get rid of all the factors of 2 using bit shifts
    mult = 0
    while not n & 1:
        mult += 1
        n >>= 1
    if mult != 0:
        res.append((2, mult))

    limit = round(sqrt(n))
    test_prime = 3
    while test_prime <= limit:
        mult = 0
        while n % test_prime == 0:
            mult += 1
            n //= test_prime
        if mult != 0:
            res.append((test_prime, mult))
            if n == 1:              # only useful if ek >= 3 (ek: multiplicity
                break               # of the last prime) 
            limit = round(sqrt(n))  # adjust the limit
        test_prime += 2             # will often not be prime...
    if n != 1:
        res.append((n, 1))
    return res

当然,这仍然只是试除法,没有更高级的技巧,因此效率依然非常有限(尤其是对于没有小因子的大数)。

这是 Python 3 代码;整除运算 // 应该是你适配 Python 2 时唯一需要注意的地方(加上 from __future__ import division 即可)。

A potentially more efficient algorithm than the ones presented here already (especially if there are small prime factors in n). The trick here is to adjust the limit up to which trial division is needed every time prime factors are found:

from math import sqrt

def factors(n):
    '''
    return prime factors and multiplicity of n
    n = p0^e0 * p1^e1 * ... * pk^ek encoded as
    res = [(p0, e0), (p1, e1), ..., (pk, ek)]
    '''

    res = []

    # get rid of all the factors of 2 using bit shifts
    mult = 0
    while not n & 1:
        mult += 1
        n >>= 1
    if mult != 0:
        res.append((2, mult))

    limit = round(sqrt(n))
    test_prime = 3
    while test_prime <= limit:
        mult = 0
        while n % test_prime == 0:
            mult += 1
            n //= test_prime
        if mult != 0:
            res.append((test_prime, mult))
            if n == 1:              # only useful if ek >= 3 (ek: multiplicity
                break               # of the last prime) 
            limit = round(sqrt(n))  # adjust the limit
        test_prime += 2             # will often not be prime...
    if n != 1:
        res.append((n, 1))
    return res

This is of course still trial division and nothing fancier, and therefore still very limited in its efficiency (especially for big numbers without small divisors).

This is Python 3; the division // should be the only thing you need to adapt for Python 2 (add from __future__ import division).
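A short usage sketch of the function above (assuming sqrt has been imported from math, as added above):

print(factors(360))        # [(2, 3), (3, 2), (5, 1)], i.e. 360 = 2**3 * 3**2 * 5
print(factors(2**5 * 97))  # [(2, 5), (97, 1)]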


回答 14

使用set(...)会使代码稍微慢一些,并且仅在检查平方根时才真正需要。这是我的版本:

def factors(num):
    if (num == 1 or num == 0):
        return []
    f = [1]
    sq = int(math.sqrt(num))
    for i in range(2, sq):
        if num % i == 0:
            f.append(i)
            f.append(num/i)
    if sq > 1 and num % sq == 0:
        f.append(sq)
        if sq*sq != num:
            f.append(num/sq)
    return f

对于 12 这类数字,if sq*sq != num: 这个条件是必需的:它们的平方根不是整数,但平方根向下取整后恰好是一个因子。

请注意,这个版本不会把数字本身也返回,不过需要的话很容易补上。输出也没有排序。

我做了计时:对 1-200 的所有数字各运行 10000 次,对 1-5000 的所有数字各运行 100 次。它胜过了我测试的所有其他版本,包括 dansalmo、Jason Schorn、oxrock、agf、steveha 和 eryksun 的方案,其中 oxrock 的版本最接近。

Using set(...) makes the code slightly slower, and is only really necessary for when you check the square root. Here’s my version:

def factors(num):
    if (num == 1 or num == 0):
        return []
    f = [1]
    sq = int(math.sqrt(num))
    for i in range(2, sq):
        if num % i == 0:
            f.append(i)
            f.append(num/i)
    if sq > 1 and num % sq == 0:
        f.append(sq)
        if sq*sq != num:
            f.append(num/sq)
    return f

The if sq*sq != num: condition is necessary for numbers like 12, where the square root is not an integer, but the floor of the square root is a factor.

Note that this version doesn’t return the number itself, but that is an easy fix if you want it. The output also isn’t sorted.

I timed it running 10000 times on all numbers 1-200 and 100 times on all numbers 1-5000. It outperforms all the other versions I tested, including dansalmo’s, Jason Schorn’s, oxrock’s, agf’s, steveha’s, and eryksun’s solutions, though oxrock’s is by far the closest.


回答 15

最大的因子不会超过这个数本身,所以不妨这样写:

def factors(n):
    factors = []
    for i in range(1, n//2+1):
        if n % i == 0:
            factors.append (i)
    factors.append(n)

    return factors

瞧!

your max factor is not more than your number, so, let’s say

def factors(n):
    factors = []
    for i in range(1, n//2+1):
        if n % i == 0:
            factors.append (i)
    factors.append(n)

    return factors

voilá!


回答 16

import math

'''
I applied finding prime factorization to solve this. (Trial Division)
It's not complicated
'''


def generate_factors(n):
    lower_bound_check = int(math.sqrt(n))  # determine lowest bound divisor range [16 -> 4]
    factors = set()  # store factors
    for divisors in range(1, lower_bound_check + 1):  # loop [1 .. 4]
        if n % divisors == 0:
            factors.add(divisors)  # lower bound divisor is found: 16 -> [1, 2, 4]
            factors.add(n // divisors)  # get upper divisor from lower [16 / 1 = 16, 16 / 2 = 8, 16 / 4 = 4]
    return factors  # [1, 2, 4, 8, 16]


print(generate_factors(12))  # {1, 2, 3, 4, 6, 12} -> pycharm output

Pierre Vriens, hopefully this makes more sense. This runs in O(√n) time.
import math

'''
I applied finding prime factorization to solve this. (Trial Division)
It's not complicated
'''


def generate_factors(n):
    lower_bound_check = int(math.sqrt(n))  # determine lowest bound divisor range [16 -> 4]
    factors = set()  # store factors
    for divisors in range(1, lower_bound_check + 1):  # loop [1 .. 4]
        if n % divisors == 0:
            factors.add(divisors)  # lower bound divisor is found: 16 -> [1, 2, 4]
            factors.add(n // divisors)  # get upper divisor from lower [16 / 1 = 16, 16 / 2 = 8, 16 / 4 = 4]
    return factors  # [1, 2, 4, 8, 16]


print(generate_factors(12))  # {1, 2, 3, 4, 6, 12} -> pycharm output

Pierre Vriens, hopefully this makes more sense. This runs in O(√n) time.

回答 17

用下面这个简单的列表推导式即可;注意我们不需要测试 1 和这个数本身:

def factors(n):
    return [x for x in range(2, n//2+1) if n%x == 0]

关于平方根上限的用法:假设我们要找 10 的因子。int(sqrt(10)) = 3,于是 range(1, int(sqrt(10))) = [1, 2],这样显然漏掉了 5。

除非我漏掉了什么,如果非要用平方根做上限,我建议使用 int(ceil(sqrt(x)))。当然,这会带来很多不必要的函数调用。

Use something as simple as the following list comprehension, noting that we do not need to test 1 and the number we are trying to find:

def factors(n):
    return [x for x in range(2, n//2+1) if n%x == 0]

In reference to the use of the square root, say we want to find the factors of 10. int(sqrt(10)) = 3, therefore range(1, int(sqrt(10))) = [1, 2], and testing only up to there clearly misses 5.

Unless I am missing something I would suggest, if you must do it this way, using int(ceil(sqrt(x))). Of course this produces a lot of unnecessary calls to functions.
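For comparison, note that the sqrt-bounded solutions above do not actually lose the large cofactors, because every small divisor i also contributes n//i; a tiny sketch for n = 10:

n = 10
small = [i for i in range(1, int(n**0.5) + 1) if n % i == 0]   # [1, 2]
all_facs = sorted({d for i in small for d in (i, n // i)})     # [1, 2, 5, 10]; 5 comes in as 10 // 2
print(small, all_facs)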


回答 18

我认为从可读性和速度来看 @oxrock 的方案最好,所以这里给出为 Python 3+ 重写的代码:

def num_factors(n):
    results = set()
    for i in range(1, int(n**0.5) + 1):
        if n % i == 0: results.update([i,int(n/i)])
    return results

I think for readability and speed @oxrock’s solution is the best, so here is the code rewritten for python 3+:

def num_factors(n):
    results = set()
    for i in range(1, int(n**0.5) + 1):
        if n % i == 0: results.update([i,int(n/i)])
    return results

回答 19

看到这个问题时我很惊讶:明明 numpy 比 Python 循环快得多,却没有人用 numpy。我用 numpy 实现了 @agf 的方案,结果平均快了 8 倍。我相信如果把其他一些方案也用 numpy 实现,也能得到非常可观的提速。

这是我的功能:

import numpy as np
def b(n):
    r = np.arange(1, int(n ** 0.5) + 1)
    x = r[np.mod(n, r) == 0]
    return set(np.concatenate((x, n / x), axis=None))   

请注意,x 轴上的数字并不是函数的输入。函数的实际输入是 2 的(x 轴数字)次方再减 1。所以 x 轴上的 10 对应的输入是 2**10 - 1 = 1023。

使用numpy代替for循环的性能测试结果。

I was pretty surprised when I saw this question that no one had used numpy, even though numpy is way faster than Python loops. Implementing @agf’s solution with numpy turned out to be about 8x faster on average. I believe that if you implemented some of the other solutions in numpy you could get amazing times.

Here is my function:

import numpy as np
def b(n):
    r = np.arange(1, int(n ** 0.5) + 1)
    x = r[np.mod(n, r) == 0]
    return set(np.concatenate((x, n / x), axis=None))   

Notice that the numbers on the x-axis are not the input to the functions. The input is 2 to the power of the number on the x-axis, minus 1. So where the x-axis shows ten, the input is 2**10 - 1 = 1023.

Performance test results of using numpy instead of for loops.
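One caveat worth flagging (my observation, not part of the original answer): n / x produces NumPy floats, so the returned set contains values like 2.0. If plain ints are wanted, floor division works; a minimal variant:

import numpy as np

def b_int(n):
    r = np.arange(1, int(n ** 0.5) + 1)
    x = r[np.mod(n, r) == 0]           # all divisors up to sqrt(n)
    return set(map(int, np.concatenate((x, n // x), axis=None)))

print(sorted(b_int(36)))  # [1, 2, 3, 4, 6, 9, 12, 18, 36]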


回答 20

import 'dart:math';
generateFactorsOfN(N){
  //determine lowest bound divisor range
  final lowerBoundCheck = sqrt(N).toInt();
  var factors = Set<int>(); //stores factors
  /**
   * Lets take 16:
   * 4 = sqrt(16)
   * start from 1 ...  4 inclusive
   * check mod 16 % 1 == 0?  set[1, (16 / 1)]
   * check mod 16 % 2 == 0?  set[1, (16 / 1) , 2 , (16 / 2)]
   * check mod 16 % 3 == 0?  set[1, (16 / 1) , 2 , (16 / 2)] -> unchanged
   * check mod 16 % 4 == 0?  set[1, (16 / 1) , 2 , (16 / 2), 4, (16 / 4)]
   *
   *  ******************* set is used to remove duplicate
   *  ******************* case 4 and (16 / 4) both equal to 4
   *  return factor set<int>.. this isn't ordered
   */

  for(var divisor = 1; divisor <= lowerBoundCheck; divisor++){
    if(N % divisor == 0){
      factors.add(divisor);
      factors.add(N ~/ divisor); // ~/ integer division 
    }
  }
  return factors;
}
import 'dart:math';
generateFactorsOfN(N){
  //determine lowest bound divisor range
  final lowerBoundCheck = sqrt(N).toInt();
  var factors = Set<int>(); //stores factors
  /**
   * Lets take 16:
   * 4 = sqrt(16)
   * start from 1 ...  4 inclusive
   * check mod 16 % 1 == 0?  set[1, (16 / 1)]
   * check mod 16 % 2 == 0?  set[1, (16 / 1) , 2 , (16 / 2)]
   * check mod 16 % 3 == 0?  set[1, (16 / 1) , 2 , (16 / 2)] -> unchanged
   * check mod 16 % 4 == 0?  set[1, (16 / 1) , 2 , (16 / 2), 4, (16 / 4)]
   *
   *  ******************* set is used to remove duplicate
   *  ******************* case 4 and (16 / 4) both equal to 4
   *  return factor set<int>.. this isn't ordered
   */

  for(var divisor = 1; divisor <= lowerBoundCheck; divisor++){
    if(N % divisor == 0){
      factors.add(divisor);
      factors.add(N ~/ divisor); // ~/ integer division 
    }
  }
  return factors;
}

回答 21

我认为这是最简单的方法:

    x = 23

    i = 1
    while i <= x:
      if x % i == 0:
        print("factor: %s"% i)
      i += 1

I reckon this is the simplest way to do that:

    x = 23

    i = 1
    while i <= x:
      if x % i == 0:
        print("factor: %s"% i)
      i += 1

使用if-return-return或if-else-return更有效吗?

问题:使用if-return-return或if-else-return更有效吗?

假设我有一条带 return 的 if 语句。从效率的角度看,我应该使用

if(A > B):
    return A+1
return A-1

要么

if(A > B):
    return A+1
else:
    return A-1

在使用编译型语言(C)或脚本语言(Python)时,我应该偏向其中一种写法吗?

Suppose I have an if statement with a return. From the efficiency perspective, should I use

if(A > B):
    return A+1
return A-1

or

if(A > B):
    return A+1
else:
    return A-1

Should I prefer one or another when using a compiled language (C) or a scripted one (Python)?


回答 0

由于该return语句终止了当前函数的执行,因此两种形式是等效的(尽管第二种形式比第一种更具可读性)。

两种形式的效率相当:无论如何,当 if 条件为假时,底层机器码都必须执行一次跳转。

请注意,Python 支持一种语法,让你在这种情形下只需一条 return 语句:

return A+1 if A > B else A-1

Since the return statement terminates the execution of the current function, the two forms are equivalent (although the second one is arguably more readable than the first).

The efficiency of both forms is comparable, the underlying machine code has to perform a jump if the if condition is false anyway.

Note that Python supports a syntax that allows you to use only one return statement in your case:

return A+1 if A > B else A-1

回答 1

Chromium 的风格指南中写道:

return 之后不要再使用 else:

# Bad
if (foo)
  return 1
else
  return 2

# Good
if (foo)
  return 1
return 2

return 1 if foo else 2

From Chromium’s style guide:

Don’t use else after return:

# Bad
if (foo)
  return 1
else
  return 2

# Good
if (foo)
  return 1
return 2

return 1 if foo else 2

回答 2

关于编码风格:

无论什么语言,大多数编码标准都把在同一个函数中使用多条 return 语句视为不良实践而加以禁止。

(尽管我个人认为,在某些情况下多条 return 语句确实有意义:文本/数据协议解析器、包含大量错误处理的函数,等等。)

所有这些行业编码标准的共识是,该表达式应写为:

int result;

if(A > B)
{
  result = A+1;
}
else
{
  result = A-1;
}
return result;

关于效率:

上面的示例和问题中的两个示例在效率方面都完全等效。在所有这些情况下,机器码都必须比较A> B,然后跳转到A + 1或A-1计算,然后将结果存储在CPU寄存器或堆栈中。

编辑:

资料来源:

  • MISRA-C:2004规则14.7,依次引用…:
  • IEC 61508-3。第3部分,表B.9。
  • IEC 61508-7。C.2.9。

Regarding coding style:

Most coding standards no matter language ban multiple return statements from a single function as bad practice.

(Although personally I would say there are several cases where multiple return statements do make sense: text/data protocol parsers, functions with extensive error handling etc)

The consensus from all those industry coding standards is that the expression should be written as:

int result;

if(A > B)
{
  result = A+1;
}
else
{
  result = A-1;
}
return result;

Regarding efficiency:

The above example and the two examples in the question are all completely equivalent in terms of efficiency. The machine code in all these cases have to compare A > B, then branch to either the A+1 or the A-1 calculation, then store the result of that in a CPU register or on the stack.

EDIT :

Sources:

  • MISRA-C:2004 rule 14.7, which in turn cites…:
  • IEC 61508-3. Part 3, table B.9.
  • IEC 61508-7. C.2.9.

回答 3

对于任何合理的编译器,你都观察不到任何区别;由于两种写法等价,它们应当被编译成相同的机器码。

With any sensible compiler, you should observe no difference; they should be compiled to identical machine code as they’re equivalent.
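For CPython specifically, you can verify this with the dis module; the two variants disassemble to essentially the same bytecode (a small sketch; the exact instructions vary by Python version):

import dis

def early_return(a, b):
    if a > b:
        return a + 1
    return a - 1

def with_else(a, b):
    if a > b:
        return a + 1
    else:
        return a - 1

dis.dis(early_return)  # compare the two listings: same compare, jump, and return instructions
dis.dis(with_else)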


回答 4

这是一个风格(或偏好)问题,因为解释器并不在乎。就我个人而言,对于返回值的函数,我尽量不让它的最后一条语句处在函数体基础缩进以外的缩进层级。示例 1 中的 else 会(哪怕只是稍微)模糊函数结束的位置。

按我的偏好,我会用:

return A+1 if (A > B) else A-1

因为它既遵循了(前面已提到的)把单条 return 语句作为函数最后一条语句的好习惯,也符合避免命令式中间结果的函数式编程范式。

对于更复杂的函数,我更喜欢把它拆分成多个子函数,尽量避免提前返回。否则我会退而使用一个命令式风格的变量,例如 rval。除非函数非常简单,或者结束之前的 return 是由错误引起的,否则我尽量不用多条 return 语句。提前返回强调的是你已经无法继续往下走。对于需要分支到多个子流程的复杂函数,我会尝试把它们写成类似 case 语句的形式(例如用 dict 驱动)。

有些回答者提到了运行速度。对我来说运行速度是次要的,因为如果你追求执行速度,Python 本来就不是最合适的语言。我用 Python 看重的是编码效率,也就是写出没有错误的代码。

This is a question of style (or preference) since the interpreter does not care. Personally I would try not to make the final statement of a function which returns a value at an indent level other than the function base. The else in example 1 obscures, if only slightly, where the end of the function is.

By preference I use:

return A+1 if (A > B) else A-1

As it obeys both the good convention of having a single return statement as the last statement in the function (as already mentioned) and the good functional programming paradigm of avoiding imperative style intermediate results.

For more complex functions I prefer to break the function into multiple sub-functions to avoid premature returns if possible. Otherwise I revert to using an imperative style variable called rval. I try not to use multiple return statements unless the function is trivial or the return statement before the end is as a result of an error. Returning prematurely highlights the fact that you cannot go on. For complex functions that are designed to branch off into multiple subfunctions I try to code them as case statements (driven by a dict for instance).

Some posters have mentioned speed of operation. Speed of Run-time is secondary for me since if you need speed of execution Python is not the best language to use. I use Python as its the efficiency of coding (i.e. writing error free code) that matters to me.


回答 5

我个人尽可能避免使用 else 代码块。参见 Anti-if Campaign(反 if 运动)。

另外,多写一行也不会收“额外”的费用,你懂的 :p

“简单胜于复杂”,“可读性为王”

delta = 1 if (A > B) else -1
return A + delta

I personally avoid else blocks when possible. See the Anti-if Campaign

Also, they don’t charge ‘extra’ for the line, you know :p

“Simple is better than complex” & “Readability is king”

delta = 1 if (A > B) else -1
return A + delta

回答 6

版本 A 更简单,这就是我会用它的原因。

而且,如果你在 Java 中打开所有编译器警告,第二个版本会触发警告,因为那个 else 是多余的,并且增加了代码复杂度。

Version A is simpler and that’s why I would use it.

And if you turn on all compiler warnings in Java, you will get a warning on the second version because the else is unnecessary and adds code complexity.


回答 7

我知道这个问题打的是 python 标签,但它提到了动态语言,所以我想提一下:在 Ruby 中 if 语句实际上有返回值,所以你可以这样写

def foo
  rv = if (A > B)
         A+1
       else
         A-1
       end
  return rv 
end

或者,由于 Ruby 也有隐式返回,可以简化为

def foo 
  if (A>B)
    A+1
  else 
    A-1
  end
end

这很好地绕开了“不要有多条 return”的风格问题。

I know the question is tagged python, but it mentions dynamic languages so thought I should mention that in ruby the if statement actually has a return type so you can do something like

def foo
  rv = if (A > B)
         A+1
       else
         A-1
       end
  return rv 
end

Or because it also has implicit return simply

def foo 
  if (A>B)
    A+1
  else 
    A-1
  end
end

which gets around the style issue of not having multiple returns quite nicely.