Question: Most efficient way to map a function over a numpy array

What is the most efficient way to map a function over a numpy array? The way I’ve been doing it in my current project is as follows:

import numpy as np 

x = np.array([1, 2, 3, 4, 5])

# Obtain array of square of each element in x
squarer = lambda t: t ** 2
squares = np.array([squarer(xi) for xi in x])

However, this seems like it is probably very inefficient, since I am using a list comprehension to construct the new array as a Python list before converting it back to a numpy array.

Can we do better?


Answer 0

I’ve tested all suggested methods plus np.array(list(map(f, x))) with perfplot (a small project of mine).

Message #1: If you can use numpy’s native functions, do that.

If the function you’re trying to vectorize already is vectorized (like the x**2 example in the original post), using that is much faster than anything else (note the log scale):

[Plot: runtimes of the natively vectorized call vs. the mapping variants, log scale]

If you actually need vectorization, it doesn’t really matter much which variant you use.

[Plot: runtimes of the mapping variants compared with each other, log scale]


Code to reproduce the plots:

import numpy as np
import perfplot
import math


def f(x):
    # return math.sqrt(x)
    return np.sqrt(x)


vf = np.vectorize(f)


def array_for(x):
    return np.array([f(xi) for xi in x])


def array_map(x):
    return np.array(list(map(f, x)))


def fromiter(x):
    return np.fromiter((f(xi) for xi in x), x.dtype)


def vectorize(x):
    return np.vectorize(f)(x)


def vectorize_without_init(x):
    return vf(x)


perfplot.show(
    setup=lambda n: np.random.rand(n),
    n_range=[2 ** k for k in range(20)],
    kernels=[f, array_for, array_map, fromiter, vectorize, vectorize_without_init],
    xlabel="len(x)",
)

Answer 1

How about using numpy.vectorize?

import numpy as np
x = np.array([1, 2, 3, 4, 5])
squarer = lambda t: t ** 2
vfunc = np.vectorize(squarer)
vfunc(x)
# Output : array([ 1,  4,  9, 16, 25])

Answer 2

TL;DR

As noted by @user2357112, a “direct” method of applying the function is always the fastest and simplest way to map a function over Numpy arrays:

import numpy as np
x = np.array([1, 2, 3, 4, 5])
f = lambda x: x ** 2
squares = f(x)

Generally avoid np.vectorize, as it does not perform well, and has (or had) a number of issues. If you are handling other data types, you may want to investigate the other methods shown below.
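
As a hedged aside on that last point (my addition, not part of the original answer): np.fromiter, one of the methods tested below, lets you state the output dtype explicitly when the mapped function changes types:

import numpy as np

x = np.array([1, 2, 3, 4, 5])
# The mapped function turns ints into floats, so pass the output dtype
# explicitly instead of reusing x.dtype
roots = np.fromiter((xi ** 0.5 for xi in x), dtype=np.float64, count=len(x))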

Comparison of methods

Here are some simple tests comparing a few methods of mapping a function, using Python 3.6 and NumPy 1.15.4. First, the set-up functions for testing:

import timeit
import numpy as np

f = lambda x: x ** 2
vf = np.vectorize(f)

def test_array(x, n):
    t = timeit.timeit(
        'np.array([f(xi) for xi in x])',
        'from __main__ import np, x, f', number=n)
    print('array: {0:.3f}'.format(t))

def test_fromiter(x, n):
    t = timeit.timeit(
        'np.fromiter((f(xi) for xi in x), x.dtype, count=len(x))',
        'from __main__ import np, x, f', number=n)
    print('fromiter: {0:.3f}'.format(t))

def test_direct(x, n):
    t = timeit.timeit(
        'f(x)',
        'from __main__ import x, f', number=n)
    print('direct: {0:.3f}'.format(t))

def test_vectorized(x, n):
    t = timeit.timeit(
        'vf(x)',
        'from __main__ import x, vf', number=n)
    print('vectorized: {0:.3f}'.format(t))

Testing with five elements (sorted from fastest to slowest):

x = np.array([1, 2, 3, 4, 5])
n = 100000
test_direct(x, n)      # 0.265
test_fromiter(x, n)    # 0.479
test_array(x, n)       # 0.865
test_vectorized(x, n)  # 2.906

With 100s of elements:

x = np.arange(100)
n = 10000
test_direct(x, n)      # 0.030
test_array(x, n)       # 0.501
test_vectorized(x, n)  # 0.670
test_fromiter(x, n)    # 0.883

And with 1000s of array elements or more:

x = np.arange(1000)
n = 1000
test_direct(x, n)      # 0.007
test_fromiter(x, n)    # 0.479
test_array(x, n)       # 0.516
test_vectorized(x, n)  # 0.945

Different versions of Python/NumPy and compiler optimization will have different results, so do a similar test for your environment.


Answer 3

There are numexpr, numba and cython around; the goal of this answer is to take these possibilities into consideration.

But first let’s state the obvious: no matter how you map a Python function onto a numpy array, it stays a Python function, which means that for every evaluation:

  • a numpy-array element must be converted to a Python object (e.g. a float);
  • all calculations are done with Python objects, which means incurring the overhead of the interpreter, dynamic dispatch and immutable objects.

So which machinery is used to actually loop through the array doesn’t play a big role because of the overhead mentioned above – it stays much slower than using numpy’s built-in functionality.

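A tiny illustration of the boxing described above (my addition, hedged):

import numpy as np

x = np.random.rand(3)
# Iterating over the array hands back one boxed scalar object per element,
# which is where the per-element overhead comes from
for xi in x:
    print(type(xi))   # <class 'numpy.float64'>
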
Let’s take a look at the following example:

# numpy-functionality
def f(x):
    return x+2*x*x+4*x*x*x

# python-function as ufunc
import numpy as np
vf=np.vectorize(f)
vf.__name__="vf"

np.vectorize is picked as a representative of the pure-python function class of approaches. Using perfplot (see code in the appendix of this answer) we get the following running times:

[Plot: runtimes of the numpy version f vs. the np.vectorize version vf]

We can see that the numpy approach is 10x-100x faster than the pure Python version. The drop in performance for bigger array sizes is probably because the data no longer fits the cache.

It is also worth mentioning that vectorize uses a lot of memory, so memory usage is often the bottleneck (see the related SO question). Also note that numpy’s documentation on np.vectorize states that it is “provided primarily for convenience, not for performance”.

Other tools should be used when performance is desired; besides writing a C extension from scratch, there are the following possibilities:


One often hears that numpy’s performance is as good as it gets, because it is pure C under the hood. Yet there is a lot of room for improvement!

The vectorized numpy version uses a lot of additional memory and memory accesses. The numexpr library tries to tile the numpy arrays and thus get better cache utilization:

# less cache misses than numpy-functionality
import numexpr as ne
def ne_f(x):
    return ne.evaluate("x+2*x*x+4*x*x*x")

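A small usage note (my addition, hedged): ne.evaluate resolves the name x in the expression string from the calling scope, which is why the wrapper above works without passing x into the string explicitly. It can also be called inline:

import numpy as np
import numexpr as ne

x = np.random.rand(1_000_000)
# The expression string is compiled by numexpr and evaluated in chunks,
# with "x" looked up from the surrounding scope
y = ne.evaluate("x + 2*x*x + 4*x*x*x")
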
This leads to the following comparison:

[Plot: runtimes with the numexpr version added]

I cannot explain everything in the plot above: we can see bigger overhead for the numexpr library at the beginning, but because it utilizes the cache better, it is about 10 times faster for bigger arrays!


Another approach is to jit-compile the function and thus get a real pure-C ufunc. This is numba’s approach:

# runtime generated C-function as ufunc
import numba as nb
@nb.vectorize(target="cpu")
def nb_vf(x):
    return x+2*x*x+4*x*x*x

It is 10 times faster than the original numpy approach:

[Plot: runtimes with the numba-vectorized version added]
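
As a hedged aside (not from the original answer): numba can also compile the ufunc eagerly, rather than lazily on the first call, if you pass explicit signatures:

import numba as nb

# Eagerly compiled variant: the signature list triggers compilation
# at decoration time
@nb.vectorize(["float64(float64)"], target="cpu")
def nb_vf_eager(x):
    return x + 2*x*x + 4*x*x*x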


However, the task is embarrassingly parallel, so we could also use prange to compute the loop in parallel:

@nb.njit(parallel=True)
def nb_par_jitf(x):
    y=np.empty(x.shape)
    for i in nb.prange(len(x)):
        y[i]=x[i]+2*x[i]*x[i]+4*x[i]*x[i]*x[i]
    return y

As expected, the parallel function is slower for smaller inputs, but faster (almost a factor of 2) for larger sizes:

[Plot: runtimes with the parallel numba version added]


While numba specializes in optimizing operations on numpy arrays, Cython is a more general tool. Extracting the same performance as with numba is more complicated; it often comes down to llvm (numba) vs. the local compiler (gcc/MSVC):

%%cython -c=/openmp -a
import numpy as np
import cython

#single core:
@cython.boundscheck(False) 
@cython.wraparound(False) 
def cy_f(double[::1] x):
    y_out=np.empty(len(x))
    cdef Py_ssize_t i
    cdef double[::1] y=y_out
    for i in range(len(x)):
        y[i] = x[i]+2*x[i]*x[i]+4*x[i]*x[i]*x[i]
    return y_out

#parallel:
from cython.parallel import prange
@cython.boundscheck(False) 
@cython.wraparound(False)  
def cy_par_f(double[::1] x):
    y_out=np.empty(len(x))
    cdef double[::1] y=y_out
    cdef Py_ssize_t i
    cdef Py_ssize_t n = len(x)
    for i in prange(n, nogil=True):
        y[i] = x[i]+2*x[i]*x[i]+4*x[i]*x[i]*x[i]
    return y_out

Cython results in somewhat slower functions:

[Plot: runtimes with the Cython versions added]


Conclusion

Obviously, testing only one function doesn’t prove anything. One should also keep in mind that, for the chosen example function, memory bandwidth was the bottleneck for sizes larger than 10^5 elements; thus numba, numexpr and cython showed the same performance in that region.

In the end, the ultimate answer depends on the type of function, the hardware, the Python distribution and other factors. For example, the Anaconda distribution uses Intel’s VML for numpy’s functions and thus easily outperforms numba (unless it uses SVML, see this SO post) for transcendental functions like exp, sin, cos and similar; see e.g. the following SO post.

Yet from this investigation and from my experience so far, I would say that numba seems to be the easiest tool with the best performance, as long as no transcendental functions are involved.


Plotting running times with the perfplot package:

import perfplot
perfplot.show(
    setup=lambda n: np.random.rand(n),
    n_range=[2**k for k in range(0,24)],
    kernels=[
        f, 
        vf,
        ne_f, 
        nb_vf, nb_par_jitf,
        cy_f, cy_par_f,
        ],
    logx=True,
    logy=True,
    xlabel='len(x)'
    )

Answer 4

squares = squarer(x)

Arithmetic operations on arrays are automatically applied elementwise, with efficient C-level loops that avoid all the interpreter overhead that would apply to a Python-level loop or comprehension.

Most of the functions you’d want to apply to a NumPy array elementwise will just work, though some may need changes. For example, if doesn’t work elementwise. You’d want to convert those to use constructs like numpy.where:

def using_if(x):
    if x < 5:
        return x
    else:
        return x**2

becomes

def using_where(x):
    return numpy.where(x < 5, x, x**2)
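
For example (a small illustrative addition, not part of the original answer), applying the rewritten function to the x from the question keeps the values below 5 and squares the rest:

import numpy
x = numpy.array([1, 2, 3, 4, 5])
using_where(x)
# array([ 1,  2,  3,  4, 25])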

Answer 5

I believe that in newer versions of numpy (I use 1.13), you can simply call the function by passing the numpy array to the function that you wrote for scalar types; it will automatically apply the function call to each element of the numpy array and return another numpy array:

>>> import numpy as np
>>> squarer = lambda t: t ** 2
>>> x = np.array([1, 2, 3, 4, 5])
>>> squarer(x)
array([ 1,  4,  9, 16, 25])

Answer 6

In many cases, numpy.apply_along_axis will be the best choice. It increases the performance by about 100x compared to the other approaches – and not only for trivial test functions, but also for more complex function compositions from numpy and scipy.

When I add the method:

def along_axis(x):
    return np.apply_along_axis(f, 0, x)

to the perfplot code, I get the following results:

[Plot: runtimes with along_axis added to the perfplot comparison]
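
For reference, a hedged sketch of my own (not from the original answer) showing what apply_along_axis does: it applies a function to 1-D slices along the given axis, e.g. to each column of a 2-D array:

import numpy as np

a = np.arange(12).reshape(3, 4)
# Apply the function to each 1-D slice along axis 0, i.e. to each column
col_sums = np.apply_along_axis(np.sum, 0, a)
# col_sums -> array([12, 15, 18, 21])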


Answer 7

It seems no one has mentioned a built-in factory method for producing a ufunc in the numpy package: np.frompyfunc. I have tested it against np.vectorize and it outperformed it by about 20~30%. Of course it will not perform as well as hand-written C code or even numba (which I have not tested), but it can be a better alternative than np.vectorize:

f = lambda x, y: x * y
f_arr = np.frompyfunc(f, 2, 1)
vf = np.vectorize(f)
arr = np.linspace(0, 1, 10000)

%timeit f_arr(arr, arr) # 307ms
%timeit vf(arr, arr) # 450ms

I have also tested larger samples, and the improvement is proportional. See also the documentation here.
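
One practical note worth adding (my addition, hedged): np.frompyfunc returns an object-dtype array, so a cast is usually needed to get a numeric dtype back:

import numpy as np

f = lambda x, y: x * y
f_arr = np.frompyfunc(f, 2, 1)   # 2 inputs, 1 output
arr = np.linspace(0, 1, 10000)

out = f_arr(arr, arr)            # dtype is object
out = out.astype(np.float64)     # cast back to a numeric dtype if needed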


Answer 8

As mentioned in this post, just use generator expressions like so:

numpy.fromiter((<some_func>(x) for x in <something>),<dtype>,<size of something>)
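
A concrete version of that template (my addition), using the squarer example from the question; count is optional but lets numpy preallocate the output:

import numpy as np

x = np.array([1, 2, 3, 4, 5])
squarer = lambda t: t ** 2
squares = np.fromiter((squarer(xi) for xi in x), dtype=x.dtype, count=len(x))
# squares -> array([ 1,  4,  9, 16, 25])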

Answer 9

All the answers above compare well, but if you need to use a custom function for mapping, you have a numpy.ndarray, and you need to retain the shape of the array, read on.

I have compared just two approaches, but both retain the shape of the ndarray. I used an array with 1 million entries for the comparison. Here I use the square function, which is also built into numpy and gives a great performance boost; since the point is just the comparison, you can use a function of your choice.

import numpy, time
def timeit():
    y = numpy.arange(1000000)
    now = time.time()
    numpy.array([x * x for x in y.reshape(-1)]).reshape(y.shape)        
    print(time.time() - now)
    now = time.time()
    numpy.fromiter((x * x for x in y.reshape(-1)), y.dtype).reshape(y.shape)
    print(time.time() - now)
    now = time.time()
    numpy.square(y)  
    print(time.time() - now)

Output

>>> timeit()
1.162431240081787    # list comprehension and then building numpy array
1.0775556564331055   # from numpy.fromiter
0.002948284149169922 # using inbuilt function

Here you can clearly see that numpy.fromiter works well considering the simple approach, and if a built-in function is available, please use it.


Answer 10

Use numpy.fromfunction(function, shape, **kwargs).

See https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html
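
A brief sketch of how it works (my addition, not part of the original answer): numpy.fromfunction calls the function once with arrays of indices of the given shape, so the function has to operate on indices rather than on an existing array:

import numpy as np

# The lambda receives the index array [0, 1, 2, 3, 4] and is called once
squares_of_indices = np.fromfunction(lambda i: i ** 2, (5,), dtype=int)
# squares_of_indices -> array([ 0,  1,  4,  9, 16])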

