Question: How do you get the magnitude of a vector in Numpy?

In keeping with "there should be only one obvious way to do it", how do you get the magnitude of a vector (1D array) in Numpy?

import math

def mag(x):
    return math.sqrt(sum(i**2 for i in x))

The above works, but I cannot believe that I must specify such a trivial and core function myself.

Answer 0

The function you’re after is numpy.linalg.norm. (I reckon it should be in base numpy as a property of an array, say x.norm(), but oh well.)

import numpy as np
x = np.array([1,2,3,4,5])
np.linalg.norm(x)

You can also pass an optional ord argument to select the order of the norm you want. Say you wanted the 1-norm:

np.linalg.norm(x,ord=1)

And so on.
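
For reference, here are a few ord values and what they give for the example vector above (results shown in comments; with no ord you get the Euclidean 2-norm):

import numpy as np

x = np.array([1, 2, 3, 4, 5])

np.linalg.norm(x)              # 2-norm (Euclidean magnitude): sqrt(55) ≈ 7.416
np.linalg.norm(x, ord=1)       # 1-norm: sum of absolute values = 15.0
np.linalg.norm(x, ord=np.inf)  # infinity norm: largest absolute value = 5.0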


Answer 1

If you are worried at all about speed, you should instead use:

mag = np.sqrt(x.dot(x))

Here are some benchmarks:

>>> import timeit
>>> timeit.timeit('np.linalg.norm(x)', setup='import numpy as np; x = np.arange(100)', number=1000)
0.0450878
>>> timeit.timeit('np.sqrt(x.dot(x))', setup='import numpy as np; x = np.arange(100)', number=1000)
0.0181372

EDIT: The real speed improvement comes when you have to take the norm of many vectors. Using pure numpy functions doesn’t require any for loops. For example:

In [1]: import numpy as np

In [2]: a = np.arange(1200.0).reshape((-1,3))

In [3]: %timeit [np.linalg.norm(x) for x in a]
100 loops, best of 3: 4.23 ms per loop

In [4]: %timeit np.sqrt((a*a).sum(axis=1))
100000 loops, best of 3: 18.9 us per loop

In [5]: np.allclose([np.linalg.norm(x) for x in a],np.sqrt((a*a).sum(axis=1)))
Out[5]: True
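
To make this reusable, the same idea fits in a small helper that accepts either a single 1D vector or an (N, d) stack of vectors; this is a sketch building on the answer above, not part of the original answer:

import numpy as np

def mag(vectors):
    # Works for a single 1D vector or an (N, d) stack of row vectors:
    # square, sum over the last axis, then take the square root.
    vectors = np.asarray(vectors, dtype=float)
    return np.sqrt((vectors * vectors).sum(axis=-1))

mag([3.0, 4.0])                        # 5.0
mag(np.arange(1200.0).reshape(-1, 3))  # 400 magnitudes, no Python loop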

Answer 2

Yet another alternative is to use the einsum function in numpy, either for stacked arrays:

In [1]: import numpy as np

In [2]: a = np.arange(1200.0).reshape((-1,3))

In [3]: %timeit [np.linalg.norm(x) for x in a]
100 loops, best of 3: 3.86 ms per loop

In [4]: %timeit np.sqrt((a*a).sum(axis=1))
100000 loops, best of 3: 15.6 µs per loop

In [5]: %timeit np.sqrt(np.einsum('ij,ij->i',a,a))
100000 loops, best of 3: 8.71 µs per loop

or vectors:

In [5]: a = np.arange(100000)

In [6]: %timeit np.sqrt(a.dot(a))
10000 loops, best of 3: 80.8 µs per loop

In [7]: %timeit np.sqrt(np.einsum('i,i', a, a))
10000 loops, best of 3: 60.6 µs per loop

There does, however, seem to be some overhead associated with calling it that may make it slower with small inputs:

In [2]: a = np.arange(100)

In [3]: %timeit np.sqrt(a.dot(a))
100000 loops, best of 3: 3.73 µs per loop

In [4]: %timeit np.sqrt(np.einsum('i,i', a, a))
100000 loops, best of 3: 4.68 µs per loop
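
If the subscript notation is unfamiliar: 'ij,ij->i' multiplies the two arrays elementwise and sums over j, so it is exactly the row-wise dot product. A quick sanity check against the other expressions used above (a small sketch):

import numpy as np

a = np.arange(1200.0).reshape((-1, 3))

# 'ij,ij->i': elementwise product summed over j, i.e. one dot product per row
row_dots = np.einsum('ij,ij->i', a, a)

assert np.allclose(row_dots, (a * a).sum(axis=1))
assert np.allclose(np.sqrt(row_dots), [np.linalg.norm(x) for x in a])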

Answer 3

The fastest way I found is via inner1d. Here’s how it compares to other numpy methods:

import numpy as np
from numpy.core.umath_tests import inner1d

V = np.random.random_sample((10**6,3,)) # 1 million vectors
A = np.sqrt(np.einsum('...i,...i', V, V))
B = np.linalg.norm(V,axis=1)   
C = np.sqrt((V ** 2).sum(-1))
D = np.sqrt((V*V).sum(axis=1))
E = np.sqrt(inner1d(V,V))

print([np.allclose(E, x) for x in [A, B, C, D]])  # [True, True, True, True]

import cProfile
cProfile.run("np.sqrt(np.einsum('...i,...i', V, V))") # 3 function calls in 0.013 seconds
cProfile.run('np.linalg.norm(V,axis=1)')              # 9 function calls in 0.029 seconds
cProfile.run('np.sqrt((V ** 2).sum(-1))')             # 5 function calls in 0.028 seconds
cProfile.run('np.sqrt((V*V).sum(axis=1))')            # 5 function calls in 0.027 seconds
cProfile.run('np.sqrt(inner1d(V,V))')                 # 2 function calls in 0.009 seconds

inner1d is ~3x faster than linalg.norm and a hair faster than einsum.
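
Note that inner1d comes from numpy.core.umath_tests, which is an internal module rather than public API. If the goal is to use the magnitudes to normalize the vectors, a sketch of that step with the public einsum variant looks like this:

import numpy as np

V = np.random.random_sample((10**6, 3))       # 1 million vectors
mags = np.sqrt(np.einsum('...i,...i', V, V))  # per-row magnitudes

# Divide each row by its magnitude to get unit vectors
V_unit = V / mags[:, np.newaxis]

# Every row should now have magnitude ~1
assert np.allclose(np.einsum('...i,...i', V_unit, V_unit), 1.0)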


Answer 4

Use the function norm in scipy.linalg (or numpy.linalg):

>>> import numpy as NP
>>> from scipy import linalg as LA
>>> a = 10*NP.random.randn(6)
>>> a
  array([  9.62141594,   1.29279592,   4.80091404,  -2.93714318,
          17.06608678, -11.34617065])
>>> LA.norm(a)
    23.36461979210312

>>> # compare with OP's function:
>>> import math
>>> mag = lambda x : math.sqrt(sum(i**2 for i in x))
>>> mag(a)
     23.36461979210312
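
Both functions compute the same Euclidean 2-norm for a 1D array, so the scipy and numpy versions are interchangeable here; a quick check (a small sketch):

import numpy as NP
from scipy import linalg as LA

a = 10 * NP.random.randn(6)

# scipy.linalg.norm and numpy.linalg.norm agree on the 2-norm
assert NP.isclose(LA.norm(a), NP.linalg.norm(a))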

Answer 5

You can do this concisely using the toolbelt vg. It’s a light layer on top of numpy and it supports single values and stacked vectors.

import numpy as np
import vg

x = np.array([1, 2, 3, 4, 5])
mag1 = np.linalg.norm(x)
mag2 = vg.magnitude(x)
print(mag1 == mag2)
# True

I created the library at my last startup, where it was motivated by uses like this: simple ideas which are far too verbose in NumPy.
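
The answer mentions stacked vectors; a minimal sketch of what that might look like, assuming vg.magnitude returns one magnitude per row for an (N, 3) stack as described (np.linalg.norm with axis=1 gives the reference values):

import numpy as np
import vg

points = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])

print(vg.magnitude(points))            # assumed: one magnitude per row
print(np.linalg.norm(points, axis=1))  # reference: [3.742..., 8.774...]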

