Python and numpy: sum of an array slice

Date: 2022-08-09 21:23:42

I have 1-dimensional numpy array (array_) and a Python list (list_).

The following code works, but is inefficient because slices involve an unnecessary copy (certainly for Python lists, and I believe also for numpy arrays?):

result = sum(array_[1:])
result = sum(list_[1:])

What's a good way to rewrite that?

4 Answers

#1


13  

Slicing a numpy array doesn't make a copy, as it does in the case of a list.

As a basic example:

import numpy as np
x = np.arange(100)
y = x[1:5]
y[:] = 1000
print(x[:10])

This yields:

[   0 1000 1000 1000 1000    5    6    7    8    9]

Even though we modified the values in y, it's just a view into the same memory as x.
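
One way to confirm the view behavior directly is `np.shares_memory` (available in reasonably recent numpy; older versions can inspect the `.base` attribute instead):

```python
import numpy as np

x = np.arange(100)
view = x[1:5]          # a slice of an ndarray: a view, not a copy
copy = x[1:5].copy()   # an explicit copy, for contrast

print(np.shares_memory(x, view))  # True: the slice reuses x's buffer
print(np.shares_memory(x, copy))  # False: .copy() allocated new memory
print(view.base is x)             # True: x owns the view's data
```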

Slicing an ndarray returns a view and doesn't duplicate the memory.

However, it would be much more efficient to use array_[1:].sum() rather than calling python's builtin sum on a numpy array.

As a quick comparison:

In [28]: x = np.arange(10000)

In [29]: %timeit x.sum()
100000 loops, best of 3: 10.2 us per loop

In [30]: %timeit sum(x)
100 loops, best of 3: 4.01 ms per loop

Edit:

In the case of the list, if for some reason you don't want to make a copy, you could always use itertools.islice. Instead of:

result = sum(some_list[1:])

you could do:

result = sum(itertools.islice(some_list, 1, None))

In most cases this is overkill, though. If you're dealing with lists long enough that memory management is a major issue, then you probably shouldn't be using a list to store your values. (Lists are not designed or intended to store items compactly in memory.)

Also, you wouldn't want to do this for a numpy array. Simply doing some_array[1:].sum() will be several orders of magnitude faster and won't use any more memory than islice.
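
For completeness, a quick sketch checking that the two no-copy approaches agree with the naive slice-based version (the sample values here are arbitrary):

```python
from itertools import islice
import numpy as np

array_ = np.arange(10)
list_ = list(range(10))

a = array_[1:].sum()             # view + C-level sum; no copy of the data
b = sum(islice(list_, 1, None))  # lazy iteration; no copy of the list
assert a == b == sum(list_[1:]) == 45
```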

#2


8  

My first instinct was the same as Joe Kington's when it comes to lists, but I checked, and on my machine at least, islice is consistently slower!

>>> timeit.timeit("sum(l[50:950])", "l = range(1000)", number=10000)
1.0398731231689453
>>> timeit.timeit("sum(islice(l, 50, 950))", "from itertools import islice; l = range(1000)", number=10000)
1.2317550182342529
>>> timeit.timeit("sum(l[50:950000])", "l = range(1000000)", number=10)
7.9020509719848633
>>> timeit.timeit("sum(islice(l, 50, 950000))", "from itertools import islice; l = range(1000000)", number=10)
8.4522969722747803

I tried a custom_sum and found that it was faster, but not by much:

>>> setup = """
... def custom_sum(list, start, stop):
...     s = 0
...     for i in xrange(start, stop):
...         s += list[i]
...     return s
... 
... l = range(1000)
... """
>>> timeit.timeit("custom_sum(l, 50, 950)", setup, number=1000)
0.66767406463623047

Furthermore, at larger numbers, it was slower by far!

>>> setup = setup.replace("range(1000)", "range(1000000)")
>>> timeit.timeit("custom_sum(l, 50, 950000)", setup, number=10)
14.185815095901489

I couldn't think of anything else to test. (Thoughts, anyone?)

#3


3  

@Joe Kington (this is a temporary answer just to show my timings; I'll remove it soon):

In []: x= arange(1e4)
In []: %timeit sum(x)
100000 loops, best of 3: 18.8 us per loop
In []: %timeit x.sum()
100000 loops, best of 3: 17.5 us per loop
In []: x= arange(1e5)
In []: %timeit sum(x)
10000 loops, best of 3: 165 us per loop
In []: %timeit x.sum()
10000 loops, best of 3: 158 us per loop
In []: x= arange(1e2)
In []: %timeit sum(x)
100000 loops, best of 3: 4.44 us per loop
In []: %timeit x.sum()
100000 loops, best of 3: 3.2 us per loop

As far as my numpy (1.5.1) source tells, sum(.) is just a wrapper for x.sum(.). Thus, with larger inputs, execution time is asymptotically the same for sum(.) and x.sum(.).
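
Assuming `sum` here refers to `numpy.sum` (the session above appears to use numpy's namespace directly, given the bare `arange`), the delegation is easy to verify:

```python
import numpy as np

x = np.arange(100000)
# numpy.sum dispatches to ndarray.sum for arrays, so results match exactly
assert np.sum(x) == x.sum() == 4999950000
```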

Edit: This answer was intended to be just a temporary one, but it (and its comments) may actually be useful to someone, so I'll leave it as it is for now, until someone really requests that I delete it.

#4


0  

I don't find x[1:].sum() significantly slower than x.sum(). For lists, sum(x) - x[0] is faster than sum(x[1:]) (about 40% faster on my machine).
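
The subtraction trick from this answer, sketched for a list (the sample values are arbitrary):

```python
list_ = [3, 1, 4, 1, 5, 9]

# Sum everything once, then subtract the first element:
# no slice, so no temporary copy of the tail is built.
result = sum(list_) - list_[0]
assert result == sum(list_[1:]) == 20
```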
