Python / numpy:最有效的方法是对数组中的n个元素求和,以便每个输出元素是前n个输入元素的总和?

时间:2022-08-09 21:23:48

I want to write a function that takes a flattened array as input and returns an array of equal length containing the sums of the previous n elements from the input array, with the initial n - 1 elements of the output array set to NaN.

我想编写一个函数,它将一个扁平数组作为输入,并返回一个等长的数组,其中包含输入数组中前n个元素的总和,输出数组的初始n-1个元素设置为NaN。

For example if the array has ten elements = [2, 4, 3, 7, 6, 1, 9, 4, 6, 5] and n = 3 then the resulting array should be [NaN, NaN, 9, 14, 16, 14, 16, 14, 19, 15].

例如,如果数组有十个元素= [2,4,3,7,6,1,9,4,6,5]和n = 3,那么得到的数组应该是[NaN,NaN,9,14,16 ,14,16,14,19,15]。

One way I've come up with to do this:

我想出了这样做的一种方法:

def sum_n_values(flat_array, n): 

    sums = np.full(flat_array.shape, np.NaN)
    for i in range(n - 1, flat_array.shape[0]):
        sums[i] = np.sum(flat_array[i - n + 1:i + 1])
    return sums

Is there a better/more efficient/more "Pythonic" way to do this?

是否有更好/更有效/更“Pythonic”的方式来做到这一点?

Thanks in advance for your help.

在此先感谢您的帮助。

3 个解决方案

#1


9  

You can make use of np.cumsum, and take the difference of the cumsumed array and a shifted version of it:

你可以使用np.cumsum,并获取cumsumed数组和它的移位版本的区别:

n = 3
arr = np.array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])
sum_arr = arr.cumsum()
shifted_sum_arr = np.concatenate([[np.NaN]*(n-1), [0],  sum_arr[:-n]])
sum_arr
=> array([ 2,  6,  9, 16, 22, 23, 32, 36, 42, 47])
shifted_sum_arr
=> array([ nan,  nan,   0.,   2.,   6.,   9.,  16.,  22.,  23.,  32.])
sum_arr - shifted_sum_arr
=> array([ nan,  nan,   9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])

IMO, this is a more numpyish way to do this, mainly because it avoids the loop.

IMO,这是一种更加努力的方式,主要是因为它避免了循环。


Timings

计时

def cumsum_app(flat_array, n):
    sum_arr = flat_array.cumsum()
    shifted_sum_arr = np.concatenate([[np.NaN]*(n-1), [0],  sum_arr[:-n]])
    return sum_arr - shifted_sum_arr

flat_array = np.random.randint(0,9,(100000))
%timeit cumsum_app(flat_array,10)
1000 loops, best of 3: 985 us per loop
%timeit cumsum_app(flat_array,100)
1000 loops, best of 3: 963 us per loop

#2


7  

You are basically performing 1D convolution there, so you can use np.convolve, like so -

你基本上是在那里进行1D卷积,所以你可以使用np.convolve,就像这样 -

# Get the valid sliding summations with 1D convolution
vals = np.convolve(flat_array,np.ones(n),mode='valid')

# Pad with NaNs at the start if needed  
out = np.pad(vals,(n-1,0),'constant',constant_values=(np.nan))

Sample run -

样品运行 -

In [110]: flat_array
Out[110]: array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])

In [111]: n = 3

In [112]: vals = np.convolve(flat_array,np.ones(n),mode='valid')
     ...: out = np.pad(vals,(n-1,0),'constant',constant_values=(np.nan))
     ...: 

In [113]: vals
Out[113]: array([  9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])

In [114]: out
Out[114]: array([ nan,  nan,   9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])

For 1D convolution, one can also use Scipy's implementation. The runtimes with Scipy version seemed better for a large window size, as also the runtime tests listed next would try to investigate. The Scipy version for getting vals would be -

对于1D卷积,也可以使用Scipy的实现。对于大窗口大小,使用Scipy版本的运行时似乎更好,因为下面列出的运行时测试也会尝试调查。得到vals的Scipy版本将是 -

from scipy import signal
vals = signal.convolve(flat_array,np.ones(n),mode='valid')

The NaNs padding operation could be replaced by np.hstack : np.hstack(([np.nan]*(n-1),vals)) for better performance.

NaNs填充操作可以替换为np.hstack:np.hstack(([np.nan] *(n-1),vals))以获得更好的性能。


Runtime tests -

运行时测试 -

In [238]: def original_app(flat_array,n):
     ...:     sums = np.full(flat_array.shape, np.NaN)
     ...:     for i in range(n - 1, flat_array.shape[0]):
     ...:         sums[i] = np.sum(flat_array[i - n + 1:i + 1])
     ...:     return sums
     ...: 
     ...: def vectorized_app1(flat_array,n):
     ...:     vals = np.convolve(flat_array,np.ones(n),mode='valid')
     ...:     return np.hstack(([np.nan]*(n-1),vals))
     ...: 
     ...: def vectorized_app2(flat_array,n):
     ...:     vals = signal.convolve(flat_array,np.ones(3),mode='valid')
     ...:     return np.hstack(([np.nan]*(n-1),vals))
     ...: 

In [239]: flat_array = np.random.randint(0,9,(100000))

In [240]: %timeit original_app(flat_array,10)
1 loops, best of 3: 833 ms per loop

In [241]: %timeit vectorized_app1(flat_array,10)
1000 loops, best of 3: 1.96 ms per loop

In [242]: %timeit vectorized_app2(flat_array,10)
100 loops, best of 3: 13.1 ms per loop

In [243]: %timeit original_app(flat_array,100)
1 loops, best of 3: 836 ms per loop

In [244]: %timeit vectorized_app1(flat_array,100)
100 loops, best of 3: 16.5 ms per loop

In [245]: %timeit vectorized_app2(flat_array,100)
100 loops, best of 3: 13.1 ms per loop

#3


4  

The other answers here are probably closer to what you're looking for in terms of speed and memory, but for completeness you can also use a list comprehension to build your array:

这里的其他答案可能更接近你在速度和内存方面的要求,但为了完整性,你也可以使用列表理解来构建你的数组:

a = np.array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])
N, n = a.shape[0], 3
np.array([np.NaN]*(n-1) + [np.sum(a[j:j+n]) for j in range(N-n+1)])

returns:

收益:

array([ nan,  nan,   9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])

#1


9  

You can make use of np.cumsum, and take the difference of the cumsumed array and a shifted version of it:

你可以使用np.cumsum,并获取cumsumed数组和它的移位版本的区别:

n = 3
arr = np.array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])
sum_arr = arr.cumsum()
shifted_sum_arr = np.concatenate([[np.NaN]*(n-1), [0],  sum_arr[:-n]])
sum_arr
=> array([ 2,  6,  9, 16, 22, 23, 32, 36, 42, 47])
shifted_sum_arr
=> array([ nan,  nan,   0.,   2.,   6.,   9.,  16.,  22.,  23.,  32.])
sum_arr - shifted_sum_arr
=> array([ nan,  nan,   9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])

IMO, this is a more numpyish way to do this, mainly because it avoids the loop.

IMO,这是一种更加努力的方式,主要是因为它避免了循环。


Timings

计时

def cumsum_app(flat_array, n):
    sum_arr = flat_array.cumsum()
    shifted_sum_arr = np.concatenate([[np.NaN]*(n-1), [0],  sum_arr[:-n]])
    return sum_arr - shifted_sum_arr

flat_array = np.random.randint(0,9,(100000))
%timeit cumsum_app(flat_array,10)
1000 loops, best of 3: 985 us per loop
%timeit cumsum_app(flat_array,100)
1000 loops, best of 3: 963 us per loop

#2


7  

You are basically performing 1D convolution there, so you can use np.convolve, like so -

你基本上是在那里进行1D卷积,所以你可以使用np.convolve,就像这样 -

# Get the valid sliding summations with 1D convolution
vals = np.convolve(flat_array,np.ones(n),mode='valid')

# Pad with NaNs at the start if needed  
out = np.pad(vals,(n-1,0),'constant',constant_values=(np.nan))

Sample run -

样品运行 -

In [110]: flat_array
Out[110]: array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])

In [111]: n = 3

In [112]: vals = np.convolve(flat_array,np.ones(n),mode='valid')
     ...: out = np.pad(vals,(n-1,0),'constant',constant_values=(np.nan))
     ...: 

In [113]: vals
Out[113]: array([  9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])

In [114]: out
Out[114]: array([ nan,  nan,   9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])

For 1D convolution, one can also use Scipy's implementation. The runtimes with Scipy version seemed better for a large window size, as also the runtime tests listed next would try to investigate. The Scipy version for getting vals would be -

对于1D卷积,也可以使用Scipy的实现。对于大窗口大小,使用Scipy版本的运行时似乎更好,因为下面列出的运行时测试也会尝试调查。得到vals的Scipy版本将是 -

from scipy import signal
vals = signal.convolve(flat_array,np.ones(n),mode='valid')

The NaNs padding operation could be replaced by np.hstack : np.hstack(([np.nan]*(n-1),vals)) for better performance.

NaNs填充操作可以替换为np.hstack:np.hstack(([np.nan] *(n-1),vals))以获得更好的性能。


Runtime tests -

运行时测试 -

In [238]: def original_app(flat_array,n):
     ...:     sums = np.full(flat_array.shape, np.NaN)
     ...:     for i in range(n - 1, flat_array.shape[0]):
     ...:         sums[i] = np.sum(flat_array[i - n + 1:i + 1])
     ...:     return sums
     ...: 
     ...: def vectorized_app1(flat_array,n):
     ...:     vals = np.convolve(flat_array,np.ones(n),mode='valid')
     ...:     return np.hstack(([np.nan]*(n-1),vals))
     ...: 
     ...: def vectorized_app2(flat_array,n):
     ...:     vals = signal.convolve(flat_array,np.ones(3),mode='valid')
     ...:     return np.hstack(([np.nan]*(n-1),vals))
     ...: 

In [239]: flat_array = np.random.randint(0,9,(100000))

In [240]: %timeit original_app(flat_array,10)
1 loops, best of 3: 833 ms per loop

In [241]: %timeit vectorized_app1(flat_array,10)
1000 loops, best of 3: 1.96 ms per loop

In [242]: %timeit vectorized_app2(flat_array,10)
100 loops, best of 3: 13.1 ms per loop

In [243]: %timeit original_app(flat_array,100)
1 loops, best of 3: 836 ms per loop

In [244]: %timeit vectorized_app1(flat_array,100)
100 loops, best of 3: 16.5 ms per loop

In [245]: %timeit vectorized_app2(flat_array,100)
100 loops, best of 3: 13.1 ms per loop

#3


4  

The other answers here are probably closer to what you're looking for in terms of speed and memory, but for completeness you can also use a list comprehension to build your array:

这里的其他答案可能更接近你在速度和内存方面的要求,但为了完整性,你也可以使用列表理解来构建你的数组:

a = np.array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])
N, n = a.shape[0], 3
np.array([np.NaN]*(n-1) + [np.sum(a[j:j+n]) for j in range(N-n+1)])

returns:

收益:

array([ nan,  nan,   9.,  14.,  16.,  14.,  16.,  14.,  19.,  15.])