Summing over unevenly-sized chunks of an array with numpy

Time: 2022-08-09 21:23:18

Given an ndarray x and a one dimensional array containing the length of contiguous slices of a dimension of x, I want to compute a new array that contains the sum of all of the slices. For example, in two dimensions summing over dimension one:

>>> import numpy as np
>>> lens = np.array([1, 3, 2])
>>> x = np.arange(4 * lens.sum()).reshape((4, lens.sum())).astype(float)
>>> x
array([[  0.,   1.,   2.,   3.,   4.,   5.],
       [  6.,   7.,   8.,   9.,  10.,  11.],
       [ 12.,  13.,  14.,  15.,  16.,  17.],
       [ 18.,  19.,  20.,  21.,  22.,  23.]])
# I want to compute:
>>> result
array([[  0.,   6.,   9.],
       [  6.,  24.,  21.],
       [ 12.,  42.,  33.],
       [ 18.,  60.,  45.]])
# 0 = 0
# 6 = 1 + 2 + 3
# ...
# 45 = 22 + 23
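For reference, these semantics can be spelled out with an explicit Python loop over the slices (a sketch reusing the example arrays above; `loop_method` is a hypothetical name, not part of the question):

```python
import numpy as np

def loop_method(x, lens):
    # Start index of each slice: exclusive prefix sums of the lengths.
    starts = np.concatenate(([0], lens.cumsum()[:-1]))
    out = np.empty((x.shape[0], lens.size))
    for j, (s, n) in enumerate(zip(starts, lens)):
        out[:, j] = x[:, s:s + n].sum(axis=1)  # sum one contiguous slice
    return out

lens = np.array([1, 3, 2])
x = np.arange(4 * lens.sum()).reshape((4, lens.sum())).astype(float)
print(loop_method(x, lens))
```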

The two ways that come to mind are:

a) Use cumsum and fancy indexing:

def cumsum_method(x, lens):
    xc = x.cumsum(1)              # running sum along axis 1
    lc = lens.cumsum() - 1        # index of the last column of each slice
    res = xc[:, lc]               # fancy indexing copies, so in-place ops are safe
    res[:, 1:] -= xc[:, lc[:-1]]  # subtract the previous slice's running total
    return res
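Tracing the intermediate arrays on the example makes the trick visible: `lens.cumsum() - 1` points at the last column of each slice in the running sum, and subtracting the previous slice's endpoint isolates each total (a sketch using the example arrays above):

```python
import numpy as np

lens = np.array([1, 3, 2])
x = np.arange(4 * lens.sum()).reshape((4, lens.sum())).astype(float)

xc = x.cumsum(1)              # running sums along axis 1
lc = lens.cumsum() - 1        # last column of each slice: [0, 3, 5]
res = xc[:, lc]               # fancy indexing returns a copy, so the
res[:, 1:] -= xc[:, lc[:-1]]  # in-place subtraction is safe
print(res[0])  # first row of the desired result: [0. 6. 9.]
```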

b) Use bincount and intelligently generate the appropriate bins:

def bincount_method(x, lens):
    # Label every element with a globally unique (row, slice) bin id,
    # then sum the weights that fall into each bin.
    bins = np.arange(lens.size).repeat(lens) + \
        np.arange(x.shape[0])[:, None] * lens.size
    return np.bincount(bins.flat, weights=x.flat).reshape((-1, lens.size))
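The bin labels assign every element of `x.flat` to a unique (row, slice) pair; on the example this looks like (a sketch using the example `lens` and a 4-row `x`):

```python
import numpy as np

lens = np.array([1, 3, 2])
nrows = 4
# Within one row: slice index repeated by slice length -> [0, 1, 1, 1, 2, 2]
row_bins = np.arange(lens.size).repeat(lens)
# Offset each row by lens.size so bins are globally unique across rows.
bins = row_bins + np.arange(nrows)[:, None] * lens.size
print(bins)
# [[ 0  1  1  1  2  2]
#  [ 3  4  4  4  5  5]
#  [ 6  7  7  7  8  8]
#  [ 9 10 10 10 11 11]]
```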

Timing these two on a large input, the cumsum method performed slightly better:

>>> lens = np.random.randint(1, 100, 100)
>>> x = np.random.random((100000, lens.sum()))
>>> %timeit cumsum_method(x, lens)
1 loops, best of 3: 3 s per loop
>>> %timeit bincount_method(x, lens)
1 loops, best of 3: 3.9 s per loop

Is there an obviously more efficient way that I'm missing? It seems like a native c call would be faster because it wouldn't require allocating the cumsum or the bins array. A numpy builtin function that does something close to this could likely be better than (a) or (b). I couldn't find anything through searching and looking through the documentation.

Note, this is similar to this question, but the summation intervals aren't regular.

1 Answer

#1



You can use np.add.reduceat:

>>> np.add.reduceat(x, [0, 1, 4], axis=1)
array([[  0.,   6.,   9.],
       [  6.,  24.,  21.],
       [ 12.,  42.,  33.],
       [ 18.,  60.,  45.]])

The list of indices [0, 1, 4] means: "sum the slices 0:1, 1:4 and 4:". You could generate these values from lens using np.hstack(([0], lens[:-1])).cumsum().

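Concretely, the start indices are the exclusive prefix sums of `lens`:

```python
import numpy as np

lens = np.array([1, 3, 2])
# Drop the last length, prepend 0, then take the running sum:
# each slice starts where the previous lengths end.
starts = np.hstack(([0], lens[:-1])).cumsum()
print(starts)  # -> [0 1 4]
```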
Even factoring in the calculation of the indices from lens, a reduceat method is likely to be significantly faster than alternative methods:

def reduceat_method(x, lens):
    i = np.hstack(([0], lens[:-1])).cumsum()  # slice start indices
    return np.add.reduceat(x, i, axis=1)

lens = np.random.randint(1, 100, 100)
x = np.random.random((1000, lens.sum()))

%timeit reduceat_method(x, lens)
# 100 loops, best of 3: 4.89 ms per loop

%timeit cumsum_method(x, lens)
# 10 loops, best of 3: 35.8 ms per loop

%timeit bincount_method(x, lens)
# 10 loops, best of 3: 43.6 ms per loop
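One caveat worth noting (documented behavior of `np.ufunc.reduceat`, not raised in the answer): if `lens` may contain zeros, two consecutive start indices coincide, and for `indices[i] >= indices[i+1]` reduceat yields `x[indices[i]]` rather than an empty (zero) sum:

```python
import numpy as np

x = np.arange(6, dtype=float)
# lens = [2, 0, 4] would give starts [0, 2, 2]; the zero-length slice
# does not sum to 0 -- reduceat returns x[2] for the repeated index.
# Slices: x[0:2].sum() == 1, x[2] == 2 (empty slice!), x[2:6].sum() == 14
print(np.add.reduceat(x, [0, 2, 2]))
```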
