NumPy: average of the values corresponding to unique coordinate positions

Time: 2022-07-19 12:08:00

So, I have been browsing * for quite some time now, but I can't seem to find the solution to my problem.

Consider this:

import numpy as np
coo = np.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])
values = np.array([1, 2, 4, 2, 1, 6, 1])

The coo array contains the (x, y) coordinate positions, x = (1, 2, 3, 3, 1, 5, 1) and y = (2, 3, 4, 4, 2, 6, 2), and the values array holds some sort of data for each grid point.

Now I want to get the average of all values for each unique grid point. For example, the coordinate (1, 2) occurs at positions (0, 4, 6), so for this point I want the average of values[[0, 4, 6]].
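
As a quick check with the arrays above, the desired result for that coordinate would be:

>>> values[[0, 4, 6]].mean()
1.0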

How could I get this for all unique grid points?

4 solutions

#1 (score: 3)

You can sort coo with np.lexsort to bring the duplicate ones in succession. Then run np.diff along the rows to get a mask of starts of unique XY's in the sorted version. Using that mask, you can create an ID array that would have the same ID for the duplicates. The ID array can then be used with np.bincount to get the summation of all values with the same ID and also their counts and thus the average values, as the final output. Here's an implementation to go along those lines -

# Use lexsort to bring duplicate coo XY's in succession
sortidx = np.lexsort(coo.T)
sorted_coo = coo[sortidx]

# Get mask of start of each unique coo XY
unqID_mask = np.append(True, np.any(np.diff(sorted_coo, axis=0), axis=1))

# Tag/ID each coo XY based on their uniqueness among others
ID = unqID_mask.cumsum() - 1

# Get unique coo XY's
unq_coo = sorted_coo[unqID_mask]

# Finally use bincount to get the summation of all values within the same ID
# and their counts, and thus the average values
average_values = np.bincount(ID, values[sortidx]) / np.bincount(ID)
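
To make the ID bookkeeping concrete, these are the intermediate arrays the code above produces for the example data (my own check, not part of the original answer):

>>> ID
array([0, 0, 0, 1, 2, 2, 3])
>>> np.bincount(ID, values[sortidx])   # per-ID sums
array([3., 2., 6., 6.])
>>> np.bincount(ID)                    # per-ID counts
array([3, 1, 2, 1])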

Sample run -

In [65]: coo
Out[65]: 
array([[1, 2],
       [2, 3],
       [3, 4],
       [3, 4],
       [1, 2],
       [5, 6],
       [1, 2]])

In [66]: values
Out[66]: array([1, 2, 4, 2, 1, 6, 1])

In [67]: unq_coo
Out[67]: 
array([[1, 2],
       [2, 3],
       [3, 4],
       [5, 6]])

In [68]: average_values
Out[68]: array([ 1.,  2.,  3.,  6.])

#2 (score: 2)

You can use np.where:

>>> values[np.where((coo == [1, 2]).all(1))].mean()
1.0
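
To get the mean for every unique grid point with this approach, one option (my own sketch, not part of the original answer) is to loop over np.unique(coo, axis=0):

>>> unq = np.unique(coo, axis=0)
>>> np.array([values[(coo == xy).all(1)].mean() for xy in unq])
array([1., 2., 3., 6.])

Note that this scans coo once per unique point, so the bincount-based answers scale better for large inputs.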

#3 (score: 1)

It is very likely going to be faster to flatten your indices, i.e.:

# + 1 so that distinct (x, y) pairs can never map to the same flat index
flat_index = coo[:, 0] * (np.max(coo[:, 1]) + 1) + coo[:, 1]

then use np.unique on it:

unq, unq_idx, unq_inv, unq_cnt = np.unique(flat_index,
                                           return_index=True,
                                           return_inverse=True,
                                           return_counts=True)
unique_coo = coo[unq_idx]
unique_mean = np.bincount(unq_inv, values) / unq_cnt

This should be faster than the similar approach using lexsort.

But under the hood the method is virtually the same.
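
For the arrays in the question, this gives the same unique coordinates and means as the lexsort approach (my own check):

>>> unique_coo
array([[1, 2],
       [2, 3],
       [3, 4],
       [5, 6]])
>>> unique_mean
array([1., 2., 3., 6.])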

#4 (score: 1)

This is a simple one-liner using the numpy_indexed package (disclaimer: I am its author):

import numpy_indexed as npi
unique, mean = npi.group_by(coo).mean(values)
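
For the example data this should return the same grouping as the other answers (assuming group_by yields the groups in sorted key order, which I believe is its default):

>>> unique
array([[1, 2],
       [2, 3],
       [3, 4],
       [5, 6]])
>>> mean
array([1., 2., 3., 6.])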

It should be comparable to the currently accepted answer in performance, as it does similar things under the hood, but all in a well-tested package with a nice interface.
