What declarations should I be incorporating with a logic function / index operation so that Cython does the heavy lifting?
I have two large rasters in the form of numpy arrays of equal size. The first array contains vegetation index values and the second array contains field IDs. The goal is to average vegetation index values by field. Both arrays have pesky nodata values (-9999) that I would like to ignore.
Currently the function takes over 60 seconds to execute, which normally I wouldn't mind, but I'll be processing potentially hundreds of images, so even a 30 second improvement would be significant. I've been exploring Cython as a way to help speed things up, using the Cython numpy tutorial as a guide.
test_cy.pyx code:
import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)  # turn off bounds-checking for entire function
@cython.wraparound(False)   # turn off negative index wrapping for entire function
cpdef test():
    cdef np.ndarray[np.int16_t, ndim=2] ndvi_array = np.load("Z:cython_test/data/ndvi.npy")
    cdef np.ndarray[np.int16_t, ndim=2] field_array = np.load("Z:cython_test/data/field_array.npy")
    cdef np.ndarray[np.int16_t, ndim=1] unique_field = np.unique(field_array)
    unique_field = unique_field[unique_field != -9999]

    cdef int field_id
    cdef np.ndarray[np.int16_t, ndim=1] f_ndvi_values
    cdef double f_avg

    for field_id in unique_field:
        f_ndvi_values = ndvi_array[np.logical_and(field_array == field_id, ndvi_array != -9999)]
        f_avg = np.mean(f_ndvi_values)
Setup.py code:
try:
    from setuptools import setup
    from setuptools import Extension
except ImportError:
    from distutils.core import setup
    from distutils.extension import Extension
from Cython.Build import cythonize
import numpy

setup(ext_modules=cythonize('test_cy.pyx'),
      include_dirs=[numpy.get_include()])
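The extension is then built in the usual way (assuming the standard in-place build):

python setup.py build_ext --inplace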
After some researching and running:
cython -a test_cy.pyx
It seems the index operation ndvi_array[np.logical_and(field_array == field_id, ndvi_array != -9999)] is the bottleneck and is still relying on Python. I suspect I'm missing some vital declarations here; including ndim didn't have any effect.
I'm fairly new to numpy as well, so I'm probably missing something obvious.
1 Answer
#1
Your problem looks fairly vectorizable to me, so Cython might not be the best approach. (Cython shines when there are unavoidable fine-grained loops.) As your dtype is int16, there is only a limited range of possible labels, so using np.bincount should be fairly efficient. Try something like the following (this assumes all your valid values are >= 0; if that is not the case you'd have to shift, or (cheaper) view-cast to uint16 before using bincount, which is safe since we are not doing any arithmetic on the labels):
mask = (ndvi_array != -9999) & (field_array != -9999)
nd = ndvi_array[mask]
fi = field_array[mask]
counts = np.bincount(fi, minlength=2**15)
sums = np.bincount(fi, nd, minlength=2**15)
valid = counts != 0
avgs = sums[valid] / counts[valid]
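For reference, a minimal sketch of the uint16 view-cast mentioned above; it also recovers which field ID each average corresponds to (field_ids is an added name here, assuming all labels fit in uint16):

# reinterpret the int16 labels as uint16 so bincount accepts them even when
# a label's sign bit is set; no arithmetic is done on the labels, so only
# the bit pattern matters
fi_u = field_array[mask].view(np.uint16)
nd = ndvi_array[mask]

counts = np.bincount(fi_u, minlength=2**16)
sums = np.bincount(fi_u, nd, minlength=2**16)

valid = counts != 0
field_ids = np.nonzero(valid)[0]   # label value for each entry of avgs
avgs = sums[valid] / counts[valid]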