First, I know that there are many similarly themed questions on SO, but I can't find a solution after a day of searching, reading, and testing.
I have a Python function which calculates the pairwise correlations of a NumPy ndarray (m x n). I was originally doing this purely in NumPy, but the function also computed the reciprocal pairs (i.e. as well as calculating the correlation between rows A and B of the matrix, it calculated the correlation between rows B and A too). So I took a slightly different approach that is about twice as fast for matrices of large m (realistic sizes for my problem are m ~ 8000).
This was great but still a tad slow, as there will be many such matrices, and doing them all will take a long time. So I started investigating Cython as a way to speed things up. I understand from what I've read that Cython won't really speed up NumPy all that much. Is this true, or is there something I am missing?
I think the bottlenecks below are np.sqrt, np.dot, the call to the ndarray's .T attribute, and np.absolute. I've seen people use sqrt from libc.math to replace np.sqrt, so I suppose my first question is: are there similar functions for the other calls in libc.math that I can use? I am afraid that I am completely and utterly unfamiliar with C/C++/C# or any of the C-family languages, so this typing and Cython business is very new territory to me; apologies if the reason/solution is obvious.
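From what I've seen elsewhere, the cimport for sqrt looks like the sketch below, and libc.math does at least also have fabs for absolute values; note these operate on single C doubles, not whole arrays, and I don't know of C-level equivalents for np.dot or .T:

```cython
from libc.math cimport sqrt, fabs

cdef double x = 2.0
cdef double root = sqrt(x)   # scalar C sqrt, no Python-call overhead
cdef double mag = fabs(-x)   # scalar absolute value
```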
Failing that, any ideas about what I could do to get some performance gains?
Below are my pyx code, the setup code, and the call to the pyx function. I don't know if it's important, but when I call python setup.py build_ext --inplace it works, but there are a lot of warnings which I don't really understand. Could these also be a reason why I am not seeing a speed improvement?
Any help is very much appreciated, and sorry for the super long post.
setup.py
from distutils.core import setup
from distutils.extension import Extension
import numpy
from Cython.Distutils import build_ext

setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [Extension("calcBrownCombinedP",
                             ["calcBrownCombinedP.pyx"],
                             include_dirs=[numpy.get_include()])]
)
and the output of setup:
>python setup.py build_ext --inplace
running build_ext
cythoning calcBrownCombinedP.pyx to calcBrownCombinedP.c
building 'calcBrownCombinedP' extension
C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -mdll -O -Wall -IC:\Anaconda\lib\site-packages\numpy\core\include -IC:\Anaconda\include -IC:\Anaconda\PC -c calcBrownCombinedP.c -o build\temp.win-amd64-2.7\Release\calcbrowncombinedp.o
In file included from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ndarraytypes.h:1728:0,
from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ndarrayobject.h:17,
from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/arrayobject.h:15,
from calcBrownCombinedP.c:340:
C:\Anaconda\lib\site-packages\numpy\core\include/numpy/npy_deprecated_api.h:8:9: note: #pragma message: C:\Anaconda\lib\site-packages\numpy\core\include/numpy/npy_deprecated_api.h(8) : Warning Msg: Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
calcBrownCombinedP.c: In function '__Pyx_RaiseTooManyValuesError':
calcBrownCombinedP.c:4473:18: warning: unknown conversion type character 'z' in format [-Wformat]
calcBrownCombinedP.c:4473:18: warning: too many arguments for format [-Wformat-extra-args]
calcBrownCombinedP.c: In function '__Pyx_RaiseNeedMoreValuesError':
calcBrownCombinedP.c:4479:18: warning: unknown conversion type character 'z' in format [-Wformat]
calcBrownCombinedP.c:4479:18: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
calcBrownCombinedP.c:4479:18: warning: too many arguments for format [-Wformat-extra-args]
In file included from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ndarrayobject.h:26:0,
from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/arrayobject.h:15,
from calcBrownCombinedP.c:340:
calcBrownCombinedP.c: At top level:
C:\Anaconda\lib\site-packages\numpy\core\include/numpy/__multiarray_api.h:1594:1: warning: '_import_array' defined but not used [-Wunused-function]
In file included from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ufuncobject.h:311:0,
from calcBrownCombinedP.c:341:
C:\Anaconda\lib\site-packages\numpy\core\include/numpy/__ufunc_api.h:236:1: warning: '_import_umath' defined but not used [-Wunused-function]
writing build\temp.win-amd64-2.7\Release\calcBrownCombinedP.def
C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -shared -s build\temp.win-amd64-2.7\Release\calcbrowncombinedp.o build\temp.win-amd64-2.7\Release\calcBrownCombinedP.def -LC:\Anaconda\libs -LC:\Anaconda\PCbuild\amd64 -lpython27 -lmsvcr90 -o C:\cygwin64\home\Davy\SNPsets\src\calcBrownCombinedP.pyd
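Incidentally, the first note above seems to suggest its own fix; I guess an Extension like this (untested on my part) would silence the deprecated-API message:

```python
from distutils.extension import Extension
import numpy

# same Extension as in my setup.py, plus the macro the warning itself mentions
ext = Extension("calcBrownCombinedP",
                ["calcBrownCombinedP.pyx"],
                include_dirs=[numpy.get_include()],
                define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")])
```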
the pyx code - 'calcBrownCombinedP.pyx'
import numpy as np
cimport numpy as np
from scipy import stats

DTYPE = np.int
ctypedef np.int_t DTYPE_t

def calcBrownCombinedP(np.ndarray genotypeArray):
    cdef int nSNPs, i
    cdef np.ndarray ms, datam, datass, d, rs, temp
    cdef float runningSum, sigmaSq, E, df
    nSNPs = genotypeArray.shape[0]
    ms = genotypeArray.mean(axis=1)[(slice(None,None,None),None)]
    datam = genotypeArray - ms
    datass = np.sqrt(stats.ss(datam,axis=1))
    runningSum = 0
    for i in xrange(nSNPs):
        temp = np.dot(datam[i:],datam[i].T)
        d = (datass[i:]*datass[i])
        rs = temp / d
        rs = np.absolute(rs)[1:]
        runningSum += sum(rs*(3.25+(0.75*rs)))
    sigmaSq = 4*nSNPs+2*runningSum
    E = 2*nSNPs
    df = (2*(E*E))/sigmaSq
    runningSum = sigmaSq/(2*E)
    return runningSum
The code that tests the above against some pure Python - 'test.py'
import numpy as np
from scipy import stats
import random
import time
from calcBrownCombinedP import calcBrownCombinedP
from PycalcBrownCombinedP import PycalcBrownCombinedP

ms = [10,50,100,500,1000,5000]
for m in ms:
    print '---testing implentation with m = {0}---'.format(m)
    genotypeArray = np.empty((m,20),dtype=int)
    for i in xrange(m):
        genotypeArray[i] = [random.randint(0,2) for j in xrange(20)]
    print genotypeArray.shape
    start = time.time()
    print calcBrownCombinedP(genotypeArray)
    print 'cython implementation took {0}'.format(time.time() - start)
    start = time.time()
    print PycalcBrownCombinedP(genotypeArray)
    print 'python implementation took {0}'.format(time.time() - start)
and the output of that code is:
---testing implentation with m = 10---
(10L, 20L)
2.13660168648
cython implementation took 0.000999927520752
2.13660167749
python implementation took 0.000999927520752
---testing implentation with m = 50---
(50L, 20L)
8.82721138
cython implementation took 0.00399994850159
8.82721130234
python implementation took 0.00500011444092
---testing implentation with m = 100---
(100L, 20L)
16.7438983917
cython implementation took 0.0139999389648
16.7438965333
python implementation took 0.0120000839233
---testing implentation with m = 500---
(500L, 20L)
80.5343856812
cython implementation took 0.183000087738
80.5343694046
python implementation took 0.161000013351
---testing implentation with m = 1000---
(1000L, 20L)
160.122573853
cython implementation took 0.615000009537
160.122491308
python implementation took 0.598000049591
---testing implentation with m = 5000---
(5000L, 20L)
799.813842773
cython implementation took 10.7159998417
799.813880445
python implementation took 11.2510001659
Lastly, the pure Python implementation - 'PycalcBrownCombinedP.py'
import numpy as np
from scipy import stats

def PycalcBrownCombinedP(genotypeArray):
    nSNPs = genotypeArray.shape[0]
    ms = genotypeArray.mean(axis=1)[(slice(None,None,None),None)]
    datam = genotypeArray - ms
    datass = np.sqrt(stats.ss(datam,axis=1))
    runningSum = 0
    for i in xrange(nSNPs):
        temp = np.dot(datam[i:],datam[i].T)
        d = (datass[i:]*datass[i])
        rs = temp / d
        rs = np.absolute(rs)[1:]
        runningSum += sum(rs*(3.25+(0.75*rs)))
    sigmaSq = 4*nSNPs+2*runningSum
    E = 2*nSNPs
    df = (2*(E*E))/sigmaSq
    runningSum = sigmaSq/(2*E)
    return runningSum
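As a sanity check (with a made-up small matrix, not my real data), the per-row quantities inside the loop are just Pearson correlations, which np.corrcoef reproduces:

```python
import numpy as np

# small reproducible stand-in for genotypeArray (m x n)
rng = np.random.RandomState(0)
data = rng.randint(0, 3, size=(6, 20)).astype(float)

# same normalization as in the function above
ms = data.mean(axis=1)[:, None]
datam = data - ms
datass = np.sqrt((datam ** 2).sum(axis=1))  # stats.ss is just a row-wise sum of squares

# one iteration of the loop: correlations of row i with rows i..m-1
i = 0
rs = np.dot(datam[i:], datam[i]) / (datass[i:] * datass[i])

# should equal the corresponding row of the full correlation matrix
expected = np.corrcoef(data)[i, i:]
print(np.allclose(rs, expected))  # True
```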
1 Answer
Profiling with kernprof shows the bottleneck is the last line of the loop:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
<snip>
16 5000 6145280 1229.1 86.6 runningSum += sum(rs*(3.25+(0.75*rs)))
This is no surprise, as you're using the Python built-in function sum in both the Python and Cython versions. Switching to np.sum speeds the code up by a factor of 4.5 when the input array has shape (5000, 20).
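A minimal sketch of that one-line change, with made-up rs values; the two calls agree numerically, only the speed differs:

```python
import numpy as np

rng = np.random.RandomState(1)
rs = rng.rand(5000)  # stand-in for one iteration's correlation values

slow = sum(rs * (3.25 + 0.75 * rs))     # Python built-in: iterates element by element
fast = np.sum(rs * (3.25 + 0.75 * rs))  # NumPy reduction: one vectorized call

print(np.isclose(slow, fast))  # True
```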
If a small loss in accuracy is alright, then you can leverage linear algebra to speed up the final line further:
np.sum(rs * (3.25 + 0.75 * rs))
is really a vector dot product, i.e.
np.dot(rs, 3.25 + 0.75 * rs)
This is still suboptimal, as it loops over rs three times and constructs two rs-sized temporary arrays. Using elementary algebra, this expression can be rewritten as
3.25 * np.sum(rs) + .75 * np.dot(rs, rs)
which not only gives the original result without the round-off error of the previous version, but also loops over rs only twice and uses constant memory.(*)
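A quick numerical check, with made-up rs values, that the three forms agree up to round-off:

```python
import numpy as np

rng = np.random.RandomState(2)
rs = rng.rand(1000)  # stand-in for one iteration's |correlations|

a = np.sum(rs * (3.25 + 0.75 * rs))            # original expression
b = np.dot(rs, 3.25 + 0.75 * rs)               # as a vector dot product
c = 3.25 * np.sum(rs) + 0.75 * np.dot(rs, rs)  # algebraically expanded form

print(np.allclose(a, b), np.allclose(b, c))  # True True
```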
The bottleneck is now np.dot, so installing a better BLAS library is going to buy you more than rewriting the whole thing in Cython.
(*) Or logarithmic memory in the very latest NumPy, which has a recursive reimplementation of np.sum that is faster than the old iterative one.