I think of CuPy as "CUDA for NumPy"; it installs with pip install cupy. Suppose we write
import numpy as np
import cupy as cp
then np.XXX can generally be replaced directly with cp.XXX.
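As a small illustration of what that swap looks like in practice (my sketch, not from the original post), data is moved to the GPU with cp.asarray and pulled back to an ordinary NumPy array with cp.asnumpy; the array names here are just examples:

# Minimal sketch of the np -> cp swap
import numpy as np
import cupy as cp

a_cpu = np.random.rand(1000, 1000)   # ordinary NumPy array in host memory
a_gpu = cp.asarray(a_cpu)            # copy it to GPU memory
b_gpu = a_gpu @ a_gpu                # same @ operator, now running on the GPU
b_cpu = cp.asnumpy(b_gpu)            # copy the result back as a NumPy array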
NumPy itself is already plenty fast: it is written in C, and every call makes full use of the system's resources. To verify this, we can benchmark matrix multiplication and compare NumPy's speed and resource usage under multithreaded concurrency, multiprocess parallelism, and a plain single-threaded loop. The code is:
# th_pr_numpy.py
from threading import Thread
from multiprocessing import Process
from time import time as Now
import numpy as np
import sys

N = 3000

def MatrixTest(n, name, t):
    # build an n x n random matrix, square it, and report the elapsed time
    x = np.random.rand(n, n)
    x = x @ x
    print(f"{name} @ {t} : {Now()-t}")

def thTest():
    # five concurrent threads, each multiplying a 3000 x 3000 matrix
    t = Now()
    for i in range(5):
        Thread(target=MatrixTest, args=[N, f'th{i}', t]).start()

def prTest():
    # five separate processes doing the same work
    t = Now()
    for i in range(5):
        Process(target=MatrixTest, args=[N, f'pr{i}', t]).start()

if __name__ == "__main__":
    if sys.argv[1] == "th":
        thTest()
    elif sys.argv[1] == "pr":
        prTest()
    else:
        # default: five runs in a single thread, one after another
        t = Now()
        for i in range(5):
            MatrixTest(N, "single", t)
The results are:
(base) E:\Documents\00\1108>python th_pr_numpy.py th
th0 @ 1636357422.3703225 : 15.23965334892273
th1 @ 1636357422.3703225 : 17.726242780685425
th2 @ 1636357422.3703225 : 19.001763582229614
th3 @ 1636357422.3703225 : 19.06676197052002
th4 @ 1636357422.3703225 : 19.086761951446533

(base) E:\Documents\00\1108>python th_pr_numpy.py pr
pr3 @ 1636357462.4170427 : 4.031360864639282
pr0 @ 1636357462.4170427 : 4.55387806892395
pr1 @ 1636357462.4170427 : 4.590881824493408
pr4 @ 1636357462.4170427 : 4.674877643585205
pr2 @ 1636357462.4170427 : 4.702877759933472

(base) E:\Documents\00\1108>python th_pr_numpy.py single
single @ 1636357567.8899782 : 0.36359524726867676
single @ 1636357567.8899782 : 0.8137514591217041
single @ 1636357567.8899782 : 1.237830400466919
single @ 1636357567.8899782 : 1.683635950088501
single @ 1636357567.8899782 : 2.098794937133789
So when working with NumPy, don't bother with Python's built-in threading and multiprocessing; they only become a burden, because NumPy's matrix multiplication already runs on a multithreaded BLAS backend that keeps the CPU cores busy, and extra threads or processes just oversubscribe them. If anything, this comparison underlines how strong NumPy's performance already is.
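If you want to confirm that it really is NumPy's BLAS backend doing the multithreading, a rough check (my addition, assuming the threadpoolctl package is installed) is to cap the BLAS thread pool at one thread and time the same multiplication again:

# Rough check: matmul time with default BLAS threads vs. a single thread
import numpy as np
from threadpoolctl import threadpool_limits
from time import time as Now

x = np.random.rand(3000, 3000)

t = Now()
x @ x
print(f"default threads: {Now()-t:.3f}s")

t = Now()
with threadpool_limits(limits=1):   # force single-threaded BLAS
    x @ x
print(f"1 thread: {Now()-t:.3f}s")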
Next to CuPy, however, this speed looks rather pale. The script below runs five rounds of creating a 5000x5000 random matrix and multiplying it by itself, once with NumPy and once with CuPy:
# np_cp.py
import numpy as np
import cupy as cp
import sys
from time import time as Now

N = 5000

def testNp(t):
    # five rounds of 5000 x 5000 random matrix multiplication on the CPU
    for i in range(5):
        x = np.random.rand(N, N)
        x = x @ x
    print(f"np:{Now()-t}")

def testCp(t):
    # the same work on the GPU: only the module prefix changes
    for i in range(5):
        x = cp.random.rand(N, N)
        x = x @ x
    print(f"cp:{Now()-t}")

if __name__ == "__main__":
    t = Now()
    if sys.argv[1] == 'np':
        testNp(t)
    elif sys.argv[1] == 'cp':
        testCp(t)
The final results are:
(base) E:\Documents\00\1108>python np_cp.py np
np:8.914457082748413

(base) E:\Documents\00\1108>python np_cp.py cp
cp:0.545649528503418
Even more impressively, when the matrix size is raised from 5000x5000 to 15000x15000, the measured CuPy time hardly changes: as long as the data still fits in GPU memory, the GPU's massive parallelism absorbs the extra rows and columns of multiplications, so the wall-clock time grows far more slowly than the amount of work.
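One caveat worth adding (my note, not from the original article): CuPy launches GPU kernels asynchronously, so the clock can be read before the GPU has actually finished. For a stricter measurement you would synchronize the device before timing, roughly like this:

# Sketch of a synchronized timing loop; cp.cuda.Device().synchronize()
# blocks until all queued GPU work has completed.
import cupy as cp
from time import time as Now

N = 5000
t = Now()
for i in range(5):
    x = cp.random.rand(N, N)
    x = x @ x
cp.cuda.Device().synchronize()   # wait for the GPU before reading the clock
print(f"cp (synchronized): {Now()-t}")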
That concludes this walkthrough of how to use the GPU from Python for a large speedup.
Original link: https://blog.csdn.net/m0_37816922/article/details/121223407