I'm trying to multiply each of the terms in a 2D array by the corresponding terms in a 1D array. This is very easy if I want to multiply every column by the 1D array, as shown in the numpy.multiply function. But I want to do the opposite, multiply each term in the row. In other words I want to multiply:
[1,2,3]   [0]
[4,5,6] * [1]
[7,8,9]   [2]
and get
[0,0,0]
[4,5,6]
[14,16,18]
but instead I get
[0,2,6]
[0,5,12]
[0,8,18]
Does anyone know if there's an elegant way to do that with numpy? Thanks a lot, Alex
5 Answers
#1
53
Normal multiplication like you showed:
>>> import numpy as np
>>> m = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> c = np.array([0,1,2])
>>> m * c
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])
If you add an axis, it will multiply the way you want:
>>> m * c[:, np.newaxis]
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
You could also transpose twice:
>>> (m.T * c).T
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
#2
15
You could also use matrix multiplication (aka dot product):
import numpy

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [0, 1, 2]
c = numpy.diag(b)
numpy.dot(c, a)
Which is more elegant is probably a matter of taste.
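As a quick sanity check (a sketch, not part of the original answer), the diagonal-matrix product agrees with the broadcasting approach:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([0, 1, 2])

# left-multiplying by diag(b) scales row i of a by b[i]
via_diag = np.diag(b) @ a
via_broadcast = a * b[:, np.newaxis]
assert np.array_equal(via_diag, via_broadcast)
```

Note that building `diag(b)` materializes an n×n matrix, so for large arrays the broadcasting versions use far less memory.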
#3
12
Yet another trick (as of v1.6):

import numpy as np

A = np.arange(1, 10).reshape(3, 3)
b = np.arange(3)
np.einsum('ij,i->ij', A, b)
I'm proficient with numpy broadcasting (newaxis), but I'm still finding my way around this new einsum tool, so I had to play around a bit to find this solution.
Timings (using IPython %timeit):

einsum:    4.9 µs
transpose: 8.1 µs
newaxis:   8.35 µs
dot-diag:  10.5 µs
Incidentally, changing the i to a j, np.einsum('ij,j->ij', A, b), produces the matrix that Alex does not want. And np.einsum('ji,j->ji', A, b) does, in effect, the double transpose.
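A small sketch verifying that the einsum spelling matches the other two approaches from the accepted answer:

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)
b = np.arange(3)

# all three should produce the same row-scaled matrix
r1 = np.einsum('ij,i->ij', A, b)
r2 = A * b[:, np.newaxis]
r3 = (A.T * b).T
assert np.array_equal(r1, r2) and np.array_equal(r2, r3)
```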
#4
8
I've compared the different options for speed and found that – much to my surprise – all options (except einsum for small n and diag for large n) are equally fast:
Code to reproduce the plot:
import numpy
import perfplot


def newaxis(data):
    A, b = data
    return A * b[:, numpy.newaxis]


def double_transpose(data):
    A, b = data
    return (A.T * b).T


def double_transpose_contiguous(data):
    A, b = data
    return numpy.ascontiguousarray((A.T * b).T)


def diag_dot(data):
    A, b = data
    return numpy.dot(numpy.diag(b), A)


def einsum(data):
    A, b = data
    return numpy.einsum('ij,i->ij', A, b)


perfplot.show(
    setup=lambda n: (numpy.random.rand(n, n), numpy.random.rand(n)),
    kernels=[
        newaxis, double_transpose, double_transpose_contiguous, diag_dot,
        einsum
    ],
    n_range=[2**k for k in range(10)],
    logx=True,
    logy=True,
    xlabel='len(A), len(b)'
)
#5
-3
Why don't you just do
>>> m = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> c = np.array([0,1,2])
>>> (m.T * c).T
??