I am new to TensorFlow. I am working with Keras, but to create a custom loss function I am more or less forced to write a function in TensorFlow. I get stuck at the point where I have to translate the following NumPy for loop into TensorFlow syntax:
for j in range(grid):
    for k in range(modes):
        for l in range(dim):
            for m in range(dim):
                lorentz[:,j,l,m] += 1J*osc_stre[:,l,m,k]/(energies[j]-e_j[:,k])
                if l == m == k:
                    lorentz[:,j,l,m] += 1
Here you can see the initial shapes of the arrays:
e_j = zeros([sample_nr,modes],dtype='complex')
osc_stre = zeros([sample_nr,dim,dim,modes],dtype='complex')
lorentz = zeros([sample_nr,grid,dim,dim],dtype='complex')
energies has the shape (grid,), so energies[j] is a scalar.
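To make the setup concrete, here is a self-contained NumPy version of the loop; the sizes sample_nr, grid, modes, and dim are placeholder values, and the random inputs only stand in for the real data:

```python
import numpy as np

# Placeholder sizes; the real ones come from the data.
sample_nr, grid, modes, dim = 2, 3, 2, 2

rng = np.random.default_rng(0)
e_j = rng.standard_normal((sample_nr, modes)) + 1j * rng.standard_normal((sample_nr, modes))
osc_stre = rng.standard_normal((sample_nr, dim, dim, modes)).astype(complex)
energies = np.linspace(1.0, 2.0, grid)
lorentz = np.zeros((sample_nr, grid, dim, dim), dtype=complex)

# The quadruple loop from the question, unchanged.
for j in range(grid):
    for k in range(modes):
        for l in range(dim):
            for m in range(dim):
                lorentz[:, j, l, m] += 1j * osc_stre[:, l, m, k] / (energies[j] - e_j[:, k])
                if l == m == k:
                    lorentz[:, j, l, m] += 1
```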
Is it possible to handle this problem with TensorFlow? Can anybody give me a hint on how to translate this into TensorFlow syntax? I have already tried a couple of things, like the TensorFlow while loop, but one of the big problems is that TensorFlow tensors do not support item assignment.
EDIT:
I think I've come up with a solution for this simplified version of the problem:
for j in range(grid):
    for k in range(modes):
        lorentz[j] += 1J*osc_stre[k]/(energies[j]-e_j[k])
        if k == 0:
            lorentz[j] += 1
The solution:
import tensorflow as tf

lorentz_list = []
tf_one = tf.ones([1], tf.complex64)
tf_i = tf.cast(tf.complex(0., 1.), tf.complex64)
energies_float = tf.cast(energies, tf.float32)
energies_complex = tf.complex(energies_float, tf.zeros([energy_grid], tf.float32))
for j in range(energy_grid):
    lorentz_list.append(tf.add(tf_one, tf.reduce_sum(
        tf.multiply(tf_i, tf.divide(osc_stre_tot, tf.subtract(energies_complex[j], e_j))), -1)))
lorentz = tf.stack(lorentz_list)
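For each grid point j this builds 1 + Σ_k 1J*osc_stre_tot[k]/(energies[j] - e_j[k]). A NumPy sketch (assuming, as the TF code suggests, that osc_stre_tot and e_j are 1-D arrays of length modes) shows the stacked result agrees with the simplified loop:

```python
import numpy as np

grid, modes = 4, 3  # placeholder sizes
rng = np.random.default_rng(1)
osc_stre = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
e_j = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
energies = np.linspace(0.5, 1.5, grid)

# Loop form of the simplified problem.
lorentz_loop = np.zeros(grid, dtype=complex)
for j in range(grid):
    for k in range(modes):
        lorentz_loop[j] += 1j * osc_stre[k] / (energies[j] - e_j[k])
        if k == 0:
            lorentz_loop[j] += 1

# Vectorized form mirroring the tf.stack solution:
# 1 + sum over the modes axis of 1j*osc_stre/(energies[j] - e_j).
lorentz_vec = 1 + np.sum(1j * osc_stre / (energies[:, None] - e_j[None, :]), axis=-1)

assert np.allclose(lorentz_loop, lorentz_vec)
```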
1 Answer

#1
Assuming these:

- lorentz.shape == (batch, grid, dim, dim), and it was zero before the loop
- osc_stre.shape == (batch, dim, dim, modes)
- energies.shape == (grid,)
- e_j.shape == (batch, modes)
Then:
osc_stre = K.reshape(osc_stre, (-1, 1, dim, dim, modes))
energies = K.reshape(energies, (1, grid, 1, 1, 1))
e_j = K.reshape(e_j, (-1, 1, 1, 1, modes))

lorentz = 1J*osc_stre/(energies - e_j)

identity = np.zeros((1, 1, dim, dim, modes))
for d in range(min(modes, dim)):
    identity[0, 0, d, d, d] = 1
identity = K.variable(identity, dtype=tf.complex64)

lorentz += identity
lorentz = K.sum(lorentz, axis=-1)
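To check that this broadcasting approach reproduces the original quadruple loop, here is a NumPy equivalent of the same reshapes and sum, with small placeholder sizes:

```python
import numpy as np

batch, grid, dim, modes = 2, 3, 2, 2  # placeholder sizes
rng = np.random.default_rng(2)
osc_stre = rng.standard_normal((batch, dim, dim, modes)).astype(complex)
e_j = rng.standard_normal((batch, modes)) + 1j * rng.standard_normal((batch, modes))
energies = np.linspace(1.0, 2.0, grid)

# Original quadruple loop from the question.
loop = np.zeros((batch, grid, dim, dim), dtype=complex)
for j in range(grid):
    for k in range(modes):
        for l in range(dim):
            for m in range(dim):
                loop[:, j, l, m] += 1j * osc_stre[:, l, m, k] / (energies[j] - e_j[:, k])
                if l == m == k:
                    loop[:, j, l, m] += 1

# Broadcast version, mirroring the K.reshape answer.
osc_b = osc_stre.reshape(batch, 1, dim, dim, modes)
en_b = energies.reshape(1, grid, 1, 1, 1)
ej_b = e_j.reshape(batch, 1, 1, 1, modes)
vec = 1j * osc_b / (en_b - ej_b)

# The identity term handles the `if l == m == k` branch before summing over modes.
identity = np.zeros((1, 1, dim, dim, modes))
for d in range(min(modes, dim)):
    identity[0, 0, d, d, d] = 1
vec = (vec + identity).sum(axis=-1)

assert np.allclose(loop, vec)
```

The identity tensor is added before the sum over the modes axis, so each diagonal entry picks up exactly one +1, matching the `l == m == k` condition in the loop.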