Convert a tensor to a NumPy array - custom loss function in Keras

Time: 2021-08-23 22:26:17

I am trying to build a custom loss function in Keras. Unfortunately I have little knowledge of TensorFlow. Is there a way I can convert the incoming tensors into a NumPy array so I can compute my loss function?

Here is my function:

import numpy as np
from sklearn.metrics import confusion_matrix

def getBalance(x_true, x_pred):
    # Round probabilities to hard 0/1 predictions
    x_true = np.round(x_true)
    x_pred = np.round(x_pred)

    NumberOfBars = len(x_true)
    NumberOfHours = NumberOfBars / 60

    # Indices where the second column flags the bar as not tradable
    TradeIndex = np.where(x_pred[:, 1] == 0)[0]

    # Remove predictions that are not tradable
    x_true = np.delete(x_true[:, 0], TradeIndex)
    x_pred = np.delete(x_pred[:, 0], TradeIndex)

    CM = confusion_matrix(x_true, x_pred)

    correctPredictions = CM[0, 0] + CM[1, 1]
    wrongPredictions = CM[1, 0] + CM[0, 1]
    TotalTrades = correctPredictions + wrongPredictions
    Accuracy = (correctPredictions / TotalTrades) * 100

    return Accuracy

If it's not possible to use NumPy arrays, what is the best way to compute that function with TensorFlow? Any direction would be greatly appreciated, thank you!

Edit 1: Here are some details of my model. I am using an LSTM network with heavy dropout. The inputs are multi-variable and multi-time-step. The outputs are a 2D array of binary digits with shape (20000, 2).

model = Sequential()
model.add(Dropout(0.4, input_shape=(train_input_data_NN.shape[1], train_input_data_NN.shape[2])))
model.add(LSTM(30, dropout=0.4, recurrent_dropout=0.4))
model.add(Dense(2))
model.compile(loss='getBalance', optimizer='adam')

history = model.fit(train_input_data_NN, outputs_NN, epochs=50, batch_size=64, verbose=1, validation_data=(test_input_data_NN, outputs_NN_test))

1 Answer

#1

EDIT 1: Here is an untested substitution:

(I took the liberty of normalizing the variable names.)

import keras.backend as K

def get_balance(x_true, x_pred):
    x_true = K.tf.round(x_true)
    x_pred = K.tf.round(x_pred)

    # didn't see the need for these
    # number_of_bars = K.shape(x_true)[0]
    # number_of_hours = number_of_bars / 60

    # Boolean mask: True where the prediction is tradable
    trade_index = K.tf.not_equal(x_pred[:, 1], 0)

    # Remove predictions that are not tradable
    x_true_tradeable = K.tf.boolean_mask(x_true[:, 0], trade_index)
    x_pred_tradeable = K.tf.boolean_mask(x_pred[:, 0], trade_index)

    cm = K.tf.confusion_matrix(x_true_tradeable, x_pred_tradeable)

    correct_predictions = cm[0, 0] + cm[1, 1]
    wrong_predictions = cm[1, 0] + cm[0, 1]
    total_trades = correct_predictions + wrong_predictions
    accuracy = (correct_predictions / total_trades) * 100

    return accuracy
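
One note on wiring this in: pass the function object itself (not a string) to model.compile, and remember that Keras minimizes the loss, so you would likely want the function to return 100 - accuracy rather than the accuracy itself:

model.compile(loss=get_balance, optimizer='adam')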

Original Answer

Welcome to SO. As you might know, we need to compute the gradient of the loss function. We can't compute the gradient correctly on NumPy arrays (they're just constants).

What is done (in TensorFlow/Theano, the backends used with Keras) is automatic differentiation on tensors (e.g. tf.placeholder()). This is not the entire story, but what you should know at this point is that tf/theano gives us gradients by default on operators like tf.max and tf.sum.

What that means for you is that all the operations on the tensors (y_true and y_pred) should be rewritten to use tf/theano operators.
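
For instance, a minimal custom loss written entirely with backend operators (a plain mean-squared-error sketch, just to show the pattern, not the metric from the question) looks like this:

import keras.backend as K

def mse_loss(y_true, y_pred):
    # K.square and K.mean are backend ops, so gradients flow through them
    return K.mean(K.square(y_pred - y_true))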

I'll comment on what I think should be rewritten, and you can substitute accordingly and test.

See tf.round, used as K.tf.round, where K is a reference to the Keras backend, imported via import keras.backend as K:

x_true = np.round(x_true)  
x_pred = np.round(x_pred)

Grab the shape of the tensor x_true with K.shape. The division by a constant could remain as it is here:

NumberOfBars = len(x_true) 
NumberOfHours = NumberOfBars/60
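
If you did want those counts as tensors (the substitution above drops them as unused), a sketch might be:

NumberOfBars = K.shape(x_true)[0]  # dynamic batch dimension
NumberOfHours = NumberOfBars / 60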

See tf.where, used as K.tf.where:

TradeIndex = np.where( x_pred[:,1] == 0 )[0] 
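
A literal translation with K.tf.where would yield the indices of the zero entries, though the masking approach below sidesteps indices entirely:

TradeIndex = K.tf.where(K.tf.equal(x_pred[:, 1], 0))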

You could mask the tensor with a condition instead of deleting entries - see tf.boolean_mask:

##remove predictions that are not tradable
x_true = np.delete(x_true[:,0], TradeIndex) 
x_pred = np.delete(x_pred[:,0], TradeIndex)
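
The mask-based equivalent, as in the substitution above, keeps the tradable entries rather than deleting the rest:

trade_index = K.tf.not_equal(x_pred[:, 1], 0)
x_true = K.tf.boolean_mask(x_true[:, 0], trade_index)
x_pred = K.tf.boolean_mask(x_pred[:, 0], trade_index)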

See tf.confusion_matrix:

CM = confusion_matrix(x_true, x_pred)
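
The tensor version also accepts an optional num_classes argument; pinning it to 2 guarantees a 2x2 matrix even when a batch happens to contain only one class:

CM = K.tf.confusion_matrix(x_true, x_pred, num_classes=2)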

The computations that follow are computations over constants and so remain essentially the same (conditioned on whatever changes have to be made given the new API).
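
One caveat: the confusion-matrix entries are integer tensors, so casting before the division is a safe habit (a defensive sketch):

correct = K.tf.cast(CM[0, 0] + CM[1, 1], K.tf.float32)
wrong = K.tf.cast(CM[1, 0] + CM[0, 1], K.tf.float32)
Accuracy = correct / (correct + wrong) * 100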

Hopefully I can update this answer with a valid substitution that runs. But I hope this sets you on the right path.

A suggestion on coding style: I see you use three versions of variable naming in your code. Choose one and stick with it.
