A core principle of Keras is the progressive disclosure of complexity: you can take finer control over the details of an operation while keeping the corresponding high-level convenience. When we want to customize the training algorithm behind fit, we can override the model's train_step method and then call fit as usual to train the model.
The example below is taken from the official TensorFlow 2 guide:
import numpy as np
import tensorflow as tf
from tensorflow import keras

x = np.random.random((1000, 32))
y = np.random.random((1000, 1))

class CustomModel(keras.Model):
    tf.random.set_seed(100)

    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss=tf.losses.MSE, metrics=["mae"])

# Just use `fit` as usual
model.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 1ms/step - loss: 0.2783 - mae: 0.4257
<tensorflow.python.keras.callbacks.History at 0x7ff7edf6dfd0>
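A side note that is not in the original post: overriding train_step only changes what fit does. evaluate() and predict() keep the stock Keras behavior, so with this model they still rely on whatever was configured in compile(). A minimal usage sketch, continuing from the code above:

# Not from the original post: evaluate() still uses the loss/metrics set in compile(),
# and predict() is unaffected by the train_step override.
print(model.evaluate(x, y, verbose=0))   # [loss, mae] from the compiled MSE and "mae"
print(model.predict(x[:5]).shape)        # (5, 1)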
Here the loss passed to compile is one of TensorFlow's built-in loss functions. If we define our own loss function instead and pass it to model.compile, will everything still work the way we expect?
Surprisingly, the answer is no, and there is no error message; the loss simply is not computed the way we intended.
def custom_mse(y_true, y_pred):
    return tf.reduce_mean((y_true - y_pred) ** 2, axis=-1)

a_true = tf.constant([1., 1.5, 1.2])
a_pred = tf.constant([1., 2, 1.5])

custom_mse(a_true, a_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.11333332>

tf.losses.MSE(a_true, a_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.11333332>
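One more check that is not in the original post: the training targets y above are 2-D with shape (1000, 1), and for batched inputs both custom_mse and tf.losses.MSE return one loss value per sample rather than a single scalar:

# Extra check (not from the original post): batched, 2-D inputs give a
# per-sample loss vector of shape (batch,), for both implementations.
b_true = tf.constant([[1.0], [1.5], [1.2]])
b_pred = tf.constant([[1.0], [2.0], [1.5]])
print(custom_mse(b_true, b_pred).shape)     # (3,)
print(tf.losses.MSE(b_true, b_pred).shape)  # (3,)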
The results above confirm that our custom loss is implemented correctly. Next, let's pass it directly as the loss argument of compile and see what happens.
my_model = CustomModel(inputs, outputs)
my_model.compile(optimizer="adam", loss=custom_mse, metrics=["mae"])
my_model.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 820us/step - loss: 0.1628 - mae: 0.3257
<tensorflow.python.keras.callbacks.History at 0x7ff7edeb7810>
We can see that the reported loss is clearly different from what we got with the standard tf.losses.MSE. In other words, passing our custom loss directly into model.compile like this does not do what we want.
So what is the correct way to use a custom loss? The answer is below.
loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")

class MyCustomModel(keras.Model):
    tf.random.set_seed(100)

    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value with our own function
            # (no loss function is configured in `compile()`)
            loss = custom_mse(y, y_pred)
            # loss += self.losses
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch
        # or at the start of `evaluate()`.
        # If you don't implement this property, you have to call
        # `reset_states()` yourself at the time of your choosing.
        return [loss_tracker, mae_metric]

# Construct and compile an instance of MyCustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
my_model_beta = MyCustomModel(inputs, outputs)
my_model_beta.compile(optimizer="adam")

# Just use `fit` as usual
my_model_beta.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 960us/step - loss: 0.2783 - mae: 0.4257
<tensorflow.python.keras.callbacks.History at 0x7ff7eda3d810>
Finally, by skipping the loss argument in compile() and doing all of the computation manually inside train_step, we get exactly the same output as with the default tf.losses.MSE earlier, which is the result we wanted.
To sum up: when we want to use a custom loss function with this kind of model, we cannot simply pass it into compile() and fit(); instead we need to call it ourselves inside train_step and carry out the whole computation there.
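If you also want evaluate() to report the same custom loss, the same pattern extends to test_step. Below is a minimal sketch, not from the original post: it reuses custom_mse, loss_tracker and mae_metric from the example above, and the class name MyCustomModelWithEval is a hypothetical one introduced only for illustration.

class MyCustomModelWithEval(MyCustomModel):
    def test_step(self, data):
        # Same manual computation as in train_step, but without gradient updates.
        x, y = data
        y_pred = self(x, training=False)
        loss = custom_mse(y, y_pred)
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

# Usage (build fresh inputs/outputs first, so weights are re-initialized):
# my_model_eval = MyCustomModelWithEval(inputs, outputs)
# my_model_eval.compile(optimizer="adam")
# my_model_eval.fit(x, y, epochs=1, shuffle=False)
# my_model_eval.evaluate(x, y)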
This concludes the article on the hidden pitfall of using custom loss functions in TensorFlow 2.
原文链接:https://www.cnblogs.com/geeks-reign/p/15060924.html