1 Answer
I am currently facing the same problem and have handled it in a similar way with TensorBoard.
Even though write_grads is deprecated, you can still track the gradients of every layer of the network by subclassing tf.keras.Model and computing the gradients manually with tf.GradientTape inside the train_step method.
Something like this worked for me:
import tensorflow as tf
from tensorflow.keras import Model

# Summary writer used inside train_step; create it once before training
# (the log directory here is just an example)
train_summary_writer = tf.summary.create_file_writer('logs/train')

class TrainWithCustomLogsModel(Model):

    def __init__(self, **kwargs):
        super(TrainWithCustomLogsModel, self).__init__(**kwargs)
        self.step = tf.Variable(0, dtype=tf.int64, trainable=False)

    def train_step(self, data):
        # Get batch images and labels
        x, y = data

        # Compute the batch loss
        with tf.GradientTape() as tape:
            p = self(x, training=True)
            loss = self.compiled_loss(y, p, regularization_losses=self.losses)

        # Compute gradients for each weight of the network.
        # Note trainable_vars and gradients are lists of tensors
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Log weights and gradients in TensorBoard
        self.step.assign_add(tf.constant(1, dtype=tf.int64))
        with train_summary_writer.as_default():
            for var, grad in zip(trainable_vars, gradients):
                name = var.name
                var, grad = tf.squeeze(var), tf.squeeze(grad)
                tf.summary.histogram(name, var, step=self.step)
                tf.summary.histogram('Gradients_' + name, grad, step=self.step)

        # Update the model's weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        del tape

        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, p)

        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
You should then be able to visualize, for any training step, the distribution of the gradients as well as the distribution of the kernel values.
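For completeness, here is a minimal usage sketch; the architecture, dataset, and hyperparameters are placeholders of my own, not part of the original answer. Passing inputs/outputs to the subclass builds it through the functional API, so model.fit picks up the custom train_step:

import tensorflow as tf

# Load a small standard dataset just to exercise the custom model
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype('float32') / 255.0

inputs = tf.keras.Input(shape=(28, 28, 1))
h = tf.keras.layers.Flatten()(inputs)
h = tf.keras.layers.Dense(64, activation='relu')(h)
outputs = tf.keras.layers.Dense(10, activation='softmax')(h)

# Constructing the subclass with inputs/outputs uses the functional API,
# so the overridden train_step above is used by model.fit
model = TrainWithCustomLogsModel(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=1)

# Inspect the logged histograms with: tensorboard --logdir logs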
It may also be worth plotting the norms through time instead of the individual values, as sketched below.
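A hypothetical variant of the logging block in train_step above along those lines: log one scalar (the L2 norm) per variable instead of a full histogram, which is cheaper and easier to read over many steps. The tag names 'GradNorm_' and 'WeightNorm_' are my own choices:

# Drop-in replacement for the histogram loop inside train_step;
# trainable_vars, gradients, and self.step come from the surrounding method
with train_summary_writer.as_default():
    for var, grad in zip(trainable_vars, gradients):
        # tf.norm defaults to the Euclidean (L2) norm
        tf.summary.scalar('GradNorm_' + var.name, tf.norm(grad), step=self.step)
        tf.summary.scalar('WeightNorm_' + var.name, tf.norm(var), step=self.step)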