I am playing around with TensorFlow 2. I made my own model, similar to the approach shown here. Then I created my own fit function. Now I am seeing the strangest thing ever. The following is an exact copy/paste from the notebook where I was testing it:

```python
def fit(x_train, y_train, learning_rate=0.01, epochs=10, batch_size=100, normal=True, verbose=True, display_freq=100):
    if normal:
        x_train = normalize(x_train)  # TODO: This normalize could be a bit different for each and be bad.
    num_tr_iter = int(len(y_train) / batch_size)  # Number of training iterations in each epoch
    if verbose:
        print("Starting training...")
    for epoch in range(epochs):
        # Randomly shuffle the training data at the beginning of each epoch
        x_train, y_train = randomize(x_train, y_train)
        for iteration in range(num_tr_iter):
            # Get the batch
            start = iteration * batch_size
            end = (iteration + 1) * batch_size
            x_batch, y_batch = get_next_batch(x_train, y_train, start, end)
            # Run optimization op (backpropagation)
            # import pdb; pdb.set_trace()
            if verbose and (epoch * batch_size + iteration) % display_freq == 0:
                current_loss = _apply_loss(y_train, model(x_train, training=True))
                current_acc = evaluate_accuracy(x_train, y_train)
                print("Epoch: {0}/{1}; batch {2}/{3}; loss: {4:.4f}; accuracy: {5:.2f} %"
                      .format(epoch, epochs, iteration, num_tr_iter, current_loss, current_acc * 100))
            train_step(x_batch, y_batch, learning_rate)
    current_loss = _apply_loss(y_train, model(x_train, training=True))
    current_acc = evaluate_accuracy(x_train, y_train)
    print("End: loss: {0:.4f}; accuracy: {1:.2f} %".format(current_loss, current_acc * 100))

import logging
logging.getLogger('tensorflow').disabled = True

fit(x_train, y_train)
current_loss = _apply_loss(y_train, model(x_train, training=True))
current_acc = evaluate_accuracy(x_train, y_train)
print("End: loss: {0:.4f}; accuracy: {1:.2f} %".format(current_loss, current_acc * 100))
```

Now my question is: how am I getting different values on those last 2 lines!? I am doing the same thing, right? I am completely confused here. I don't even know how to google this.
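For anyone trying to reproduce this: the snippet relies on a few helpers (`normalize`, `randomize`, `get_next_batch`, `_apply_loss`, `evaluate_accuracy`, `train_step`) and a global `model` that are not included in the paste. Below is a minimal sketch of plausible stand-ins so the question is runnable end to end; every definition in it is an assumption, not the actual notebook code:

```python
import numpy as np
import tensorflow as tf

# All of the following are assumed stand-ins; the real notebook code is not shown.

model = tf.keras.Sequential([  # assumed: any small Keras classifier would do
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def normalize(x):
    # Zero-mean / unit-variance scaling using the statistics of the given array.
    x = np.asarray(x, dtype=np.float32)
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def randomize(x, y):
    # Shuffle x and y with the same random permutation.
    perm = np.random.permutation(len(y))
    return x[perm], y[perm]

def get_next_batch(x, y, start, end):
    # Slice out one mini-batch.
    return x[start:end], y[start:end]

def _apply_loss(y_true, y_pred):
    # Mean cross-entropy over the batch (hypothetical choice of loss).
    return tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred))

def evaluate_accuracy(x, y):
    # Fraction of samples whose argmax prediction matches the label.
    preds = tf.argmax(model(x, training=False), axis=1)
    matches = tf.cast(tf.equal(preds, tf.cast(y, preds.dtype)), tf.float32)
    return float(tf.reduce_mean(matches))

def train_step(x_batch, y_batch, learning_rate):
    # One plain gradient-descent update on the model's weights.
    with tf.GradientTape() as tape:
        loss = _apply_loss(y_batch, model(x_batch, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    for var, grad in zip(model.trainable_variables, grads):
        var.assign_sub(learning_rate * grad)
```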