By : Dimitris.C.
Date : October 18 2020, 08:10 PM

Hope this fixes the issue. That is the expected behavior: L2 regularization modifies the loss function by adding a penalty term (the sum of squared weights, scaled by the coefficient) to reduce the generalization error. To compute the same validation loss inside your callback, you need to obtain the weights of each layer and add their scaled squared sums to the MSE. The argument l of regularizers.l2 is the regularization coefficient for that layer. code :
from keras.layers import Dense, Input
from keras import regularizers
import keras.backend as K
from keras.losses import mean_squared_error
from keras.models import Model
from keras.callbacks import Callback
from keras.optimizers import RMSprop
import numpy as np


class ValidationCallback(Callback):
    def __init__(self, validation_x, validation_y, lambd):
        super(ValidationCallback, self).__init__()
        self.validation_x = validation_x
        self.validation_y = validation_y
        self.lambd = lambd

    def on_epoch_end(self, epoch, logs=None):
        validation_y_predicted = self.model.predict(self.validation_x)
        # Compute the regularization term for each layer
        weights = self.model.trainable_weights
        reg_term = 0
        for i, w in enumerate(weights):
            if i % 2 == 0:  # even entries are kernels, odd entries are biases
                w_f = K.flatten(w)
                reg_term += self.lambd[i // 2] * K.sum(K.square(w_f))
        mse_loss = K.mean(mean_squared_error(self.validation_y, validation_y_predicted))
        loss = mse_loss + K.cast(reg_term, 'float64')
        print("My validation loss: %.4f" % K.eval(loss))


lambd = [0.01, 0.01]
input = Input(shape=(1024,))
hidden = Dense(1024, kernel_regularizer=regularizers.l2(lambd[0]))(input)
output = Dense(1024, kernel_regularizer=regularizers.l2(lambd[1]))(hidden)
model = Model(inputs=[input], outputs=output)
optimizer = RMSprop()
model.compile(loss='mse', optimizer=optimizer)
x_train = np.ones((2, 1024))
y_train = np.random.rand(2, 1024)
x_validation = x_train
y_validation = y_train
model.fit(x=x_train,
          y=y_train,
          callbacks=[ValidationCallback(x_validation, y_validation, lambd)],
          validation_data=(x_validation, y_validation))
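As a sanity check on the arithmetic, the penalty the callback reconstructs is just the coefficient times the sum of squared kernel weights, added to the MSE. A minimal NumPy sketch with hypothetical toy values (not the model above):

```python
import numpy as np

# Toy stand-ins for one layer's kernel and the validation error
# (hypothetical values, chosen only to illustrate the arithmetic).
lambd = 0.01
kernel = np.array([[0.5, -0.5], [1.0, 0.0]])
mse_loss = 0.25

# L2 penalty as Keras applies it: coefficient times the sum of squared weights
reg_term = lambd * np.sum(np.square(kernel))
total_loss = mse_loss + reg_term

print(round(reg_term, 4))    # 0.015
print(round(total_loss, 4))  # 0.265
```

This matches what the callback computes per layer before summing across layers.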

How to record val_loss and loss pre batch in keras
By : HR ATINO
Date : March 29 2020, 07:55 AM
Something like the below fixes the issue, following and modifying an example from here: code :
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation


class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.val_losses = []

    def on_batch_end(self, batch, logs={}):
        # per-batch logs only contain the training loss
        self.losses.append(logs.get('loss'))

    def on_epoch_end(self, epoch, logs={}):
        # val_loss is only computed once per epoch, so record it here
        self.val_losses.append(logs.get('val_loss'))


model = Sequential()
model.add(Dense(10, input_dim=784, init='uniform'))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

history = LossHistory()
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20, verbose=0, validation_split=0.1,
          callbacks=[history])

print(history.losses)
# outputs
'''
[0.66047596406559383, 0.3547245744908703, ..., 0.25953155204159617, 0.25901699725311789]
'''
print(history.val_losses)

loss, val_loss, acc and val_acc do not update at all over epochs
By : Mina Eskander
Date : March 29 2020, 07:55 AM
This should fix the issue. The softmax activation ensures the outputs sum to 1, which is useful when exactly one class among many should be predicted. Since you have only one output (a single class), it is certainly a bad idea: you probably end up with 1 as the prediction for every sample, so the losses and accuracies never move.
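To see why, here is a small NumPy sketch (illustrative only) of softmax versus sigmoid on a single output unit:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D array
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# With a single output unit, softmax normalizes one value by itself,
# so the prediction is always exactly 1.0 whatever the logit is:
print(softmax(np.array([-3.2])))  # [1.]
print(softmax(np.array([7.9])))   # [1.]

# sigmoid gives a usable probability for a single-output (binary) head:
print(round(sigmoid(0.0), 2))     # 0.5
```

With a constant output of 1, the gradients give the optimizer nothing to improve, which is why none of the metrics update; a sigmoid activation (with binary crossentropy) is the usual choice for a single output.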

Do we need to add the regularization loss into the total loss in tensorflow models?
By : bobolopolis
Date : March 29 2020, 07:55 AM
I hope this helps. You are right: you technically do not need reg_constant. You can control each layer's regularization through its scale parameter, which can be the same for all layers; in that case you can simply set reg_constant=1.
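The equivalence is plain arithmetic: the total penalty is reg_constant times the sum of each layer's scaled penalty, so folding the constant into every layer's scale changes nothing. A small NumPy sketch with hypothetical values:

```python
import numpy as np

penalties = np.array([0.4, 0.9, 0.1])  # hypothetical per-layer raw penalties
reg_constant = 0.5
scale = 0.01

# reg_constant applied on top of a per-layer scale ...
total_a = reg_constant * np.sum(scale * penalties)
# ... equals reg_constant=1 with the constant folded into each scale
total_b = 1.0 * np.sum((reg_constant * scale) * penalties)

print(np.isclose(total_a, total_b))  # True
```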

Keras  loss and val_loss increasing
By : Sharvil Shah
Date : March 29 2020, 07:55 AM
This explains the issue. You are using the cosine_proximity loss function of Keras, which is the negative cosine similarity: it is 0 if the output is orthogonal to the target (no match at all) and -1 if the target matches the output perfectly (see this and this). Therefore, a value converging toward -1 is a good sign, as the actual difference between the target and the output is decreasing.
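A minimal NumPy sketch of the negative-cosine-similarity form of the loss (my own reimplementation for illustration, not the Keras source):

```python
import numpy as np

def cosine_proximity(y_true, y_pred):
    # negative cosine similarity of the L2-normalized vectors
    a = y_true / np.linalg.norm(y_true)
    b = y_pred / np.linalg.norm(y_pred)
    return -np.dot(a, b)

v = np.array([1.0, 2.0, 3.0])
print(round(cosine_proximity(v, v), 6))        # -1.0 : perfect match (best case)
w = np.array([3.0, 0.0, -1.0])                 # orthogonal to v
print(round(cosine_proximity(v, w), 6))        #  0.0 : no match
print(round(cosine_proximity(v, -v), 6))       #  1.0 : exact opposite (worst case)
```

So a loss that moves from near 0 toward -1 is training progress, even though the raw number is falling below zero.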

Training loss is available but val_loss = nan
By : user2239881
Date : March 29 2020, 07:55 AM
Hope this fixes your issue. I wasn't passing any validation data to the fit method, so there was nothing to compute val_loss from. I needed to do something like this: model.fit(X_train, Y_train, validation_split=0.1, batch_size=8, epochs=30)
