Manually computed validation loss different from reported val_loss when using regularization

By : Dimitris.C.
Date : October 18 2020, 08:10 PM
This is the expected behavior. L2 regularization modifies the loss function by adding a penalty term (the sum of squared weights, scaled by a coefficient) to reduce generalization error.
To compute the same validation loss inside your callback, you need to obtain the kernel weights of each layer and add their squared sums to the MSE. The argument `l` passed to `regularizers.l2` is the regularization coefficient for that layer.
code :
from keras.layers import Dense, Input
from keras import regularizers
import keras.backend as K
from keras.losses import mean_squared_error
from keras.models import Model
from keras.callbacks import Callback
from keras.optimizers import RMSprop
import numpy as np

class ValidationCallback(Callback):
    def __init__(self, validation_x, validation_y, lambd):
        super(ValidationCallback, self).__init__()
        self.validation_x = validation_x
        self.validation_y = validation_y
        self.lambd = lambd

    def on_epoch_end(self, epoch, logs=None):
        validation_y_predicted = self.model.predict(self.validation_x)

        # Compute regularization term for each layer
        weights = self.model.trainable_weights
        reg_term = 0
        for i, w in enumerate(weights):
            if i % 2 == 0:  # weights from layer i // 2
                w_f = K.flatten(w)
                reg_term += self.lambd[i // 2] * K.sum(K.square(w_f))

        mse_loss = K.mean(mean_squared_error(self.validation_y, validation_y_predicted))
        loss = mse_loss + K.cast(reg_term, 'float64')

        print("My validation loss: %.4f" % K.eval(loss))

lambd = [0.01, 0.01]
input = Input(shape=(1024,))
hidden = Dense(1024, kernel_regularizer=regularizers.l2(lambd[0]))(input)
output = Dense(1024, kernel_regularizer=regularizers.l2(lambd[1]))(hidden)
model = Model(inputs=[input], outputs=output)
optimizer = RMSprop()
model.compile(loss='mse', optimizer=optimizer)

x_train = np.ones((2, 1024))
y_train = np.random.rand(2, 1024)
x_validation = x_train
y_validation = y_train

model.fit(x_train, y_train,
          callbacks=[ValidationCallback(x_validation, y_validation, lambd)],
          validation_data=(x_validation, y_validation))
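To sanity-check the formula itself, here is a minimal numpy sketch (independent of Keras, with made-up weights): the reported val_loss should equal the plain MSE plus each layer's l2 coefficient times the sum of that layer's squared kernel weights.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.random((2, 4))
y_pred = rng.random((2, 4))
# stand-ins for the kernel weights of two Dense layers
kernels = [rng.normal(size=(4, 4)), rng.normal(size=(4, 4))]
lambd = [0.01, 0.01]

mse = np.mean((y_true - y_pred) ** 2)
# l2 penalty: coefficient * sum of squared weights, summed over layers
reg_term = sum(l * np.sum(w ** 2) for l, w in zip(lambd, kernels))

total_loss = mse + reg_term  # what Keras reports as val_loss
print(total_loss)
```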

How to record val_loss and loss per batch in keras

Date : March 29 2020, 07:55 AM
The code below, following and modifying an example from the Keras callbacks documentation, fixes the issue:
code :
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.val_losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))

    def on_epoch_end(self, epoch, logs={}):
        # val_loss is only computed once per epoch, not per batch
        self.val_losses.append(logs.get('val_loss'))

model = Sequential()
model.add(Dense(10, input_dim=784, init='uniform'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

history = LossHistory()
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20, verbose=0, validation_split=0.1,
          callbacks=[history])

print(history.losses)
# outputs
# [0.66047596406559383, 0.3547245744908703, ..., 0.25953155204159617, 0.25901699725311789]
print(history.val_losses)
loss, val_loss, acc and val_acc do not update at all over epochs

By : Mina Eskander
Date : March 29 2020, 07:55 AM
The softmax activation makes sure the outputs sum to 1. It's useful for ensuring that exactly one class among many classes is predicted.
Since you have only one output unit (a single class), it's certainly a bad idea: you almost certainly end up with 1 as the result for every sample, so the loss and accuracy never change.
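A quick numpy check, independent of Keras, shows why a single-unit softmax is degenerate:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

# With a single output unit, softmax always yields 1.0,
# regardless of the logit value:
for logit in (-5.0, 0.0, 3.7):
    print(softmax(np.array([logit])))  # -> [1.]

# With several outputs it behaves as intended (sums to 1,
# up to float rounding):
print(softmax(np.array([1.0, 2.0, 3.0])).sum())
```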
Do we need to add the regularization loss into the total loss in tensorflow models?

By : bobolopolis
Date : March 29 2020, 07:55 AM
You are right.
Technically you do not need `reg_constant`: you can control each layer's regularization through its `scale` parameter (which can be the same for all layers). In that case you can simply set `reg_constant=1`.
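The equivalence is just arithmetic; here is a small numpy sketch (the variable names are illustrative, not TensorFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))  # some layer's weights

reg_constant, scale = 0.5, 0.01

# penalty with a global constant multiplying a per-layer scale ...
penalty_a = reg_constant * scale * np.sum(w ** 2)
# ... equals the penalty with the constant folded into the scale
# and reg_constant set to 1
penalty_b = 1.0 * (reg_constant * scale) * np.sum(w ** 2)

print(np.isclose(penalty_a, penalty_b))  # -> True
```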
Keras - loss and val_loss increasing

By : Sharvil Shah
Date : March 29 2020, 07:55 AM
You are using the cosine_proximity loss function of Keras. This loss is 1 if the output does not match the target at all, but -1 if the target matches the output perfectly (see the Keras losses documentation). Therefore, a value converging to -1 is a good sign, as the actual difference between the target and the output is decreasing.
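Cosine proximity is just the negative cosine similarity, so its sign convention is easy to verify with numpy (a minimal sketch, not the Keras implementation):

```python
import numpy as np

def cosine_proximity(y_true, y_pred):
    # negative cosine similarity of the normalized vectors
    t = y_true / np.linalg.norm(y_true)
    p = y_pred / np.linalg.norm(y_pred)
    return -np.dot(t, p)

y = np.array([1.0, 2.0, 3.0])
# perfect match gives -1, opposite direction gives 1
# (both up to float rounding)
print(cosine_proximity(y, y))
print(cosine_proximity(y, -y))
```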
Training loss is available but val_loss = nan

By : user2239881
Date : March 29 2020, 07:55 AM
I wasn't passing any validation data to the fit method, so there was nothing for Keras to compute val_loss on. I needed to do something like this: model.fit(X_train, Y_train, validation_split=0.1, batch_size=8, epochs=30)