ValueError: No gradients provided for any variable

I get this error when training my model. I found this issue discussed on several sites, but I could not find a solution to my problem.

Here is my model:

import keras
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.models as M
import tensorflow.keras.callbacks as C
import tensorflow.keras.utils as U

def make_model_lstm_pooling(inshape=50000):
    z = L.Input(shape=(inshape, 10))
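    # pool_size=1 with strides=100 keeps every 100th timestep: (50000, 10) -> (500, 10)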
    x = L.AveragePooling1D(pool_size=1, strides=100)(z)
    
    x = L.Bidirectional(
        L.LSTM(10,
            dropout=0.1,
            return_sequences=False,
            kernel_initializer='ones',
            bias_initializer='zeros')
    )(x)
    
    
    x = L.Dense(10, activation='linear')(x)
    x = L.Dense(1, activation='linear')(x)
    
    model = tf.keras.Model(z, x)
    model.compile(optimizer='adam')
    return model

I then run the training:

callback_lr = C.ReduceLROnPlateau(
                monitor='val_loss',
                patience=3,
                verbose=0,
                mode='min')

checkpoint = C.ModelCheckpoint(
                filepath='best_pool.h5',
                save_best_only=True,     
                monitor='val_loss', 
                mode='min')

model = make_model_lstm_pooling()
model.summary()
history = model.fit(
            X_train, Y_train,
            validation_data=(X_dev, Y_dev),
            epochs=100,
            callbacks=[checkpoint, callback_lr]
                   )

The full error is this one:

ValueError: No gradients provided for any variable: ['bidirectional_16/forward_lstm_50/lstm_cell_83/kernel:0', 'bidirectional_16/forward_lstm_50/lstm_cell_83/recurrent_kernel:0', 'bidirectional_16/forward_lstm_50/lstm_cell_83/bias:0', 'bidirectional_16/backward_lstm_50/lstm_cell_84/kernel:0', 'bidirectional_16/backward_lstm_50/lstm_cell_84/recurrent_kernel:0', 'bidirectional_16/backward_lstm_50/lstm_cell_84/bias:0', 'dense_90/kernel:0', 'dense_90/bias:0', 'dense_91/kernel:0', 'dense_91/bias:0'].

The error is raised when I call fit.

I read that this problem can appear when the dtypes are wrong: my inputs are floats and my labels are ints. There are no NaNs in the input.
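A minimal sketch of such a cast, in case the integer labels matter (assuming Y_train and Y_dev are NumPy arrays, matching the names in my fit call):

import numpy as np

# cast the integer labels to float32 to match the float inputs
Y_train = Y_train.astype(np.float32)
Y_dev = Y_dev.astype(np.float32)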

Since the error message lists the kernel variables, I wondered whether the kernel initializer was the problem; the default is glorot_uniform, which is not zeros, if I am not mistaken.

I tried changing kernel_initializer, but it did not help.

One more thing: I ran a test on several samples, and in that test I have fewer samples than features. Does anyone know whether the problem could be related to this?

Any help will be appreciated.

Topic: pooling lstm tensorflow deep-learning

Category: Data Science


I found the solution to my problem.

First of all, I had to declare a loss when compiling the model. Without a loss there is nothing to differentiate, which is exactly why the optimizer reports no gradients for any variable:

model.compile(
    optimizer='adam', 
    loss='mean_absolute_error', 
    metrics=['mean_absolute_error']
)
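As a quick check that the missing loss was indeed the cause, here is a small smoke test on random data (a sketch only: the shapes follow the inshape=50000 default with 10 features, and the tiny sample count just confirms that fit can now compute gradients):

import numpy as np

model = make_model_lstm_pooling()
model.compile(
    optimizer='adam',
    loss='mean_absolute_error',
    metrics=['mean_absolute_error']
)

# four random sequences of shape (50000, 10) and four scalar targets
X_fake = np.random.rand(4, 50000, 10).astype(np.float32)
Y_fake = np.random.rand(4).astype(np.float32)

# with a loss defined, this no longer raises "No gradients provided"
model.fit(X_fake, Y_fake, epochs=1, verbose=0)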

In addition to that, I changed the monitor for the checkpoint:

import tensorflow.keras.callbacks as C

checkpoint = C.ModelCheckpoint(
                filepath='best_pool.h5',
                save_best_only=True,     
                monitor='val_mean_absolute_error', 
                mode='min')

Monitoring val_mean_absolute_error instead of val_loss gave a better checkpoint in this case.
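As a sanity check on monitor names, the History object returned by fit records every metric key that the callbacks can monitor; the val_-prefixed keys only exist when validation_data is passed:

# list the metric keys recorded during training
print(history.history.keys())
# e.g. dict_keys(['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error'])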
