"Input 0 of layer max_pooling1d_3 is incompatible with the layer" error

OK, so basically, I have some TF-IDF features plus some additional features (word count, sentiment) in my data. As far as I understand, when we use a convolutional layer, the data needs to be reshaped into higher-dimensional arrays. Below is how I convert mine.

X_train_reshaped = X_train.reshape(X_train.shape[0], 3, 10, 1)
y_train_reshaped = y_train.reshape(y_train.shape[0], 1, 1,1)
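For context, a rough sketch of the kind of input Keras's `Conv1D` accepts (my own illustration, with a made-up 100-sample, 30-feature array, not the original data): `Conv1D` expects a 3-D array of shape `(samples, steps, channels)`, so the combined features can be laid out as 30 steps with 1 channel each.

```python
import numpy as np

# Hypothetical data: 100 samples, each with 30 combined
# TF-IDF + extra features (word count, sentiment, ...).
X_train = np.random.rand(100, 30)

# Conv1D expects 3-D input: (samples, steps, channels).
# Treat the 30 features as 30 steps with 1 channel each.
X_train_3d = X_train.reshape(X_train.shape[0], 30, 1)
print(X_train_3d.shape)  # (100, 30, 1)
```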

Below is the shape: after the conversion, X_train_reshaped is a 4-D array of shape (n_samples, 3, 10, 1).

Now, below is the model I have declared.

model = Sequential()

model.add(Conv1D(filters=3, kernel_size=1, activation='relu', input_shape=(3, 10, 1)))
model.add(MaxPool1D(pool_size=3, strides=2))

model.add(Flatten())

model.add(Dense(units=128,activation='relu'))

model.add(Dense(units=1,activation='sigmoid'))

# For a binary classification problem
model.compile(loss='binary_crossentropy', optimizer='adam')

Now, as you can see, I have set what I believe is the correct input shape on my first Conv1D layer, yet an error about the input size is thrown at the max-pooling layer.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-79-ccbfc6703ce2&gt; in &lt;module&gt;
      2 
      3 model.add(Conv1D(filters=3, kernel_size=1, activation='relu', input_shape=(3, 10, 1)))
----> 4 model.add(MaxPool1D(pool_size=3, strides=2))
      5 
      6 model.add(Flatten())

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\training\tracking\base.py in _method_wrapper(self, *args, **kwargs)
    520     self._self_setattr_tracking = False  # pylint: disable=protected-access
    521     try:
--> 522       result = method(self, *args, **kwargs)
    523     finally:
    524       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\keras\engine\sequential.py in add(self, layer)
    226       # If the model is being built continuously on top of an input layer:
    227       # refresh its output.
--> 228       output_tensor = layer(self.outputs[0])
    229       if len(nest.flatten(output_tensor)) != 1:
    230         raise ValueError(SINGLE_LAYER_OUTPUT_ERROR_MSG)

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
    967     #  model = tf.keras.Model(inputs, outputs)
    968     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
--> 969       return self._functional_construction_call(inputs, args, kwargs,
    970                                                 input_list)
    971 

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\keras\engine\base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1105         layer=self, inputs=inputs, build_graph=True, training=training_value):
   1106       # Check input assumptions set after layer building, e.g. input shape.
-> 1107       outputs = self._keras_tensor_symbolic_call(
   1108           inputs, input_masks, args, kwargs)
   1109 

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\keras\engine\base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
    838       return nest.map_structure(keras_tensor.KerasTensor, output_signature)
    839     else:
--> 840       return self._infer_output_signature(inputs, args, kwargs, input_masks)
    841 
    842   def _infer_output_signature(self, inputs, args, kwargs, input_masks):

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\keras\engine\base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
    876           # overridden).
    877           # TODO(kaftan): do we maybe_build here, or have we already done it?
--> 878           self._maybe_build(inputs)
    879           inputs = self._maybe_cast_inputs(inputs)
    880           outputs = call_fn(inputs, *args, **kwargs)

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs)
   2597     # Check input assumptions set before layer building, e.g. input rank.
   2598     if not self.built:
-> 2599       input_spec.assert_input_compatibility(
   2600           self.input_spec, inputs, self.name)
   2601       input_list = nest.flatten(inputs)

~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\keras\engine\input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    213       ndim = shape.rank
    214       if ndim != spec.ndim:
--> 215         raise ValueError('Input ' + str(input_index) + ' of layer ' +
    216                          layer_name + ' is incompatible with the layer: '
    217                          'expected ndim=' + str(spec.ndim) + ', found ndim=' +

ValueError: Input 0 of layer max_pooling1d_5 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 3, 10, 3)
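As a sketch of the shape bookkeeping behind this error (my own illustration, not from the original post): MaxPool1D only accepts rank-3 tensors `(batch, steps, channels)`, which is why it rejects the rank-4 input above, and with the default 'valid' padding both 1-D layers compute their output length with the standard formula below.

```python
# Standard 'valid'-padding output-length formulas for Keras 1-D layers.
def conv1d_out_len(steps, kernel_size, strides=1):
    # Conv1D output length with padding='valid'
    return (steps - kernel_size) // strides + 1

def maxpool1d_out_len(steps, pool_size, strides):
    # MaxPool1D output length with padding='valid'
    return (steps - pool_size) // strides + 1

# With a rank-3 input of shape (None, 30, 1): kernel_size=1 keeps
# 30 steps, then pool_size=3, strides=2 gives (30 - 3)//2 + 1 = 14.
steps_after_conv = conv1d_out_len(30, kernel_size=1)
steps_after_pool = maxpool1d_out_len(steps_after_conv, pool_size=3, strides=2)
print(steps_after_conv, steps_after_pool)  # 30 14
```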

I have a number of questions, and it would be great if someone could clarify them:

  1. What is going on here, and how do I fix it? How does input_shape work?
  2. Why have I seen some people set 'None' as the first dimension as well?
  3. What is the difference between input size and input shape?

I have read multiple Stack Overflow pages and other resources and still couldn't fix this, so it would be great if someone could answer these questions and clear up my doubts.
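On the `None` question, a hedged illustration (my own, with made-up sizes): Keras's `input_shape` describes a single sample and deliberately omits the batch axis, which is reported as `None` because any batch size is allowed at run time.

```python
import numpy as np

# Hypothetical reshaped data: 100 samples, each a (30, 1) "sequence",
# so the model would be declared with input_shape=(30, 1).
X = np.random.rand(100, 30, 1)

# The batch axis is left out of input_shape (Keras shows it as None)
# because batches of any size share the same per-sample shape.
batch_a = X[:16]   # shape (16, 30, 1)
batch_b = X[:64]   # shape (64, 30, 1)
print(batch_a.shape[1:], batch_b.shape[1:])  # (30, 1) (30, 1)
```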

Topic: cnn tfidf deep-learning neural-network machine-learning

Category: Data Science
