Understanding Conv1D Output Shape

I am a little confused by the output shape that Conv1D produces. Consider the code I have used below (a lot has been omitted for clarity):

input_shape = x_train_2trans.shape
# (7425, 24, 1)

model.add(Conv1D(filters=4, input_shape=input_shape[1:], kernel_size=3, activation=LeakyReLU()))
model.add(Dropout(0.2))
model.add(Dense(1))

I have tried three different kernel sizes, 3, 2 and 1, for which the output shapes produced are:

(256, 2500, 12, 1), (256, 2500, 18, 1), (256, 2500, 24, 1), respectively.

What confuses me is the difference of 6 between each drop in kernel size. To my understanding, for a kernel size of 3 the 12 should be a 21, and the 18 for a kernel size of 2 should be a 22, in order to fit the input length of 24 with the specified kernel sizes.

Thanks in advance.

Tags: keras, convolutional-neural-network, tensorflow, machine-learning

Category: Data Science


Most probably, the issue is in your input data: a correctly shaped (batch, 24, 1) input cannot produce the 4-D output shapes you report.
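
For reference, with the default 'valid' padding and a stride of 1, Conv1D shrinks the length dimension by kernel_size - 1, so for a length-24 input:

output_length = input_length - kernel_size + 1

24 - 3 + 1 = 22   (kernel size 3)
24 - 2 + 1 = 23   (kernel size 2)
24 - 1 + 1 = 24   (kernel size 1)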

Here is a toy example.

import numpy as np
from tensorflow.keras import layers

x = np.ones((100, 24, 1))  # 100 samples, 24 time steps, 1 channel

layer = layers.Conv1D(filters=4, kernel_size=2)  # kernel size 2 -> length 24 - 2 + 1 = 23
out = layer(x)
out.shape

layer = layers.Conv1D(filters=4, kernel_size=4)  # kernel size 4 -> length 24 - 4 + 1 = 21
out = layer(x)
out.shape

Output:
TensorShape([100, 23, 4])
TensorShape([100, 21, 4])

The last dimension is the filter count, which becomes the feature (channel) dimension for the next layer.
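
To see the same arithmetic in a model mirroring the question's setup, here is a minimal sketch (assuming kernel_size=3 and the (24, 1) input shape from the question):

import tensorflow as tf
from tensorflow.keras.layers import Conv1D, Dropout, Dense, LeakyReLU

model = tf.keras.Sequential([
    Conv1D(filters=4, kernel_size=3, activation=LeakyReLU(), input_shape=(24, 1)),
    Dropout(0.2),
    Dense(1),
])
model.summary()
# Conv1D output:   (None, 22, 4)  -> 24 - 3 + 1 = 22 steps, 4 filters
# Dropout output:  (None, 22, 4)  -> shape unchanged
# Dense(1) output: (None, 22, 1)  -> Dense acts on the last axis only

Note that model.summary() reports a length of 22 for kernel size 3, matching the formula above rather than the 12 reported in the question.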
