Encoder-decoder inference time
I have two encoder-decoder models.
**First model:**

**Second model:**
When I measure inference time, the two models take approximately the same amount of time (first model ~42 s, second model ~40 s). I train the models on a GPU and measure inference on a CPU. I test on a single large image of size 12348x12348. I expected the larger model (the second one, which has more trainable parameters) to take longer to run. Can anyone help me understand why that is not the case here? Am I doing something wrong?
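For reference, this is roughly how I time the models. The sketch below is a minimal version of my setup and makes several assumptions: Keras-style models loaded from placeholder files (`first_model.h5`, `second_model.h5` are not my real file names), an assumed input shape of 256x256x3, and a random NumPy tile instead of the actual image. It includes a warm-up call and averages over repeated runs to get stable CPU timings:

```python
import time

import numpy as np
from tensorflow import keras

# Placeholder paths; the real models are the two summarized above.
model_a = keras.models.load_model("first_model.h5")
model_b = keras.models.load_model("second_model.h5")

# A random stand-in input; in practice the 12348x12348 image is fed
# in tiles matching the models' input shape (assumed 256x256x3 here).
batch = np.random.rand(1, 256, 256, 3).astype("float32")

for name, model in [("first", model_a), ("second", model_b)]:
    model.predict(batch)  # warm-up: excludes one-time setup overhead
    start = time.perf_counter()
    for _ in range(10):   # average over several runs for stable timing
        model.predict(batch)
    elapsed = (time.perf_counter() - start) / 10
    print(f"{name} model: {elapsed:.3f} s per batch")
```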
Tags: lstm, autoencoder, rnn, deep-learning, neural-network
Category: Data Science