Ignore Inception Model Auxiliary Loss during Inference
For the Inception v1 model, the authors used auxiliary losses to mitigate the vanishing gradient problem. They added two auxiliary classifiers to help train the model, as you can see in the purple boxes below, but they did not use these during inference.
Problem: Once you train a model, you save it along with its weights in order to run inference on new data. So I am not sure how they "ignore" the auxiliary branches, given that the saved model contains all of its weights, which are supposed to hold the optimized values. Please let me know what you think.
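To make my confusion concrete, here is a minimal toy sketch in Keras (my own made-up example, not the actual Inception v1 code; the layer sizes and names are arbitrary) of the kind of two-output training setup I am describing. The part I don't follow is what happens to the auxiliary head's weights at inference time:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Toy network with one main head and one auxiliary head, loosely
# mimicking how Inception v1 attaches auxiliary classifiers (hypothetical example).
inputs = layers.Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, activation="relu")(inputs)
mid = layers.Conv2D(32, 3, activation="relu")(x)

# Auxiliary head branches off an intermediate layer (used during training).
aux = layers.GlobalAveragePooling2D()(mid)
aux_out = layers.Dense(10, activation="softmax", name="aux_output")(aux)

# Main head.
y = layers.Conv2D(64, 3, activation="relu")(mid)
y = layers.GlobalAveragePooling2D()(y)
main_out = layers.Dense(10, activation="softmax", name="main_output")(y)

# Training model: both outputs, auxiliary loss down-weighted
# (0.3 is the weight reported in the GoogLeNet paper).
train_model = Model(inputs, [main_out, aux_out])
train_model.compile(
    optimizer="adam",
    loss={"main_output": "sparse_categorical_crossentropy",
          "aux_output": "sparse_categorical_crossentropy"},
    loss_weights={"main_output": 1.0, "aux_output": 0.3},
)

# Inference model: same layers and trained weights, but only the main
# output is wired up, so the auxiliary branch is never evaluated.
infer_model = Model(inputs, main_out)
```

In this sketch the auxiliary head's weights still exist after training; my question is essentially whether "ignoring" them at inference just means not routing the forward pass through that branch, as `infer_model` does here.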
Topic inception tensorflow deep-learning
Category Data Science