Is there a relationship between learning rate and training set size?
I have a large dataset for training a neural network model, but I don't have enough resources to do proper hyperparameter tuning on the whole dataset. My idea is therefore to tune the learning rate on a subset of the data (say 10%). This obviously won't give as good an estimate as the whole dataset would, but since it's still a significant amount of data, I would expect the estimate to be good enough.
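To make the plan concrete, here is a minimal sketch of the kind of subset-based LR sweep I have in mind (assuming PyTorch; `build_model`, `full_dataset`, the LR grid, batch size, and epoch count are placeholders for my actual setup, not recommendations):

```python
import torch
from torch.utils.data import DataLoader, random_split

def sweep_learning_rates(full_dataset, build_model, subset_frac=0.10,
                         lrs=(1e-4, 3e-4, 1e-3, 3e-3, 1e-2), epochs=3):
    # Draw a fixed 10% subset so every LR candidate sees the same data.
    n_sub = int(len(full_dataset) * subset_frac)
    subset, _ = random_split(
        full_dataset, [n_sub, len(full_dataset) - n_sub],
        generator=torch.Generator().manual_seed(0))
    loader = DataLoader(subset, batch_size=128, shuffle=True)
    criterion = torch.nn.CrossEntropyLoss()

    results = {}
    for lr in lrs:
        model = build_model()  # fresh weights for each LR candidate
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = criterion(model(x), y)
                loss.backward()
                opt.step()
        results[lr] = loss.item()  # crude score: last-batch training loss
    return results
```

The question is whether the best LR found this way transfers to training on the full dataset, or whether it should be adjusted.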
However, I wonder whether there is some relationship between learning rate and training set size, other than the fact that the estimate will be noisier when subsampling. By relationship I mean a rule of thumb such as "increase your optimal LR when you increase the training set size." I don't see one (apart from the relationship between LR and batch size, but that's a separate topic), but I'd like to make sure I'm not missing anything.
Topic learning-rate sampling deep-learning neural-network
Category Data Science