Dropout in Hyperparameter Optimisation

Is it correct to add dropout after each layer, and is it done the way shown in the example below?

import tensorflow as tf
import keras_tuner as kt
from tensorflow.keras import Sequential, layers
from tensorflow.keras.layers import Dense

choice_units = [32, 64, 128]  # my list of candidate layer widths (example values)


class MyHyperModel(kt.HyperModel):

    def build(self, hp):
        model = Sequential()

        # tunable number of Dense layers, each followed by a Dropout layer
        for i in range(hp.Int('dense_layers', 1, 4)):
            model.add(Dense(hp.Choice('units', choice_units),
                            activation=hp.Choice('activation',
                                                 ['elu', 'exponential', 'relu'])))
            model.add(layers.Dropout(hp.Choice('rate', [0.0, 0.05, 0.10, 0.15, 0.25])))

        # single output unit for regression; separate name so the output layer
        # gets its own activation choice
        model.add(Dense(1, activation=hp.Choice('output_activation', ['elu', 'relu'])))

        optimizer = tf.keras.optimizers.SGD(hp.Float('learning_rate',
                                                     min_value=1e-6,
                                                     max_value=1e-3,
                                                     default=1e-5))
        model.compile(loss='mse', optimizer=optimizer, metrics=['mse'])
        return model
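
For context, I run the hypermodel roughly like this (RandomSearch, max_trials=20, 'val_mse' as the objective, and the x_train/y_train/x_val/y_val names are just placeholders for my data):

tuner = kt.RandomSearch(
    MyHyperModel(),
    objective='val_mse',
    max_trials=20,
    overwrite=True)
tuner.search(x_train, y_train,
             validation_data=(x_val, y_val),
             epochs=50)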

That is, by adding model.add(layers.Dropout(hp.Choice('rate', [0.0, 0.05, 0.10, 0.15, 0.25]))) inside the loop, a Dropout layer is placed after each new Dense layer.

Is this true?

And if I wanted the dropout rate to vary from one Dense layer to the next, or a different activation function to be chosen for each layer, I believe this code will not do that. How would I do this? Thanks!
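
Something along these lines is what I had in mind, replacing the loop in build so that each layer gets its own hyperparameter name (the f'units_{i}' / f'activation_{i}' / f'rate_{i}' names are just my guess at how this might be done):

for i in range(hp.Int('dense_layers', 1, 4)):
    # one hyperparameter per layer, so each layer can receive its own value
    model.add(Dense(hp.Choice(f'units_{i}', choice_units),
                    activation=hp.Choice(f'activation_{i}',
                                         ['elu', 'exponential', 'relu'])))
    model.add(layers.Dropout(hp.Choice(f'rate_{i}', [0.0, 0.05, 0.10, 0.15, 0.25])))

but I do not know whether this is the correct approach.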

Tags: hyperparameter-tuning, machine-learning-model, dropout, hyperparameter, machine-learning

