GAN optimizer settings in Keras
I am working on a Generative Adversarial Network implemented in Keras. I have a generator model, G, and a discriminator, D, each built by its own function; the combined GAN model is then created from these two, as in this simplified excerpt:
gopt=Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
dopt=Adam(lr=0.00005, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
opt_gan = Adam(lr=0.00006, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
G = gmodel(......)
G.compile(loss=...., optimizer=gopt)
D = dmodel(..)
D.trainable = False
GAN = ganmodel(generator_model=G, discriminator_model=D, ...)
GAN.compile(loss=["mae", "binary_crossentropy"], loss_weights=[0.5, 0.5], optimizer=opt_gan)
D.trainable = True
D.compile(loss='binary_crossentropy', optimizer=dopt)
Now my question, or rather my confusion: how does optimization work when I train the GAN model? More precisely, I am interested in the learning rate. When I train the GAN, which learning rate is applied to the generator?
Since I compiled G before passing it into the GAN model, its optimizer should not change, so the learning rate should be 0.0001? Or will the GAN's learning rate of 0.00006 be applied? And what about the discriminator?
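For what it's worth, one can check which optimizer each model object actually holds after compiling. Below is a minimal sketch of my setup with toy stand-in models (since `gmodel`/`dmodel` are not shown here, the layers are hypothetical), showing that each compiled model reports its own optimizer's learning rate:

```python
from tensorflow.keras import layers, models, optimizers

# Toy stand-ins for gmodel(...) and dmodel(...)
G = models.Sequential([layers.Input(shape=(2,)), layers.Dense(4)])
D = models.Sequential([layers.Input(shape=(4,)),
                       layers.Dense(1, activation="sigmoid")])

# Same compile order as in my code above
G.compile(loss="mae", optimizer=optimizers.Adam(learning_rate=0.0001))

D.trainable = False
gan_in = layers.Input(shape=(2,))
GAN = models.Model(gan_in, D(G(gan_in)))
GAN.compile(loss="binary_crossentropy",
            optimizer=optimizers.Adam(learning_rate=0.00006))
D.trainable = True
D.compile(loss="binary_crossentropy",
          optimizer=optimizers.Adam(learning_rate=0.00005))

# Each model object keeps the optimizer it was compiled with:
print(float(G.optimizer.learning_rate.numpy()))    # ~0.0001
print(float(GAN.optimizer.learning_rate.numpy()))  # ~0.00006
print(float(D.optimizer.learning_rate.numpy()))    # ~0.00005
```

So the three optimizers coexist; what I am unsure about is which one governs the generator's weight updates when `GAN` is the model being trained.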
Topic learning-rate gan keras optimization machine-learning
Category Data Science