GAN optimizer settings in Keras

I am working on a Generative Adversarial Network (GAN), implemented in Keras. I have a generator model G and a discriminator D, each created by its own function, and the combined GAN model is then built from these two models, as in this simplified sample of the code:

gopt=Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
dopt=Adam(lr=0.00005, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
opt_gan = Adam(lr=0.00006, beta_1=0.9, beta_2=0.999, epsilon=1e-08)

G= gmodel(......)
G.compile(loss=...., optimizer=gopt)

D=dmodel(..)
D.trainable = False

GAN=ganmodel(generator_model=G,discriminator_model=D,...)

GAN.compile(loss=["mae", "binary_crossentropy"], loss_weights=[0.5, 0.5], optimizer=opt_gan)

D.trainable = True
D.compile(loss='binary_crossentropy', optimizer=dopt)

Now my question, or rather my confusion, is: how does optimization work when we train the GAN model? More precisely, I am interested in the learning rate. When I train the GAN, which learning rate is applied to the generator?

Since I compiled G before passing it to the GAN model, its optimizer should not change, so the learning rate should be 0.0001? Or will the GAN's learning rate be applied, i.e. 0.00006? And what about the discriminator?

Topic learning-rate gan keras optimization machine-learning

Category Data Science


The generator is only ever trained through the combined GAN model, so you don't need to define an optimizer for the generator or compile it at all.

When you train the generator, you actually train the full GAN. Only the discriminator is trained independently.

To answer your question: the GAN's optimizer settings (learning rate 0.00006) are used when you train the GAN, and the discriminator's settings (learning rate 0.00005) are used when you train the discriminator. The generator's own optimizer (learning rate 0.0001) is never used and can be removed.
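This can be checked directly. Below is a minimal sketch (the tiny `gmodel`/`dmodel` architectures and all shapes are made up for illustration, not taken from the question): G is compiled with its own optimizer, D is frozen inside the combined model, and after one `train_on_batch` on the GAN only G's weights have moved, updated by `opt_gan`, while D's weights are untouched.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam

# Toy stand-ins for the question's gmodel()/dmodel() (hypothetical shapes).
def gmodel():
    z = keras.Input(shape=(2,))
    h = layers.Dense(8, activation="relu")(z)
    return keras.Model(z, layers.Dense(2)(h))

def dmodel():
    x = keras.Input(shape=(2,))
    h = layers.Dense(8, activation="relu")(x)
    return keras.Model(x, layers.Dense(1, activation="sigmoid")(h))

gopt = Adam(learning_rate=0.0001)      # compiled on G but never exercised below
dopt = Adam(learning_rate=0.00005)     # used only when you train D directly
opt_gan = Adam(learning_rate=0.00006)  # the one that actually updates G

G = gmodel()
G.compile(loss="mae", optimizer=gopt)

D = dmodel()
D.compile(loss="binary_crossentropy", optimizer=dopt)

D.trainable = False                    # freeze D inside the combined model
z_in = keras.Input(shape=(2,))
GAN = keras.Model(z_in, D(G(z_in)))
GAN.compile(loss="binary_crossentropy", optimizer=opt_gan)

# One generator step through the combined model.
z = np.random.randn(16, 2).astype("float32")
real_labels = np.ones((16, 1), dtype="float32")
g_before = [w.copy() for w in G.get_weights()]
d_before = [w.copy() for w in D.get_weights()]
GAN.train_on_batch(z, real_labels)

# G's weights moved (updated by opt_gan); D's stayed frozen.
g_changed = any(not np.allclose(a, b) for a, b in zip(g_before, G.get_weights()))
d_changed = any(not np.allclose(a, b) for a, b in zip(d_before, D.get_weights()))
```

To then train the discriminator on its own batches, you would flip `D.trainable` back to `True` first, as in the question's code; the freeze only needs to hold while the combined model's training step runs.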
