PyTorch Lightning produces no checkpoint when the learning rate finder is on

My problem concerns the automatic learning rate finder of PyTorch Lightning. When I use this feature, no checkpoint is produced at any point during training of the model.

I define a trainer, which I use first to tune the learning rate and then to fit the model, as the following (slightly simplified) snippet shows:

    import pytorch_lightning as pl
    from pytorch_lightning.plugins import DDPPlugin

    # Save the best model by validation loss, plus the last epoch's weights.
    checkpoint = pl.callbacks.ModelCheckpoint(monitor='val_loss', save_last=True, period=1)

    trainer = pl.Trainer(
        auto_lr_find=True,  # let the trainer suggest a learning rate during tune()
        max_steps=config['steps'],
        gpus=config['gpus'],
        precision=config['precision'],
        accumulate_grad_batches=config['accumulate_grad_batches'],
        checkpoint_callback=checkpoint,
        logger=logger,
        accelerator='ddp',
        plugins=[DDPPlugin(find_unused_parameters=True)],
    )

    # Run the learning rate finder, then train with the suggested rate.
    trainer.tune(model, train_dl, valid_dl)
    print(model.learning_rate)
    trainer.fit(model, train_dl, valid_dl)
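
To make my intent clearer: what I expect `trainer.tune()` with `auto_lr_find=True` to do is roughly the same as running the learning rate finder explicitly and applying its suggestion before fitting. A rough sketch of that equivalent (the tuner call and its argument order are my assumption based on the 1.3.x API, not code I have verified):

    # Rough equivalent of what I expect auto_lr_find / trainer.tune() to do
    # (my assumption, based on the 1.3.x Tuner API).
    lr_finder = trainer.tuner.lr_find(model, train_dl, valid_dl)
    model.learning_rate = lr_finder.suggestion()   # apply the suggested rate
    trainer.fit(model, train_dl, valid_dl)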

Could somebody point me in the right direction as to what could be causing my problem?

I use pytorch-lightning 1.3.8 (installed from PyPI) under Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-77-generic x86_64) in a conda environment.
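
In case it matters, the metric the checkpoint callback monitors is logged in the model's validation step. A simplified stand-in for my module (not the exact code) looks roughly like this:

    import torch
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):  # simplified stand-in for my actual model
        def __init__(self, learning_rate=1e-3):
            super().__init__()
            self.learning_rate = learning_rate  # attribute updated by the LR finder
            self.layer = torch.nn.Linear(32, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return torch.nn.functional.mse_loss(self.layer(x), y)

        def validation_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.layer(x), y)
            self.log('val_loss', loss)  # metric monitored by ModelCheckpoint
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=self.learning_rate)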

All your help is highly appreciated!

Topic finetuning learning-rate pytorch machine-learning-model training

Category Data Science
