Modern reference on general Deep Learning theory

At present, Deep Learning is experiencing extremely rapid growth, with a plethora of new architectures and ideas emerging each month. Over the last few years, several influential ideas have been developed and applied across many papers, among the most important being the Transformer, EfficientNet, and MobileNets. Is there a modern reference, in the spirit of the famous https://www.deeplearningbook.org/, that in some way generalizes the most prominent recent advances in Deep Learning?

Topic: books, deep-learning

Category: Data Science


For the latest practices in deep learning architectures, I follow the notebooks from recent Kaggle competitions.
For example, in the recently finished Cassava Leaf Disease computer vision competition, people shared experimental notebooks on different state-of-the-art architectures such as the Vision Transformer (various pretrained versions, e.g. the vit_large_patch16_384 and vit_base_patch16_384 models), Facebook's Data-efficient Image Transformers (DeiT), EfficientNet (Noisy Student and ImageNet pretrained versions), and so on.
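As a rough illustration (my sketch, not something spelled out in the original answer), this is how such pretrained backbones are typically loaded in those notebooks, assuming the timm library; the model names are timm identifiers, and the 5-class head is chosen to match the Cassava competition:

```python
# Minimal sketch: loading pretrained image backbones with timm.
# Assumptions (not from the original answer): the timm library is installed,
# and the classifier head is resized to the 5 Cassava leaf disease classes.
import timm
import torch

# Pretrained Vision Transformer with a replaced 5-class classifier head.
vit = timm.create_model("vit_base_patch16_384", pretrained=True, num_classes=5)

# EfficientNet with Noisy Student pretraining, loaded through the same API.
effnet = timm.create_model("tf_efficientnet_b4_ns", pretrained=True, num_classes=5)

# Quick shape check on a dummy batch (ViT expects 384x384 inputs).
x = torch.randn(1, 3, 384, 384)
with torch.no_grad():
    print(vit(x).shape)  # torch.Size([1, 5])
```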
https://www.deeplearningbook.org/ is undoubtedly a great book, but for the latest practices, following the Kaggle notebooks gives you good explanations together with code.

I suggest you explore the Notebooks section of the Cassava Leaf Disease Classification competition to find more models, various recently released loss functions (one such loss is sketched below), and so on.
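As one concrete example (my illustration, not part of the original answer), a label-smoothing cross-entropy of the kind commonly shared in those competition notebooks can be sketched in PyTorch as follows:

```python
# Sketch of a label-smoothing cross-entropy loss (illustrative; recent PyTorch
# versions also expose label_smoothing directly in nn.CrossEntropyLoss).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelSmoothingCrossEntropy(nn.Module):
    def __init__(self, smoothing: float = 0.1):
        super().__init__()
        self.smoothing = smoothing

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        log_probs = F.log_softmax(logits, dim=-1)
        # Uniform part of the smoothed target distribution.
        uniform_loss = -log_probs.mean(dim=-1) * self.smoothing
        # Confidence part on the true class.
        nll = F.nll_loss(log_probs, target, reduction="none")
        return (uniform_loss + (1.0 - self.smoothing) * nll).mean()


# Usage on a dummy batch of 8 samples and 5 classes.
criterion = LabelSmoothingCrossEntropy(smoothing=0.1)
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
print(criterion(logits, targets).item())
```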
