What is the difference between multi-task learning and domain generalization?

I was wondering about the differences between multi-task learning and domain generalization. It seems to me that both are types of inductive transfer learning, but I am not sure how they differ.

Tags: transfer-learning, domain-adaptation, multitask-learning

Category: Data Science


  • Domain generalization: aims to train a model on multi-domain source data so that it generalizes directly to new domains, without any retraining on target data. In short: multiple domains, one shared task (a toy sketch follows the comparison table below).

  • Multi-task learning (MTL): an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help the other tasks be learned better. In short: one domain, multiple tasks (a minimal sketch follows this list).
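To make the MTL setup concrete, here is a minimal sketch of hard parameter sharing in PyTorch: a shared encoder feeds two task-specific heads, and the two task losses are summed so both tasks are learned in parallel from the same domain. The architecture, dimensions, and the two-task setup are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with one head per task (hypothetical 2-task setup)."""
    def __init__(self, in_dim=32, hidden=64, n_classes_a=10, n_classes_b=5):
        super().__init__()
        # Shared representation: both tasks backpropagate into this trunk,
        # so each task acts as an inductive bias for the other.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_classes_a)  # task A head
        self.head_b = nn.Linear(hidden, n_classes_b)  # task B head

    def forward(self, x):
        z = self.encoder(x)
        return self.head_a(z), self.head_b(z)

model = MultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One joint training step on a batch labelled for both tasks
x = torch.randn(16, 32)            # toy inputs from the single shared domain
y_a = torch.randint(0, 10, (16,))  # labels for task A
y_b = torch.randint(0, 5, (16,))   # labels for task B
out_a, out_b = model(x)
loss = loss_fn(out_a, y_a) + loss_fn(out_b, y_b)  # tasks learned in parallel
opt.zero_grad()
loss.backward()
opt.step()
```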

Main Difference:

Domain generalization                        | Multi-task learning
Multiple-domain dataset, same task           | Same-domain dataset, multiple tasks
Single task, so no parallel task execution   | Multiple tasks are learned in parallel
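For contrast with the MTL sketch above, here is a toy domain-generalization sketch under stated assumptions: three synthetic source domains share one labelling rule but differ by a covariate shift, a single model is trained on the pooled sources (the simple ERM baseline), and it is then evaluated on a shifted domain it never saw, with no retraining. The data generator and shift values are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical setup: several source domains of the SAME task (binary
# classification), plus one unseen target domain used only for evaluation.
def make_domain(shift, n=200):
    x = torch.randn(n, 8) + shift     # each domain has a different input shift
    y = (x[:, 0] > x[:, 1]).long()    # same labelling rule in every domain
    return x, y

sources = [make_domain(s) for s in (0.0, 0.5, 1.0)]  # multi-domain training data
x_target, y_target = make_domain(2.0)                # never seen during training

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Pool all source domains and minimize the average risk; note there is a
# single task, so nothing is executed in parallel.
for _ in range(200):
    loss = sum(loss_fn(model(x), y) for x, y in sources) / len(sources)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Direct evaluation on the unseen domain: no target data, no retraining.
with torch.no_grad():
    acc = (model(x_target).argmax(dim=1) == y_target).float().mean().item()
print(f"accuracy on unseen domain: {acc:.2f}")
```

Pooling the source domains like this is only the simplest baseline; dedicated domain-generalization methods additionally encourage domain-invariant features, but the sketch captures the defining constraint: the target domain is untouched until test time.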
