What are the theoretical differences between multitask learning and fine-tuning based transfer learning?

Suppose I have the following scenarios:

  1. I have a bunch of fruits, i.e., apple, orange, and banana. I simply made a multitask model, where my network first tells me which fruit it is, and then tells me its color. For example, if I give my network an apple, it tells me (a) it is an apple, and (b) it is red. From some theoretical study, I have understood that this is one type of inductive transfer learning (TL) (correct me if I am wrong). So here, the network is learning 2 tasks simultaneously (see the sketch after this list).

  2. I have a bunch of objects, i.e., cube, ball, and triangle. Here also I want my network to do the same thing as in scenario 1, so it will tell me (a) whether it is a cube or not, and (b) then tell me the color. What I did is transfer the learned weights and parameters from the network of scenario 1 to this scenario; thus I performed fine-tuning based TL here.
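Below is a minimal PyTorch sketch of the scenario 1 setup, assuming a shared trunk with two classification heads (fruit type and color) trained jointly on one combined loss. All layer sizes, label counts, and the dummy batch are illustrative assumptions, not details from the original setup.

```python
# Minimal multitask sketch: one shared trunk, two task-specific heads,
# both tasks learned simultaneously through a single combined loss.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_features=64, n_fruits=3, n_colors=4):
        super().__init__()
        # Shared representation used by both tasks
        self.trunk = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
        )
        # Task-specific heads
        self.fruit_head = nn.Linear(64, n_fruits)   # apple / orange / banana
        self.color_head = nn.Linear(64, n_colors)   # red / orange / yellow / green

    def forward(self, x):
        h = self.trunk(x)
        return self.fruit_head(h), self.color_head(h)

model = MultiTaskNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 8 feature vectors, each with a fruit label and a color label
x = torch.randn(8, 64)
fruit_y = torch.randint(0, 3, (8,))
color_y = torch.randint(0, 4, (8,))

fruit_logits, color_logits = model(x)
# The two task losses are summed, so one backward pass updates the shared
# trunk and both heads at the same time
loss = criterion(fruit_logits, fruit_y) + criterion(color_logits, color_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```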

So, from a theoretical point of view, I have a few confusions. I need to clarify my idea and would like some input from experts.

  1. If I consider scenario 2, by the definition of fine-tuning based TL, the task of scenario 1 (apple, and red) is my source task, and the task of scenario 2 (cube, and red) is the target task. From my understanding, every inductive TL approach has a source task and a target task. So scenario 2 satisfies my understanding.

[REAL QUESTIONS] 2. Now the confusion starts in my theoretical understanding. Scenario 1 also has 2 tasks: (a) identify the fruit, and (b) identify the color. So here, what would be my source task, and what would be my target task? I need to know this to clarify my theoretical description and put my thinking into words.

3. As I am doing 2 TL tasks here, how should I define the whole scenario?

Topic transfer-learning deep-learning neural-network nlp machine-learning

Category Data Science


As per my understanding, in both scenarios above, when learning the object type and the color, what you are doing is multitask learning. That is, you are teaching your model to do two tasks simultaneously: 1. predicting the object type (which fruit/shape) and 2. predicting the color. So it is not TL but rather multitask learning.

1. TL does have a source task. But all you do in TL is freeze the whole model except the last few layers and retrain them on your specific dataset (or add a few layers at the end and train them). TL does not train your model from scratch for the new task. But the above scenarios are not TL, so it is ambiguous to name them source and target tasks.

2. As said above, both of them are multitask learning rather than TL, so it is ambiguous to name them source and target tasks.

3. Both of the scenarios mentioned are multitask learning. However, you can use either of them as your source task and do TL on it to work on the other. For example, you can train your fruits-and-color model in scenario 1 and do transfer learning on it for the second scenario (to learn shape and color), as sketched below. As they share some of the same underlying properties, transfer learning is a valid option here.
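Here is a minimal PyTorch sketch of that suggestion, reusing the hypothetical `MultiTaskNet` class from the sketch above as the scenario 1 (source) model: the shared trunk is frozen and fresh heads are attached and trained for the scenario 2 (target) tasks. The checkpoint name and all sizes are illustrative assumptions.

```python
# Minimal fine-tuning sketch: take the fruits model as the source, freeze the
# shared trunk, and retrain only new heads on the shapes/colors data.
import torch
import torch.nn as nn

source_model = MultiTaskNet()  # assumed class from the sketch above, trained on fruits
# source_model.load_state_dict(torch.load("fruits_multitask.pt"))  # hypothetical checkpoint

# Freeze the shared trunk so its weights are not updated during fine-tuning
for p in source_model.trunk.parameters():
    p.requires_grad = False

# Replace the task heads for the target tasks (shape and color)
source_model.fruit_head = nn.Linear(64, 3)   # cube / ball / triangle
source_model.color_head = nn.Linear(64, 4)

# Only parameters that still require gradients (the new heads) are optimized
optimizer = torch.optim.Adam(
    (p for p in source_model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# Dummy target-domain batch
x = torch.randn(8, 64)
shape_y = torch.randint(0, 3, (8,))
color_y = torch.randint(0, 4, (8,))

shape_logits, color_logits = source_model(x)
loss = criterion(shape_logits, shape_y) + criterion(color_logits, color_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Passing only the trainable parameters to the optimizer is what "freeze the whole model except the last few layers and retrain them" amounts to in practice; the source-task knowledge stays in the frozen trunk while the new heads adapt to the target tasks.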
