How does the T5 model work on input and target data during transfer learning?

I am working on a project where I want the model to generate a job description based on Role, Industry, and Skills. I have trained the model on my data and obtained the resulting output.

I am aware that the T5 model was pre-trained on the C4 corpus in an unsupervised manner, using techniques such as denoising and corrupted-span masking. But I am not able to understand how this carries over to a supervised downstream task.

My concern is: if I pass my input and target variables for training, how is the model going to train? Please also give me a brief idea of how the model handles the input and output data.

Any links or resources are appreciated. Thank you.

Topic huggingface transformer nlg transfer-learning nlp

Category Data Science


T5 is in fact a sequence-to-sequence model: it has an encoder that produces hidden states representing the input and a decoder that generates the output conditioned on those states. When you fine-tune the model you can happily ignore how it was pre-trained and train only for your specific task, as schematically shown in the original Google blog post.
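
Because T5 treats every task as text-to-text, a common approach is to serialize your structured fields (Role, Industry, Skills) into a single input string and use the job description as the target string. Here is a minimal sketch; the "generate job description:" prefix and the field layout are illustrative choices of mine, not something T5 requires:

```python
# Hypothetical training record; field names and prompt format are placeholders.
record = {
    "role": "Data Engineer",
    "industry": "Finance",
    "skills": "Python, SQL, Spark",
}

source_text = (
    "generate job description: "
    f"role: {record['role']} | industry: {record['industry']} | skills: {record['skills']}"
)
target_text = "We are looking for a Data Engineer to build and maintain our data pipelines..."
```

The encoder reads `source_text`, and the decoder is trained to produce `target_text` token by token.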

[Figure: text-to-text framework diagram from the T5 blog post]

For fine-tuning, you just take your supervised training data, feed the inputs and targets to the model, and train as you would train any other model. There are some minimal examples in the Hugging Face Transformers documentation.
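
As a rough sketch of a single training step with the Transformers library (the checkpoint name, learning rate, and example pair are placeholders, not recommendations):

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# source_text / target_text is the pair constructed in the snippet above;
# in practice you would iterate over batches of your whole dataset.
inputs = tokenizer(source_text, return_tensors="pt", truncation=True)
labels = tokenizer(target_text, return_tensors="pt", truncation=True).input_ids
# (When padding a batch, pad token ids in `labels` should be set to -100
# so they are ignored by the loss.)

# Passing `labels` makes the model compute the cross-entropy loss between
# the decoder's predictions and the target tokens (teacher forcing).
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# After fine-tuning, generate a description for a new serialized input:
generated = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

At inference time, the same serialization format used for training is applied to new Role/Industry/Skills records, and the decoded output is the generated job description.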
