What are the benefits of combining semi-supervised and supervised learning methods?
I've been looking into semi-supervised learning, specifically label propagation and label spreading. In tutorials and some papers I've often seen it mentioned that the results of label propagation are then used to build a supervised model. It's not clear to me why this is necessary, or that it is beneficial.

What is the purpose of building another model from the results of label propagation after you have already obtained labels for your unknown data? Couldn't you just use label propagation to predict any new labels that you encounter in the future? I assume this has something to do with label propagation being a transductive algorithm? But I've seen that the algorithm can be extended to an inductive one, is that correct?

Furthermore, if you're building a model using labels that are themselves predictions, doesn't this have the propensity to introduce a lot of bias into said model?
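For concreteness, here is a minimal sketch of the two-stage workflow I'm asking about, using scikit-learn (the synthetic dataset, kernel choice, and downstream classifier are just illustrative assumptions on my part):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Pretend most labels are unknown: scikit-learn marks unlabeled points with -1.
y_partial = y.copy()
unlabeled = rng.rand(len(y)) > 0.2   # keep roughly 20% of the true labels
y_partial[unlabeled] = -1

# Stage 1: transductive label spreading over a graph built from all points.
ls = LabelSpreading(kernel="knn", n_neighbors=7)
ls.fit(X, y_partial)
y_propagated = ls.transduction_      # a label for every point, given + inferred

# Stage 2: train an ordinary inductive classifier on the propagated labels.
clf = LogisticRegression().fit(X, y_propagated)
```

My question is essentially why stage 2 exists at all, given that stage 1 already produced labels for everything.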
Topic supervised-learning semi-supervised-learning classification
Category Data Science