How are model evaluation and re-training done after deployment without ground truth labels?
Suppose I deployed a model after manually labeling the ground truth for my training data, because the use case is such that there's no way to get ground truth labels without humans. Once the model is deployed, if I want to evaluate how it is doing on live data, how can I do that without sampling some of that live data (which doesn't come with ground truth labels) and manually labeling it? And then, after evaluating performance on that labeled sample of live data, using it as the training set for a new model. That's the only approach I can think of, since ground truth can't be discerned without human intervention, but it doesn't seem very automated to me.
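For concreteness, here is a minimal sketch of the loop I'm describing. The synthetic `make_classification` data and the `LogisticRegression` model are just placeholders; in reality the live sample would come from production traffic and its labels from human annotators:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for my manually labelled training data plus a slice
# of "live" traffic; in reality the live slice arrives without labels.
X, y = make_classification(n_samples=1200, random_state=0)
X_train, y_train = X[:1000], y[:1000]
live_X, live_y_from_humans = X[1000:], y[1000:]

# Model trained on the manually labelled data and then deployed.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Steps 1-2: sample live data (unlabelled) and have humans label it;
# here the "human" labels are simply the held-back synthetic ones.
# Step 3: evaluate the deployed model on that freshly labelled sample.
preds = model.predict(live_X)
print("Accuracy on labelled live sample:", accuracy_score(live_y_from_humans, preds))

# Step 4: fold the labelled sample into the training set and retrain.
X_combined = np.vstack([X_train, live_X])
y_combined = np.concatenate([y_train, live_y_from_humans])
model = LogisticRegression(max_iter=1000).fit(X_combined, y_combined)
```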
Is there any other way to do this without the manual labelling?
Topic: model-evaluations, mlops, training
Category: Data Science