how to improve recall by retraining a model on its feedback

I am creating a supervised model using sensitive and scarce data. For the sake of discussion, I've simplified the problem statement by assuming that I'm creating a model for identifying dogs.

Let's say I am creating a model to identify dogs in pictures. I trained it with a few positive and negative examples. I could not gather a lot of data because it is scarce, so the model's accuracy is not good (say F-score = 0.64). I deployed this model in production. When the model makes a prediction, I label its output as a True Positive or a False Positive, and then I retrain the model on these labels.

A problem that I see with this approach is that I never learn when the model missed a dog picture, i.e. a False Negative, and hence I cannot retrain the model on such examples. Therefore, the current approach will only improve my model's Precision (TP/(TP+FP)) and not its Recall (TP/(TP+FN)).
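To make the asymmetry concrete, here is a minimal sketch (with hypothetical counts) of why this feedback loop can measure Precision but not Recall: reviewing only the model's positive predictions yields TP and FP, while FN stays unobserved.

```python
# Precision needs only TP and FP, both of which come from reviewing
# the model's positive predictions in production.
def precision(tp, fp):
    return tp / (tp + fp)

# Recall needs FN: dogs the model never flagged, which this feedback
# loop never surfaces.
def recall(tp, fn):
    return tp / (tp + fn)

# Hypothetical counts from reviewing 100 positive predictions:
tp, fp = 64, 36
print(precision(tp, fp))  # computable from the feedback alone -> 0.64

# FN is unknown without labelling a random sample of all inputs,
# so recall(tp, fn) cannot even be evaluated here.
```

This is why estimating Recall typically requires labelling a random sample of all production inputs (not just the model's positives), so that missed dogs have a chance of being counted.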

Please suggest:

  1. how I can improve the model's Recall
  2. whether you see any other problems with my approach

Topic pretraining machine-learning-model reinforcement-learning accuracy machine-learning

Category Data Science
