Closed Domain Question Answering that doesn't answer unanswerable questions

I've been exploring closed-domain question answering implementations trained on the SQuAD 2.0 dataset. Ideally, such a model should not answer a question when the context corpus contains no answer to it. But when I implement these models using the Haystack repo or the FARM repo, I find that they always return an answer, even when they shouldn't. Is there an implementation available that abstains when it doesn't find a suitable answer?
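For concreteness, this is roughly the setup I mean — a minimal sketch following the Haystack 1.x layout (import paths and parameter names have moved between Haystack versions, so treat the exact names as assumptions, and the document and query are illustrative):

```python
# Minimal extractive QA pipeline, assuming the Haystack 1.x API
# (import paths differ in other versions).
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import FARMReader, TfidfRetriever
from haystack.pipelines import ExtractiveQAPipeline

# Index a small context corpus in memory.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    {"content": "Berlin is the capital of Germany."},
])

retriever = TfidfRetriever(document_store=document_store)

# A SQuAD 2.0 model, which in principle can predict "no answer".
reader = FARMReader(
    model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2"
)

pipe = ExtractiveQAPipeline(reader, retriever)

# The corpus says nothing about France, yet an answer span
# from the context is still returned.
prediction = pipe.run(
    query="What is the capital of France?",
    params={"Retriever": {"top_k": 3}, "Reader": {"top_k": 3}},
)
for answer in prediction["answers"]:
    print(answer.answer, answer.score)
```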

References:

  1. https://colab.research.google.com/github/deepset-ai/haystack/blob/update-tutorials/tutorials/Tutorial3_Basic_QA_Pipeline_without_Elasticsearch.ipynb#scrollTo=KS4nTwxIbRb6

  2. https://github.com/deepset-ai/FARM

  3. https://github.com/deepset-ai/haystack

  4. https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2

  5. https://colab.research.google.com/drive/1UrKlHlf68hD3wwQDTctx2cQs6FMUxLLH?usp=sharing#scrollTo=J4jxYsxaG77O

Topic question-answering stanford-nlp nlp

Category Data Science


The question answering model in simpletransformers accepts training data with an is_impossible flag. During prediction it then does not generate answers for every question, since the model learns which questions can be answered; a minimal sketch follows the link below.

You can find details below:

https://pypi.org/project/simpletransformers/
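A minimal sketch, assuming the simpletransformers QuestionAnsweringModel API (the contexts, questions, model choice, and ids are illustrative, and the exact return format of predict may vary by version):

```python
from simpletransformers.question_answering import QuestionAnsweringModel

# SQuAD 2.0-style training data: answerable and unanswerable
# questions are both marked via the is_impossible flag.
train_data = [
    {
        "context": "Berlin is the capital of Germany.",
        "qas": [
            {
                "id": "q1",
                "question": "What is the capital of Germany?",
                "answers": [{"text": "Berlin", "answer_start": 0}],
                "is_impossible": False,
            },
            {
                "id": "q2",
                "question": "What is the capital of France?",
                "answers": [],
                "is_impossible": True,  # no answer in this context
            },
        ],
    }
]

# Any SQuAD 2.0-capable architecture works; bert is used for illustration.
model = QuestionAnsweringModel(
    "bert",
    "bert-base-cased",
    args={"num_train_epochs": 1, "overwrite_output_dir": True},
)
model.train_model(train_data)

# At prediction time the model can return an empty answer
# for questions it judges unanswerable.
to_predict = [
    {
        "context": "Berlin is the capital of Germany.",
        "qas": [{"id": "p1", "question": "What is the capital of France?"}],
    }
]
answers, probabilities = model.predict(to_predict)
print(answers)  # an empty string among the candidates signals "no answer"
```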
