If BERT can only handle 512 tokens, why can you provide such long contexts to the QA pipeline?
For example, I use the pipeline from Hugging Face Transformers with a QA model card like this:
from transformers import pipeline

model_name = 'wicharnkeisei/thai-bert-multi-cased-finetuned-xquadv1-finetuned-squad'
qa_pipeline = pipeline(
    'question-answering',
    model=model_name,
    tokenizer=model_name)
Then I perform inference like this:
qa_pipeline(question='What does the fox say?', context=open('context01.txt', 'r').read())
The context argument I pass is a string loaded from a file, and it is very long (more than 1,000 words).
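To confirm it really exceeds the limit, I counted the tokens with the model's own tokenizer (a quick sketch; context01.txt is the same file as above):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'wicharnkeisei/thai-bert-multi-cased-finetuned-xquadv1-finetuned-squad')
context = open('context01.txt', 'r').read()
# In my case this prints a number far above BERT's 512-token maximum.
print(len(tokenizer(context)['input_ids']))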
How can the model handle this? And is there a way to train this model on longer inputs?
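Skimming the pipeline docs, I noticed the call accepts chunking-related arguments (max_seq_len and doc_stride), so my guess is that it splits the context into overlapping windows and keeps the best-scoring answer span. Is that what happens under the hood? For example:

result = qa_pipeline(
    question='What does the fox say?',
    context=open('context01.txt', 'r').read(),
    max_seq_len=384,  # maximum tokens per window, question included
    doc_stride=128)   # overlap between consecutive windows
print(result['answer'], result['score'])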
Tags: huggingface, question-answering, bert, transformer, nlp
Category: Data Science