Conceptually, how to deal with facts and time in GPT-3 and Language Models

When exploring text generation with various large language models, I frequently come across generated text that states facts which are plainly wrong. I am not talking about fake news or bias; rather, I am talking about dated pieces of information that were once correct but no longer are. When reading about the pros and cons of language models, I rarely see this raised as one of the major drawbacks.

Whether we fine-tune models or use the pretrained models directly, the weights are frozen at a point in time. How do we account for information that may no longer be correct in the future?
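To make the issue concrete, here is a minimal sketch using the Hugging Face transformers library. The GPT-2 checkpoint is just a publicly available stand-in for any frozen model, and the prompt is illustrative:

```python
# Minimal sketch of the problem, assuming the Hugging Face transformers
# library; GPT-2 stands in for any model with frozen weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The weights were frozen when the model was trained, so completions of
# time-sensitive prompts reflect the world as of the training data,
# however long ago that was.
prompt = "The current Prime Minister of the United Kingdom is"
result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```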

The general knowledge learned is wonderful, but the knowledge will inevitably drift and become less and less relevant over time. Take Stack Overflow, for example: some questions from its first couple of years still hold true, while others have not aged well and may now be invalid questions and/or answers. (One workaround I have wondered about is sketched below.)
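One direction I have wondered about is injecting up-to-date facts into the prompt at inference time instead of relying on the frozen weights, along these lines (`lookup_current_fact` is a hypothetical helper, not a real API, and the prompt layout is only illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def lookup_current_fact(topic: str) -> str:
    # Hypothetical helper: in practice this would query a maintained,
    # regularly updated fact store (a database, search index, API, ...).
    return "Up-to-date fact text retrieved from an external source."

# Prepend the fresh fact so the model conditions on it, rather than on
# whatever (possibly stale) version of the fact its weights encode.
context = lookup_current_fact("some-time-sensitive-topic")
prompt = f"{context}\nQuestion: <time-sensitive question>\nAnswer:"
result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

But that only patches inference; it does not update what the weights themselves encode, which is really what my question is about.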

Topic: openai-gpt, language-model

Category: Data Science
