How to model prior information in sequential models?

Are there any approaches to model prior information in sequential models, such as in sequence classification?

For example, I have an input sequence [[Z, 0, 1], [Y, 1, 1]] that I need to classify into one of A, B, C, D, or E. From prior knowledge I know that if the input contains the feature Y, the output is most likely one of A, B, or C. Hence, I could initialize the model so that, whenever the input has the feature Y, there is a 25% probability each for A, B, and C (leaving the remaining 25% split between D and E), and thereafter let the model learn from the data. Using this domain-specific prior information would help, since the data is noisy and limited.
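For concreteness, here is a minimal sketch of the prior table I have in mind, in Python; the numbers and names are made up from the example above:

```python
import numpy as np

# Hypothetical prior table: keys are the categorical feature values (Z, Y),
# entries are prior probabilities over the classes (A, B, C, D, E).
CLASSES = ["A", "B", "C", "D", "E"]
priors = {
    "Z": np.full(5, 0.2),                             # no prior knowledge -> uniform
    "Y": np.array([0.25, 0.25, 0.25, 0.125, 0.125]),  # mass mostly on A, B, C
}

# Working in log space keeps the priors compatible with logits later on.
log_priors = {k: np.log(v) for k, v in priors.items()}
```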

Any suggestions on what kinds of ML models this could be incorporated into?

Some ideas based on what I found: I looked at RNNs, but these priors don't seem to fit directly into that framework. Maybe I need to add a fixed lookup table that scales the logits according to the priors; a sketch of that is below.
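Here is roughly what I mean, sketched in PyTorch; the class name, the table, and all shapes are my own assumptions, not an established recipe:

```python
import torch
import torch.nn as nn

class PriorRNN(nn.Module):
    """GRU classifier whose logits are shifted by a fixed log-prior
    looked up from a categorical feature of the input."""

    def __init__(self, input_dim, hidden_dim, num_classes, log_prior_table):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)
        # Fixed (non-trainable) lookup: one row of log-priors per feature value.
        self.register_buffer("log_prior_table", log_prior_table)

    def forward(self, x, feature_ids):
        # x: (batch, seq_len, input_dim); feature_ids: (batch,) e.g. 0 for Z, 1 for Y
        _, h = self.rnn(x)         # h: (num_layers, batch, hidden_dim)
        logits = self.head(h[-1])  # (batch, num_classes)
        # Adding log-priors to logits multiplies the softmax probabilities
        # by the prior (up to renormalization).
        return logits + self.log_prior_table[feature_ids]

# Made-up numbers: rows = feature values (Z, Y), columns = classes (A..E).
table = torch.log(torch.tensor([
    [0.2, 0.2, 0.2, 0.2, 0.2],         # Z: uniform, no prior knowledge
    [0.25, 0.25, 0.25, 0.125, 0.125],  # Y: mass mostly on A, B, C
]))
model = PriorRNN(input_dim=3, hidden_dim=16, num_classes=5, log_prior_table=table)
logits = model(torch.randn(4, 2, 3), torch.tensor([0, 1, 1, 0]))
```

Since adding log-priors to the logits multiplies the softmax probabilities by the prior, an alternative would be to initialize the bias of the final linear layer to these log-priors and leave it trainable, so the model starts from the prior but can drift away from it as it sees more data.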

Other than this, are there other modelling techniques this fits naturally into?

Tags: bayesian-networks, rnn, neural-network, machine-learning
