Understanding beams, banks, and hypotheses - Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation
I'm currently analysing the paper Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation (Post & Vilar, 2018): https://arxiv.org/abs/1804.06609. I'm having trouble understanding how the data is processed. For example, the paper talks about beams, banks, and hypotheses, and I have no idea what these terms mean. How would you describe these terms, and are there any tutorial sources you would recommend for understanding dynamic beam allocation? To make the question concrete, I've included a sketch of my current mental model below.
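Here is roughly how I picture plain beam search so far: a "hypothesis" is a partial output sequence with a cumulative score, and the "beam" is the set of top-k hypotheses kept at each decoding step. This is only a toy sketch, not the paper's algorithm; the vocabulary, the `log_prob` scoring function, and constants like `BEAM_SIZE` are made up for illustration and would be replaced by a real NMT model in practice.

```python
# Toy sketch of vanilla beam search (my current mental model, NOT the
# dynamic-beam-allocation algorithm from the paper). The scoring
# function below is invented purely for illustration.

VOCAB = ["the", "cat", "sat", "mat", "</s>"]  # hypothetical toy vocabulary
BEAM_SIZE = 3
MAX_LEN = 5

def log_prob(prefix, token):
    """Stand-in for a real NMT model's next-token log-probability.
    Here: a fake score that slightly prefers short tokens (assumption)."""
    return -len(token) / 10.0 - 0.01 * len(prefix)

def beam_search():
    # A "hypothesis" = a partial output sequence plus its cumulative score.
    beams = [([], 0.0)]  # start with a single empty hypothesis
    for _ in range(MAX_LEN):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == "</s>":
                candidates.append((tokens, score))  # finished; carry over
                continue
            for tok in VOCAB:
                candidates.append((tokens + [tok], score + log_prob(tokens, tok)))
        # The "beam" keeps only the top-k scoring hypotheses per step.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:BEAM_SIZE]
    return beams

for tokens, score in beam_search():
    print(f"{score:8.3f}  {' '.join(tokens)}")
```

What I can't map onto this picture is the "banks": if I understand the paper's Section 3 at all, the single beam above gets partitioned into banks according to how many lexical constraints each hypothesis has already satisfied, but I'm unsure how the beam slots are then divided among those banks at each step.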
Category:
Data Science