TY - GEN
T1 - A correlational encoder decoder architecture for pivot based sequence generation
AU - Saha, Amrita
AU - Khapra, Mitesh M.
AU - Chandar, Sarath
AU - Rajendran, Janarthanan
AU - Cho, Kyunghyun
N1 - Publisher Copyright:
© 1963-2018 ACL.
PY - 2016
Y1 - 2016
AB - Interlingua-based Machine Translation (MT) aims to encode multiple languages into a common linguistic representation and then decode sentences in multiple target languages from this representation. In this work we explore this idea in the context of neural encoder-decoder architectures, albeit on a smaller scale and without MT as the end goal. Specifically, we consider the case of three languages or modalities X, Z and Y, wherein we are interested in generating sequences in Y starting from information available in X. However, no parallel training data is available between X and Y, while training data is available between X & Z and between Z & Y (as is often the case in many real-world applications). Z thus acts as a pivot/bridge. An obvious solution, which is perhaps less elegant but works very well in practice, is to train a two-stage model which first converts from X to Z and then from Z to Y. Instead, we explore an interlingua-inspired solution which jointly learns to (i) encode X and Z to a common representation and (ii) decode Y from this common representation. We evaluate our model on two tasks: (i) bridge transliteration and (ii) bridge captioning. We report promising results in both applications and believe that this is a step in the right direction towards truly interlingua-inspired encoder-decoder architectures.
UR - http://www.scopus.com/inward/record.url?scp=85024094852&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85024094852&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85024094852
SN - 9784879747020
T3 - COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016: Technical Papers
SP - 109
EP - 118
BT - COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016
PB - Association for Computational Linguistics, ACL Anthology
T2 - 26th International Conference on Computational Linguistics, COLING 2016
Y2 - 11 December 2016 through 16 December 2016
ER -