TY - GEN
T1 - Learning to parse and translate improves neural machine translation
AU - Eriguchi, Akiko
AU - Tsuruoka, Yoshimasa
AU - Cho, Kyunghyun
N1 - Funding Information:
We thank Yuchen Qiao and Kenjiro Taura for their help to speed up the implementations of training and also Kazuma Hashimoto for his valuable comments and discussions. This work was supported by JST CREST Grant Number JPMJCR1513 and JSPS KAKENHI Grant Number 15J12597 and 16H01715. KC thanks support by eBay, Facebook, Google and NVIDIA.
Publisher Copyright:
© 2017 Association for Computational Linguistics.
PY - 2017
Y1 - 2017
N2 - There has been relatively little attention to incorporating linguistic prior to neural machine translation. Much of the previous work was further constrained to considering linguistic prior on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining the recurrent neural network grammar into the attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate linguistic prior during training, and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RNNG.
AB - There has been relatively little attention to incorporating linguistic prior to neural machine translation. Much of the previous work was further constrained to considering linguistic prior on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining the recurrent neural network grammar into the attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate linguistic prior during training, and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RNNG.
UR - http://www.scopus.com/inward/record.url?scp=85040614869&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85040614869&partnerID=8YFLogxK
U2 - 10.18653/v1/P17-2012
DO - 10.18653/v1/P17-2012
M3 - Conference contribution
AN - SCOPUS:85040614869
T3 - ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
SP - 72
EP - 78
BT - ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
PB - Association for Computational Linguistics (ACL)
T2 - 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017
Y2 - 30 July 2017 through 4 August 2017
ER -