Improving tree-based neural machine translation with dynamic lexicalized dependency encoding

Cited by: 14
Authors
Yang, Baosong [1 ]
Wong, Derek F. [1 ]
Chao, Lidia S. [1 ]
Zhang, Min [2 ]
Affiliations
[1] Univ Macau, Natural Language Proc & Portuguese-Chinese Machine Translation Lab, Dept Comp & Informat Sci, Macau, Peoples R China
[2] Soochow Univ, Inst Artificial Intelligence, Suzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Syntactic modeling; Dynamic parameters; Tree-RNN; Neural machine translation (NMT);
DOI
10.1016/j.knosys.2019.105042
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Tree-to-sequence neural machine translation models have proven effective at learning semantic representations from the exploited syntactic structure. Despite their success, tree-to-sequence models suffer from two major issues: (1) the embeddings of constituents at higher tree levels tend to contribute less to translation; and (2) a single set of model parameters can hardly capture the full syntactic and semantic richness of linguistic phrases. To address the first problem, we propose a lexicalized dependency model in which the source-side lexical representations are learned in a head-dependent fashion following a dependency graph. Since the number of dependents is variable, we propose a variant recurrent neural network (RNN) that jointly considers the long-distance dependencies and the sequential information of words. Concerning the second problem, we adopt a latent vector to dynamically condition the parameters used to compose each node representation. Experimental results reveal that the proposed model significantly outperforms recently proposed tree-based methods on English-Chinese and English-German translation tasks while using far fewer parameters. (C) 2019 Elsevier B.V. All rights reserved.
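To make the abstract's two mechanisms concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module name, the GRU over the dependent list, and the latent-to-scale/shift hyper-network are all illustrative assumptions. It shows (1) composing a head word's representation from a variable number of dependents with an RNN run in word order, and (2) a latent vector dynamically conditioning the composition of each node representation.

import torch
import torch.nn as nn

class DynamicHeadDependentComposer(nn.Module):
    """Illustrative only: composes a head word with its dependents.
    Names and the conditioning scheme are assumptions, not the paper's code."""
    def __init__(self, d_model, d_latent):
        super().__init__()
        # GRU over the dependents in their original word order, so both the
        # dependency structure and the sequential information contribute.
        self.dep_rnn = nn.GRU(d_model, d_model, batch_first=True)
        # Hypothetical hyper-network: maps a per-node latent vector to a
        # scale and shift that condition the shared composition weights.
        self.to_scale = nn.Linear(d_latent, d_model)
        self.to_shift = nn.Linear(d_latent, d_model)
        self.compose = nn.Linear(2 * d_model, d_model)

    def forward(self, head, dependents, latent):
        # head: (batch, d_model); dependents: (batch, n_dep, d_model),
        # where n_dep varies per head; latent: (batch, d_latent).
        _, dep_summary = self.dep_rnn(dependents)        # (1, batch, d_model)
        mixed = self.compose(torch.cat([head, dep_summary.squeeze(0)], dim=-1))
        # Dynamic conditioning: each node is composed under latent-dependent
        # parameters instead of one fixed set shared by all nodes.
        scale = torch.sigmoid(self.to_scale(latent))
        shift = self.to_shift(latent)
        return torch.tanh(scale * mixed + shift)

# Toy usage: one head word with three dependents.
composer = DynamicHeadDependentComposer(d_model=8, d_latent=4)
head = torch.randn(1, 8)
dependents = torch.randn(1, 3, 8)
latent = torch.randn(1, 4)
node_repr = composer(head, dependents, latent)           # shape: (1, 8)

The sigmoid-gated scale and additive shift are just one simple way to realize "dynamic parameters"; the paper's actual parameter-generation scheme may differ.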
Pages: 10
Related Papers
50 items in total
  • [31] Character-Level Encoding based Neural Machine Translation for Hindi language
    Rathod, Divya
    Yadav, Arun Kumar
    Kumar, Mohit
    Yadav, Divakar
    NEURAL PROCESSING LETTERS, 2025, 57 (02)
  • [32] Improving Transformer-Based Neural Machine Translation with Prior Alignments
    Nguyen, Thien
    Nguyen, Lam
    Tran, Phuoc
    Nguyen, Huu
    COMPLEXITY, 2021, 2021
  • [33] Controlling Byte Pair Encoding for Neural Machine Translation
    Tacorda, Alfred John
    Ignacio, Marvin John
    Oco, Nathaniel
    Roxas, Rachel Edita
    2017 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2017, : 168 - 171
  • [34] Auto-Encoding Variational Neural Machine Translation
    Eikema, Bryan
    Aziz, Wilker
    4TH WORKSHOP ON REPRESENTATION LEARNING FOR NLP (REPL4NLP-2019), 2019, : 124 - 141
  • [35] Improving Neural Machine Translation by Bidirectional Training
    Ding, Liang
    Wu, Di
    Tao, Dacheng
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3278 - 3284
  • [36] Corpus Augmentation for Improving Neural Machine Translation
    Li, Zijian
    Chi, Chengying
    Zhan, Yunyun
    CMC-COMPUTERS MATERIALS & CONTINUA, 2020, 64 (01): : 637 - 650
  • [37] Improving Neural Machine Translation by Retrieving Target Translation Template
    Li, Fuxue
    Chi, Chuncheng
    Yan, Hong
    Zhang, Zhen
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT IV, 2023, 14089 : 658 - 669
  • [38] Phrase Dependency Machine Translation with Quasi-Synchronous Tree-to-Tree Features
    Gimpel, Kevin
    Smith, Noah A.
    COMPUTATIONAL LINGUISTICS, 2014, 40 (02) : 349 - 401
  • [39] Extract and Attend: Improving Entity Translation in Neural Machine Translation
    Zeng, Zixin
    Wang, Rui
    Leng, Yichong
    Guo, Junliang
    Tan, Xu
    Qin, Tao
    Liu, Tie-yan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 1697 - 1710
  • [40] Based on Gated Dynamic Encoding Optimization, the LGE-Transformer Method for Low-Resource Neural Machine Translation
    Xu, Zhizhan
    Zhan, Siqi
    Yang, Wei
    Xie, Qianglai
    IEEE ACCESS, 2024, 12 : 162861 - 162869