Widening the bottleneck of lexical choice for non-autoregressive translation

Cited by: 0
Authors
Ding, Liang [1 ]
Wang, Longyue [2 ]
Liu, Siyou [3 ]
Luo, Weihua [2 ]
Zhang, Kaifu [2 ]
Affiliations
[1] Univ Sydney, Sydney, Australia
[2] Alibaba Int Digital Commerce, Hangzhou, Peoples R China
[3] Univ Macau, Macau, Peoples R China
Source
Computer Speech and Language
Keywords
Lexical choice; Non-autoregressive translation; Low-frequency word; Knowledge distillation; New benchmark;
DOI
10.1016/j.csl.2024.101765
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, non-autoregressive models have enjoyed great popularity in the natural language processing (NLP) community and have gradually spread into other research areas such as speech recognition and computer vision. Non-autoregressive translation (NAT) improves the decoding efficiency of translation models by predicting all tokens independently and simultaneously. To reduce the complexity of the raw data, knowledge distillation (KD), which leverages an autoregressive translation (AT) teacher, is the standard preliminary step for training NAT models. In this study, we first reveal that the discrepancy between the raw and the KD data leads to lexical choice errors when predicting low-frequency words. We then bridge this gap with three architecture-free approaches that introduce no extra computational cost: (1) Model Level, where we introduce an extra Kullback-Leibler divergence term derived by comparing the lexical choice of the NAT model with that embedded in the raw data; (2) Parallel Data Level, where we reactivate low-frequency information through raw pre-training and reverse KD training; (3) Monolingual Data Level, where we transfer both the knowledge of the bilingual raw data and that of new monolingual data to the NAT model. We conduct experiments on widely used NAT benchmarks (i.e., WMT14 English-German and WMT16 Romanian-English) over two advanced NAT architectures. Results demonstrate that the proposed approaches significantly and universally improve translation quality by reducing translation errors on low-frequency words. Extensive analyses show that (1) these approaches generate translations that contain more low-frequency words; (2) the techniques can be combined profitably to further recall the useful information lost in standard KD; (3) enlarging the monolingual data consistently improves BLEU scores up to a point, but the trend does not hold when the monolingual data is scaled further. Finally, we establish new NAT benchmarks by validating our approaches on three additional datasets that vary in language and scale (i.e., WMT17 Chinese-English, WMT19 English-German and WAT17 Japanese-English). We will release the data, code and models, which we hope will significantly promote research in this field.
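To make the Model Level approach more concrete, the sketch below shows one plausible form of such a KL-regularized training objective. The notation is an illustrative assumption rather than the paper's own: $\mathcal{L}_{\mathrm{NAT}}$ denotes the usual NAT loss on the distilled data $\mathcal{D}_{\mathrm{KD}}$, $Q_{\mathrm{raw}}$ a per-position lexical prior estimated from the raw bilingual data, and $\lambda$ a balancing weight.

\mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{NAT}}(\theta;\, \mathcal{D}_{\mathrm{KD}}) \;+\; \lambda \sum_{t=1}^{T} \mathrm{KL}\big( Q_{\mathrm{raw}}(\cdot \mid x, t) \,\big\|\, P_{\theta}(\cdot \mid x, t) \big)

Under this reading, the first term keeps the standard token-independent NAT objective on the KD data, while the second pulls the model's per-position distribution $P_{\theta}$ back toward the raw-data prior, counteracting the tendency of KD to wash out low-frequency words; since the extra term is evaluated only during training, decoding speed is unaffected.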
Pages: 16
Related Papers
50 records in total
  • [21] Non-Autoregressive Machine Translation: It's Not as Fast as it Seems
Helcl, Jindrich
    Haddow, Barry
    Birch, Alexandra
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 1780 - 1790
  • [22] Non-Autoregressive Translation by Learning Target Categorical Codes
    Bao, Yu
    Huang, Shujian
    Xiao, Tong
    Wang, Dongqi
    Dai, Xinyu
    Chen, Jiajun
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 5749 - 5759
  • [23] RenewNAT: Renewing Potential Translation for Non-autoregressive Transformer
    Guo, Pei
    Xiao, Yisheng
    Li, Juntao
    Zhang, Min
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 12854 - 12862
  • [24] Glancing Transformer for Non-Autoregressive Neural Machine Translation
    Qian, Lihua
    Zhou, Hao
    Bao, Yu
    Wang, Mingxuan
    Qiu, Lin
    Zhang, Weinan
    Yu, Yong
    Li, Lei
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 1993 - 2003
  • [25] Imitation Learning for Non-Autoregressive Neural Machine Translation
    Wei, Bingzhen
    Wang, Mingxuan
    Zhou, Hao
    Lin, Junyang
    Sun, Xu
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 1304 - 1312
  • [26] Learning to Rewrite for Non-Autoregressive Neural Machine Translation
    Geng, Xinwei
    Feng, Xiaocheng
    Qin, Bing
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3297 - 3308
  • [27] Aligned Cross Entropy for Non-Autoregressive Machine Translation
    Ghazvininejad, Marjan
    Karpukhin, Vladimir
    Zettlemoyer, Luke
    Levy, Omer
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [28] Non-Autoregressive Document-Level Machine Translation
    Bao, Guangsheng
    Teng, Zhiyang
    Zhou, Hao
    Yan, Jianhao
    Zhang, Yue
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 14791 - 14803
  • [29] Non-autoregressive Machine Translation with Disentangled Context Transformer
    Kasai, Jungo
    Cross, James
    Ghazvininejad, Marjan
    Gu, Jiatao
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [30] Efficient Domain Adaptation for Non-Autoregressive Machine Translation
    You, Wangjie
    Guo, Pei
    Li, Juntao
    Chen, Kehai
    Zhang, Min
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 13657 - 13670