Exploring Human-Like Translation Strategy with Large Language Models

Cited by: 11
Authors
He, Zhiwei [1 ]
Liang, Tian [2 ]
Jiao, Wenxiang [3 ]
Zhang, Zhuosheng [1 ]
Yang, Yujiu [2]
Wang, Rui [1 ]
Tu, Zhaopeng [3 ]
Shi, Shuming [3 ]
Wang, Xing [3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Tsinghua Univ, Peoples R China
[3] Tencent AI Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational linguistics;
DOI
10.1162/tacl_a_00642
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Large language models (LLMs) have demonstrated impressive capabilities in general scenarios, exhibiting a level of aptitude that approaches, and in some aspects even surpasses, human-level intelligence. Among their numerous skills, the translation abilities of LLMs have received considerable attention. Compared to typical machine translation that focuses solely on source-to-target mapping, LLM-based translation can potentially mimic the human translation process, which may take preparatory steps to ensure high-quality translation. This work explores this possibility by proposing the MAPS framework, which stands for Multi-Aspect Prompting and Selection. Specifically, we enable LLMs to first analyze the given source sentence and induce three aspects of translation-related knowledge (keywords, topics, and relevant demonstrations) to guide the final translation process. Moreover, we employ a selection mechanism based on quality estimation to filter out noisy and unhelpful knowledge. Both automatic evaluation (3 LLMs × 11 directions × 2 automatic metrics) and human evaluation (a preference study and MQM) demonstrate the effectiveness of MAPS. Further analysis shows that, by mimicking the human translation process, MAPS reduces various translation errors such as hallucination, ambiguity, mistranslation, awkward style, untranslated text, and omission. Source code is available at https://github.com/zwhe99/MAPS-mt.
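The pipeline described in the abstract (mine keywords, topics, and a demonstration from the source sentence; generate one knowledge-guided candidate per aspect plus a plain baseline; keep the candidate preferred by reference-free quality estimation) can be illustrated with a short Python sketch. This is only an illustration under assumptions: call_llm, qe_score, and the prompt wordings below are hypothetical stand-ins, not the authors' released code; the actual prompts, models, and QE scorer are in the repository linked above.

# Minimal sketch of the MAPS idea: induce three kinds of translation-related
# knowledge from the source, produce one candidate translation per knowledge
# type plus a plain baseline, then keep the candidate preferred by a
# reference-free quality-estimation (QE) scorer. `call_llm` and `qe_score`
# are hypothetical stand-ins for an LLM API and a QE model.

from typing import Callable, Dict, List


def maps_translate(
    src: str,
    src_lang: str,
    tgt_lang: str,
    call_llm: Callable[[str], str],
    qe_score: Callable[[str, str], float],
) -> str:
    # Step 1: multi-aspect knowledge mining via three separate prompts.
    knowledge: Dict[str, str] = {
        "keywords": call_llm(
            f"Extract the keywords of the following {src_lang} sentence and "
            f"give their {tgt_lang} translations:\n{src}"
        ),
        "topics": call_llm(
            f"Name the topic(s) of the following {src_lang} sentence:\n{src}"
        ),
        "demonstration": call_llm(
            f"Write a {src_lang}-{tgt_lang} sentence pair related to, but "
            f"different from, this sentence:\n{src}"
        ),
    }

    # Step 2: knowledge integration -- one guided translation per aspect,
    # plus a baseline translation that uses no extra knowledge.
    candidates: List[str] = [
        call_llm(f"Translate this {src_lang} sentence into {tgt_lang}:\n{src}")
    ]
    for aspect, text in knowledge.items():
        candidates.append(
            call_llm(
                f"Related {aspect}:\n{text}\n\n"
                f"Using this knowledge if helpful, translate the {src_lang} "
                f"sentence into {tgt_lang}:\n{src}"
            )
        )

    # Step 3: knowledge selection -- a QE scorer (source + hypothesis, no
    # reference) picks the final output from the candidate pool.
    return max(candidates, key=lambda hyp: qe_score(src, hyp))

Because the unguided baseline competes with the knowledge-guided candidates during selection, knowledge that turns out to be noisy or unhelpful can be discarded entirely, which is the filtering role the abstract assigns to the quality-estimation step.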
Pages: 229-246
Number of pages: 18
Related papers
50 records in total
  • [41] Simul-LLM: A Framework for Exploring High-Quality Simultaneous Translation with Large Language Models
    Agostinelli, Victor
    Wild, Max
    Raffel, Matthew
    Ahmed, Kazi
    Fuad, Asif
    Chen, Lizhong
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 10530 - 10541
  • [42] Towards More Human-Like Episodic Memory for More Human-Like Agents
    Brom, Cyril
    Lukavsky, Jiri
    INTELLIGENT VIRTUAL AGENTS, PROCEEDINGS, 2009, 5773 : 484+
  • [43] A Proposed Framework for Human-like Language Processing of ChatGPT in Academic Writing
    Mahyoob M.
    Algaraady J.
    Alblwi A.
    International Journal of Emerging Technologies in Learning, 2023, 18 (14) : 282 - 293
  • [44] Situated Language Understanding with Human-like and Visualization-Based Transparency
    Perlmutter, Leah
    Kernfeld, Eric
    Cakmak, Maya
    ROBOTICS: SCIENCE AND SYSTEMS XII, 2016,
  • [45] Recognizing Emotional Body Language Displayed by a Human-like Social Robot
    McColl, Derek
    Nejat, Goldie
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2014, 6 (02) : 261 - 280
  • [46] Improving Machine Translation Formality with Large Language Models
    Yang, Murun
    Li, Fuxue
    CMC-COMPUTERS MATERIALS & CONTINUA, 2025, 82 (02): : 2061 - 2075
  • [47] LEVERAGING LARGE LANGUAGE MODELS WITH VOCABULARY SHARING FOR SIGN LANGUAGE TRANSLATION
    Lee, Huije
    Kim, Jung-Ho
    Hwang, Eui Jun
    Kim, Jaewoo
    Park, Jong C.
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [48] Human-Like Eukaryotic Translation Initiation Factor 3 from Neurospora crassa
    Smith, M. Duane
    Gu, Yu
    Querol-Audi, Jordi
    Vogan, Jacob M.
    Nitido, Adam
    Cate, Jamie H. D.
    PLOS ONE, 2013, 8 (11):
  • [49] Human-Like Delicate Region Erasing Strategy for Weakly Supervised Detection
    En, Qing
    Duan, Lijuan
    Zhang, Zhaoxiang
    Bai, Xiang
    Zhang, Yundong
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 3502 - 3509
  • [50] Think from Words(TFW): Initiating Human-Like Cognition in Large Language Models Through Think from Words for Japanese Text-Level Classification
    Gan, Chengguang
    Zhang, Qinghao
    Mori, Tatsunori
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS, PT II, NLDB 2024, 2024, 14763 : 43 - 55