Unlocking the Power of Large Language Models for Entity Alignment

Cited by: 0
Authors
Jiang, Xuhui [1 ,2 ,3 ]
Shen, Yinghan [1 ]
Shi, Zhichao [1 ,2 ]
Xu, Chengjin [3 ]
Li, Wei [1 ]
Li, Zixuan [1 ]
Guo, Jian [3 ]
Shen, Huawei [1 ]
Wang, Yuanzhuo [1 ]
Affiliations
[1] Chinese Acad Sci, CAS Key Lab AI Safety, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing, Peoples R China
[3] Int Digital Econ Acad, IDEA Res, Shenzhen, Peoples R China
Keywords
KNOWLEDGE;
DOI
Not available
Abstract
Entity Alignment (EA) is vital for integrating diverse knowledge graph (KG) data, playing a crucial role in data-driven AI applications. Traditional EA methods primarily rely on comparing entity embeddings, but their effectiveness is constrained by the limited input KG data and the capabilities of the representation learning techniques. Against this backdrop, we introduce ChatEA, an innovative framework that incorporates large language models (LLMs) to improve EA. To address the constraints of limited input KG data, ChatEA introduces a KG-code translation module that translates KG structures into a format understandable by LLMs, thereby allowing LLMs to utilize their extensive background knowledge to improve EA accuracy. To overcome the over-reliance on entity embedding comparisons, ChatEA implements a two-stage EA strategy that capitalizes on LLMs' capability for multi-step reasoning in a dialogue format, thereby enhancing accuracy while preserving efficiency. Our experimental results verify ChatEA's superior performance, highlighting LLMs' potential in facilitating EA tasks. The source code is available at https://github.com/jxh4945777/ChatEA/.
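The KG-code translation module described in the abstract can be illustrated with a minimal sketch: serialize the triples surrounding an entity into a class-like text block that an LLM can read. All function names, field names, and the exact output format below are assumptions for illustration, not taken from the ChatEA repository.

```python
def kg_to_code(entity: str, triples: list[tuple[str, str, str]]) -> str:
    """Render the triples whose head is `entity` as a code-style description."""
    # Collect (relation, tail) pairs for the target entity.
    relations = [(r, t) for h, r, t in triples if h == entity]
    # Emit a Python-class-like snippet the LLM can consume as context.
    lines = [f"class Entity_{entity.replace(' ', '_')}:"]
    lines.append(f'    name = "{entity}"')
    lines.append("    relations = {")
    for rel, tail in relations:
        lines.append(f'        "{rel}": "{tail}",')
    lines.append("    }")
    return "\n".join(lines)

triples = [
    ("Paris", "capital_of", "France"),
    ("Paris", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
]
print(kg_to_code("Paris", triples))
```

A textual rendering like this lets the LLM apply its background knowledge about the entities when judging whether two candidates from different KGs refer to the same real-world object.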
Pages: 7566-7583 (18 pages)
Related papers
50 records in total
  • [1] On the Calibration of Large Language Models and Alignment
    Zhu, Chiwei
    Xu, Benfeng
    Wang, Quan
    Zhang, Yongdong
    Mao, Zhendong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9778 - 9795
  • [2] Unlocking the Power of ChatGPT, Artificial Intelligence, and Large Language Models: Practical Suggestions for Radiation Oncologists
    Waters, Michael R.
    Aneja, Sanjay
    Hong, Julian C.
    PRACTICAL RADIATION ONCOLOGY, 2023, 13 (06) : E484 - E490
  • [3] Fundamental Limitations of Alignment in Large Language Models
    Wolf, Yotam
    Wies, Noam
    Avnery, Oshri
    Levine, Yoav
    Shashua, Amnon
    arXiv, 2023
  • [4] Social Value Alignment in Large Language Models
    Abbol, Giulio Antonio
    Marchesi, Serena
    Wykowska, Agnieszka
    Belpaeme, Tony
    VALUE ENGINEERING IN ARTIFICIAL INTELLIGENCE, VALE 2023, 2024, 14520 : 83 - 97
  • [5] Investigating Cultural Alignment of Large Language Models
    AlKhamissi, Badr
    ElNokrashy, Muhammad
    AlKhamissi, Mai
    Diab, Mona
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 12404 - 12422
  • [6] Hybrid Alignment Training for Large Language Models
    Wang, Chenglong
    Zhou, Hang
    Chang, Kaiyan
    Li, Bei
    Mu, Yongyu
    Xiao, Tong
    Liu, Tongran
    Zhu, Jingbo
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 11389 - 11403
  • [7] Large Language Models for Latvian Named Entity Recognition
    Viksna, Rinalds
    Skadina, Inguna
    HUMAN LANGUAGE TECHNOLOGIES - THE BALTIC PERSPECTIVE (HLT 2020), 2020, 328 : 62 - 69
  • [8] Unlocking the Capabilities of Large Language Models for Accelerating Drug Development
    Anderson, Wes
    Braun, Ian
    Bhatnagar, Roopal
    Romero, Klaus
    Walls, Ramona
    Schito, Marco
    Podichetty, Jagdeep T.
    CLINICAL PHARMACOLOGY & THERAPEUTICS, 2024, 116 (01) : 38 - 41
  • [9] A Causal View of Entity Bias in (Large) Language Models
    Wang, Fei
    Mo, Wenjie
    Wang, Yiwei
    Zhou, Wenxuan
    Chen, Muhao
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 15173 - 15184
  • [10] Unlocking the Potentials of Large Language Models in Orthodontics: A Scoping Review
    Zheng, Jie
    Ding, Xiaoqian
    Pu, Jingya Jane
    Chung, Sze Man
    Ai, Qi Yong H.
    Hung, Kuo Feng
    Shan, Zhiyi
    BIOENGINEERING-BASEL, 2024, 11 (11):