Exploiting Pre-Trained Language Models for Black-Box Attack against Knowledge Graph Embeddings

Citations: 0
Authors
Yang, Guangqian [1 ]
Zhang, Lei [1 ]
Liu, Yi [2 ]
Xie, Hongtao [1 ]
Mao, Zhendong [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Peoples Daily Online, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Knowledge Graph; Adversarial Attack; Language Model;
DOI
10.1145/3688850
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Despite emerging research on adversarial attacks against knowledge graph embedding (KGE) models, most existing work focuses on white-box settings. White-box attacks, however, are difficult to mount in practice because they require access to model parameters that are unlikely to be exposed. In this article, we propose a novel black-box attack method that requires access only to the knowledge graph data, making it more realistic in real-world attack scenarios. Specifically, we utilize pre-trained language models (PLMs) to encode the text features of knowledge graphs, an aspect neglected by previous research. We then use these encoded text features to identify the most influential triples, from which we construct the corrupted triples used in the attack. To improve the transferability of the attack, we further propose fine-tuning the PLM by enriching triple embeddings with structure information. Extensive experiments on two knowledge graph datasets demonstrate the effectiveness of the proposed method.
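The abstract only sketches the pipeline, so the following minimal Python sketch illustrates the core idea under stated assumptions: verbalize triples into text, embed them with an off-the-shelf PLM, and rank candidate triples by embedding similarity to a target fact as a proxy for influence. The model choice (bert-base-uncased), the verbalize/embed helpers, the cosine-similarity scoring rule, and the tail-replacement corruption heuristic are all illustrative assumptions, not the authors' actual method.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def verbalize(triple):
    # Turn a (head, relation, tail) triple into a short sentence for the PLM.
    h, r, t = triple
    return f"{h.replace('_', ' ')} {r.replace('_', ' ')} {t.replace('_', ' ')}"

def embed(texts):
    # Mean-pool the PLM's last hidden states into one vector per triple text.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

# The target fact the attacker wants the victim KGE model to mis-rank, plus
# candidate triples from its graph neighbourhood. The attacker only needs the
# public graph data, never the victim model's parameters (hence black-box).
target = ("barack_obama", "born_in", "honolulu")
candidates = [
    ("barack_obama", "president_of", "united_states"),
    ("honolulu", "located_in", "hawaii"),
    ("michelle_obama", "spouse_of", "barack_obama"),
]

target_vec = embed([verbalize(target)])                # (1, H)
cand_vecs = embed([verbalize(c) for c in candidates])  # (3, H)
scores = torch.nn.functional.cosine_similarity(cand_vecs, target_vec)
influential = candidates[int(scores.argmax())]

# A simple corruption heuristic (also an assumption): perturb the tail of the
# most influential triple and inject the corrupted fact into the training graph.
corrupted = (influential[0], influential[1], "some_other_entity")
print("most influential:", influential, "-> corrupted:", corrupted)

Fine-tuning the encoder with structural signals, as the abstract proposes for better transferability, would replace the frozen bert-base-uncased encoder here; the influence-ranking logic stays the same.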
Pages: 14
Related Papers
50 records in total
  • [41] NMT Enhancement based on Knowledge Graph Mining with Pre-trained Language Model
    Yang, Hao
    Qin, Ying
    Deng, Yao
    Wang, Minghan
    2020 22ND INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): DIGITAL SECURITY GLOBAL AGENDA FOR SAFE SOCIETY!, 2020, : 185 - 189
  • [42] From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough
    Mars, Mourad
    APPLIED SCIENCES-BASEL, 2022, 12 (17)
  • [43] Pre-trained language models with domain knowledge for biomedical extractive summarization
    Xie, Q.
    Bishop, J. A.
    Tiwari, P.
    Ananiadou, S.
    KNOWLEDGE-BASED SYSTEMS, 2022, 252
  • [44] Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey
    Bhargava, Prajjwal
    Ng, Vincent
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 12317 - 12325
  • [45] Plug-and-Play Knowledge Injection for Pre-trained Language Models
    Zhang, Zhengyan
    Zeng, Zhiyuan
    Lin, Yankai
    Wang, Huadong
    Ye, Deming
    Xiao, Chaojun
    Han, Xu
    Liu, Zhiyuan
    Li, Peng
    Sun, Maosong
    Zhou, Jie
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 10641 - 10656
  • [46] Enhancing pre-trained language models with Chinese character morphological knowledge
    Zheng, Zhenzhong
    Wu, Xiaoming
    Liu, Xiangzhi
    INFORMATION PROCESSING & MANAGEMENT, 2025, 62 (01)
  • [47] XDAI: A Tuning-free Framework for Exploiting Pre-trained Language Models in Knowledge Grounded Dialogue Generation
    Yu, Jifan
    Zhang, Xiaohan
    Xu, Yifan
    Lei, Xuanyu
    Guan, Xinyu
    Zhang, Jing
    Hou, Lei
    Li, Juanzi
    Tang, Jie
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 4422 - 4432
  • [48] Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making
    Fang, Xuanjie
    Cheng, Sijie
    Liu, Yang
    Wang, Wei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 7322 - 7336
  • [49] Gauging, enriching and applying geography knowledge in Pre-trained Language Models
    Ramrakhiyani, Nitin
    Varma, Vasudeva
    Palshikar, Girish Keshav
    Pawar, Sachin
    INFORMATION PROCESSING & MANAGEMENT, 2025, 62 (01)
  • [50] General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference
    Du, Jingfei
    Ott, Myle
    Li, Haoran
    Zhou, Xing
    Stoyanov, Veselin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020