Exploiting Pre-Trained Language Models for Black-Box Attack against Knowledge Graph Embeddings

Cited: 0
Authors
Yang, Guangqian [1 ]
Zhang, Lei [1 ]
Liu, Yi [2 ]
Xie, Hongtao [1 ]
Mao, Zhendong [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Peoples Daily Online, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Knowledge Graph; Adversarial Attack; Language Model;
DOI
10.1145/3688850
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Despite the emerging research on adversarial attacks against knowledge graph embedding (KGE) models, most work focuses on the white-box setting. White-box attacks, however, are difficult to mount in practice, since they require access to model parameters that are unlikely to be exposed. In this article, we propose a novel black-box attack method that requires access only to the knowledge graph data, making it more realistic in real-world attack scenarios. Specifically, we utilize pre-trained language models (PLMs) to encode the text features of knowledge graphs, an aspect neglected by previous research, and then employ these encoded text features to identify the most influential triples, from which the corrupted triples for the attack are constructed. To improve the transferability of the attack, we further propose to fine-tune the PLM by enriching triple embeddings with structural information. Extensive experiments on two knowledge graph datasets demonstrate the effectiveness of the proposed method.
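The abstract describes the pipeline only at a high level. As a rough illustration of its first step, the sketch below verbalizes triples, encodes them with a PLM, and ranks candidate triples by embedding similarity to a target triple. The mean pooling, the cosine-similarity heuristic, the bert-base-uncased checkpoint, and the names encode_triples and rank_influential are all assumptions made for illustration; the paper's actual influence-scoring and triple-corruption procedures are not given in the abstract.

# Minimal sketch (assumptions noted above): encode the textual form of
# knowledge-graph triples with a pre-trained language model and rank
# candidates by embedding similarity to a target triple.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_triples(triples):
    """Mean-pool the PLM's last hidden states over each verbalized triple."""
    texts = [f"{h} {r} {t}" for h, r, t in triples]
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (B, L, D)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, L, 1)
    return (hidden * mask).sum(1) / mask.sum(1)          # (B, D)

def rank_influential(target_triple, candidate_triples, top_k=5):
    """Rank candidates by cosine similarity to the target triple's embedding
    (a stand-in heuristic, not the paper's actual influence score)."""
    embs = encode_triples([target_triple] + candidate_triples)
    target, cands = embs[0:1], embs[1:]
    sims = torch.nn.functional.cosine_similarity(target, cands)
    order = sims.argsort(descending=True)[:top_k]
    return [(candidate_triples[i], sims[i].item()) for i in order]

In the full method, the PLM is additionally fine-tuned with structural information from the graph before scoring, which the sketch omits.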
Pages: 14
Related Papers
50 entries in total
  • [21] BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models
    He, Bin
    Zhou, Di
    Xiao, Jinghui
    Jiang, Xin
    Liu, Qun
    Yuan, Nicholas Jing
    Xu, Tong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 2281 - 2290
  • [22] An empirical study of pre-trained language models in simple knowledge graph question answering
    Hu, Nan
    Wu, Yike
    Qi, Guilin
    Min, Dehai
    Chen, Jiaoyan
    Pan, Jeff Z.
    Ali, Zafar
    WORLD WIDE WEB, 2023, 26 : 2855 - 2886
  • [23] KG-prompt: Interpretable knowledge graph prompt for pre-trained language models
    Chen, Liyi
    Liu, Jie
    Duan, Yutai
    Wang, Runze
    KNOWLEDGE-BASED SYSTEMS, 2025, 311
  • [24] An Empirical study on Pre-trained Embeddings and Language Models for Bot Detection
    Garcia-Silva, Andres
    Berrio, Cristian
    Gomez-Perez, Jose Manuel
    4TH WORKSHOP ON REPRESENTATION LEARNING FOR NLP (REPL4NLP-2019), 2019, : 148 - 155
  • [25] Disentangling Semantics and Syntax in Sentence Embeddings with Pre-trained Language Models
    Huang, James Y.
    Huang, Kuan-Hao
    Chang, Kai-Wei
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 1372 - 1379
  • [26] AID: Active Distillation Machine to Leverage Pre-Trained Black-Box Models in Private Data Settings
    Hoang, Trong Nghia
    Hong, Shenda
    Xiao, Cao
    Low, Bryan
    Sun, Jimeng
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 3569 - 3581
  • [27] Exploiting Pre-Trained Network Embeddings for Recommendations in Social Networks
    Guo, Lei
    Wen, Yu-Fei
    Wang, Xin-Hua
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2018, 33 (04) : 682 - 696
  • [29] Probing Simile Knowledge from Pre-trained Language Models
    Chen, Weijie
    Chang, Yongzhu
    Zhang, Rongsheng
    Pu, Jiashu
    Chen, Guandan
    Zhang, Le
    Xi, Yadong
    Chen, Yijiang
    Su, Chang
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 5875 - 5887
  • [30] Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models
    Jiang, Pengcheng
    Agarwal, Shivam
    Jin, Bowen
    Wang, Xuan
    Sun, Jimeng
    Han, Jiawei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 11161 - 11180