BERT-ER: Query-specific BERT Entity Representations for Entity Ranking

Cited by: 15
Authors
Chatterjee, Shubham [1 ]
Dietz, Laura [1 ]
Affiliations
[1] Univ New Hampshire, Durham, NH 03824 USA
Funding
National Science Foundation (US)
Keywords
Query-specific Entity Representations; Entity Ranking; BERT;
DOI
10.1145/3477495.3531944
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Entity-oriented search systems often learn vector representations of entities from the introductory paragraph of the entity's Wikipedia page. Because such representations are the same for every query, we hypothesize that they are not ideal for IR tasks. In this work, we present BERT Entity Representations (BERT-ER), query-specific vector representations of entities obtained from text that describes how an entity is relevant to a query. Using BERT-ER in a downstream entity ranking system, we achieve a performance improvement of 13-42% in Mean Average Precision over a system that uses the BERT embedding of the introductory Wikipedia paragraph, on two large-scale test collections. Our approach also outperforms entity ranking systems that use entity embeddings from Wikipedia2Vec, ERNIE, and E-BERT. We show that our entity ranking system using BERT-ER can increase precision at the top of the ranking by promoting relevant entities. With this work, we release our BERT models and query-specific entity embeddings fine-tuned for the entity ranking task.(1)
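The core idea described in the abstract — scoring entities against a query using query-specific entity embeddings rather than one fixed embedding per entity — can be sketched as follows. This is a minimal illustration, not the paper's ranking model: it substitutes random vectors for real BERT outputs (actual BERT-ER embeddings come from encoding text describing how the entity relates to the query, using the authors' released fine-tuned models), and the cosine-similarity scorer and `rank_entities` helper are assumptions made for the sketch.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_entities(query_vec, entity_vecs):
    # Score each entity by similarity of its query-specific embedding
    # to the query embedding, then sort best-first.
    scores = {e: cosine(query_vec, v) for e, v in entity_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

rng = np.random.default_rng(0)
dim = 768  # hidden size of BERT-base

# Stand-ins for BERT outputs. In BERT-ER, each entity vector would be
# produced from text explaining the entity's relevance to THIS query,
# so the same entity gets a different vector for a different query.
query_vec = rng.normal(size=dim)
entity_vecs = {f"entity_{i}": rng.normal(size=dim) for i in range(5)}

ranking = rank_entities(query_vec, entity_vecs)
print([name for name, _ in ranking])
```

The contrast with static embeddings is that `entity_vecs` here is rebuilt per query; a Wikipedia-paragraph embedding would instead be computed once and reused for every query.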
Pages: 1466-1477
Page count: 12