BERT-ER: Query-specific BERT Entity Representations for Entity Ranking

Cited by: 15
Authors
Chatterjee, Shubham [1 ]
Dietz, Laura [1 ]
Affiliations
[1] Univ New Hampshire, Durham, NH 03824 USA
Funding
US National Science Foundation;
Keywords
Query-specific Entity Representations; Entity Ranking; BERT;
DOI
10.1145/3477495.3531944
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Entity-oriented search systems often learn vector representations of entities from the introductory paragraph of the entity's Wikipedia page. Because such representations are the same for every query, we hypothesize that they are not ideal for IR tasks. In this work, we present BERT Entity Representations (BERT-ER), which are query-specific vector representations of entities obtained from text that describes how an entity is relevant to a query. Using BERT-ER in a downstream entity ranking system, we achieve a performance improvement of 13-42% (Mean Average Precision) over a system that uses the BERT embedding of the introductory paragraph from Wikipedia on two large-scale test collections. Our approach also outperforms entity ranking systems using entity embeddings from Wikipedia2Vec, ERNIE, and E-BERT. We show that our entity ranking system using BERT-ER can increase precision at the top of the ranking by promoting relevant entities to the top. With this work, we release our BERT models and query-specific entity embeddings fine-tuned for the entity ranking task.
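A minimal sketch of the idea described in the abstract, assuming a standard Hugging Face BERT encoder: a query-specific entity representation is built by encoding query-conditioned text that explains why the entity is relevant, rather than the entity's static Wikipedia introduction. The model name (bert-base-uncased), the [CLS]-vector pooling, the query/passage concatenation format, the mean aggregation over support passages, and the cosine-similarity ranking are all illustrative assumptions, not the authors' fine-tuned pipeline.

```python
# Illustrative sketch of query-specific entity embeddings for entity ranking.
# Assumptions (not from the paper): bert-base-uncased, [CLS] pooling,
# mean aggregation over support passages, cosine-similarity scoring.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(text: str) -> torch.Tensor:
    """Encode text with BERT and return the [CLS] vector."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)

def query_specific_entity_embedding(query: str, support_texts: list[str]) -> torch.Tensor:
    """Average the BERT vectors of passages describing how the entity is
    relevant to the query (an assumed aggregation; the paper fine-tunes BERT
    for the entity ranking task instead)."""
    vectors = [embed(query + " [SEP] " + passage) for passage in support_texts]
    return torch.stack(vectors).mean(dim=0)

def rank_entities(query: str, entity_support: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Score each entity by cosine similarity between the query vector and its
    query-specific embedding, then sort in descending order of score."""
    q_vec = embed(query)
    scores = {}
    for entity, passages in entity_support.items():
        e_vec = query_specific_entity_embedding(query, passages)
        scores[entity] = torch.nn.functional.cosine_similarity(q_vec, e_vec, dim=0).item()
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The key contrast with a static embedding is that the same entity receives a different vector for each query, because the support passages (and hence the encoded text) change with the query.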
Pages: 1466-1477
Number of pages: 12