How Confident Was Your Reviewer? Estimating Reviewer Confidence from Peer Review Texts

Cited by: 7
Authors
Bharti, Prabhat Kumar [1 ]
Ghosal, Tirthankar [2 ]
Agrawal, Mayank [1 ]
Ekbal, Asif [1 ]
Affiliations
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, Daulatpur, India
[2] Charles Univ Prague, Fac Math & Phys, Inst Formal & Appl Linguist, Prague, Czech Republic
Source
DOCUMENT ANALYSIS SYSTEMS, DAS 2022 | 2022, Vol. 13237
Keywords
Peer reviews; Confidence prediction; Deep neural network;
DOI
10.1007/978-3-031-06555-2_9
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
The scholarly peer-review system is the primary means of ensuring the quality of scientific publications. An area or program chair relies on reviewers' confidence scores to resolve conflicting reviews and borderline cases. Usually, reviewers themselves disclose how confident they are in reviewing a given paper. However, there can be inconsistencies between what reviewers self-report and how the review text reads to others. It is the job of the area or program chair to consider such inconsistencies and make a reasonable judgment. Peer-review texts are a valuable resource for Natural Language Processing (NLP) studies, and the community is uniquely poised to investigate such inconsistencies in the paper-vetting system. In this work, we attempt to estimate automatically how confident a reviewer was, directly from the review text. We experiment with five data-driven methods to predict the reviewer's confidence score: Linear Regression, Decision Tree, Support Vector Regression, Bidirectional Encoder Representations from Transformers (BERT), and a hybrid of Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Networks (CNN) over BERT representations. Our experiments show that the deep neural model grounded in BERT representations yields encouraging performance.
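The regression setup the abstract describes can be illustrated with a minimal sketch. This is not the paper's code: the surface features (review length, fraction of hedging words), the hedge-word list, and the toy reviews are all invented here as a stand-in for the Linear Regression baseline, with ordinary least squares solved by hand.

```python
# Hypothetical sketch: predict a reviewer confidence score in [1, 5]
# from simple surface features of the review text via least squares.
# Features and training reviews are illustrative, not from the paper.

HEDGES = {"maybe", "perhaps", "possibly", "unsure", "might", "unclear"}

def features(review):
    """Bias term, scaled review length, and fraction of hedging words."""
    tokens = review.lower().split()
    n = len(tokens)
    hedge_ratio = sum(t.strip(".,") in HEDGES for t in tokens) / max(n, 1)
    return [1.0, n / 100.0, hedge_ratio]

def fit(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gauss-Jordan."""
    d = len(X[0])
    # Build the augmented matrix [X^T X | X^T y].
    A = [[sum(row[r] * row[c] for row in X) for c in range(d)]
         + [sum(row[r] * yi for row, yi in zip(X, y))]
         for r in range(d)]
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]        # partial pivoting
        for r in range(d):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[r][d] / A[r][r] for r in range(d)]

def predict(w, review):
    raw = sum(wi * xi for wi, xi in zip(w, features(review)))
    return min(5.0, max(1.0, raw))  # clamp to the 1-5 confidence scale

# Toy training pairs: (review text, self-reported confidence score).
train = [
    ("The method is sound and the experiments are thorough.", 5),
    ("I am unsure about the novelty; perhaps the baselines are weak.", 2),
    ("Solid contribution with clear writing.", 4),
    ("The claims might hold but the evaluation is unclear to me.", 2),
]
w = fit([features(r) for r, _ in train], [s for _, s in train])
```

On this toy data, heavily hedged reviews score lower than assertive ones; the paper's actual models (notably BiLSTM-CNN over BERT representations) learn far richer textual signals than these two hand-crafted features.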
Pages: 126-139 (14 pages)