Assessing and predicting the quality of peer reviews: a text mining approach

Cited by: 3
Authors
Meng, Jie [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Innovat Acad Microsatellites, Shanghai, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Keywords
Peer review; Review quality; Text mining; Machine learning; Review assessment; Instrument
DOI
10.1108/EL-06-2022-0139
CLC classification
G25 (Library science, library undertakings); G35 (Information science, information work)
Discipline classification codes
1205; 120501
Abstract
Purpose: This paper aims to quantify the quality of peer reviews, evaluate them from different perspectives and develop a model to predict review quality. In addition, it investigates which features are effective for distinguishing review quality.
Design/methodology/approach: First, a fine-grained data set including peer review data, citations and review conformity scores was constructed. Second, metrics were proposed to evaluate the quality of peer reviews from three aspects. Third, five categories of features were proposed in terms of reviews, submissions and responses using natural language processing (NLP) techniques. Finally, different machine learning models were applied to predict review quality, and feature analysis was performed to identify the effective features.
Findings: The analysis revealed that, on these indicators, reviewers have become more conservative and review quality has declined over time. Among the three models, the random forest model achieved the best performance on all three tasks. Sentiment polarity, review length, response length and readability are important factors for distinguishing the quality of peer reviews, which can help meta-reviewers give more weight to worthy reviews when making final decisions.
Originality/value: This study provides a new perspective for assessing review quality. A further contribution is the proposal of a novel task, predicting review quality; to address it, a model incorporating various feature sets was proposed, thereby deepening the understanding of peer reviews.
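The abstract names sentiment polarity, review length and readability among the discriminative features and a random forest as the best-performing model. As a hedged illustration only (this is not the paper's code; the feature extractors, cue-word lists and synthetic reviews below are invented stand-ins), such a pipeline could be sketched as:

```python
# Illustrative sketch of feature-based review-quality prediction with a
# random forest. All feature definitions here are toy stand-ins, not the
# paper's actual five feature categories.
import re
from sklearn.ensemble import RandomForestClassifier

def extract_features(review: str) -> list[float]:
    words = review.split()
    sentences = [s for s in re.split(r"[.!?]+", review) if s.strip()]
    length = float(len(words))
    # Crude readability proxy: average words per sentence.
    readability = length / max(len(sentences), 1)
    # Toy sentiment polarity: balance of positive vs negative cue words.
    cleaned = [w.strip(".,!?").lower() for w in words]
    pos = sum(w in {"clear", "novel", "strong", "sound"} for w in cleaned)
    neg = sum(w in {"unclear", "weak", "flawed", "missing"} for w in cleaned)
    polarity = (pos - neg) / max(length, 1.0)
    return [length, readability, polarity]

# Tiny synthetic corpus: label 1 = substantive (high-quality) review.
reviews = [
    ("The method is novel and the experiments are sound. One concern is scalability.", 1),
    ("Strong paper. The ablation study is clear and the claims are well supported.", 1),
    ("ok", 0),
    ("Reject. Weak and unclear.", 0),
]
X = [extract_features(text) for text, _ in reviews]
y = [label for _, label in reviews]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(dict(zip(["length", "readability", "polarity"], model.feature_importances_)))
```

The `feature_importances_` attribute is what makes random forests attractive for feature analysis of this kind: it ranks the contribution of each feature, mirroring how the study identifies which factors distinguish review quality.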
Pages: 186-203
Page count: 18
Related papers
43 items in total
[1] Angelidis S., 2018, Transactions of the Association for Computational Linguistics, V6, P17
[2] Anjum O., 2019, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), P518
[3] Arous, Ines; Yang, Jie; Khayati, Mourad; Cudre-Mauroux, Philippe. Peer grading the peer reviews: a dual-role approach for lightening the scholarly paper review process [J]. Proceedings of the World Wide Web Conference 2021 (WWW 2021), 2021: 1916-1927
[4] Ausloos, Marcel; Nedic, Olgica; Fronczak, Agata; Fronczak, Piotr. Quantifying the quality of peer reviewers through Zipf's law [J]. Scientometrics, 2016, 106(1): 347-368
[5] Bornmann, Lutz; Wolf, Markus; Daniel, Hans-Dieter. Closed versus open reviewing of journal manuscripts: how far do comments differ in language use? [J]. Scientometrics, 2012, 91(3): 843-856
[6] Chakraborty, Souvic; Goyal, Pawan; Mukherjee, Animesh. Aspect-based sentiment analysis of scientific reviews [J]. Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020 (JCDL 2020), 2020: 207-216
[7] Chall J.S., 1995, Readability Revisited: The New Dale-Chall Readability Formula
[8] Cortes, Corinna, 2021, arXiv
[9] Falkenberg L.J., 2018, Limnology and Oceanography Bulletin, V27, P1, DOI 10.1002/LOB.10217
[10] Fisher, M.; Friedman, S.B.; Strauss, B. The effects of blinding on acceptance of research papers by peer review [J]. JAMA - Journal of the American Medical Association, 1994, 272(2): 143-146