Can We Automate Scientific Reviewing?

arXiv.org, April 8, 2021
The number of scientific papers published each year has skyrocketed, and providing high-quality peer reviews for this growing volume is a significant challenge. Researchers at Carnegie Mellon University explore whether state-of-the-art natural language processing (NLP) models can generate first-pass peer reviews for scientific papers. They collected a dataset of machine-learning papers, annotated the reviews with the aspects of the paper each one covers, and trained targeted summarization models that take a paper as input and generate a review. The results showed that system-generated reviews touch on more aspects of a paper than human-written reviews do, but the generated text tends to be less constructive for every aspect except the explanation of the paper's core ideas. The authors summarize eight challenges in building a good review-generation system, along with potential solutions, which they hope will inspire further research on the subject. The dataset and code are available at https://github.com/neulab/ReviewAdvisor. Open access technical article.

Related article: Can science writing be automated, Science Daily, April 18, 2019
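To give a concrete feel for the aspect-targeted setup described above, here is a toy extractive sketch (not the authors' actual model, which is a trained neural summarizer): for each review aspect, it selects the paper sentence that best matches a hand-picked keyword list. The aspect names and keyword lists are hypothetical illustrations.

```python
# Toy sketch of aspect-targeted review generation. This is NOT the
# ReviewAdvisor system; it only illustrates the idea of producing one
# review snippet per annotated aspect.

# Hypothetical aspect names and keyword lists (for illustration only).
ASPECT_KEYWORDS = {
    "summary": ["propose", "present", "introduce"],
    "novelty": ["novel", "first", "unlike prior"],
    "soundness": ["experiment", "evaluate", "results"],
}

def score(sentence: str, keywords: list[str]) -> int:
    """Count how many aspect keywords appear in the sentence."""
    s = sentence.lower()
    return sum(k in s for k in keywords)

def generate_review(paper_text: str) -> dict[str, str]:
    """For each aspect, pick the sentence with the highest keyword score."""
    sentences = [s.strip() for s in paper_text.split(".") if s.strip()]
    review = {}
    for aspect, keywords in ASPECT_KEYWORDS.items():
        best = max(sentences, key=lambda s: score(s, keywords))
        if score(best, keywords) > 0:  # only report aspects with evidence
            review[aspect] = best
    return review

paper = ("We propose a new attention mechanism. "
         "Unlike prior work, it is novel in scaling linearly. "
         "Experiments show strong results on three benchmarks.")
print(generate_review(paper))
```

A real system would replace the keyword matcher with a trained summarization model per aspect, but the interface is the same: paper text in, a dictionary of aspect-labeled review comments out.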

Posted in Bibliometrics.