AI can help address the demand for high-quality, unbiased peer review
Posted on: November 16th, 2021
Peer review is in high demand despite its inherent flaws, which range from the possibility of reviewer bias to lapses in procedural integrity to long delays before publication. Some academics and publishers believe artificial intelligence (AI) might alleviate or minimize some of these concerns. But can a computer algorithm evaluate a research article better than a human?
As the volume of academic publications grows, journal editors are under constant pressure to find reviewers who can assess the quality of academic work quickly and efficiently. According to Dimensions data, over 4.2 million articles were published in 2019, up from 2.2 million just a decade earlier. The growing volume of scientific manuscripts, together with the increasing need for high-quality peer review, demands advanced decision-support technologies to ensure that papers are evaluated effectively, comprehensively, and consistently.
The potential of AI to boost productivity and reduce reviewer burden has attracted much interest, and AI is increasingly being used both to assist in the evaluation of papers and to support the peer-review process itself.
AI enables scalability while maintaining stringent quality standards. Correcting language errors, verifying ethics statements, and finding flaws in images are all time-consuming activities that contribute to reviewer fatigue. Other tasks, such as screening for conflicts of interest among authors and reviewers or detecting plagiarism, are only practical with technological support. Machine-learning algorithms can surface such problems, enabling authors, editors, and reviewers to make better editorial decisions.
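To make the screening idea concrete, the sketch below shows one way a conflict-of-interest check might work: compare a candidate reviewer's affiliations and recent co-authors against a manuscript's author list, and flag any overlap for the editor. All names and data structures here are hypothetical illustrations, not any vendor's actual implementation.

```python
# Minimal sketch of automated conflict-of-interest screening.
# Hypothetical data structures for illustration only.

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    affiliations: set[str] = field(default_factory=set)
    recent_coauthors: set[str] = field(default_factory=set)

def conflict_flags(reviewer: Person, authors: list[Person]) -> list[str]:
    """Return human-readable flags for an editor to review; the tool
    only surfaces potential conflicts, it does not decide."""
    flags = []
    for author in authors:
        shared = reviewer.affiliations & author.affiliations
        if shared:
            flags.append(f"{reviewer.name} shares affiliation(s) "
                         f"{sorted(shared)} with {author.name}")
        if author.name in reviewer.recent_coauthors:
            flags.append(f"{reviewer.name} recently co-authored "
                         f"with {author.name}")
    return flags

# Example: one shared affiliation and one recent co-authorship are flagged.
reviewer = Person("R. Patel", {"University of Exampleton"}, {"A. Jones"})
authors = [Person("A. Jones", {"Institute X"}),
           Person("B. Smith", {"University of Exampleton"})]
for flag in conflict_flags(reviewer, authors):
    print("POTENTIAL COI:", flag)
```

A production system would of course draw on large co-authorship and affiliation databases rather than hand-entered records, but the underlying pattern-matching task is the same.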
AI-powered platforms help ensure that articles submitted for peer review meet the criteria required for high-quality scientific research. The technology aids editors and reviewers by highlighting potential problems in manuscripts, which can then be addressed or clarified during the review process. Tagging issues in this way allows human specialists to make more efficient and effective editorial choices, reducing the time to publication for authors while maintaining the highest quality standards.
A suite of automated technologies is now available to help with peer review. A tool called StatReviewer validates the accuracy of the statistics and methods in manuscripts. It can evaluate statistics in standard formats and presentation styles across a number of scientific disciplines, checking that publications accurately report information such as sample sizes, subject blinding, and baseline data. StatReviewer can also detect indicators of fraudulent behaviour.
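As an illustration of one kind of consistency check such a tool might perform (a minimal sketch, not StatReviewer's actual method), the snippet below recomputes the two-tailed p-value implied by a reported t statistic and its degrees of freedom, and flags any discrepancy with the p-value stated in the manuscript:

```python
# Sketch of a statistical consistency check: does the reported p-value
# match the reported test statistic? (Illustrative only.)

from scipy import stats

def check_t_test(t: float, df: int, reported_p: float,
                 tol: float = 0.005) -> str:
    # Two-tailed p-value implied by the t statistic and degrees of freedom.
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    if abs(recomputed_p - reported_p) > tol:
        return (f"FLAG: t({df}) = {t} implies p = {recomputed_p:.4f}, "
                f"but p = {reported_p} was reported")
    return "OK: reported p-value is consistent with the test statistic"

print(check_t_test(t=2.10, df=28, reported_p=0.045))  # consistent
print(check_t_test(t=2.10, df=28, reported_p=0.010))  # flagged
```

In practice, such a check would first have to extract the reported statistics from the manuscript text, and that extraction step is where machine learning does most of the work.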
In 2020, open-access publisher Frontiers launched its Artificial Intelligence Review Assistant (AIRA) to help editors, reviewers, and authors evaluate the quality of manuscripts. AIRA examines each manuscript and can make up to 20 recommendations in seconds, including assessments of language quality and statistical integrity, plagiarism detection, and identification of potential conflicts of interest.
While these tools can help ensure that a manuscript is of high quality, they are not meant to replace the reviewer's evaluative role. Concerns have been raised that machine-learning algorithms trained on previously published papers may perpetuate existing biases in peer review. Furthermore, because the algorithms are highly domain-specific, they scale to only a few areas. Algorithms are not yet intelligent enough for an editor to accept or reject a manuscript based purely on the data they extract. And while the algorithms will take time to refine, much of peer review remains standardized and is therefore well suited to automation.
AI certainly has the potential to enhance elements of conventional peer review, and publishers are already deploying it for routine tasks within the workflow. Doing so will require clear standards and processes for deciding which parts of the review process can or should be automated and where human supervision must remain. As the technology advances, more aspects of the peer-review process are expected to benefit. For the foreseeable future, however, human involvement and final decision-making will remain essential.
Straive has invested in technology and subject-matter experts as part of its Innovation Labs and has deployed solutions for reviewer search and transfer management. Our long-term engagements with our partners demonstrate our capabilities across the publishing value chain. Be it upstream solutions such as Transfer Desk and Reviewer Search or downstream solutions like our MARC distribution platform, we have a comprehensive portfolio that allows us to drive change seamlessly.
Download our whitepaper to learn more about the importance of integrating new technology-mediated communication standards into successful, broadly recognized peer-review models.