Recently, several high-profile cases have drawn attention to flaws and difficulties in the traditional model of peer review. This has led to increased interest in alternative models of, and supporting services for, peer review, which were the focus of this session, covering topics such as peer review evaluation, independent peer review, and open peer review. Each speaker offered a unique summary of the work he or she has done and continues to do within the peer review process, showcasing some exciting options sprouting up along the traditional path.
Adam Etkin, of PRE Peer Review Evaluation, was the first speaker. He described the current state of peer review and the general criticisms arising from recent cases of fraud and bad science reaching publication, as well as concerns over the ability of individuals to “game” the existing system. PRE has developed a service called PRE-Val for sharing information related to the peer review a submission has gone through and allowing publishers to present this analysis alongside the published submission. Etkin hopes this will help establish trust and transparency in the peer review process.
Etkin confronted what he called “the myth that peer review is broken,” noting that “bad apples spoil the bunch.” However, Etkin said that most still believe that peer review is helpful and is a consideration when selecting where to submit. This is where PRE-Val comes into play.
The service leverages metadata from the submission system to confirm the paper has undergone peer review in the manner advertised by the publisher, providing independent third-party verifications of the peer review process at the journal and article level. PRE runs the collected data through their process and provides a badge for the submission to the publishing platform via an application programming interface. The badge can be placed anywhere the publisher wants a signal of peer review to be present (e.g., journal article page, search results, aggregator sites, article metric pages). The content exposed by the badge is determined on a case-by-case basis with the publisher.
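To make the badge-delivery step concrete, the sketch below shows how a publishing platform might embed a verification badge on an article page. The endpoint, URL parameters, and markup are illustrative assumptions for this report, not PRE's actual API.

```python
# Hypothetical sketch of embedding a PRE-Val-style peer review badge.
# The badge service URL and its query parameters are invented for
# illustration; a real integration would follow the vendor's API docs.

def build_badge_embed(doi: str,
                      badge_base_url: str = "https://badges.example.org/pre-val") -> str:
    """Return an HTML snippet displaying a peer-review badge for an article.

    The platform passes the article identifier (here, a DOI) to the badge
    service; the returned markup can be placed wherever the publisher wants
    a signal of peer review to appear (article page, search results, etc.).
    """
    badge_url = f"{badge_base_url}?doi={doi}"
    return (
        f'<a href="{badge_url}" class="peer-review-badge">'
        f'<img src="{badge_url}&amp;format=svg" alt="Peer review verified: {doi}"/>'
        f"</a>"
    )

# Example: generate the embed snippet for a placeholder DOI.
snippet = build_badge_embed("10.1000/example.123")
```

Because the snippet is just a link plus an image, the same badge can be dropped into journal pages, aggregator sites, or metric pages without platform-specific changes.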
Rubriq and its efforts to provide independent peer review were the next topic covered. Jody Plank, Rubriq product manager, explained that Rubriq offers a rigorous, double-blind review of manuscripts within a two-week period using reviewers with a published track record of expertise in the area covered by the paper. The reviews are generally conducted before submission; the intention is a round of presubmission peer review that improves an article before it reaches a journal.
“Presubmission peer review is not super novel,” said Plank. “Anyone who’s been in a research lab knows that people share their paper with friends and fellow researchers ahead of time, but it can be hard to rely on friends to give an honest opinion—some people like to be nice to friends. Independent services can provide honest feedback and allow authors to make a great first impression at their top-choice journal.”
A big difference between Rubriq’s model and most instances of professional peer review is that Rubriq’s reviewers are compensated for their work. Each reviewer receives a $100 honorarium, which he or she can choose to keep or donate to charity. Plank acknowledged that not all academics are in favor of this practice.
Rubriq currently boasts a network of nearly 4,000 reviewers and uses three for each submission reviewed. All reviewers hold doctoral-level degrees or tenure-track professorships in their fields. If Rubriq’s pool does not include the expertise an area requires, new reviewers with appropriate expertise are recruited.
For consistency, Rubriq reviews are performed using a scorecard as an assessment tool and guideline for reviewers. The scorecard offers both quantitative evaluation and qualitative commentary because reviewers need to justify the selections they’ve made on the scorecard. Reviewers rate items using checkboxes, and space is available for commentary specific to each point. The end product is a report provided to authors with scores broken down across categories and with comments aggregated by section.
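The scorecard-and-report workflow described above can be sketched as a small data structure: each reviewer's scorecard holds, per section, items with a checkbox-style score and a justifying comment, and the final report averages scores and groups comments by section. The field names and structure here are hypothetical illustrations, not Rubriq's actual format.

```python
# Illustrative sketch of aggregating scorecard-style reviews into a report.
# Assumes each review is {section: {item: {"score": int, "comment": str}}};
# this shape is an assumption made for the example.

from collections import defaultdict

def aggregate_reports(reviews: list[dict]) -> dict:
    """Average each item's scores across reviewers and group comments by section."""
    scores = defaultdict(list)
    comments = defaultdict(list)
    for review in reviews:
        for section, items in review.items():
            for item, entry in items.items():
                scores[(section, item)].append(entry["score"])
                comments[section].append(entry["comment"])
    averaged = {key: sum(vals) / len(vals) for key, vals in scores.items()}
    return {"scores": averaged, "comments_by_section": dict(comments)}

# Example: two reviewers scoring the same item in the Methods section.
reviews = [
    {"Methods": {"statistics": {"score": 4, "comment": "Appropriate tests used."}}},
    {"Methods": {"statistics": {"score": 2, "comment": "Sample size unclear."}}},
]
report = aggregate_reports(reviews)
```

Pairing every checkbox score with a required comment, as the scorecard does, keeps the quantitative and qualitative parts of the report tied to the same review points.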
The final topic of the session was open peer review. Chi Van Dang spoke of his experiences as a member of eLife’s Board of Reviewing Editors. Dang explained that eLife has worked to diminish the presence of the “vicious reviewer”—a reviewer who may attack an author’s work rather than provide constructive feedback. At eLife, reviewers’ names are shared with the other reviewers of a paper. This, Dang said, helps mitigate the influence of vicious reviewers because their comments will be seen by their peers in the field. This openness in the review process extends to the publication; the major points from the decision letter after peer review and the author responses are published with the paper.
According to Dang, participation rates for optional open policies have been positive. By the numbers, 95 percent of eLife authors choose to have their decision letter and responses published along with their submission, 23 percent of reviewers agree to share their names with authors, and 80 percent of reviewers agree to have their names shared with another journal if a rejected paper is passed on to it.