Webinar Report

New Editorial Models for Decentralized Publishing

Scientific publishing is undergoing a quiet transformation. As preprints become mainstream, a new generation of editorial models is emerging. In a recent webinar hosted by CSE, six experts presented a range of decentralized approaches to evaluation and curation, from community-led peer review networks to automated transparency audits. Together, these models raise a timely question: What if dissemination, evaluation, and certification no longer needed to happen in the same place, or even in the same order? As one speaker put it, “Decoupling dissemination from formal peer review provides an opportunity to improve how we assess research.”

Decoupling Publishing

Richard Sever (openRxiv), cofounder of bioRxiv1 (2013) and medRxiv2 (2019), has helped shape the current era of preprint adoption in the life sciences. His presentation framed preprints not as a workaround but as a foundational shift, a deliberate reordering of priorities: make the work public first, then evaluate. In contrast to the traditional submission-to-publication pipeline, which can take months or years, preprints allow researchers to share findings in hours or days. Sever emphasized that this decoupling opens the door to “more ongoing forms of review” and the development of “multi-dimensional trust signals for readers—both human and machine.”

In a modular publishing ecosystem, dissemination, evaluation, and certification are separate but interoperable: researchers and platforms can experiment with new models, and transparency, reproducibility, and speed are prioritized. Decoupling creates space for new types of evaluation and innovation, emphasizing trust, efficiency, and community ownership, even as it meets resistance from legacy systems. But decentralized models are not just theoretical—they are increasingly available and growing.

From Journal Gatekeeping to Shared Evaluation

If preprints unbundle dissemination from formal peer review, then Review Commons3—an initiative supported by EMBO4—takes the next step by decoupling peer review from the journal brand. As Thomas Lemberger (EMBO/Review Commons) explained, Review Commons coordinates high-quality, journal-agnostic peer review of preprints, allowing authors to receive structured feedback and respond before submitting to any of its 28 affiliated journals. The result is a “reviewed preprint” that can be transferred with minimal friction across multiple publishers—avoiding redundant rounds of review and editorial negotiation.

This model, Lemberger noted, is not just about efficiency. It represents a broader cultural shift, one EMBO is actively supporting through policy: As of 2023, reviewed preprints qualify researchers for EMBO’s Young Investigator Programme, provided the reviews are independent, transparent, and rigorous. “We’re promoting a different way of thinking,” he said. “It’s not just about where a paper ends up—it’s about how it’s evaluated and how that process is made visible.”

Review Commons thus serves both as a practical infrastructure and a proof of concept: rigorous peer review can be decoupled from journals without sacrificing quality. The underlying values of transparency, transferability, and trust align with the broader momentum toward distributed models of research evaluation.

Publish, Review, Curate: Community-Owned Infrastructure

While some decentralized models seek to plug into existing journals, Peer Community In5 (PCI) takes a more radical approach: building a platform that replaces the journal altogether. As Thomas Guillemaud (Peer Community In) explained, PCI offers a fully open, noncommercial system for evaluating and curating preprints. Manuscripts are submitted to thematic PCI communities, such as PCI Ecology or PCI Evolution, which organize rigorous peer review and, when appropriate, issue a public recommendation.

This “publish-review-curate” model breaks with traditional gatekeeping in several ways. First, it is free to authors and readers, supported by academic institutions rather than subscriptions or article processing charges. Second, it embraces open peer review: reviewer names and comments are public, and decisions are made transparently. Third, authors of PCI-recommended preprints can publish them “as is” in the Peer Community Journal6 (a Diamond Open Access platform) or submit them to a growing list of PCI-friendly journals that accept these reviews in lieu of starting the process from scratch.

For Guillemaud and his colleagues, PCI is more than a workflow; it is a philosophical stance. It encourages a return to academic values: nonprofit publishing, method-first evaluation, and community stewardship. “Researchers do nearly everything already—writing, editing, reviewing,” he noted. “Why not also reclaim the system itself?”

Reclaiming Peer Review for the Many, Not the Few

Much of the debate about peer review reform focuses on speed, transparency, or editorial efficiency. But as Roseline Dzekem Dine (PREreview, Review and Curate Network) reminded the audience, deeper inequities persist, especially in who is invited to review, whose expertise is recognized, and who is excluded from the conversation altogether. Drawing on data from eLife7 and other studies, she highlighted how traditional peer review remains skewed, dominated by reviewers from North America and Europe, and disproportionately male. Early-career researchers (ECRs), meanwhile, often contribute as “ghostwriters,” coauthoring reviews but receiving no credit.

In response, PREreview8 was founded to transform peer review into a more inclusive, community-led process. Its platform supports open reviews of preprints, published under a CC BY license and assigned DOIs. Reviewers can sign their name, use a pseudonym, or remain anonymous. The platform integrates with ORCID, offers multilingual resources, and runs Open Reviewers training, with a focus on recognizing bias and centering lived experience. “We work to amplify the voices of those historically excluded from academic systems,” Dzekem Dine said, “so their contributions are recognized and empowered to drive systemic change.”

She also described the Review and Curate Network9 (RCN), launched in 2024 to support open science and peer review across the Global South. Using platforms like PREreview and AfricArXiv,10 RCN organizes collaborative, open reviews of preprints from African researchers—reviews that are published, credited, and discoverable through services like Sciety.11

Together, PREreview and RCN push the decentralized publishing conversation beyond structure into representation and justice. They ask not only how we evaluate science, but also: Who gets to evaluate it—and on whose terms?

Curation as Community Building

In the expanding world of preprints, speed is not the only challenge—so is sense-making. How do researchers keep up with thousands of new studies each month? How do they find the ones that matter, or discuss them in meaningful ways? That’s where preLights12 comes in. As Katherine Brown (preLights, The Company of Biologists) explained, the platform was launched in 2018 by The Company of Biologists to help ECRs highlight, comment on, and discuss preprints across the biological sciences. 

This model adds the dimensions of curation, community engagement, and ECR development: Members of the “preLighter” community (mostly PhD students and postdocs) select preprints they find exciting or important and write accessible summaries and commentaries. The initiative also offers tools like preLists,13 which group preprints by conference or theme, and postLights,14 which track how papers evolve from preprint to published version. In Brown’s words, preLights helps “curate the preprint literature, build community among researchers, and train the peer reviewers of the future.”

Unlike formal peer review, preLights is conversational rather than judgmental. It emphasizes explanation, enthusiasm, and interpretation, while still helping to surface quality and relevance. Importantly, it also provides professional development opportunities for ECRs through writing, public engagement, and mentorship. Community-building happens not just on the platform itself, but also in Slack groups, workshops, and networking events.

What preLights demonstrates is that editorial evaluation does not have to be binary or bureaucratic. It can be distributed, social, and community-driven. It can help researchers navigate the flood of preprints while building networks of trust and commentary outside formal peer review. In doing so, it helps make the decentralized publishing ecosystem more usable, human, and approachable, especially for researchers just beginning to find their voice.

Trust at Scale—Automating Rigor and Transparency

In a decentralized publishing landscape, the role of journals as arbiters of quality becomes less central. But as Anita Bandrowski (SciScore, SciCrunch Inc.) pointed out, that raises a pressing question: How do we ensure transparency and rigor at scale? Her answer: Build tools that can help assess these dimensions directly and systematically. SciScore,15 the tool developed by her team at SciCrunch Inc.,16 is an automated system that analyzes scientific manuscripts for the presence (or absence) of key reproducibility criteria.

SciScore generates a quantitative “trust signal”: a scorecard that flags whether a paper reports elements such as blinding, randomization, power analysis, or identifiable key resources like antibodies and cell lines. These features are linked not just to good practice, but to real outcomes. For example, Bandrowski noted that replication studies in the Reproducibility Project scored significantly higher than the original papers they attempted to replicate.

The tool also tracks adoption of RRIDs (Research Resource Identifiers), which help make key materials findable and verifiable. When journals or preprint servers prompt authors to include RRIDs, the rate of “untraceable” resources drops dramatically. “We’ve seen a 66% decrease in problematic cell lines when authors receive alerts,” Bandrowski noted. “That’s not just metadata—that’s impact.”

SciScore does not replace human review, but complements it. It offers consistency, auditability, and scale, qualities that are particularly valuable in a world where preprints are abundant and journal gatekeeping is no longer the sole filter. As Bandrowski put it, the goal is not to score papers for the sake of scoring, but to embed reproducibility and transparency into the editorial ecosystem—whether centralized or not. SciScore’s underlying message fits neatly with the broader theme of the webinar: that evaluation can be modular, machine-assisted, and more transparent, without depending on traditional gatekeepers.

Discussion: Adoption, Aspiration, and the Weight of the Status Quo

A common thread across the webinar was that the barriers to adoption are less technical than cultural and structural:

  • Researchers are still incentivized to publish in high-impact journals tied to prestige, tenure, and funding.
  • Institutions and funders are only beginning to treat reviewed preprints or alternative metrics as legitimate.
  • Many in the research community still equate quality with journal name, not process transparency.

At the same time, these models share an aspiration to shift the locus of control away from commercial publishers and toward the research community itself. Whether through community curation, open peer review, automated scoring, or reviewer empowerment, each initiative aims to make publishing more transparent, inclusive, and interoperable.

And yet, questions remain: Will reviewed preprints gain enough institutional backing to become career currency? Can platforms like PCI scale without compromising their core values? Will automated tools like SciScore be used to support rigor? And if each of these models functions well on its own, could they also work better together? Imagine a paper posted as a preprint, reviewed on PREreview, curated by a PCI community, scored by SciScore, and tracked postpublication by preLights. This interoperable world would require shared infrastructure, common standards, and a commitment to collaboration.

What the panelists offered was not a single solution, but a compelling mosaic of approaches, each addressing a different flaw in the traditional system: delays, opacity, exclusivity, lack of rigor, lack of credit. Together, they are reshaping what peer review and publishing can look like: modular, transparent, community-driven, and in many cases, independent of journal gatekeeping.

The future of scientific communication may not lie in replacing journals wholesale, but in reconfiguring the system: keeping what works, rethinking what does not, and letting different models coexist and evolve. Various pilots and experiments are underway. Can the wider ecosystem in scholarly communication evolve with them?

Resources and Links

  1. https://www.biorxiv.org/
  2. https://www.medrxiv.org/
  3. https://www.reviewcommons.org/
  4. https://www.embo.org/
  5. https://peercommunityin.org/
  6. https://peercommunityjournal.org/
  7. https://elifesciences.org/
  8. https://prereview.org/
  9. https://prereview.org/en-us/clubs/review-curate-network
  10. https://info.africarxiv.org/
  11. https://sciety.org/
  12. https://prelights.biologists.com/
  13. https://prelights.biologists.com/what-are-prelists/
  14. https://prelights.biologists.com/news/journey-of-the-preprint-introducing-postlights/
  15. https://sciscore.com/
  16. https://www.scicrunch.com/

Steven D Smith, DPhil (https://orcid.org/0000-0001-5729-4247), is with STEM Knowledge Partners.