Viewpoint

Stop, Collaborate, and Listen: Working Together to Enhance a Scientific System Under Pressure

“No crisis… but no complacency.”  

In late September 2019, the National Academies of Sciences, Engineering, and Medicine (NASEM) convened a workshop1 in Washington, DC, entitled “Enhancing Scientific Reproducibility through Transparent Reporting,” with the goal “to discuss the current state of transparency in reporting pre-clinical biomedical research.” The workshop is part of a larger NASEM committee project exploring “Reproducibility and Replicability in Science,”2 which generated an excellent report on ongoing efforts and recommendations to improve reproducibility, replicability, and overall confidence in science.

As I described in my October 2019 Newsletter,3 the report defines reproducibility narrowly, in a way that is sometimes referred to as computational reproducibility: being able to take the same data, code, methods, and any other variables and produce the same interpretation and conclusions. Replicability is defined as being able to generate consistent results across studies using different data but trying to answer the same question; for example, a drug trial that shows effectiveness in one population should be just as effective in a similar population. In both cases, in order to reproduce or replicate the original study, independent researchers need comprehensive knowledge of all the specific methodological details that produced the results, as well as access to the data, code, and experimental materials. This past decade has brought a renewed focus on how science is conducted, along with frequent high-profile retractions and instances of scientific fraud, leading to much discussion of a “Reproducibility Crisis” afflicting science.
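To make the computational side of that definition concrete, here is a minimal, purely illustrative sketch; the script, data file name, seed, and bootstrap analysis are my own assumptions rather than anything from the report, but they show what computational reproducibility asks of an analysis: the same data and the same code, rerun by anyone, should produce exactly the same numbers.

```python
# reproduce_analysis.py -- hypothetical illustration of computational reproducibility:
# given the same input file and this same script, every rerun should print the same
# numbers (and the same hash), no matter who runs it or when.
import hashlib
import json
import random
import statistics

DATA_FILE = "trial_measurements.txt"  # assumed input: one numeric measurement per line


def load_measurements(path):
    """Read one measurement per line, ignoring blank lines."""
    with open(path) as fh:
        return [float(line) for line in fh if line.strip()]


def analyze(values, seed=2019):
    """Bootstrap the mean with a pinned seed so the random draws are deterministic."""
    rng = random.Random(seed)  # fixed seed: identical resamples on every run
    boot_means = [
        statistics.mean(rng.choices(values, k=len(values))) for _ in range(1000)
    ]
    return {
        "mean": round(statistics.mean(values), 6),
        "bootstrap_sd": round(statistics.stdev(boot_means), 6),
        "n": len(values),
    }


if __name__ == "__main__":
    results = analyze(load_measurements(DATA_FILE))
    report = json.dumps(results, sort_keys=True)
    # Hashing the report makes "did we get exactly the same answer?" a one-line check.
    print(report)
    print("sha256:", hashlib.sha256(report.encode()).hexdigest())
```

Sharing the input file, the script, and the expected output alongside an article turns “can this be reproduced?” into a quick, checkable question rather than weeks of correspondence.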

Into this terrain the NASEM committee has ventured. The quote that opens this article comes from Committee Chair Harvey Fineberg and summarizes the findings of the report: calling it a reproducibility “crisis” is a bit overblown, but that doesn’t mean we can be complacent either.

Who “we” is in this context is important and a key feature of this workshop. The National Academies is possibly unique in its ability to bring together all the stakeholders in the scientific research endeavor: journals/publishers, institutions, funders, and the researchers themselves. What follows are some of the key takeaways, at least in my opinion.

Journals/Publishers

Journals and publishers have an essential role in helping to enforce appropriate and consistent transparency, both in published research and in the review and publication process. Greater transparency can, in turn, expose problems, both unintentional and malicious, aid in reproduction/replication, and (hopefully) boost public trust in science. Some of the steps that journals can take, as discussed at the workshop, include the following:

  • Improve the Quality of Published Methods. If reproducibility and replicability are essential to good science, then Methods sections have to be easy to follow and contain sufficient information to enable replication of the study without requiring weeks of back and forth with authors. Methods sections are like recipes, but if every recipe required that you consult three other cookbooks, order ingredients that take months to arrive, and personally contact Julia Child to clarify important details (good luck with that), no one would ever cook for themselves. While some journals have taken the extra step of independently reproducing research prior to publication (see, e.g., the American Journal of Political Science4), that is not feasible for many types of research; however, journals can insist that methods are as transparent as possible and include a technical review of manuscripts to ensure compliance. There are also an increasing number of repositories and services for protocols and source code that allow journals to increase transparency without increasing word count.

  • Use Checklists and Guidelines. Reporting and methodological checklists, such as those promoted by the EQUATOR Network,5 can be controversial: the thinking is that they provide a false sense of security and add another layer of administrative bureaucracy. This can be true if a checklist is treated as an end unto itself, but when integrated into the review process as part of a larger framework and used as a tool for establishing norms, it can be an effective component of improving transparency. The point of a checklist is not simply to check the boxes, but to communicate expectations to all involved. Checklists tell authors what elements need to be included in their articles, give editors and reviewers an outline for reviewing methods, and provide easy-to-understand quality checks for nonscientists, including editorial staff. It’s for this reason that the authors of the new set of minimum standards6 for research materials, data, analysis, and reporting (MDAR) explicitly refer to what they are developing as a framework, of which a checklist is simply one component.
  • Require Availability of Data and Materials. In addition to knowing exactly how a study was conducted in order to reproduce or replicate it, researchers need access to the primary data and materials used in that study. While there are some legitimate reasons that data cannot be shared, many journals are moving to make data sharing the norm, with only a few explicitly stated exceptions allowed. For journals not ready to make public data availability a requirement, even requiring disclosure of data availability can change research practice. When authors must explicitly state in their article that they will not make their data available, as required by level 1 of the TOP Guidelines,7 it may cause the journal to question why that is the case. For materials, journals can require or encourage authors to deposit them in repositories such as Addgene or The Jackson Laboratory8,9 and to use persistent standardized identifiers to ensure the correct materials are being used. As a bonus, depositing materials saves authors from having to prepare them for anyone who comes asking.
  • Be Open to Transparency Innovations. By transparency innovations, I mean, for example, new modes of peer review, such as incorporating preprint servers and registered reports, along with more open communication, such as transparent peer review. Preprint servers allow for more eyes on research before final publication, increasing the chance that errors or oversights are caught. Registered reports, wherein authors submit a research plan that a journal provisionally accepts prior to the completion of the study, help avoid publication bias toward positive results or selective reporting. Many of these innovations, like registered reports, refocus attention on the scientific process, not just the results.
  • Avoid Requesting Additional Underpowered Experiments. Relatedly, an item that was raised repeatedly is that editors and reviewers should avoid asking authors to add underpowered experiments to revisions, for example, to add “clinical relevance.” As suggested by Dr Brian Nosek of the Center for Open Science, asking for additional experiments at revision may be a way to incorporate a version of registered reports into the review process. When a journal invites revision of an article with additional experiments, authors submit their research plan for those new experiments and the manuscript is provisionally accepted based on the strength of that plan. The revised manuscript is then published with the new experiments regardless of their outcome, removing some of the pressure on authors.
  • Signal Trustworthiness. Finally, as discussed in a recent PNAS article entitled “Signaling the Trustworthiness of Science”10 by NASEM President Marcia McNutt and some of the attendees of this workshop, including Richard Sever and Veronique Kiermer, journals can do a better job of promoting how they are “safeguarding science’s norms.” Greater transparency and adherence to standards and guidelines are encouraged, along with newer forms of recognition, such as badges that indicate, for example, when authors make their data and materials openly available.

A Community of Collaborators

I’ve outlined some steps that journals and publishers can take to enhance scientific reproducibility, but here’s the rub: in many ways, journals are effectively the end of the process. Journals can enforce many of these guidelines on the back end, but if researchers aren’t aware of them and aren’t incentivized to adhere to them, there is only so much that can be done at this late stage of the research process. This is where the other stakeholders come in, particularly funders and institutions.

Funders play a key role, because they are there from the start of a research project and hold two of the biggest carrots: 1) money and 2) the potential for more money. The funders present at the workshop discussed ways they were working to promote transparency in their funded research and support researchers who devote time and effort to contributing to the scientific community through sharing of materials, data, and code. Funders were encouraged to incorporate data management, availability, and transparency plans into the grant process and establish enforcement mechanisms to ensure compliance, such as requiring evidence that those plans were followed when renewing grants. It was also suggested that checklists and reporting guidelines be introduced from the very beginning of a research project, preventing surprises and saving time when research is submitted to journals later.

Institutions should then serve as both the facilitators and supporters of good research practices. The research librarians present discussed ways that institutions and librarians are connecting researchers with the appropriate training and resources that can help them succeed. Much of the needed infrastructure exists at institutions, nonprofits, and government agencies, and librarians can serve a vital role in helping researchers navigate this system and develop a workflow for reproducible research. Institutions must then ensure that their promotion process incentivizes good research practice and that tenure committees consider the quality of research articles, not just quantity. As Fineberg said, the charge to these committees should be “I know you can count, but can you read?”

The researchers present noted that the reward aspect is essential, as practicing good science takes time and effort, and in a competitive academic environment, they need to know that their investments will pay off. As Yarimar Carrasquillo, an early career researcher from the National Center for Complementary and Integrative Health at the NIH, discussed, the time and resources needed to replicate studies and then turn around and produce new rigorous and transparent research could instead be spent cranking out multiple flashy, yet flimsy, articles; researchers need to know that institutions and funders will reward the former and not the latter.

Bringing it back to publications, Carrasquillo further suggested that journals can significantly reduce the time spent on replication by publishing transparent research and comprehensive methods, following the suggestions above. The problem isn’t necessarily a failure to replicate, as that can lead to new discoveries and scientific insights. Instead, time wasted on replications that are drawn out due to poorly defined methods, errors, or unstated biases benefits no one and hinders the advancement of science. Greater transparency is key to greater replicability, but as the workshop highlighted, it will take all of the stakeholders (journals/publishers, institutions, funders, and researchers) collaborating to build and support the necessary cultural changes,11 from research infrastructure through to journal policies.

When the meeting adjourned, the weather was nice, so I took the roundabout way to the Metro via the National Mall, and on a whim, I wandered into the (free) Smithsonian National Museum of American History.12 As I strolled through the “Places of Invention” exhibition in the Science and Innovation wing, I was struck by how many of the skills highlighted as essential for groundbreaking inventions and innovations were the same as those discussed at the workshop as being needed to foster reproducible and replicable science: collaboration, communication, adaptability, and more.


On the topic of replication, the cover of this issue is a detail from “Fractal Tree No. 4” by Dr Robert Fathauer. Much of Dr Fathauer’s work in this series, found on his website http://robertfathauer.com and on Twitter @RobFathauerArt, is created using mathematical processes to precisely replicate sections of branches until the images become more abstract and yet still very much of nature.


In this issue, Stavroula Kousta, Erika Pastrana, and Sowmya Swaminathan (who was instrumental in the development of the MDAR framework and organizing the NASEM workshop) provide “Three Approaches to Support Reproducible Research,” including implementing a checklist for transparent reporting in life science articles, supporting computational reproducibility through peer review of code, and publishing registered reports. Next, Emma Shumeyko outlines steps to create a Journal Review Club as part of “Engaging Early Career Scientists with Hands-On Peer Review”; Corley-Ann Parker shines some light on “The Editor’s Role in Avoiding Gender Bias”; Pam Goldberg Smith gives “A Guinea Pig’s Perspective” in cross-training at a portfolio of journals; and Becky Rivard and Jessica LaPointe tell us “How to Explain Your Role to Non-Editors” for Production and Copy Editing. The Fall 2019 issue wraps up with a book review of the new edition of the classic The Copyeditor’s Handbook and a continuation of our collection of Meeting Reports from the 2019 CSE Annual Meeting. We hope that you will find these reports, and all of the articles published in Science Editor, helpful in your efforts to publish the best science possible.

References and Links

  1. http://www.nationalacademies.org/hmd/Activities/Research/DrugForum/2019-Sept-25.aspx
  2. https://www.nap.edu/catalog/25303/reproducibility-and-replicability-in-science
  3. https://www.csescienceeditor.org/newsletter/october-2019-replication-all-i-ever-wanted/
  4. https://www.csescienceeditor.org/article/science-editor-symposium-reproducibility-reporting-guidelines/
  5. http://www.equator-network.org/
  6. https://osf.io/preprints/metaarxiv/9sm4x/
  7. https://cos.io/top/
  8. http://www.addgene.org/
  9. https://www.jax.org/
  10. https://www.pnas.org/content/116/39/19231
  11. https://cos.io/blog/strategy-culture-change/
  12. https://americanhistory.si.edu/