Features

Not Bad Apples but Bad Systems: AAAS Session Looks Ahead to National Academies Report on Integrity in Scientific Research

Do lapses in scientific integrity stem mainly from rare moral flaws in researchers? Or does the environment for science encourage deviation from ethical ideals in conducting and reporting research? The latter perspective pervaded the session “Integrity of Science”, held 13 February at the 2015 American Association for the Advancement of Science annual meeting.

The session, organized by Thomas Arrison of the US National Academy of Sciences, featured members of the committee preparing the forthcoming National Academies report Integrity in Scientific Research, which will identify challenges the scientific community faces in ensuring integrity and recommend measures to help address them. Extensive audience discussion followed the set of presentations.

From the Committee Chair

Robert M Nerem (Georgia Institute of Technology, Atlanta), who chairs the committee preparing the report, noted that although the core values of science remain the same, much in the research environment has changed since the National Academies issued the 1992 report Responsible Science: Ensuring the Integrity of the Research Process. He said these changes over the past 2 decades have included increased collaboration, greater globalization, new technology, and intensified competition for funding. Given such changes, he explained, the National Academies appointed a committee in 2012 to prepare the new report, now slated for publication in 2015. He said he viewed the current session in part as an opportunity for broader input into the report.

Issues that the committee has addressed, Nerem said, have included the definition of research misconduct; the responsibilities of researchers, sponsors, and institutions; the responsibilities of scientific disciplines and the journals in them; the availability of researchers’ data to others; and authorship of publications. He observed that norms for authorship differ among disciplines, and he posed the questions of whether each author’s role should be stated and whether all coauthors should review a manuscript, even when a paper lists hundreds of authors. He also raised the question of whether research misconduct has become more common or is just receiving more attention.

In closing, Nerem said that if the research community does not address issues of research integrity, the government will. He said he hoped the report will facilitate ongoing dialogue on these issues.

On Detrimental Research Practices

The 1992 report divided lapses in integrity into 3 categories: misconduct (fabrication, falsification, or plagiarism), questionable research practices, and misconduct not unique to the research environment. In the new report, the second category is being renamed detrimental research practices (DRPs). Speaker Paul Root Wolpe (Emory University, Atlanta, Georgia) focused largely on these practices, which he noted are more than merely questionable. Wolpe identified authorship abuses as a major category of DRPs and said the committee devoted considerable attention to them. Other DRPs that he identified included failure to share data and code, exploitative supervision of graduate students and others, misleading statistical analysis short of falsification, and abusive or irresponsible practices by journals.

Wolpe noted that the available statistics on scientific misconduct do not capture the full extent of such behavior; some instances, for example, go undetected, go unreported because of power relationships, or are not pursued. He described 3 sets of consequences of scientific misconduct: costs (including monetary costs, the human toll, and the basing of later research on false premises), diminished integrity of science as an enterprise, and decreased public trust.

Speaking as a sociologist, Wolpe noted the need to consider institutional incentives to engage in scientific misconduct. He thus endorsed taking systems views rather than focusing on individuals. Subsequent speakers provided such views.

In the Changing Technological Environment

Victoria C Stodden (University of Illinois at Urbana-Champaign) spoke on “Integrity, Reproducibility, and the Changing Technological Environment for Research”. She identified 3 realms in which technological advances have implications for the integrity of research: big data (and data-driven discovery); the increase in computational power, permitting extensive simulations; and the existence of “deep intellectual contributions now encoded only in software”. She indicated that whereas the deductive sciences (such as mathematics and formal logic) and the empirical sciences (involving hypothesis testing) have established methods to identify and correct errors, computational science has not yet developed such standards.

Stodden said the 2012 workshop “Reproducibility in Computational and Experimental Mathematics”, held by ICERM (the Institute for Computational and Experimental Research in Mathematics), yielded a useful report (available at stodden.net/icerm_report.pdf). She also discussed copyright, which she characterized as a barrier to what scientists try to accomplish. Alternatives, she noted, include open source software, Creative Commons licenses, and the Reproducible Research Standard, a set of license recommendations for computational science. The slides from Stodden’s talk, which include references and links, can be accessed at web.stanford.edu/~vcs/talks/AAAS2015-STODDEN.pdf.

On Why Researchers Misbehave

Brian C. Martinson (HealthPartners Research Foundation, Minneapolis, Minnesota) observed that some 20 years ago, scientific misconduct was viewed as the action of the occasional “bad apple”, and science was seen as self-correcting. Citing evidence from surveys, however, he reported that behavior reflecting lack of scientific integrity is not a rare exception. He emphasized that integrity in science consists of more than just avoiding fabrication, falsification, and plagiarism, and he noted that many scientists admit to practices, such as inadequate record-keeping, that show a lack of rigor.

To help illustrate points, Martinson presented 2 case studies: 1 from outside science and 1 from within it. The first case, which entailed bank fraud, helped show how people often fail to recognize ethical aspects of situations and how fear of loss tends to affect how one frames decisions. The second case, involving scientific misconduct, helped show that extreme pressure for research funding can be an incentive to transgress. Martinson explained that as established scientists have trained new scientists, who in turn have trained others, the number of scientists has multiplied and hypercompetition for resources has ensued. In this hypercompetitive environment, he said, scientists fear losing their careers or laboratories if they fail to win funding and so face pressure to behave unethically. Martinson also noted that dependence on “soft money” to support one’s work can pose a conflict of interest.

In summarizing, Martinson stated that whereas unethical behavior in science has tended to be seen as a failing of the individual, humans do not behave in a vacuum but rather are influenced by situations and incentives. To promote integrity, he concluded, science needs structural and cultural reforms.

On Improving Practices

The last speaker, C.K. Gunsalus (National Center for Professional and Research Ethics, Urbana, Illinois), addressed “Upgrading Practices: Challenges and Tasks for Researchers and Institutions”. As humans, Gunsalus observed, we tend to fool ourselves, and incentives can contribute to our cognitive biases. To help identify factors that may cloud one’s ethical judgment, Gunsalus advocated use of the acronym TRAGEDIES: Temptation, Rationalization, Ambition, Group authority and pressures, Entitlement, Deception, Incrementalism, Embarrassment, and Stupid systems.

Regarding systems, Gunsalus noted the folly of calling for one type of behavior while rewarding another, as when teamwork is endorsed but a winner-takes-all approach spurs competition. Among other systemic factors that Gunsalus said could undermine integrity were the large numbers of scientists and papers; the limited amounts of time, attention, and money; high turnover in personnel; and the existence of conflicts of interest.

Institutional challenges that Gunsalus identified included the tendency for those investigating alleged misconduct to be colleagues of, and to identify with, those accused; systemic pressures and incentives; power dynamics; the desire for money and prominence; the “star system” (with excessive deference to prominent researchers); and areas of ambiguity regarding norms. She acknowledged the difficulty of maintaining a robust system for identifying and resolving problems relating to research integrity.

Gunsalus then offered recommendations for individuals and institutions. Individuals, she said, should know pitfalls, have habits and structures to counter the potential for problems, attend to environmental influences, and perhaps contribute to systemic reforms. Tasks that she identified for institutions included focusing on environments; discussing, sharing, and implementing best practices, about which much has been written; both advocating and demonstrating institutional integrity; protecting those who report possible research misconduct; assessing facts, not personalities (“Even flakes can be right.”); and conducting credible investigations. A question she raised was whether to introduce peer review of reports from misconduct investigations.

Open Discussion—and Looking Ahead

Many questions and comments from audience members followed the set of presentations. An attendee asked about potential tasks for journals, a topic that the session abstract had listed but that the speakers had said little about. In response, a panelist said the committee had discussed the topic at length. He observed that journals’ attitudes toward integrity-related issues had changed in the last few years, and he noted much convergence among medical journals on matters such as disclosing conflicts of interest and specifying authorship contributions. He also said the committee would welcome creative ideas on how journals can help address the challenges faced.

Other points made in the discussion included the following:

  • Emphasis on publication metrics can lead scientists to sacrifice quality for quantity and speed.
  • Good mentorship, rather than only didactic teaching of ethics, is needed.
  • Framing reproducibility as quality assurance may promote appropriate behavior.
  • Institutions can have conflicts of interest, so perhaps outside parties should conduct misconduct investigations.
  • Emphasis on extrinsic recognition rather than intrinsic motivation may promote lack of integrity. So may situations, for example regarding funding, in which stakes are high and wins are rare.
  • Perhaps the National Academies publication On Being a Scientist: A Guide to Responsible Conduct in Research should be updated in keeping with the forthcoming report.
  • Protections should exist for graduate students and postdoctoral fellows who submit allegations of research misconduct.
  • Science-related jobs outside academia as well as within it should be viewed as appropriate for PhDs.

The discussion also included debate about whether science is a business.

An attendee asked about the expected publication date of the Integrity in Scientific Research report, previously slated for early 2015 release. Stifling an uneasy-sounding laugh, Committee Chair Nerem estimated that, given the time needed for completion, review, and response, the report would become available in summer 2015. The report is intended for researchers, research institutions, funders, journals, and groups in scientific disciplines. Reports from the National Academies can be accessed at www.nap.edu/.