Annual Meeting Reports

Improving Statistical and Methodological Reviews with Automation

As an introduction to this session, like any good scientist, Timothy Houle, associate professor in the Departments of Anesthesiology and Neurology of Wake Forest School of Medicine, first identified a problem: most biomedical researchers are not statisticians and therefore lack the expertise to evaluate their approaches to study design or data analysis critically. Similarly, most peer reviewers, despite their best intentions, are not qualified to critique methods or statistical analyses adequately. In fact, the quality of statistical review is a growing concern for readers of the medical literature, to the extent that the poor quality of much published medical research has been labeled a “scandal”. Several publications have reported that low statistical power and skewed or biased findings are prevalent in the literature.1–5 Unfortunately, only a small percentage of journals use a professional statistician as part of the standard review process. As Houle stated, “thus, despite the best intentions of all involved, the peer-review system is not particularly well suited to providing high-quality criticism of the statistical methods of reviewed manuscripts.”6

StatReviewer, the software described in this informative session, was created to solve many problems related to statistical review. StatReviewer “looks for” critical elements in biomedical manuscripts, including a statement of the standardized reporting guidelines used for the particular study (such as CONSORT or STROBE), adherence to the uniform requirements for medical journals, and appropriate use and reporting of P values. The process starts as the software scans the manuscript (which the user has cut and pasted into fields on the software site) and parses the document into sections. It then runs thousands of algorithms on each section, checking whether the required reporting elements are present. Finally, the user sees a numbered list of criticisms, organized by section, that can be inserted into the critique or simply e-mailed to the author. The presenters noted that StatReviewer is in beta testing, and they encouraged attendees to try it out.
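The presenters did not show the underlying code, so the following Python sketch is only a rough illustration of the pipeline just described: parse pasted text into sections, run pattern-based checks against each, and return a numbered list of criticisms. The headings, regular expressions, and check functions here are invented stand-ins for two of the kinds of elements mentioned (a named reporting guideline and exact P values); they are not StatReviewer’s actual implementation, which runs thousands of such checks.

```python
import re

# Hypothetical, simplified sketch of an automated statistical-review pipeline.
# StatReviewer's real checks are proprietary and far more numerous.

SECTION_HEADINGS = ("Abstract", "Introduction", "Methods", "Results", "Discussion")

def parse_sections(manuscript: str) -> dict:
    """Split pasted manuscript text into sections keyed by heading."""
    sections, current = {}, None
    for line in manuscript.splitlines():
        stripped = line.strip()
        if stripped in SECTION_HEADINGS:
            current = stripped
            sections[current] = []
        elif current is not None:
            sections[current].append(stripped)
    return {name: " ".join(body) for name, body in sections.items()}

def check_reporting_guideline(sections: dict) -> list:
    """Flag manuscripts that never name a standardized reporting guideline."""
    text = " ".join(sections.values())
    if not re.search(r"\b(CONSORT|STROBE|PRISMA|STARD)\b", text):
        return ["Methods: no standardized reporting guideline "
                "(e.g., CONSORT, STROBE) is cited."]
    return []

def check_p_values(sections: dict) -> list:
    """Flag threshold-only P values (e.g., 'P < 0.05') in the Results section."""
    criticisms = []
    for match in re.finditer(r"[Pp]\s*[<>]\s*0?\.\d+", sections.get("Results", "")):
        criticisms.append(f"Results: report the exact P value instead of "
                          f"'{match.group(0)}'.")
    return criticisms

def review(manuscript: str) -> list:
    """Run every check and return a numbered list of suggested improvements."""
    sections = parse_sections(manuscript)
    criticisms = check_reporting_guideline(sections) + check_p_values(sections)
    return [f"{i}. {c}" for i, c in enumerate(criticisms, start=1)]

if __name__ == "__main__":
    sample = "Methods\nWe ran a trial.\nResults\nPain decreased (p < 0.05)."
    print("\n".join(review(sample)))
```

On this toy input, the sketch prints two numbered suggestions, one per failed check, in the spirit of the section-by-section output the presenters described.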

Chad Devoss, founder of Next Digital Publishing, followed Houle’s presentation to explain more about the software itself. StatReviewer is Web based and runs iterative algorithms that amount to tens of thousands of checks per manuscript. Limitations at the time of this presentation included the following: StatReviewer accepts manuscript sections pasted in individually rather than importing whole documents, it augments statistical review but cannot take its place, and further user feedback is needed to perfect the system. Future enhancements will include new built-in statistical checks, machine learning intended eventually to reach 99% accuracy, and the ability to integrate into manuscript-system workflows with journal-specific elements.

Dana Turner, project manager at Wake Forest School of Medicine, walked through three published manuscripts to demonstrate the software’s output: a numbered list of “suggested improvements” for each. Audience response was enthusiastic, and most attendees seemed pleased that some help is on the horizon to augment statistical review of peer-reviewed manuscripts.


  1. Dupuy A, Simon RM. Critical review of published microarray studies for cancer outcome and guidelines on statistical analysis and reporting. J Natl Cancer Inst. 2007;99:147–157.
  2. Ioannidis JP. Molecular bias. Eur J Epidemiol. 2005;20:739–745.
  3. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124.
  4. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005;294:218–228.
  5. Ioannidis JP. Large-scale evidence and replication: insights from rheumatology and beyond. Ann Rheum Dis. 2005;64:345–346.
  6. Horrobin DF. Something rotten at the core of science? Trends Pharmacol Sci. 2001;22:51–52.