Features

Retraction Watch Blog—Keeping an Eye on Science Publishing: Q&A with Ivan Oransky and Adam Marcus

Retraction Watch is a relatively new blog, founded in August 2010 by Ivan Oransky and Adam Marcus (http://retractionwatch.wordpress.com/). Within that short time, however, the blog has sparked interest and gathered readers: in a little over 3 months, Retraction Watch logged its first 100,000 page views; the next 100,000 came in just 2 months. More than 400 people have subscribed to the blog and receive e-mail notifications of new posts. For an independent blog with no public relations or marketing, those numbers are encouraging to the founders.

Oransky and Marcus seem to have tapped into an important issue in our field, one of utmost importance to Science Editor readers. They kindly agreed to an interview in January 2011, which follows.

In their first post, Oransky and Marcus present four reasons for creating the blog: to examine the scientific process, to provide an informal repository of retractions and prompt discussion of a retraction database, to uncover fraud and misuse of funds, and to promote consistency among journals. They write, “Retractions are therefore a window into the scientific process.”

Question: Since you’ve started this project, what are you seeing through this window? Is the scientific process fundamentally flawed?

Answer: We don’t believe the scientific process is fundamentally flawed. In fact, we’re encouraged by how well it works, most of the time. And we’d say the same about the publishing process, which is the avenue we’ve chosen for exploring this world. Most of the time, that works quite well, too. What we’re digging up and commenting on are occasions when something went wrong—which is completely expected in a human endeavor—and how they were handled. We are troubled by a lack of transparency in many of those cases.

Question: The COPE guidelines on retractions were published in December 2009 (and we’ll be reprinting them in our journal in this issue). CSE has retraction guidelines, updated in its white paper in 2009. Your blog launched in August 2010. Do you think we’re perhaps getting near a tipping point of sorts (thanks, Malcolm Gladwell)—one that might lead to the creation of what you mention in reason #2—a retraction database?

Answer: We think such a database would be a great idea because retractions are often buried or poorly publicized. In fact, it’s not uncommon for unknowing scientists to continue citing retracted papers. We’ve been approached by a few groups about compiling or curating one. Getting that off the ground will require some resources.

Question: Are institutions or funding groups doing enough when fraud is uncovered?

Answer: After 5 months of Retraction Watch, it’s hard to draw any major conclusions. However, what we’ve noticed—and been troubled by—is that universities and other institutions often keep whatever investigations they’ve done under lock and key instead of releasing the findings. We understand that there are competing priorities and personnel issues, but we think there’s a lot of space between violating confidentiality and current practice.

Question: Are journals doing enough to uncover fraud?

Answer: Some are; some aren’t. We’ve written about journals that are doing a great job of uncovering plagiarism and other misconduct, using such tools as CrossCheck. We’d cite Anesthesia & Analgesia as one example. Others seem to allow investigations to fall into a black hole. Or they rely on universities to conduct investigations, saying it’s not their job, but then don’t disclose much about the findings. We think that’s inconsistent with the gatekeeper function that journals seem to claim.

Question: Are journals doing enough to educate authors, reviewers, and even editors? What might we do better?

Answer: This is a pet peeve of ours. Many retraction notices are opaque, buried, or both. We’ve called for journals to publicize retractions better: Any journal that press releases studies should press release retractions. Science does that, and Nature at least press releases any retraction of a paper that it had originally press released. Then there’s PNAS, which won’t press release any retractions. We’re not sure how that squares with scientific transparency. PNAS could also do better at publicizing its retraction processes and policies. Another suggestion: Journals ought to put retraction notices (and even corrections) outside their subscription firewalls. Otherwise, readers who don’t subscribe can’t read the notices. Just another step toward transparency.

Question: Is there a way to harness technology better to mark the literature in some way to alert readers and researchers that material has been retracted?

Answer: We’re not software developers, but we imagine there are better ways to harness technology to do that. “E-mail me whenever this study is cited” should be able to alert readers to retractions, because a retraction notice is just another form of citation. There are less technical ways that should be used better, too: In general, abstracts are marked as “retracted” pretty well, although a recent study found that as many as one-third weren’t. And even when they are, the word “retracted” could appear larger.
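For readers curious about what such automation might look like, below is a minimal sketch (an illustration, not a tool used by Retraction Watch) that asks NCBI's public E-utilities whether a given PubMed record already carries the "Retracted Publication" flag; a citation-alert service could run the same check against each newly citing item. The placeholder PMID, the helper name, and the use of the third-party requests library are assumptions of the example.

```python
"""Minimal sketch: check whether a PubMed record is flagged as retracted.

Uses NCBI's E-utilities esummary endpoint, which reports each record's
publication types; retracted papers carry "Retracted Publication" there.
"""

import requests  # assumes the third-party 'requests' package is installed

ESUMMARY_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"


def is_flagged_retracted(pmid: str) -> bool:
    """Return True if PubMed lists 'Retracted Publication' for this PMID."""
    resp = requests.get(
        ESUMMARY_URL,
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    record = resp.json()["result"][pmid]
    # 'pubtype' is a list such as ["Journal Article", "Retracted Publication"].
    return "Retracted Publication" in record.get("pubtype", [])


if __name__ == "__main__":
    example_pmid = "12345678"  # placeholder PMID, not a real case from the blog
    print(f"PMID {example_pmid} flagged as retracted: {is_flagged_retracted(example_pmid)}")
```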

Question: The reasons described above provide a rationale for why this blog is important. But how did you get here? What is in your background (educational, professional, personal) that prompted interest in this topic?

Answer: Both of us were bitten by the “retractions often mean big stories” bug as journalists—Ivan at The Scientist, and Adam at Anesthesiology News. Adam, after all, broke the story of Scott Reuben, the anesthesiologist whose research fraud led to more than 20 retractions and a jail sentence. More boilerplate background is at http://retractionwatch.wordpress.com/about-adam-marcus/ and http://retractionwatch.wordpress.com/about/.

Question: Both of you have full-time jobs. When do you find time to write for this blog? Do you have a goal for the number of posts per week or per month?

Answer: As with most labors of love, we find time because we’re passionate about this subject. And we have patient wives who understand. We have a loose goal of posting something new every business day. We’ve come pretty close; our average is more than four per week. We find that frequency keeps readers engaged. But even with that kind of volume, we have a single-spaced page full of potential posts at any given time. There’s a lot of material when it comes to retractions.

Question: How does your collaboration work? Do you discuss posts before they go live?

Answer: We discuss all our posts before they go live, except breaking news that just can’t wait—updates on Anil Potti [a Duke University researcher who is alleged to have invented key statistical analyses in a study of how breast cancer responds to chemotherapy], for example, a story that many other outlets are also covering. We share strong feelings on transparency and on what makes a good story. But we also each bring something different to the table, and we each have particular types of posts that we like to do. It’s a true meeting of the minds that makes a relatively high-volume blog work. And we like to trade bottles of wine.

Question: Do you have any suggestions for would-be bloggers in the sciences?

Answer: Blogs work best when they’re in a niche that you obsess over. The posts almost write themselves. Starting a blog just to have a blog is likely to end in frustration. As for specific tips: (1) Link out as much as you can. It offers your readers background and context, and it means that other blogs can find you through trackbacks. (2) Think of yourself as a curator, picking up on tips from readers and from all over the Web. (3) Use social media, such as Twitter, to have conversations with lots of people at once rather than seeing them as ways to broadcast your content. (4) Develop a thick skin. Don’t ignore comments; accept feedback but don’t get defensive. We’re all about transparency, after all.

Question: The recommendations you’ve gathered and posted are great—they reflect some overwhelmingly positive feedback. Have you received any negative feedback?

Answer: Initially, yes. One journal editor in particular expressed concern that we were doing the blog as an exercise in “gotcha” reporting. We weren’t, as he quickly recognized. But we had some exchanges by e-mail that got pretty heated. Now, however, the feedback is remarkably positive—especially from family members.

Question: Who do you think your readers are? Journal editors? Journalists? The general public?

Answer: We figure that anyone who cares about how science is done is a potential reader—and contributor—whether it’s tips or comments that make our coverage stronger. Journalists can find great story ideas, as Paul Raeburn pointed out on Knight Science Journalism Tracker. Journal editors can see how their practices compare with those of their competitors. Scientists can learn about the sometimes messy process of correcting the scientific literature. And although we’re not afraid to use scientific terminology, we hope that the general public will have a better understanding of how the scientific process works. It’s a big mystery to a lot of otherwise well-informed people, and we’d like to see that change. The list of people who’ve signed up for our daily e-mails suggests that we’re hitting all those audiences.

Question: How do you find out about retractions? Other blogs? Journal press releases? Other?

Answer: No great mystery there. Most we learn about through databases like PubMed and Google Scholar. But we also have been receiving a regular feed of tips from readers, which we very much appreciate.
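For readers who want to try something similar, here is a minimal sketch (an illustration, not a description of the Retraction Watch workflow) that polls PubMed for recently indexed retraction notices through the NCBI E-utilities esearch endpoint, relying on the "Retraction of Publication" publication type that PubMed assigns to such notices. The 30-day window, result cap, helper name, and use of the third-party requests library are assumptions of the example.

```python
"""Minimal sketch: list PMIDs of retraction notices recently indexed in PubMed."""

import requests  # assumes the third-party 'requests' package is installed

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def recent_retraction_notices(days: int = 30, max_results: int = 100) -> list[str]:
    """Return PMIDs of retraction notices indexed in the last `days` days."""
    resp = requests.get(
        ESEARCH_URL,
        params={
            "db": "pubmed",
            "term": '"retraction of publication"[Publication Type]',
            "datetype": "edat",   # entry date: when PubMed indexed the notice
            "reldate": days,      # restrict to the last `days` days
            "retmax": max_results,
            "retmode": "json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]


if __name__ == "__main__":
    pmids = recent_retraction_notices()
    print(f"{len(pmids)} retraction notices indexed in the last 30 days")
```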

Question: Have you been picked up by other blogs? By any major news outlets?

Answer: There’s a good list of such pickups here: http://retractionwatch.wordpress.com/what-people-are-saying-about-retractionwatch/. For highlights, major news outlets include The New York Times, Toronto Sun, Dutch daily NRC Handelsblad, and The Guardian (http://www.guardian.co.uk/commentisfree/2011/jan/15/bad-science-academic-journal-retraction). Blogs include PZ Myers’ Pharyngula (ScienceBlogs), Ed Yong’s Not Exactly Rocket Science (Discover), BoingBoing.net, Marc Abrahams’ Improbable Research, and Gary Schwitzer’s HealthNewsReview Blog.

Question: Is there anything else you’d like the Science Editor reader to know?

Answer: I think that’s pretty exhaustive! Thanks for your interest.