This time may be different: protests against police brutality and systemic racism have continued for almost a month with no sign of ending. Institutions and organizations are being pushed to reckon with their role in this system and to look for ways to help. As such, many journals and editors are increasing their efforts to improve diversity, equity, and inclusion in the sciences. One action every journal and organization can take right now is to take a hard look at who is invited to contribute to the journal, as editors, reviewers, editorial board members, and authors of invited content, to see how Black scientists and other traditionally underrepresented groups fare. Chances are the answer is: Not great. But you see, when making these decisions editors look at objective metrics such as the number of reviews completed and publication record and, uh, who they know, and actually it’s the field itself that isn’t very diverse, so what can you do, really?
Hold on. Let’s step back a bit…
Last week, #ShutDownSTEM gained traction among a number of high-profile organizations and journals, including Nature and Science, who either explicitly signed on or tacitly participated by going dark on social media that day. The premise of that action (as expressed on their website), along with others such as #BlackInTheIvory and #BlackInAcademia, is that systemic racism is prevalent in STEM and academia in general and needs to be addressed with direct action. (More evidence is provided in the Scientific American article “Silence Is Never Neutral; Neither Is Science“.)
So, if we acknowledge that there is a racism problem in academia, or at a minimum a lack of diversity, wouldn’t it follow that many of the metrics and signifiers of merit that come out of academia are tainted too? Publication history, h-index, job title, and more all depend, at least in part, on the institutions, communities, mentorships, and labs you are welcomed into. That’s not to say that Black researchers and others from underrepresented groups can’t thrive in that system, but if your mantra is “merit above all else” when selecting reviewers, editorial board members, and authors of invited content, you run the risk of perpetuating flawed systems and metrics. And because these journal contributions help with career advancement in academia, a cycle is formed, and the diversity of a field stays stagnant.
But the cycle can be broken if journals prioritize inclusion and take additional steps. One possible approach was described by M. Rivera Mindt and co-authors in their article “Advancing Science through Diversity and Inclusion in the Editorial Process: A Case Study.” Editors at The Clinical Neuropsychologist set up a team of editors specifically focused on “culture, gender, and diversity” and made their work a priority. As the authors describe, one of the team’s main tasks was finding scholars from diverse backgrounds and recommending them to the editors as possible contributors to the journal. Editors still relied on metrics as part of the final selection process, but not solely. And it worked: due to their efforts, the percentage of consulting editors who are women rose from 23% to 50%, and the percentage of editors from diverse backgrounds increased from 2% to 13%, with a recognition that more work can be done.
Similar work is being done in scholarly publishing organizations through the Coalition for Diversity and Inclusion in Scholarly Communications (C4DISC), as outlined in their introductory Science Editor article. C4DISC was founded to address these issues head on by raising awareness, eliminating barriers, and working with their members to achieve diversity and inclusion within their organizations. CSE is a founding member, and as part of this work, CSE President Carissa Gilman recently launched a new Task Force to review “what resources, programs, and activities we have in this space and what more we need to be doing.” If you are interested in getting involved, please contact her at firstname.lastname@example.org.
As the authors of the C4DISC article point out, when it comes to a lack of diversity and inclusion, the “problem is not simply one of numbers, but of the support systems that such numbers indicate.” With additional effort, journals and editors can make a positive change to these support systems.
Editor-in-Chief, Science Editor
ANNOUNCEMENT: Science Editor Profile Series
The recent protests and calls for diversity and change have further intensified the focus on the identity and humanity of our fellow professionals. So, it seems like an apropos time to restart our long-running series of CSE Member profiles and expand them to include any science editor or related professional. Because everyone’s time is incredibly valuable these days, we’re hoping to set these up as video chat interviews, so we only need to take a few minutes out of your day. If you’re interested in being interviewed, let us know and/or volunteer a colleague we should reach out to. We’re particularly interested in highlighting Black editors and professionals and others from traditionally underrepresented groups. Send your info or suggestions to us at email@example.com
From the Archives
As a sampling of previous Science Editor profiles and interviews, I encourage you to check out the following articles:
Recent Early Online Article
I’m not sure why “social distancing” became the term to describe the safe approach to this pandemic when “physical distancing” seems more appropriate. In fact, as our SoMe columnist Jennifer Regala describes in her new article, Social Media in a Pandemic: Virtual Connections While Social Distancing, social media can be used to connect with colleagues and strengthen professional bonds free from any fear of viral transmission. I hope her column inspires you to stay physically distant, but socially engaged this summer.
Resource of the Month
This month researchers from the Methods in Research on Research (MiRoR) project published their “tool for assessing the quality of peer-review reports in biomedical research,” dubbed ARCADIA. After reviewing the literature and surveying editors and authors about what constitutes a good peer review, the authors developed a 14-question checklist they assert can be used to systematically evaluate the quality of peer review reports at a journal. While one can quibble with any checklist (for example, by including items that might best fall under the domain of specialized statistical or technical reviewers, the tool may be better suited to evaluating the overall peer review process of an article rather than the individual peer review reports), it does seem that once it’s validated it could serve as a helpful resource for assessing the effectiveness of interventions into the peer review process.
Many times, small changes can make such a difference for certain populations that we should consider adopting them whenever possible. For example, recently Jess Watkin (@fekkledfudge), a “Blind PhD Candidate” at the University of Toronto, pointed out on Twitter that capitalizing each word of a hashtag (eg, #BlackLivesMatter vs #blacklivesmatter) makes it accessible to those who need to use a screen reader, allowing them to participate in conversations online. Micro-aids like these won’t solve society’s problems, but they can make life a little better overall.
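The capitalization convention Watkin describes (sometimes called CamelCase) can even be automated; as a rough sketch, here is how one might build screen-reader-friendly hashtags in a few lines of Python (the function name is my own, for illustration only):

```python
def camel_case_hashtag(words):
    """Join words into a hashtag, capitalizing each word so screen
    readers can detect the word boundaries (eg, #BlackLivesMatter)."""
    return "#" + "".join(word.capitalize() for word in words)

# Each capital letter gives the screen reader a word boundary to announce:
print(camel_case_hashtag(["black", "lives", "matter"]))  # #BlackLivesMatter
print(camel_case_hashtag(["science", "editor"]))         # #ScienceEditor
```

Note that `str.capitalize` lowercases the rest of each word, so all-caps acronyms (eg, the STEM in #ShutDownSTEM) would need to be passed through as-is rather than run through this helper.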
Feedback and suggestions are always welcome at firstname.lastname@example.org.
We are also always looking for new submissions or article suggestions you may have; for more details, see our Information for Authors.