Annual Meeting Reports

Journal Development and Ranking

In her introduction to the session, Barbara Meyers Ford stated that the main focus of the session would be measures of journal quality, especially the impact factor, and that the session would include discussion of how a better understanding of journal metrics could improve a journal’s ranking in its field.

Marie E McVeigh identified three major types of data that journal editors can use to evaluate their journals. The first, called performance metrics or ranking, consists of quantitative data on frequency of citation. Examples include the impact factor, which is “a measure of the frequency with which the average article in a journal has been cited in a particular year or period”,1 and the Eigenfactor score, which is essentially the number of current-year citations to articles published in selected journals in the 5 previous years.2 The second type of data, called citation network, identifies the citation relationships of a journal, for example, whether the journal cites or is cited. The third type of data, publication information, concerns such items as academic discipline or journal of origin. McVeigh noted that many people emphasize only performance metrics and therefore reduce the Journal Citation Reports to mere rankings.
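The two-year impact factor McVeigh describes can be illustrated numerically. The sketch below assumes the standard Journal Citation Reports formula (citations received in year Y to items published in years Y−1 and Y−2, divided by the number of citable items published in those two years); all figures are hypothetical.

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year impact factor: citations in the current year to items
    published in the two preceding years, divided by the number of
    citable items published in those same two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 300 current-year citations to the 120 citable
# items it published over the two preceding years.
print(impact_factor(300, 120))  # 2.5
```

Note that the denominator counts only “citable items” (typically articles and reviews), which is one reason the metric can be sensitive to how a journal's content is classified.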

McVeigh referred to a survey of current and former editors of top medical journals that showed that a journal’s impact factor can be increased by expanding editorial staff, offering improved author services, and selecting articles more carefully. However, McVeigh stated that the best way to improve a journal’s impact factor is to ensure the highest quality of publication possible, both scientifically and with respect to journal management. Excessive self-referencing and manipulation of contents in an effort to increase the impact factor can actually lower it. Publishing review articles or original research helps to increase the impact factor. Finally, McVeigh outlined rules for using such measures as impact factor in ranking journals, including comparing like with like. She concluded by noting that journal ranking, like an elephant, has many parts: impact, influence, relevance, value, prestige, and importance.

Mauricio Rocha e Silva described how journals can increase their visibility online and in databases. “It’s no use being visible with low quality. Quality is the name of the game,” Rocha e Silva declared. He said that peer review is the gold standard and that citations are a surrogate. He described two main flavors of peer reviewers: sweet (“I’ll accept unless I find something bad”) and sour (“I’ll reject unless there’s a lot to recommend it”). He advised editors to go for the sour peer reviewers.

Rocha e Silva defined the impact factor as “a disreputable character, with whom no self-respecting editor wishes to be seen in public, but without whom no editor can live happily, in private”. He said that journals with impact factors should be considered important because only a small fraction of the more than 300,000 journals listed on the periodicals database Web site were included in the two main indexes from which impact factors are derived: the Journal Citation Reports issued by Thomson Reuters (nearly 10,000 journals) and the SCImago indexes based on Elsevier’s Scopus database (16,500 journals).

Rocha e Silva showed how a journal’s impact factor might be estimated before it is formally computed, though such estimates can be erroneous. He explained how citations could be overestimated, depending on the timing of an article’s publication and on who cites it. Rocha e Silva said that citations suggest quality, but he advised editors not to overrate impact factors. “Go for quality! That is really all that matters,” he said.


  1. Introducing the Impact Factor [Internet]. New York (NY): Thomson Reuters; [cited 2010 Sep 20]. Available from:
  2. Journal Citation Reports [Internet]. New York (NY): Thomson Reuters; [updated 2010 Jun 10; cited 2010 Sep 20]. Available from: