Free-Riding on Power Laws: Questioning the validity of the Impact Factor as a measure of research quality in organization studies

Joel Baum, University of Toronto

Abstract

The simplicity and apparent objectivity of the Institute for Scientific Information's Impact Factor have resulted in its widespread use to assess the quality of organization studies journals and, by extension, the impact of the articles they publish and the achievements of their authors. After describing how such uses of the Impact Factor can distort both researcher and editorial behavior to the detriment of the field, I show how extreme variability in article citedness permits the vast majority of articles – and journals themselves – to free-ride on a small number of highly-cited articles. I conclude that the Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much as or more than half the time. The clear implication is that we need to cease our reliance on such a non-scientific, quantitative characterization to evaluate the quality of our work.

"Not everything that can be counted counts and not everything that counts can be counted." (Albert Einstein)

The Impact Factor: Origins, Evolution, and Validity as a Measure of Research Quality

Thomson Scientific is a database company that owns and publishes the Institute for Scientific Information (ISI) Web of Knowledge, which includes the Science Citation Index, Social Science Citation Index, and Journal Citation Reports. Central to the Journal Citation Reports is the Impact Factor (IF), which the ISI describes as "a systematic and objective means to critically evaluate the world's leading journals." The IF was devised in the 1960s by Eugene Garfield, a library scientist, as a way to measure journal usage based on the mean number of citations per article within a specific period of time. A journal's IF is calculated by counting the number of current-year citations to articles published by the journal during the preceding two years and dividing that count by the number of articles the journal published in those two years. Recently, the ISI introduced a five-year version of the IF (i.e., current-year citations to articles published by the journal during the preceding five years, divided by the number of articles published in those years) to account for differences in article obsolescence rates across fields.

Garfield's original idea was to sort journals by citation rates to aid in determining which to include in library collections (or indexes). Over the last decade, however, increasing electronic availability, and aggressive marketing by Thomson Scientific, which acquired the ISI in 1992, have transformed the IF from a usage-based sorting device into a definitive quantitative rating of the quality of journals, of particular articles appearing in them and, by corollary, of their authors.
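To fix notation, the calculation described above can be written compactly. In the following LaTeX sketch, the symbols C_t(y), for citations received in year t by articles a journal published in year y, and N(y), for the number of citable articles it published in year y, are introduced purely for illustration and are not the ISI's own notation:

    \[
      \mathrm{IF}^{(2)}_{t} = \frac{C_t(t-1) + C_t(t-2)}{N(t-1) + N(t-2)},
      \qquad
      \mathrm{IF}^{(5)}_{t} = \frac{\sum_{k=1}^{5} C_t(t-k)}{\sum_{k=1}^{5} N(t-k)}
    \]

The 2008 two-year IF used below, for example, counts citations recorded in 2008 to articles a journal published in 2006 and 2007, and divides by the number of articles it published in those two years.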
The IF is now in widespread use to evaluate researchers, serving a central function in academic hiring, peer review, and grant decisions – uses for which it was never intended, and which Garfield himself has repeatedly called misleading and inappropriate, particularly in the context of tenure and promotion decisions (e.g., Garfield, 1999, 2006).

In response to such uses, researchers increasingly emphasize publishing in high-IF journals, often at the expense of journals that might be more appropriate for their work, and that could provide greater developmental value, fairness, and speed in the editorial process, a higher probability of acceptance, and shorter publication lags. IFs can thus affect the kinds of studies conducted, as academics shift their research questions and methods to accommodate the tastes of high-IF journals, and focus on producing the most rewarding rather than the most ground-breaking work. The focus on publishing in high-IF journals has led some academic observers to comment that what a researcher contributes to our understanding is becoming less important than where it is published, and that the credo "publish or perish" has been replaced by "publish in high-impact journals or perish" (e.g., Brumback, 2008; Monastersky, 2005).

The power of the IF to attract research and resources also encourages editorial strategies designed to "engineer" higher IFs (Reedijk and Moed, 2008). One well-known tactic is to publish more review articles, because they tend to attract more citations than research articles. Another is to publish more editorials, book reviews, and news items, which are not considered "articles" by the ISI and so are excluded from the denominator of IF calculations, while any citations they receive are still counted in the numerator. Although such items generally attract few (if any) citations, some journals publish editorials citing multiple articles from their recent issues, and others invite "non-articles" (e.g., provocative mini-reviews, perspectives, editorials, etc.) that often garner many citations. Articles also tend to preferentially cite the journal in which they are published, sometimes at the request of journal editors prior to manuscript acceptance – a practice bordering on extortion. Journal editors may also reject particular manuscripts, for example studies in narrow or peripheral topic areas, or papers from lesser-known authors and regions of the world, because such papers are likely to attract limited citations and so depress their journals' IFs.

Beyond discussion of these (and other) limitations and biases, a more fundamental outstanding concern is that neither the validity of the IF as a measure of journal quality, nor its efficacy as a surrogate measure of individual article impact, has been documented. It is surprising, therefore, that within organization studies, journal IFs have become increasingly (and for the most part unquestioningly) central to evaluations of scholarly achievement. Moreover, the tendency has increasingly been to ascribe the IF of a journal to each article published within it. As a result, comments such as "She publishes in top-tier journals" or "His papers are published in second-tier journals," where journal tiers are based on IFs, are by now customary and decisive in determining research quality within search and tenure committee meetings.

The veracity of such attributions rests on two key assumptions.
One is that a journal's IF is representative of its articles, and the other is that the IF objectively measures a journal's "quality."

The first issue concerns measurement validity. For a journal's IF to be representative of its articles, the citedness of its articles must follow a Gaussian distribution, with a narrow variance around the mean; that is, around the journal's IF. It is well known, however, that researchers vary greatly in scientific productivity and influence. Lotka (1926) observed that the productivity distribution of researchers was highly skewed, and proposed his inverse square law, which states that the number of researchers producing n papers is approximately proportional to 1/n². Studies corroborate the skewness of research productivity distributions and show that they are well described by inverse power law functions, as well as by more sophisticated bibliometric distribution functions that achieve even closer fits with empirical distributions of publications (Hubert, 1981).

Although article citedness has not been subject to the same level of scrutiny as researcher productivity, available evidence suggests that the distribution of citations to published research may often be equally skewed (e.g., Brouthers et al., 2005; Folly et al., 1981; Naranan, 1971; Seglen, 1992; Wall, 2009). If the distribution of citations to a journal's articles is highly skewed, with few articles near the mean, it is inappropriate to ascribe its IF to individual articles published within it. High skewness in the citedness of a journal's articles also undermines the IF as an index of journal citedness by making a small number of highly-cited articles disproportionately influential in determining the journal's IF and its resultant ranking. If being cited is taken as an indicator of quality, then the practice of associating a journal's IF with the quality of its articles has the potential to be seriously misleading: the majority of articles published in high-IF journals may be basking, along with the journals themselves, in the reflected glory of a small minority of citation-worthy articles.

The second issue – whether the IF provides an objective measure of journal "quality" – is more conceptual. The IF implies the equivalence of quality and citedness, and in doing so begs the question: "What (if anything) is a citation?" The term "impact" in Impact Factor is intended to convey the idea that citations represent the mechanism through which research propagates itself and advances. Broadly, citations represent notions of use, reception, utility, influence, and significance. But why authors cite particular articles, and what citations mean, is more complex than the simple normative view of citations as an indicator of intellectual debt (Cozzens, 1989). As Bavelas (1978) observes, "the two extremes … of reasons [for citation] might be true scholarly impact at the one end (e.g., significant use of the cited author's theory, paradigm, or method) and less-than-noble purposes at the other (e.g., citing the journal editor's work or plugging a friend's publications)."

Interpretation of citations is thus not necessarily straightforward, and citation-based metrics are less "objective" than typically asserted. Although citedness plausibly has something to do with "quality," it is by no means a direct measure of quality, the assessment of which requires human judgment. Remarkably little attention has been given to the development of a model of journal quality and whether and how it might relate to the IF.
Development of such a model is unfortunately beyond the scope of my brief analysis, however, and so the broader question of how to characterize and measure journal quality more meaningfully remains for future work. Granting citedness some correspondence with scientific quality, my more modest aim is to explore the validity of IF-based journal quality rankings, and their use to evaluate individual articles (and, by association, their authors) in organization studies. The analysis documents the massively uneven citedness of articles within a given journal and explores the influence of this skewness on the meaning and veracity of journal IFs in the field. The conclusion is that the IF cannot reasonably be employed to ascertain the quality of a particular journal, article, or researcher, and that it is vital to communicate this lack of validity to those who would impose the IF as a measure of scientific excellence on our field.

Citation Distributions of Organization Studies Journals

Figure 1 shows the distribution of the 88 "management" journals listed in the 2008 ISI Journal Citation Reports, indexed by both their 2-year and 5-year IFs. The distributions are highly skewed (left panels), producing straight lines in log-log plots (right panels), suggesting Paretian, or Power Law, rather than Gaussian distributions. Research on organizations thus appears neatly stratified into citation-based ranks according to academic journals. Moreover, since the average citedness of these journals tends to be relatively stable over time, the assignment of ranks to journals based on their IFs seems justifiable.

Insert Figure 1 about here.

Journal impact factors are not representative of individual journal articles

To assess the validity of such a journal-based hierarchy, over-time citation data were obtained for articles published in five well-known, empirically-oriented organization studies journals that, in 2008, varied in mean citedness and ISI management journal rankings as follows:

Academy of Management Journal: 2-year IF 6.1, ISI rank #2; 5-year IF 7.7, ISI rank #3
Administrative Science Quarterly: 2-year IF 2.9, ISI rank #9; 5-year IF 6.3, ISI rank #5
Organization Science: 2-year IF 2.6, ISI rank #13; 5-year IF 5.5, ISI rank #7
Journal of Management Studies: 2-year IF 2.6, ISI rank #14; 5-year IF 3.5, ISI rank #19
Organization Studies: 2-year IF 1.9, ISI rank #29; 5-year IF 2.7, ISI rank #24

Citation data were collected from the Social Science Citation Index for all articles (excluding editorials, book reviews, news items, and errata) published in each journal from 1990, the year Organization Science was founded, through 2007. This timeframe permits the relative impact of articles published early in the observation period to be revealed over time, as well as comparison with narrower article cohorts consistent with the IF.

Figure 2 plots the distribution of article citations per year, by journal, for 1990-2007. As the panels in the figure show, for each journal the distribution of citations per year is highly skewed, with articles receiving 0-1 citations forming the largest group for each journal (left panels), and the straight lines in the log-log plots (right panels) suggesting a Power Law distribution of article citations. Figures 3 and 4 show equivalent plots for citations received in 2008 by articles each journal published during 2003-07 and 2006-07, respectively.
The distributions for these sampling frames, which match those used to compute the 5-year and 2-year 2008 IFs, exhibit the same skewness and linear log-log trends.

Insert Figures 2, 3 and 4 about here.

Although the most cited articles constitute a small fraction of journal content, they contribute disproportionately to journals' citedness. To illustrate this, Figure 5 presents cumulative contribution functions based on citations per year to articles each journal published from 1990 to 2007. Figures 6 and 7 present comparable plots for citations in 2008 to articles published in 2003-07 and 2006-07. The graphs show the percent of a journal's total citations as a function of article rank in the journal's citation distribution, starting with the most highly-cited article.

Insert Figures 5, 6 and 7 about here.

The contribution functions are remarkably similar across the journals. For 1990-2007, the top 20% of articles account for 50-60% of each journal's total yearly citations, and the most cited half of the articles for 80-90% of the total. Citations received in 2008 by articles published in 2003-07 and 2006-07 yield comparable functions despite the smaller numbers of articles in the samples. Articles in the most-cited half are cited 5-10 times more frequently than those in the least-cited half – and in two cases (for 2006-07), half the journal's articles account for all of the journal's citations.

Assigning the same value to all articles in these journals thus overstates the impact of many articles considerably, while substantially understating the impact of a highly-cited few. To illustrate this problem, the panels in Table 1 compare mean article citations to quartile article citations for each journal. Table 1a shows that during 1990-2007, mean citations per year exceed median citations per year by up to 170%, and exceed citations at the 25th percentile by 350% or more. In contrast, mean citations per year understate the impact of articles at the 75th percentile by up to one-third. Strikingly, the mean represents just 5-11% of the maximum; the most highly-cited article in a journal thus receives 10-20 times more citations than the average article in the same journal.

Tables 1b and 1c repeat the analysis for citations in 2008 to articles published in 2003-07 and 2006-07, with the means thus corresponding to the 2008 5-year and 2-year IFs. Mean 2008 citations exceed median citations by as much as 185%, and citations at the 25th percentile by as much as 585%. Again, however, citations to articles at the 75th percentile always exceed mean 2008 citations, which understate their impact by up to nearly one-half. The mean represents 9-25% of the maximum, and so the article in a journal receiving the most 2008 citations is cited 4-10 times more frequently than the average article in the same journal.

Insert Tables 1a-c about here.

Use of the IF to infer the quality of individual articles (or their authors) thus seems unfounded. Only a fraction of articles are cited anywhere near the journal mean. For most articles, the IF greatly overstates impact. For the rest, impact is understated, and massively so for the few most highly-cited articles. Overall, there appears to be little correlation between the citedness of an individual article and the IF of the publishing journal.
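The computations behind Figures 5 through 7 and Table 1 are straightforward to reproduce for any journal. The following is a minimal sketch, assuming only a vector of per-article citation counts; the function names and the Pareto-distributed toy data are introduced here for illustration and are not the dataset analyzed in this paper.

    import numpy as np

    def cumulative_contribution(citations):
        # Percent of a journal's total citations accounted for by its
        # most-cited articles, taken in decreasing order of citedness
        # (the quantity plotted in Figures 5-7).
        c = np.sort(np.asarray(citations, dtype=float))[::-1]
        return 100.0 * np.cumsum(c) / c.sum()

    def citation_summary(citations):
        # Mean citedness (the "IF") versus quartile citedness
        # (the comparison reported in Table 1).
        c = np.asarray(citations, dtype=float)
        return {"mean": c.mean(),
                "25th pct": np.percentile(c, 25),
                "median": np.percentile(c, 50),
                "75th pct": np.percentile(c, 75),
                "max": c.max()}

    # Toy example: a heavily skewed, Pareto-like citation vector (not real data).
    rng = np.random.default_rng(0)
    toy = np.floor(rng.pareto(a=1.5, size=200) * 2)
    print(citation_summary(toy))
    top20_share = cumulative_contribution(toy)[int(0.2 * len(toy)) - 1]
    print(f"top 20% of articles account for {top20_share:.0f}% of citations")

With a distribution this skewed, the mean sits well above the median, which is exactly why ascribing the journal mean to a typical article overstates that article's citedness.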
Journal impact factors are not representative of journal quality

The great variability in the citedness of articles within each journal not only makes inferences about individual paper quality from IFs problematic; it also has important implications for the meaning and significance attached to inferences of journal quality based on mean article citedness.

One implication of the variability in article citedness is that sparsely-cited articles published in high-IF journals will often attract fewer citations than highly-cited articles published in low-IF journals. As Table 1c shows, for example, articles in the top quartile of 2008 citations in Organization Studies are cited as frequently as, or more frequently than, the median articles at Journal of Management Studies, Organization Science, and Administrative Science Quarterly, as well as bottom-quartile articles at Academy of Management Journal. Similar overlaps are apparent in Tables 1a and 1b. To assess the magnitude of these overlaps more generally, consider, as a naïve benchmark, the yearly median number of citations per article in these journals. How many Organization Studies articles exceed the median citations per article for Administrative Science Quarterly or Academy of Management Journal? Ex post, achieving such citedness would seem to merit such articles' publication in the higher-ranked journals. Conversely, how many Academy of Management Journal or Administrative Science Quarterly articles fail to earn the median for articles published in Organization Studies? Ex post, these articles would seem not to have merited publication in the higher-ranked journals. The main point here is not to show that particular articles are misplaced, however, but to assess whether the journal in which an article appears is likely to be informative about its citedness.

Insert Tables 2a-c about here.

The answers to these questions are given in the panels of Table 2. Table 2a shows that during 1990-2007, nearly one-half of all articles published in Organization Science either exceed the median yearly citations of articles appearing in the higher-ranked journals, or fall below the median of articles in the lower-ranked journals. Moreover, roughly the same percentage of articles (5-10%) published in Organization Studies and Journal of Management Studies exceed the median citations for Academy of Management Journal and Administrative Science Quarterly as articles published in these journals fall below the median citations for Journal of Management Studies and Organization Studies. Tables 2b and 2c, based on 2008 citations to articles published in 2003-07 and 2006-07, reinforce these results. Indeed, when only 2008 citations are considered, the rate at which articles exceed the median citations of higher-ranked journals, or fall below the median for lower-ranked journals, increases, surpassing 50% in some cases.

A second and related implication is that, because a small fraction of a journal's papers account for the majority of its citations, calculations of mean citations used to derive journal IFs depend disproportionately on this small proportion of articles.
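One way to gauge this dependence is to recompute a journal's mean citedness after excluding its most highly-cited articles, which is the exercise reported in Figure 8. The sketch below makes the same assumption as the earlier one (a vector of per-article citation counts); the function name and the toy data are illustrative only, not the actual data or code behind the figure.

    import numpy as np

    def mean_without_top(citations, drop_count=None, drop_share=None):
        # Mean citedness (the "IF") after excluding either the k most-cited
        # articles or the top share of articles (the recomputation in Figure 8).
        c = np.sort(np.asarray(citations, dtype=float))[::-1]
        if drop_share is not None:
            drop_count = int(round(drop_share * len(c)))
        return c[drop_count:].mean()

    # Toy example: a skewed citation vector (not real journal data).
    rng = np.random.default_rng(1)
    toy = np.floor(rng.pareto(a=1.5, size=150) * 3)
    baseline = toy.mean()
    for label, kwargs in [("top 4 removed", {"drop_count": 4}),
                          ("top 10% removed", {"drop_share": 0.10}),
                          ("top 20% removed", {"drop_share": 0.20})]:
        reduced = mean_without_top(toy, **kwargs)
        print(f"{label}: mean falls {100 * (1 - reduced / baseline):.0f}%")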
Figure 8 illustrates this dependence graphically, showing how the 2008 2-year and 5-year IFs, as well as mean citations per year for articles published from 1990 to 2007, depend on highly-cited articles.

Insert Figure 8 about here.

In particular, Figure 8 shows the change in the value of the 2-year and 5-year 2008 IFs and mean citations per year for 1990-2007 when they are recomputed after excluding 1) the four most highly-cited articles, 2) the top 10% most highly-cited articles, and 3) the top 20% most highly-cited articles. The results are remarkable. The value of the 2-year IF falls 11-30% after removing the top-4 cited articles, and drops 38-59% after excluding the top 20%. Comparable figures are 4-26% and 32-49% for the 5-year IF, and 3-11% and 35-51% for mean citations per year during 1990-2007. Clearly, IFs depend critically on highly-cited articles, and although sensitivity to the top 4 declines with the number of articles considered, removal of the top 20% of articles consistently results in a drop of 30-50% or more in the IF or mean.

Notably, journal IFs vary in their dependence on highly-cited articles, with journals exhibiting "fatter-tailed" citation distributions demonstrating greater dependence. As a result, exclusion of highly-cited articles rearranges the journal rankings. If, for example, just the top-4 cited articles are excluded from the 2-year IF calculation, Administrative Science Quarterly drops below both Organization Science and Journal of Management Studies, while Organization Science falls below Journal of Management Studies. This reversal of ranks also obtains if the top 20% is excluded. The 5-year IFs also show instability, with Administrative Science Quarterly and Organization Science again switching ranks without the top-4 cited articles, and maintaining rough parity with the top 10% and top 20% excluded. Mean citations per year for the 1990-2007 period show drops similar to the 2-year and 5-year IFs, but no changes in ranks occur.

Overall, IF-based journal quality labels contain substantial error. In the same way that the skewness of article citedness makes it difficult to infer article citedness from journal citedness, it also makes it difficult to distinguish journals based on mean article citedness. As a result, the information a journal's IF yields about the citedness of papers published within it – and about the journal itself – is remarkably imprecise, and even dramatically misleading. Using the IF as a proxy for the quality of either journals or the articles they publish would thus seem to have little credibility. Making assertions about journal quality that are incorrect as much as or more than one-half of the time is surely less than satisfactory.

Discussion and Conclusion

As organization theorists, we aspire to base our hiring, promotion, and funding decisions on the criterion of research quality. Such assessments are exacting, however. Our desire is to be selective, to reward and encourage excellence, and to emphasize and support significant or promising approaches. Peer review has been the standard method of research evaluation and the basis of academic decisions in our field (Starbuck, 2003). In attempting to put such decisions on a more rational footing, we have turned a common tool of social science, quantitative analysis, on ourselves.
Seduced by its simplicity and apparent objectivity, we have hastily adopted the IF not only to assess journal quality, but also, by extension, the impact of the articles journals publish, as well as the achievements of these articles' authors – uses for which the IF was never intended (Garfield, 1999, 2006).

Although it seems unnecessary to observe how quantitative characterizations of human activity have come to prominence of late (Denis et al., 2006), it does seem necessary to observe that IFs and citations are indicators of a particular, and relatively uncertain, kind of performance, and are not direct measures of quality (Moed, 2005). Moreover, as a measure of research quality, the IF is fundamentally flawed. Attaching the same value to each article published in a given journal masks extreme variability in article citedness, and permits the vast majority of articles – and journals themselves – to free-ride on a small number of highly-cited articles, which are principal in determining journal Impact Factors. This would be the case even if all the IF's other, more widely discussed biases, objections, and limitations could somehow be addressed.

Typically, a measure found to be ill-conceived, unreliable, and invalid will fall into disrepute and disuse among the members of a scientific community. Remarkably, this has not been the case with the IF among organization theorists; indeed, it is, if anything, gaining attention and being applied more frequently – and insidiously. Critically, its expanding use has the potential to distort researcher and editorial behavior in ways that are highly detrimental to the field. It is curious that we would choose to rely upon such a non-scientific method as the IF to evaluate the quality of our work. More curious is that we would do so as unquestioningly as we have. Why we have done so is not entirely clear. But that we need to stop is.

References

Baldi, S. (1998), "Normative versus social constructivist processes in the allocation of citations: A network-analytic model." American Sociological Review, 63, 829-846.
Bavelas, J.B. (1978), "The social psychology of citations." Canadian Psychological Review, 19, 158-163.
Brouthers, K.D., Mudambi, R., and Reeb, D.M. (2005), "The homerun hypothesis: Influencing the boundaries of knowledge." SSRN Working Paper.
, M. (1982), "The citation impact factor: Another dubious index of journal quality." American Psychologist, 37(8), 975-977.
Brumback, R.A. (2008), "Worshiping false idols: The Impact Factor dilemma." Journal of Child Neurology, 23, 365-367.
Cozzens, S.E. (1989), "What do citations count? The rhetoric-first model." Scientometrics, 15, 437-447.
Denis, J.-L., Langley, A., and Rouleau, L. (2006), "The power of numbers in strategizing." Strategic Organization, 4, 349-377.
Evidence Report (2007), The Use of Bibliometrics to Measure Research Quality in the UK Higher Education System. A report produced for the Research Policy Committee of UK Universities by Evidence Ltd.
Folly, G., Hajtman, B., Nagy, J.I., and Ruff, I. (1981), "Some methodological problems in ranking scientists by citation analysis." Scientometrics, 3, 135-147.
Garfield, E. (1999), "Journal impact factor: A brief review." Canadian Medical Association Journal, 161(8).
Garfield, E. (2006), "The history and meaning of the journal impact factor." Journal of the American Medical Association, 295, 90-93.
Hecht, F., Hecht, B., and Sandberg, A. (1998), "The journal 'impact factor': A misnamed, misleading, misused measure." Cancer Genetics and Cytogenetics, 104(2), 77-81.
Hubert, J.J. (1981), "General bibliometric models." Library Trends (Special Issue on Bibliometrics), 30, 65-82.
Institute for Scientific Information (1994), "The ISI Impact Factor." Thomson Scientific.
, A.P. (2003), "Understanding the limitations of the journal impact factor." Journal of Bone and Joint Surgery, 85, 2449-2454.
Lotka, A.J. (1926), "The frequency distribution of scientific productivity." Journal of the Washington Academy of Sciences, 16, 317-323.
Moed, H.F. (2005), Citation Analysis in Research Evaluation. Springer, Heidelberg.
Moed, H.F., and Van Leeuwen, T.N. (1996), "Impact factors can mislead." Nature, 381, 186.
Monastersky, R. (2005), "The number that is devouring science." Chronicle of Higher Education, October 14.
Naranan, S. (1971), "Power law relations in science bibliography: A self-consistent interpretation." Journal of Documentation, 27, 83-97.
Reedijk, J., and Moed, H.F. (2008), "Is the impact of journal impact factors decreasing?" Journal of Documentation, 64, 183-192.
Rossner, M., Van Epps, H., and Hill, E. (2007), "Show me the data." Journal of Cell Biology, 179, 1091-1092.
Rousseau, R. (2005), "Median and percentile impact factors: A set of new indicators." Scientometrics, 63, 431-441.
Seglen, P.O. (1992), "The skewness of science." Journal of the American Society for Information Science, 43, 628-638.
Seglen, P.O. (1997), "Why the impact factor of journals should not be used for evaluating research." British Medical Journal, 314, 497.
Starbuck, W.H. (2003), "Turning lemons into lemonade: Where is the value in peer reviews?" Journal of Management Inquiry, 12, 344-351.
Starbuck, W.H. (2005), "How much better are the most prestigious journals? The statistics of academic publication." Organization Science, 16, 180-200.
Wall, H.J. (2009), "Don't get skewed over by journal rankings." The B.E. Journal of Economic Analysis and Policy, 9(1), Article 34.
Weingart, P. (2005), "Impact of bibliometrics upon the science system: Inadvertent consequences?" Scientometrics, 62, 117-131.

Figure 1. Management journals impact factor distribution
Distribution and log-log plots of the 2-year and 5-year IFs for all "Management" journals listed in the 2008 Journal Citation Reports, enumerated in IF cohorts (0.00-0.99, 1.00-1.49, 1.50-2.00, etc.) on the basis of their impact factors. Note that the axes – not the plotted values – are logged for the log-log plots.

Figure 2. Distribution of citations per year to journal articles published 1990-2007
Distribution and log-log plots of SSCI citations per year for all articles published during the 1990-2007 period in Academy of Management Journal (906 articles), Administrative Science Quarterly (315 articles), Organization Science (651 articles), Journal of Management Studies (765 articles), and Organization Studies (727 articles). Citations per year enumerated in cohorts 0.00-0.99, 1.00-1.99, 2.00-2.99, etc.

Figure 3. Distribution of 2008 citations to journal articles published 2003-07
Distribution and log-log plots of 2008 SSCI citations for all articles published during the 2003-2007 period in Academy of Management Journal (291 articles), Administrative Science Quarterly (83 articles), Organization Science (210 articles), Journal of Management Studies (342 articles), and Organization Studies (343 articles). Citations per year enumerated in cohorts 0.00-0.99, 1.00-1.99, 2.00-2.99, etc.
Figure 4. Distribution of 2008 citations to journal articles published 2006-07
Distribution and log-log plots of 2008 SSCI citations for all articles published during the 2006-2007 period in Academy of Management Journal (126 articles), Administrative Science Quarterly (34 articles), Organization Science (87 articles), Journal of Management Studies (138 articles), and Organization Studies (161 articles). Citations per year enumerated in cohorts 0.00-0.99, 1.00-1.99, 2.00-2.99, etc.

Figure 5. Cumulative contribution of citations to journal articles published 1990-2007
For each journal, articles published during the 1990-2007 period were distributed into 20 percentiles of the article citation distribution, each containing 5% of the articles, and arranged in order of decreasing citations-per-year rank.

Figure 6. Cumulative contribution of citations to journal articles published 2003-07
For each journal, articles published during the 2003-07 period were distributed into 10 percentiles of the article citation distribution, each containing 10% of the articles, and arranged in order of decreasing 2008 citation rank.

Figure 7. Cumulative contribution of citations to journal articles published 2006-07
For each journal, articles published during the 2006-07 period were distributed into 10 percentiles of the article citation distribution, each containing 10% of the articles, and arranged in order of decreasing 2008 citation rank.

Figure 8. Impact of highly-cited articles on mean journal citations
For each journal, 2-year and 5-year IFs were computed directly from 2008 citations recorded in the SSCI for articles published in each journal during the 2006-07 and 2003-07 periods, respectively. Mean citations per year were also computed for all articles published during the 1990-2007 period. IFs and mean citations per year were then recomputed after removing the 4 most highly-cited articles, the top 10% most highly-cited articles, and the top 20% most highly-cited articles.