


The Relationship between Citations and Number of Downloads in Decision Support Systems

Daniel E. O’Leary

University of Southern California

Los Angeles, CA 90089-0441

oleary@usc.edu

February 2007

Revised – November 2007

Revised – January 2008

Revised – March 2008

Comments are Solicited

Acknowledgement: The author would like to acknowledge the extensive comments of the three anonymous referees on three earlier versions of this paper. Their comments and requested revisions have made this a much better paper.

The Relationship between Citations and Downloads in Decision Support Systems

Abstract

In this increasingly digital age, the number of times a paper is downloaded and the number of citations to it are becoming indicators of the interest, visibility and impact of the paper. As a result, downloads and citations increasingly are becoming a part of the evaluation process of faculty, departments and universities.

This paper finds that the numbers of citations and downloads are closely related. A statistically significant relationship is found between the citation counts from different citation sources and the numbers of downloads of papers in Decision Support Systems. In addition, the different sources of citation information are found to be highly correlated with each other.

Keywords: Decision Support Systems, Citations, Downloads, Google Scholar, SSCI, ISI Web of Knowledge, Elsevier Science, Scopus, H-Index.

The Relationship between Citations and Downloads in “Decision Support Systems”

1. Introduction

Citations have long been an important way to evaluate the interest, visibility and impact of research, and the corresponding researchers. For example, Garfield and Welljams-Dorof [18] note that citation data have been used to evaluate the research impact of people, departments, universities, and even countries. Citations also are used in tenure and promotion reviews to evaluate the impact that faculty members have had on the literature. Ultimately, citations are important because they provide economic benefits to the author being cited (e.g., Diamond [14]), as well as other benefits, such as prestige.

Recently, citation data have become digital. For example, the traditional source, the Social Science Citation Index (SSCI), is now available digitally over the Internet as ISI's Web of Science (e.g., Howitt [22]). But SSCI is not the only citation information source in this increasingly digital world. SCOPUS, an emerging direct competitor to ISI's Web of Science, also is available digitally. Further, private companies, such as Elsevier, provide citation information about the papers in the journals they publish. In addition, Google has moved into the citation business with Google Scholar (Beta).

Not only have citations gone digital, but research papers also have increasingly become available on the Internet. Thus, additional information about those papers has become available, e.g., how many times a paper has been viewed or downloaded. Although individuals rarely keep track, companies whose revenues depend on downloaded papers watch such activity closely, and paper download information has increasingly been made available. As that information becomes available, some departments, schools and universities have begun to have faculty members provide download information as part of the annual performance review process or promotion and tenure processes. Accordingly, citation and download information are now increasingly being used to evaluate faculty members. As a result, we first examine some of the previous citation research in Decision Support Systems – the journal (DSS).

Previous DSS Citation Research

There has been limited research analyzing citations from or about DSS. Holsapple et al. [20] conducted an analysis of citations to DSS over the time frame 1985 – 1993. Their approach was to manually capture and summarize all of the citations to articles published in DSS during that time. They found that DSS generated the most citations, followed respectively by Management Science, Communications of the ACM, Management Information Systems Quarterly and Artificial Intelligence. Eom and Lee [15] analyzed the citations made by 259 articles about decision support system applications covering the time period 1971 – 1989. They focused on determining the leading institutional and individual contributors to the decision support literature. They found that the University of Texas (Austin) and the Massachusetts Institute of Technology were the leading institutions cited, and that A. Whinston and M. Scott-Morton were the leading individuals cited. Finally, Eom [16] manually created an even larger database of 944 citing papers and 23,768 cited records to investigate co-citation patterns and the resulting conceptual linkages between co-cited articles.

A common thread throughout has been the manual creation of databases that were then examined in detail. Existing citation engines such as SSCI were not used, while SCOPUS, Elsevier’s citation service and Google Scholar did not exist.

Purpose

Although citation and download information is being used to evaluate faculty, there is limited information as to how digital download and citation information sources are related to each other, particularly with respect to DSS. Accordingly, the purpose of this paper is two-fold:

• Investigate the relationship between multiple citation sources with respect to DSS citations

• Investigate the relationship between citation sources and the number of downloads for DSS

This Paper

Accordingly, this paper proceeds as follows. Section 1 has provided a brief background and discussed the purpose of the paper. Section 1 also reviewed some of the previous research in decision support systems on citation analysis. Section 2 investigates “Downloads,” while section 3 investigates “Citations.” Section 4 summarizes the research model and approach, while section 5 investigates the findings. Section 6 investigates some extensions, while section 7 summarizes the paper and discusses the contribution.

2. Download Information

As materials have become available digitally, new information about those materials also becomes available. For example, information about digitally available research papers, such as downloads or even citations across a particular set of other papers can now be made widely available over the Internet.

Elsevier was among the first to make download information available. Initially, it provided detailed information about the number of downloads of the most heavily viewed papers. For example, the number of downloads associated with the twenty-five most requested papers in Decision Support Systems from April 2002 – April 2004 is provided on the web.1 The download counts exclude requests from robots, requests for abstracts, and requests from within the Elsevier domain. Similar information about other Elsevier journals is also available.

However, after being made available for a brief period, publication of the number of downloads was discontinued. There are at least two likely reasons that numeric data about the number of downloads are no longer divulged. First, with such numbers, analysts could gain insights into the company's revenue stream that could put the journals at a competitive disadvantage. Second, journals could be ranked against each other, and in such rankings only one journal can be the most downloaded. For example, by comparing downloads for Strategic Information Systems to those for DSS, we can see that there are more downloads for the latter journal.2 As a result, Elsevier adds a caveat to its lists of most downloaded papers, known as its "top 25." It notes, "In addition, the TOP25 is not intended to infer any sort of preferential ranking to the journals, articles or subjects included, and should not be used as such, other than to present a general indicator of the readership behaviour of our users."3

Although the actual number of downloads is no longer being made public, such information is surely available internally at download sources, such as Elsevier; for example, download information also is held internally at IEEE and John Wiley. In addition, there is publicly available information, such as which papers are among the top 10 or top 25 most downloaded papers for a particular journal in a particular time period. In particular, for DSS, so-called "top 25" lists are available quarterly, starting with July-September 2004.

3. Citation Information and Emerging Questions

In contrast to download information, citation information has long been used to assess the impact of research, based on the notion that if a paper is cited, then it has influenced the citing paper. However, there are some emerging research questions regarding citation information in DSS.

SSCI / ISI Web of Science

Over the years there has been substantial attention given to SSCI / ISI Web of Science citations, e.g., [14], [18], [22], [27] and [37]. However, SSCI has been criticized for a number of reasons (e.g., [27]). SSCI appears to focus on English language journals, generally from North America or Western Europe. Because it captures citations from journals, it includes few citations from conference proceedings. SSCI also has different levels of coverage of different fields, and even of different sub-disciplines, with relatively incomplete journal coverage in fields such as information systems and operations management. Given these limitations, how correlated are the numbers of citations in SSCI with those from other emerging citation sources regarding DSS?

Other Citation Sources

Other sources, such as Elsevier Science, SCOPUS and Google Scholar, have not received the level of attention devoted to SSCI. Because Elsevier Science covers only the 1,800+ journals that Elsevier publishes, and SCOPUS and Google Scholar are new (the latter still in beta at this writing), that is probably not surprising.

Google Scholar probably provides the greatest range of citation resources, since virtually any resource on the Internet is a potential citation source. Further, Google Scholar does not limit its search to a subset of journals, since it covers materials on the Internet. Elsevier is the most restricted, gathering citations only from Elsevier journals. SCOPUS is produced by Elsevier, a European company, in contrast to ISI, which is based in the United States. SCOPUS apparently gathers citations from a similar set of sources as SSCI [27].

Meho and Yang [27] investigate the impact of the citation source, analyzing rankings of Library and Information Science faculty based on ISI's Web of Science, SCOPUS and Google Scholar. Among other results, they find that using SCOPUS significantly changes citation counts, potentially altering author citation rankings compared to SSCI; that the citation databases complement each other; and that Google Scholar does not find the same "quality" of citations as ISI's Web of Science or SCOPUS. However, DSS was not included in their analysis. In addition, it is not clear whether the numbers of citations from different citation sources are correlated in information systems journals, such as DSS.

Relationship Between Citation Sources

There is some controversy associated with using citation sources such as SSCI or SCOPUS. For example, as noted by Garfield and Welljams-Dorof [18], there is a lag between when a paper is published and when it is cited in another paper. One approach is to start the citation cycle when a paper is posted on the web, either by the author or in prepublication; citing papers posted to the web reduces the lag. Thus, using a citation source based on papers available on the web (e.g., Google Scholar) can limit that criticism.

However, many posted papers are not yet published, and some never will be. As a result, this can introduce other problems, such as quality control of the papers being used as the citation basis. In addition, Google Scholar is biased toward newer papers that are available only in digital formats on the Internet. Accordingly, there is a question as to whether the numbers of citations from such different sources are correlated.

Citations as a Means of Determining “Top” Papers

Recently, citations from sources such as SSCI have played a prominent role in analyses such as determining the impact of papers. For example, Smith [37] used SSCI citations to investigate what constitutes a "top" journal paper. For the 626 articles published in fifteen finance journals in 1996, based on citations tracked for the period 1996 – January 2004, Smith [37, p. 145] found that the median number of cites for the articles investigated was 4, the mean was 9.55 cites, the 90th percentile had 24 cites and the 95th percentile had 35 cites. However, there has not been a comparable analysis in information systems. Further, it is not clear what role different citation sources play in defining the distribution of top papers. For example, how does the distribution of papers change with a different citation source, e.g., would the rank of a paper differ based on the citation source?

4. Model and Approach

In order to take into account both downloads and citations, this research employs a model where downloads provide a measure of potential citation, and citations provide a measure of impact. The model and approach to gathering data are discussed in this section.

Model: Visibility, Interest and Impact4

In the emerging digital world, a paper is first downloaded, it is read and analyzed, and then it is possibly cited. The extent to which a paper is downloaded is a function of many possible factors, including its "visibility" and "interest" to the research and teaching communities. A paper that is widely visible could be widely downloaded. Similarly, if there is substantial interest in a paper, that paper is likely to be downloaded. On the other hand, if a paper has little visibility or there is no interest in it, it will not be downloaded.

Publication source and search engines may provide visibility. For example, publication in a well-known or well-circulated journal gives a paper visibility. Interest also is likely to reflect current concerns of business or emerging research trends. Further, in information systems, interest in and visibility of a paper may be a result of a number of factors, such as its subject, featured technology, keywords, author or title, or another paper citing it. Interest and visibility might also be promoted through meetings, paper presentations or personal discussions in which the author discusses the paper, its methodology, data and results.

Additional motivations also could result in the paper being downloaded, such as curiosity as to what research exists regarding a particular topic, courtesy to a fellow researcher or a referee recommendation. In addition, “maturity” of the paper and the field is likely to influence the number of downloads, as is the number of competing papers dealing with the same subject.

Availability also can influence downloads. For example, if competing alternatives are being considered for downloading, and one is free but the other is not, then we would expect the free version to be downloaded. In this study, availability is normalized by having all of the papers come from the same journal and the same download source.

Once a paper has been downloaded, it potentially can have impact. For example, the downloaded paper might provide ideas that directly or indirectly would be used in research or teaching material. Those ideas might result in citations to the paper. In addition, other reasons, such as a referee requesting a particular reference, can lead to its citation. Ultimately, the ideas garnered from a downloaded paper can manifest themselves in a paper that is published or simply placed on the Internet, which could lead to further visibility, further downloads, and so on. Alternatively, a downloaded paper may sit unread on a hard drive and have no impact.

Accordingly, the extent to which a paper is downloaded provides one measure of its "potential" to impact the research and teaching communities. Similarly, the extent to which a paper is cited is one measure of its impact. This discussion is summarized in figure 1.

Figure 1

Interest and Visibility

[Figure 1 is a flow diagram. Its elements: Interest; Visibility; Availability (e.g., on-line); Other Motivations (Maturity); Other Sources; Downloads (Measurable: Number, Appearance in Top 25); Citations (Measurable: Elsevier, Scopus, SSCI, Google, or Manual); Impact.]

Approach

In order to investigate the relationship between the numbers of downloads and citations, we examine the 25 articles published in Decision Support Systems that were most heavily downloaded for the time period April 2002 to April 2004. In February 2007, citation and bibliographic information was gathered for each of the 25 most requested papers. Then, for each of Google Scholar, SSCI and Elsevier, the numbers of reported citations to each of the 25 papers were gathered. In Google Scholar, each title was searched. In SSCI, the first author was used as the basis for gathering citation information, except for the two cases where the first authors had hyphenated last names; searches on those last names proved problematic, so in those cases the second author was used as the basis to gather the citations. For Elsevier, citation information was gathered along with the bibliographic information. In November 2007, the same citation information was gathered for Google Scholar, SSCI and Elsevier, and citation information also was gathered from SCOPUS. Gathering data in both February and November provides benchmarks at both dates, as well as the change between them. The empirical analysis is examined in the next section.

5. Findings

The numbers of citations and downloads are summarized in table 1. In general, there were more citations in Google Scholar than in any other source. Because Google Scholar searches papers available on the Internet, that is probably not surprising. Further, in general, the numbers of citations in both SCOPUS and SSCI exceeded those in Elsevier, which had the fewest, while the number of citations in SCOPUS substantially exceeded the number from SSCI. In February, for Google Scholar, SSCI and Elsevier, the average numbers of citations were 41.64, 11.68 and 7.6, respectively. In November, for the same research papers, for Google Scholar, SSCI, SCOPUS and Elsevier, the average numbers of citations were 55.6, 15.28, 25.56 and 8.88, respectively.

For particular papers, Elsevier could report more citations than SSCI, because some Elsevier journals are not in SSCI and because the Elsevier search engine captures citations before they appear in SSCI. Elsevier owns SCOPUS, so we would expect SCOPUS to include each of the citations found by Elsevier.

As a benchmark, we can compare the results to those found by Smith [37] for finance, in spite of the much smaller window for citations here (roughly four years vs. Smith's eight years). Using the February data, all but three of the top 25 downloaded papers had SSCI citation levels at or above the median of 4 cites, while ten had more SSCI citations than the mean level. Three papers (12%) exceeded the 90th percentile level and one paper (4%) exceeded the 95th percentile level. In November, only one paper had citations below the median, while 13 exceeded the mean, four exceeded the 90th percentile and two exceeded the 95th percentile.
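As a small illustration of this benchmark comparison, the following Python sketch counts how many papers meet each of Smith's cutoffs. The SSCI counts used here are hypothetical placeholders, not the actual counts for the 25 papers.

import numpy as np

# Hypothetical SSCI citation counts for 25 papers (placeholders).
ssci = np.array([36, 28, 25, 24, 20, 18, 15, 14, 12, 11, 10, 9,
                 8, 8, 7, 7, 6, 6, 5, 5, 4, 4, 4, 3, 2])

# Smith's [37] benchmarks for 1996 finance articles.
benchmarks = {"median": 4, "mean": 9.55,
              "90th percentile": 24, "95th percentile": 35}
for label, cutoff in benchmarks.items():
    n = int((ssci >= cutoff).sum())
    print(f"{label} ({cutoff}): {n} of 25 papers at or above")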

Table 1 - Citations vs. Downloads

Pairwise correlations between the citation sources (February and November 2007):

|                   |GS-Feb |SSCI-Feb |Els-Feb |GS-Nov |SSCI-Nov |SCOPUS-Nov |
|SSCI-Feb           |0.929  |         |        |       |         |           |
|Elsevier-Feb       |0.895  |0.919    |        |       |         |           |
|Google Scholar-Nov |0.989  |0.937    |0.896   |       |         |           |
|SSCI-Nov           |0.942  |0.987    |0.922   |0.956  |         |           |
|SCOPUS-Nov         |0.965  |0.970    |0.884   |0.968  |0.985    |           |
|Elsevier-Nov       |0.888  |0.926    |0.991   |0.901  |0.930    |0.890      |

(GS = Google Scholar; Els = Elsevier; "Feb" and "Nov" refer to February and November 2007.)

In addition, I also correlated the citation counts and the number of downloads with the date of publication of the paper, i.e., time. Google Scholar (February), SSCI (February), Elsevier (February), Google Scholar (November), SSCI (November), SCOPUS (November) and Elsevier (November) are all statistically significantly correlated with time (-.4587, -.3195, -.3901, -.4410, -.3083, -.3463 and -.3751, respectively). These negative correlations are as expected, since citations accumulate over time, so earlier papers have more citations. However, the number of downloads is not statistically significantly correlated with time (-.0820). This is likely due in part to the limited two-year window over which the download information was captured, compared to the five-year window over which the papers were published (1999-2003).
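For readers who wish to replicate this kind of analysis, the following Python sketch computes pairwise correlations between citation sources and their correlations with downloads and publication year. The column names and counts are hypothetical placeholders, not the data underlying table 1.

import pandas as pd

# Hypothetical counts for five papers; the study used the 25 most
# downloaded DSS papers and their counts from each citation source.
df = pd.DataFrame({
    "year":           [1999, 2000, 2001, 2002, 2003],
    "downloads":      [1843, 1520, 1210,  990,  875],
    "google_scholar": [  88,   61,   55,   34,   21],
    "ssci":           [  24,   18,   15,    9,    5],
    "scopus":         [  41,   30,   24,   15,    8],
    "elsevier":       [  14,   11,    9,    5,    3],
})

sources = ["google_scholar", "ssci", "scopus", "elsevier"]

# Pairwise Pearson correlations between the citation sources.
print(df[sources].corr())

# Correlation of each source with downloads and with publication year.
print(df[sources].corrwith(df["downloads"]))
print(df[sources].corrwith(df["year"]))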

6. Extensions

There are several extensions that can be made to the primary analysis presented here, such as using additional citation or download sources. Extensions could also deal with using top 25 lists without the number of downloads, determining a distribution of top papers, determining the comparative number of citations for a most downloaded paper and a normal paper, generalizing the so-called “H-Index” for downloads, analyzing which citation source has the most citations, and finally a paper life cycle approach.

Top 25 List with NO Download Information

As an extension to the original data and analysis, I also consider the relationship between the appearance of a paper on a top 25 list and the number of citations it receives. There are 11 lists over the time period July 2004 to March 2007 that include the 25 most downloaded papers and their ranks (but not the numbers of downloads).

In November 2007, data were gathered from the Elsevier web pages.5 I used papers from the first three quarters as the basis of the study, allowing at least two additional years (eight quarters) for the papers to garner top 25 list appearances and citations. Using the first three quarters as the basis of analysis gave 53 papers to analyze. The number of "top 25" appearances of papers from the first three quarters is summarized in table 3.

Table 3 - Number of Appearances on Top 25 List By Quarter

(Dates of lists: July-Sept 2004, Oct-Dec 2004, Jan-Mar 2005, Apr-June 2005.)

Correlations between the citation sources:

|               |Elsevier |SSCI   |SCOPUS |
|SSCI           |0.9395   |       |       |
|SCOPUS         |0.9461   |0.9815 |       |
|Google Scholar |0.9202   |0.9667 |0.9833 |

There are at least two extensions to this "top 25" analysis. First, as more quarterly data become available, those data could be added. Second, rather than just counting appearances on the lists, rank could be factored into the analysis. However, it is clear that simply appearing on a top 25 list is statistically significantly correlated with the number of citations for each of the four citation sources used here.

“Top Papers”

Researchers such as Smith [37] have pointed toward using citations to study characteristics of “top papers.” An important extension to that research is to set corresponding benchmarks in decision support or information systems. Further, citation benchmarks could come from different citation sources than SSCI, such as SCOPUS or Elsevier Science. In addition, the number of downloads can also be used.

In order to begin to assess the notion of a "top paper" in information systems, I generate the distribution of citations for a single year of Decision Support Systems. I chose 1999 in order to provide results analogous to Smith [37], whose 2004 paper used articles from 1996, eight years prior to data collection. I use SCOPUS and Elsevier as the basis of the citations, although the approach can be extended to other citation sources. I use two different citation sources so that I can determine whether the ordering of papers is influenced by the particular citation source.

The distribution of citations for 1999 is summarized in table 6. The 68 papers were ranked from lowest ("Min") to highest ("Max") in terms of citations; 0 was the minimum and 58 was the maximum number of citations. 50% of the papers had 2 or fewer citations in Elsevier and 7 or fewer citations in SCOPUS. As another example, the 70% column indicates that, in 1999, the 20th most cited paper had 14 citations in SCOPUS and 4 in Elsevier.

Further, the source of the citations can have a substantial impact on the rankings of particular journal papers. For example, the difference between the SCOPUS and Elsevier rankings of the papers averaged 7.41 slots, or over 10% of the 68 slots. The range of differences was 0 to 38, so an individual paper could change more than 50% of the ranking slots based on the citation source used. Only 8 of the 68 papers maintained the same rank.

Table 6 - Distribution of Citations in DSS for 1999

|1999     |Min |10% |20% |30% |40% |50% |60% |70% |80% |90% |Max |Mean  |
|SCOPUS   |0   |1   |3   |4   |5   |7   |10  |14  |17  |33  |58  |11.59 |
|Elsevier |0   |0   |1   |1   |2   |2   |3   |4   |7   |13  |17  |3.85  |
|Rank     |68  |61  |54  |48  |41  |34  |27  |20  |14  |7   |1   |      |
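A minimal sketch of how the cutoffs in table 6 and the rank shifts between sources can be computed, assuming short hypothetical citation vectors in place of the 68 papers from 1999:

import numpy as np
from scipy.stats import rankdata

# Hypothetical per-paper citation counts (placeholders for the 68 papers).
scopus   = np.array([58, 33, 17, 14, 10, 7, 5, 4, 3, 1, 0])
elsevier = np.array([17, 13,  7,  4,  3, 2, 2, 1, 1, 0, 0])

# Distribution cutoffs, as in table 6.
for p in (10, 20, 30, 40, 50, 60, 70, 80, 90):
    print(p, np.percentile(scopus, p), np.percentile(elsevier, p))

# Rank each paper under each source (rank 1 = most cited), then
# measure how far each paper's rank shifts between the two sources.
r_scopus   = rankdata(-scopus, method="ordinal")
r_elsevier = rankdata(-elsevier, method="ordinal")
shift = np.abs(r_scopus - r_elsevier)
print(shift.mean(), shift.max(), int((shift == 0).sum()))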

Number of Citations by Most Downloaded vs. “Normal” Papers

Analysis of the population of papers from the entire year of 1999 also suggests that the number of citations for the most downloaded papers exceeds that of a "normal" paper (an "average" paper that may or may not be among the most downloaded). The average numbers of citations in SCOPUS and Elsevier for the 25 most downloaded papers from 2002-2004, as of November 2007, were 25.56 and 8.88, more than double those of the average paper from 1999 (see the means in table 6), even though the time since publication has been substantially less for the most downloaded papers. This research could be extended by gathering additional data about other years.

H-Index

The H-Index is emerging as a key index in the analysis of citations (Hirsch [19]). The H-Index is computed by ranking the papers of an author or journal by their numbers of citations, with the paper with the largest number of citations ranked first, the second largest ranked second, and so on. The H-Index is then the largest rank at which the number of citations is greater than or equal to the rank. For example, if the 30th ranked paper has at least 30 citations, but the 31st ranked paper does not have 31, then the H-Index is 30. For Decision Support Systems, I found an H-Index of 31 using SSCI citations. This number will vary based on which citation source is used.

Since the numbers of downloads are substantially higher than the numbers of citations, this approach can be generalized and extended to downloads. For example, the "H/j-Index" can be defined as the largest rank at which the number of downloads (or citations), divided by j, is greater than or equal to the rank. In the case of downloads, j generally would be chosen larger than 1. For example, if we divide the number of downloads for the small sample in table 1 by 100, we find that the H/100 index is 12. In the case of citation analysis, such as that of a new faculty member, perhaps a j < 1 would be appropriate, such as 1/2. This suggests that, in an effort to mitigate the impact of time, j ultimately could be chosen as a function of the number of years that the faculty member has been an active researcher. Alternatively, other transforms could be employed.
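To make these definitions concrete, the following Python sketch implements both the H-Index and the H/j-Index as defined above. The function names and the sample download counts are illustrative assumptions.

def h_index(counts):
    # Largest rank r such that the r-th ranked count is >= r.
    ranked = sorted(counts, reverse=True)  # most cited/downloaded first
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def h_j_index(counts, j):
    # Generalized H/j-Index: scale counts by 1/j before ranking.
    # For downloads, j > 1 (e.g., j = 100); for a new faculty member's
    # citations, j < 1 (e.g., j = 1/2) may be appropriate.
    return h_index([c / j for c in counts])

# Illustrative example with hypothetical download counts:
downloads = [4100, 3300, 2700, 2200, 1900, 1500, 1300, 1250,
             1200, 1150, 1100, 1050, 900, 700, 400]
print(h_index(downloads))        # H-Index of the raw counts
print(h_j_index(downloads, 100)) # H/100-Index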

Most Citations

Individual researchers may be interested in which citation source provides the most citations to papers from Decision Support Systems. Although there were some cases where a particular paper did not conform to the general progression, Google Scholar found the most citations and Elsevier the fewest. In particular, using the data from the analysis of the number of appearances in the top 25, I found the following average numbers of citations by source for papers listed on one of the quarterly top 25 listings between July 2004 and March 2005: Google Scholar (18.85), SCOPUS (9.74), SSCI (5.89) and Elsevier (4.51). In contrast to [27], the average number of citations from SCOPUS was more than 60% greater than that from SSCI. This finding could be critical in any study that ranks journals based on the number of citations: using any single source in a comparison study could put different journals at an advantage or disadvantage, since journals differ in the relative numbers of citations by citation source.

Research Paper Life Cycle

One perspective is to view downloads and citations as part of the life cycle of a paper or book. Downloads and citations can occur throughout the life cycle of a paper. Downloads could occur while the paper is at different stages, such as working paper, article in press, and published article, because interest, visibility and availability are likely to vary throughout the life cycle. Similarly, citations could occur at the same or different levels at different stages.

In a life cycle model, some proportion of downloads ultimately is converted into citations at each stage of the paper's life cycle. For example, as seen in table 1, the average number of citations per download ranges from .005 to .0273 for the three different citation sources. One view is to interpret that ratio as a "conversion" of downloads into citations as a paper moves through its life cycle. (However, as noted above, some citations can arise from sources other than downloads; for example, some citations can result from paper copies or from references by others.)

To the extent that download information could be gathered at different stages in the life cycle, downloads and citations at later stages might be predicted. Unfortunately, there is limited download information available to the general public.

Downloads as Knowledge and Behaviors4

The model discussed above in section 4 might also be extended to facilitate an alternative view of the constructs "download" and "citation," and might be extended to explain the findings. For example, if downloads and citations are viewed as human behaviors or activities, theories and models from sociology and psychology that explain human behaviors can be used to further analyze and explain the results. For example, Fishbein and Ajzen's [17] Theory of Reasoned Action, Davis et al.'s [10] Motivational Model, and Ajzen's [1] Theory of Planned Behavior might be used. Citations and downloads might also be viewed from the perspective of knowledge management, providing still another perspective (e.g., O'Leary [31]).

Additional Future Work4

Additional future work could add depth to any of the topics in this extensions section. For example, the decay phenomenon in the number of appearances on "top 25" download lists could be compared to other journals. Further, so-called "ahead of their time" papers might be studied independently of other, less downloaded papers to determine their characteristics. As another example, the research paper life cycle provides a lens to relate both citations and downloads as a paper goes through a natural aging. Such a life cycle analysis could change the way that we look at download and citation analysis, and expectations of research: where a paper is in its life cycle would affect both downloads and citations, and might affect them differently. Citation and download research might also focus on particular papers, authors or sets of papers, such as those in a special issue. "Top paper" analysis could be extended beyond the single-year analysis provided here for DSS to other years and other journals. Top author analysis could be extended from a classic citation analysis to include downloads.

In addition, researchers might focus on other sources of download information as they become available. Although this paper has centered on papers for which the number of downloads was available, future research might examine download sources other than Elsevier, or consider data such as simply appearing on top 25 or top 50 lists. Further, additional research might consider other citation sources, such as INSPEC, both for their relationship to other citation sources and to the number of downloads. Finally, extending the theoretical analysis to other theories of human behavior could fundamentally change citation research and soundly ground citations and downloads as more general behaviors.

7. Summary and Contributions

Increasingly, information about downloads and citations is being captured digitally and made available on the Internet. That information is then being used to evaluate faculty on the impact of their research. Previous research dealing with DSS citations has focused mostly on data gathered manually, and has not used citation generation engines, such as SSCI and SCOPUS. In addition, newer engines, such as Google Scholar, also provide citation information. Unfortunately, there is only limited knowledge about the extent to which these citation sources are correlated with each other and with download information. As a result, this paper investigates a number of relationships between downloads and citation sources regarding Decision Support Systems. In particular, this paper finds

• A strong positive statistically significant relationship between the number of citations and downloads of papers in Decision Support Systems.

• A strong positive statistically significant relationship between the number of appearances on so-called “top 25” download lists and the number of citations of those papers from Decision Support Systems.

• A strong positive statistically significant relationship between the different citation sources for most downloaded papers from DSS.

• Papers among the most downloaded papers receive a higher average number of citations than "normal" papers.

• Different citation sources report substantially different numbers of citations to DSS. In general, Google Scholar reports the most, followed in order by SCOPUS, SSCI and Elsevier, although there are some differences for particular papers.

• Different sources of citations appear to generate different distributions of “top” papers.

What are the policy implications of this research? With downloads, as with almost all evaluation criteria, there is a danger that the number of downloads can be manipulated by the individual researcher. As a result, it is important to use multiple criteria wherever possible. However, this paper generates results suggesting that it is reasonable to use "most frequent download" or "top 25" download presence as a surrogate for eventual citation and the corresponding impact of an article. This could be especially useful for tenure cases or three- and four-year contract continuation reviews, where an assistant professor's publications have not had time to be strongly cited because they have appeared only in the last few years. Downloads may also be helpful in annual performance reviews, where the number of downloads could serve as a proxy for interest in research that has been available for only a short time. As Smith's [37] research shows, we cannot assume that an article will have impact on its field simply because it appears in a "top" journal. For an assistant professor who is recognized via "top 25" download placements in Elsevier journals, we can be fairly confident that the publications will be highly visible in the near term and well cited in the future.

Footnotes

1.

2.

3.

4. The author would like to thank one of the anonymous referees for suggesting this discussion.

5.

References

[1] I. Ajzen, The Theory of Planned Behavior, Organizational Behavior and Human Decision Processes 50 (2) (1991).

[2] R. Benbunan-Fich, S. R. Hiltz and M. Turoff, A comparative content analysis of face-to-face vs. asynchronous group decision making, Decision Support Systems 34 (4) (2003).

[3] M. Beynon, S. Rasmequan and S. Russ, A new paradigm for computer-based decision support, Decision Support Systems 33 (2) (2002).

[4] G. D. Bhatt and J. Zaveri, The enabling role of decision support systems in organizational learning, Decision Support Systems 32 (3) (2002).

[5] J. M. Bloodgood and W. D. Salisbury, Understanding the influence of organizational change strategies on information technology and knowledge management strategies, Decision Support Systems 31 (1) (2001).

[6] N. Bolloju, M. Khalifa and E. Turban, Integrating knowledge management into enterprise environments for the next generation decision support, Decision Support Systems 33 (2) (2002).

[7] J. Chen and S. Lin, An interactive neural network-based approach for solving multiple criteria decision-making problems, Decision Support Systems 36 (2) (2003).

[8] J. Q. Chen and S. M. Lee, An exploratory cognitive DSS for strategic decision making, Decision Support Systems 36 (2) (2003).

[9] J. F. Courtney, Decision making and knowledge management in inquiring organizations: toward a new decision-making paradigm for DSS, Decision Support Systems 31 (1) (2001).

[10] F. D. Davis, R. P. Bagozzi and P. R. Warshaw, Extrinsic and Intrinsic Motivation to Use Computers in the Workplace, Journal of Applied Social Psychology 22 (14) (1992).

[11] R. Debreceny, M. Putterill, L-L. Tung and A. L. Gilbert, New tools for determination of the inhibitors of e-commerce, Decision Support Systems 34 (2) (2003).

[12] A. R. Dennis, T. A. Carte and G. G. Kelly, Breaking the rules: success and failure in groupware-supported business process reengineering, Decision Support Systems 36 (1) (2003).

[13] P. R. Devadoss, S. L. Pan and J. C. Huang, Structural analysis of e-government initiatives: a case study of SCO, Decision Support Systems 34 (3) (2003).

[14] A. M. Diamond, What is a Citation Worth?, Journal of Human Resources 21 (2) (1986).

[15] S. B. Eom and S. M. Lee, Leading U.S. universities and most influential contributors in decision support systems research (1971–1989): a citation analysis, Decision Support Systems 9 (3) (1993).

[16] S. B. Eom, Mapping the intellectual structure of research in decision support systems through author cocitation analysis (1971–1993), Decision Support Systems 16 (4) (1996).

[17] M. Fishbein and I. Ajzen, Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research (Addison-Wesley, Reading, MA, 1975).

[18] E. Garfield and A. Welljams-Dorof, Citation Data: Their Use as Quantitative Indicators for Science and Technology Evaluation and Policy Making, Science & Public Policy 19 (5) (1992).

[19] J. E. Hirsch, An Index to Quantify an Individual's Scientific Research Output, Proceedings of the National Academy of Sciences 102 (46) (2005).

[20] C. W. Holsapple, L. E. Johnson, H. Manakyan and J. Tanner, An empirical assessment and categorization of journals relevant to DSS research, Decision Support Systems 14 (4) (1995).

[21] G. Houben, K. Lenie and K. Vanhoof, A knowledge-based SWOT-analysis system as an instrument for strategic planning in small and medium sized enterprises, Decision Support Systems 26 (2) (1999).

[22] M. Howitt, ISI Spins a Web of Science, DATABASE 21 (2) (1998).

[23] W. W. Huang, K-K. Wei, R. T. Watson and B. C. Y. Tan, Supporting virtual team-building with a GSS: an empirical investigation, Decision Support Systems 34 (4) (2003) 359-367.

[24] M. Y. Kiang, A comparative assessment of classification methods, Decision Support Systems 35 (4) (2003).

[25] R. Kohli, F. Piontek, T. Ellington, T. VanOsdol, M. Shepard and G. Brazel, Managing customer relationships through E-business decision support applications: a case of hospital–physician collaboration, Decision Support Systems 32 (2) (2001).

[26] M. M. Kwan and P. Balasubramaniam, Knowledge Scope: Managing Knowledge in Context, Decision Support Systems 35 (4) (2003).

[27] L. I. Meho and K. Yang, Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar, Journal of the American Society for Information Science and Technology (2007), forthcoming.

[28] M. Martinsons, R. Davison and D. Tse, The balanced scorecard: a foundation for the strategic management of information systems, Decision Support Systems 25 (1) (1999).

[29] A. P. Massey, M. M. Montoya-Weiss and K. Holcom, Re-engineering the customer relationship: leveraging knowledge assets at IBM, Decision Support Systems 32 (2) (2001).

[30] H. R. Nemati, D. M. Steiger, L. S. Iyer and R. T. Herschel, Knowledge warehouse: an architectural integration of knowledge management, decision support, artificial intelligence and data warehousing, Decision Support Systems 33 (2) (2002).

[31] D. E. O'Leary, Knowledge Management Systems: Converting and Connecting, IEEE Intelligent Systems 13 (3) (1998).

[32] M. Özbayrak and R. Bell, A knowledge-based decision support system for the management of parts and tools in FMS, Decision Support Systems 35 (4) (2003).

[33] P. Poon and C. Wagner, Critical success factors revisited: success and failure cases of information systems for senior executives, Decision Support Systems 30 (4) (2001).

[34] B. Rubenstein-Montano, J. Liebowitz, J. Buchwalter, D. McCaw, B. Newman and K. Rebeck, A systems thinking framework for knowledge management, Decision Support Systems 31 (1) (2001).

[35] M. J. Shaw, C. Subramaniam, G. W. Tan and M. E. Welge, Knowledge management and data mining for marketing, Decision Support Systems 31 (1) (2001).

[36] J. P. Shim, M. Warkentin, J. F. Courtney, D. J. Power, R. Sharda and C. Carlsson, Past, present, and future of decision support technology, Decision Support Systems 33 (2) (2002).

[37] S. D. Smith, Is an Article in a Top Journal a Top Article?, Financial Management 33 (4) (2004).

[38] L. Vaughan and D. Shaw, Bibliographic and Web Citations: What's the Difference?, Journal of the American Society for Information Science and Technology 54 (14) (2003).

[39] L. A. West and T. J. Hess, Metadata as a knowledge management tool: supporting intelligent agent and end user access to spatial data, Decision Support Systems 32 (3) (2002).
