Journal Rankings: Comparing Reputation, Citation and Acceptance Rates

Abstract

Research productivity is important in school reputation as well as individual faculty evaluation. In order to evaluate research productivity, the quality of research is often measured by proxy through the number of journal articles and ratings of the journals in which they appear. Because of this, there is significant pressure on faculty to publish in the “top journals”. There are several metrics for evaluating and ranking journals, each with its own merits and limitations. Commonly used quantitative measures of research quality include citation analyses, acceptance rates, and whether or not a journal is peer reviewed. Alternatively, journals can be ranked qualitatively into stratified groups based on reputation. Reputation, in turn, may be correlated with perceived values of the quantitative measures, and is therefore more subjective.

The purpose of this research is to examine the extent of correlation between various measures of journal quality, in particular between quantitative and qualitative measures, and to compare results among business departments. For this sample, overall journal rank was correlated with citation rate but not with acceptance rate. However, quantitative measures were not consistent among academic departments, indicating that journal rank cannot reliably be used to make interdepartmental comparisons.

Keywords

Journal ranking, citation rate, impact factor, acceptance rate

Introduction

The mission of an academic institution is to extend the chain of knowledge from the infinite past into the infinite future. Teaching and publication project knowledge into the future. Published scholarly research adds to the body of knowledge and links it through citations to the research of those who have done this before. Published research is tangible evidence of scholarly activity.

Some academic institutions, in their role as employers, have developed incentive systems to motivate more and better research productivity. To link scholarly contributions with employment decisions such as hiring, promotion, and salary increments, it becomes necessary to quantify each individual’s contribution to the chain of knowledge. Journal rankings are sometimes used as a proxy to measure the quality of research, faculty, and academic institutions. Because academia is part of the service sector, the measures compared in this paper are alternative ways to obtain information about the quality of a service.

There are inevitable measurement difficulties in the endeavor to measure service quality. The measures used here include citation rates, acceptance rates, and lists ranked by perceptions of journal quality. This paper compares the “objective” measures of citation and acceptance rates with the “subjective” measure of rank using data from one business school’s system for measuring the quality of scholarly publications. One would expect strong correlations among different measures if they are close proxies for the true value of scholarship. While most earlier studies focus on a single academic discipline, this study uses data aggregated over all departments in a business school and attempts to make comparisons among departments.

Measures of Quality Scholarship

CITATION RATE

Several measures of publication quality are studied in the journal ranking literature, each with its own strengths and shortcomings. Mainly, the measures fall into three categories: those based on citations, those based on acceptance rates, and those based on rankings of journals by professionals with some relevant expertise. Each is a proxy measure for journal quality; each is used to develop ordinal rankings or groups of “top” journals. Journal rank may in turn be used as a proxy for article quality, or author quality, or institution quality. How well does each type of measure reflect value? Are they consistent?

Citation is an indication that one scholar’s work is valuable to someone else; more citations should indicate higher value. MacRoberts & MacRoberts (1989) review several problems of citation analysis. Different rates of citation are customary in different fields. Multiple authors of an article are not always credited. Citing may be biased, particularly when secondary sources are cited rather than the originators. Informal sources and tacit knowledge are usually not cited because they do not appear in papers and books. There is self-citation, and some citations are negative while others are positive. Not all literature is included in the Social Sciences Citation Index (SSCI), which is the main source of citation impact factors. New journals and journals in specialized areas have fewer citations than general purpose and older, better known journals. When analyzing citation patterns, different authors use different publication-year windows for impact factors, making comparisons difficult. The most important papers are often not cited because their results are taken for granted as part of the norm (Ritzberger, 2008); for example, users of the Nash equilibrium rarely cite Nash (1950). New developments often appear in new journals, which take a long time to get into citation indexes. In addition, the peer review system of journals is biased against authors who are not affiliated with top universities or who are employed at non-academic institutions (Blank, 1991). Van Raan (2005) discusses bibliometric problems of using a citation index as a proxy for research quality, enumerating various possible sources of error in the collection of the data used to compute such an index. Because citation is a complex process and practice differs among fields, he suggests that citation counts are improper for evaluating research even when aggregated at the level of large institutions. He advocates improving bibliometric indicators and using them as support for, rather than a replacement of, a peer-based evaluation procedure.

Citation rate can be computed for a journal as well as for an individual paper. Thus, citation rates are sometimes used in studies that compare journal quality and article quality.
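As a concrete illustration, the widely used two-year impact factor divides the citations a journal receives in a given year to its articles from the previous two years by the number of articles it published in those two years. The sketch below (in Python, with purely hypothetical counts rather than data from this study) makes the arithmetic explicit:

```python
# Minimal sketch of a two-year impact factor; the counts are hypothetical,
# not data from this study or from any published index.

def two_year_impact_factor(citations_to_prior_two_years: int,
                           articles_in_prior_two_years: int) -> float:
    """Citations received this year to articles published in the previous
    two years, divided by the number of citable articles in those years."""
    return citations_to_prior_two_years / articles_in_prior_two_years

# A journal whose 2003-2004 articles drew 480 citations in 2005,
# having published 240 citable articles in 2003-2004:
print(two_year_impact_factor(480, 240))  # 2.0
```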

ACCEPTANCE RATE

Acceptance is a vetting process: it indicates that blind reviewers found a scholar’s work to be a valuable contribution. A lower acceptance rate implies that the reviewing journal is more selective and that the accepted papers are more valuable than those rejected; a lower acceptance rate at the publishing journal places accepted manuscripts in a higher percentile of all manuscripts. However, acceptance rates depend on the journal’s review process, which is not uniform across journals or fields. Acceptance can be affected by the number of reviewers and by a reviewer’s familiarity with the subject of the reviewed manuscript. Specialized journals may receive fewer submissions than older, general purpose journals, which could bias their acceptance rates upward.
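The arithmetic behind the percentile claim is straightforward; the brief sketch below (hypothetical submission counts, not data from this study) makes it explicit:

```python
# Hypothetical illustration of an acceptance rate and the percentile argument;
# the submission and acceptance counts are invented.
accepted, submitted = 45, 600
acceptance_rate = accepted / submitted
print(f"acceptance rate: {acceptance_rate:.1%}")
# 7.5% -- under the selectivity argument, an accepted manuscript sits in
# roughly the top 7.5% of all manuscripts submitted to that journal.
```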

Acceptance rate is a measure of journal quality. It cannot be computed for an individual paper.

PERCEPTION-BASED RANKINGS

Rankings of journals that rely on perceptions are open to criticism because of their subjectivity, but they are popular in evaluations because they are convenient to use. Since research and publication are dynamic processes, quality rankings of journals might change over time regardless of what measure is used. Subjective ranking is the least data intensive method of measurement and thus the easiest to update; on the other hand, the paucity of data behind the ranking decisions invites the subjectivity criticism. Ranking schemes are self-perpetuating (Dibb & Simkin, 2005) because academics direct their manuscripts to highly ranked journals and cite those journals in their manuscripts, which contributes to high rankings of those journals in the future.

Wang, Xia, Hollister, & Wang (2009) discuss problems of subjectivity in a different application of ranking, for international educational systems. They used a multicriteria model with quantitative data; even so, some subjectivity was incorporated into the process of developing scaled values from these data.

Ranking is a measure of journal quality. When part of an incentive system, journal quality is being used as a proxy for article quality, and by extension, author quality.

JOURNAL RANKING IN THE DISCIPLINES

The goal of some journal ranking studies is to create a journal list for a specific discipline. Such lists have been compiled for all the basic business disciplines as well as some specialty areas. Some scholars follow up by comparing their list to another metric. In accounting, Chow, Haddad, Singh, & Wu (2008) ranked journals, separating managerial and financial accounting, and compared perception-based and citation-based rankings. Howard & Nikolai (1983) ranked accounting journals for the auditing, financial, managerial, and tax specialties based on questionnaire responses; rankings differed across the areas. Below the top two journals, they found that the rankings changed for respondents from different types of academic institutions and for assistant vs. full professors.

In economics, Liebowitz & Palmer (1984) developed a list of journals ranked by citation rates.

In finance, Borde, Cheney, & Madura (1999) ranked journals based on perceptions of finance department chairpersons.

In management, Clark & Wright (2007) compared citation rankings in 1995 and 2005; they found that which journals are considered high quality changes over time. Coe & Weinstock (1984) developed a list of management journals based on perceptions of management department chairpersons. Sharplin & Mabry (1985) created a ranking of management research journals based on the number of citations in Administrative Science Quarterly and The Academy of Management Journal. They discussed the costs and benefits of ranking management journals, concluding that the costs outweigh the benefits. Stahl, Leap, & Wei (1988) compared the Coe & Weinstock list to the Sharplin & Mabry list, using their results to evaluate the productivity of academic institutions. Barman, Tersine, & Buckley (1991) developed a list of production and operations management journals.

In marketing, Luke & Doke (1987) surveyed marketing faculty in order to rank journals based on both popularity/familiarity and importance/prestige. The two lists were similar but did not always overlap. That study also mixed some general journals in with marketing-specific journals. Dibb & Simkin (2005) used four ranking schemes to benchmark marketing journals.

For business ethics, Albrecht, Thompson, Hoopes, & Rodrigo (2010) use a survey of scholars, Serenko & Bontis (2009) use citation rates, and Wicks & Derry (1996) use a survey of business ethics researchers within the Society for Business Ethics to develop rankings of journals.

COMPARING MEASURES FOR JOURNAL RANKING

If subjective rankings, citation rates, and acceptance rates are alternative measures of the true value of scholarship, one would expect a high degree of correlation between them. How well do perceived rank, acceptance rate, and citation rate correlate? Researchers have found general consistency within a given field, but not enough to use journal rank to reliably judge an individual article or to make comparisons between fields or between institutions.
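A common way to quantify such agreement is a rank correlation between two measures. The sketch below (Python, with invented values rather than data from any of the studies cited here) shows how a Spearman correlation between citation rates and acceptance rates might be computed for a set of journals:

```python
# Hypothetical comparison of two journal-quality measures via rank correlation;
# the values are invented, not taken from this study or the cited literature.
from scipy.stats import spearmanr

citation_rate   = [3.1, 2.4, 1.8, 1.2, 0.9, 0.5]        # impact factors for six journals
acceptance_rate = [0.08, 0.12, 0.15, 0.25, 0.30, 0.40]  # proportion of submissions accepted

rho, p_value = spearmanr(citation_rate, acceptance_rate)
print(rho, p_value)
# A strongly negative rho (more citations, lower acceptance rate) would
# suggest the two measures are ranking the journals consistently.
```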

Chow, Haddad, Singh, & Wu (2008) compared perception-based rankings for accounting journals from earlier studies with citation-based rankings that they developed using Google Scholar. They compared rank based on mean and median citations per article with the reference set of ranked journals. In general, the group of more highly ranked journals by perception also had higher citation counts. However, there were numerous reversals of order for a given journal in the group. There was considerable spread within the group of highly ranked journals; a proportion of articles in the top journals was rarely or never cited, and only a small number of articles in these journals had exceptionally high citation rates. Also, many highly cited articles were not in the top journals. So while there was general support for rankings, it was not sufficient to justify using journal rank as a proxy for individual publication quality.

Chow, Haddad, Singh, & Wu (2007) compared citation rates of articles with citation rates of accounting journals. While the top three accounting journals had higher average citation rates than the other six accounting journals examined, some articles in top journals were not highly cited and some articles in non-top journals were very highly cited. In their sample, 74 percent of the articles in the top-three journals attained top-article status, compared to 41 percent of those in the other six journals. Depending on the journal, between 2.2 percent and 63.1 percent of the articles in journals not top-ranked were as highly cited as those in the top-ranked accounting journals. They concluded that journal rank might be a meaningful way to evaluate the aggregate publication value of schools or departments, but there would be significant errors in evaluating individuals by the proxy of journal rank rather than by evaluating articles on individual merit.

Ellis & Durden (1991) found rankings of economics journals related to citation frequency as well as to the past reputation of the journal as reflected in earlier rankings. However, the results also suggest a bias in the rankings toward the more theoretical or general journals and also toward the older, more established journals.

Smith (2004) used citation rates to measure journal quality in finance and also to measure article quality. He found high rates of both Type I errors (“top” articles rejected by top journals) and Type II errors (“non-top” articles accepted by top journals), concluding that journal rank is a poor proxy for article quality. While, in general, articles with more citations are found in journals with higher impact factors, the high error rates give journal rank poor predictive ability for any specific journal or article.

Singh, Haddad, & Chow (2007) studied management journals and reached similar conclusions: using journal ranking as a proxy for article quality can lead to substantial misclassification. They also found some change in the ranks of the top journals over time.

Haensly, Hodges, & Davenport (2009) related rankings to acceptance rates of journals in economics and finance. They found that lower acceptance rates were associated with higher citation count, citation impact factors, and survey-based rankings. However, rank for any given journal may be substantially different depending on the ranking method applied.

A number of studies compile sets or ranks of high quality journals, where journal rank or inclusion in the “top” list is a proxy for quality. Coe & Weinstock (1983) studied one such list and compared it to acceptance rates. They found that journal ranking was correlated with perceived acceptance rates, but not with actual acceptance rates. Coe & Weinstock (1984) found subjective rankings correlated with perceived acceptance rates, but perceived acceptance rates diverged from actual acceptance rates and had very large ranges.

Starbuck (2005) studied the review process in a group of economics, psychology, sociology, and management journals sorted into quality quintiles. He concluded that while higher prestige journals publish more high value articles, there is considerable randomness in editorial selection resulting in some low value articles in high prestige journals and some high value articles in low prestige journals. Therefore, evaluating article value primarily by the journal it is in will lead to some errors.

COMPARING DISCIPLINES

Comparisons of journal rankings among various academic areas are especially fraught with inconsistencies. Swanson (2004) compared the number of articles in top journals for the disciplines of accounting, finance, management, and marketing. He found that the number of articles and the proportion of faculty published in “major” journals differed between disciplines, and also changed over time. This suggests that quality norms evolve independently in each discipline, not surprising because referees rarely publish or review outside their primary field. Particularly interesting is the finding that the threshold for publication in the top n journals was different in different disciplines, so that the top three in one discipline might be comparable to the top two in another. Herron & Hall (2004) identified forty high-quality journals for each of nine specialty areas. Based on faculty perceptions, they developed a top 20 list overall, and a top 20 list for each of seven of the specialty areas. There was little similarity between the top journals of most specialty areas and the overall top 20 list. Again, this indicates that comparison between areas is difficult.

Even specialties within one discipline show significant differences. Weber & Stevenson (1981) divided academic accountants into five different groups: auditing, cost, financial, systems, and tax accounting. Their results suggest that specialties can confound the rankings of journals in the aggregate area. They conclude that ranking accounting departments by publication rates in top journals is not feasible because of differences in perceptions among different specialty areas. Chandy, Ganesh, & Henderson (1991) found that accounting journals are evaluated more highly by those in the field than by those outside it, leading to potential ambiguity in journal rankings.

Ritzberger (2008) ranked journals in economics and related business fields. He suggests that where a paper gets published is a very imperfect measure of its quality. He compared several rankings based on subjective opinion and found a number of inconsistencies. Albers (2009) compared journal rankings for economics and general business using journal lists from earlier researchers (including Ritzberger, 2008). He points out that any research using citations depends on the list of journals used. He also found inconsistencies in the ranking of economics journals compared to other business subfields, attributing them in part to the problems inherent in trying to produce a master list of ranked journals across fields.

JOURNAL RANK AS A PROXY MEASURE

Stahl, Leap, & Wei (1988) examined institution rank using publication in top-ranked journals as a proxy measure. They identified the academic affiliations of the authors of each article in 25 top journals, then ranked schools by the number of articles in the listed journals authored by someone from that school (jointly authored articles received fractional points). Rankings of the top 50 schools differed significantly across the four different lists of top journals that they used. They also found a strong in-house editor effect: institutions housing the permanent editorial offices of a journal had a large share of the articles in that journal. They compared their rankings to a Wall Street Journal survey asking executives to rank the top 21 MBA programs. For each of the top-journal publication lists they studied, the correlations were nonsignificant. Thus, they concluded that faculty research productivity and the reputation of MBA programs are independent.
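A fractional-counting scheme of this kind is easy to state precisely. The sketch below (Python; the articles and school names are hypothetical, not the Stahl, Leap, & Wei data) credits each article with one point split equally among its authors' institutions:

```python
# Hypothetical sketch of fractional counting of articles by institution,
# in the spirit of Stahl, Leap, & Wei (1988); the data are invented.
from collections import defaultdict

# Each article is represented by the list of its authors' affiliations.
articles = [
    ["School A"],
    ["School A", "School B"],
    ["School B", "School C", "School C"],
]

scores = defaultdict(float)
for affiliations in articles:
    share = 1.0 / len(affiliations)       # one point per article, split by author
    for school in affiliations:
        scores[school] += share

# Rank schools by their fractional article counts.
for school, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(school, round(score, 3))
# School A 1.5, School B 0.833, School C 0.667
```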

Adler & Harzing (2009) offer a wide-ranging critique of the present system of ranking journals and of the incentive schemes and derivative institution rankings that follow from it. They discuss the difference between productivity (counting publications) and impact (citation count), neither of which is a true measure of quality. They advocate a moratorium on rankings-based evaluations while better metrics and institutional systems are developed. “Rather than asking colleagues where something was published…ask how their research has made a difference” (Adler & Harzing, 2009, p. 91).

Most of these rankings, however, only cover one discipline. Even if these ranked lists are internally consistent they may not hold up for comparison among disciplines. That can be a problem if relative position in ranking is being used to allocate resources among departments or faculty as is done for budget or promotion decisions. The present study, including all six departments in a school of business, contributes to understanding the pitfalls of comparison among disciplines. It highlights how different proxy measures compare.

Method

Data for the reputation category were collected from a regional northeast U.S. business school where journal rankings are used to evaluate the scholarly productivity of faculty. Among other factors, research contributions are evaluated by the number of articles and the quality of the journals in which they have been published. Journal quality is evaluated by using a list of A tier journals, B tier journals, and others. The lists for each department were compiled by department chairs and reflect their perception of the journals’ reputation in that field. The lists used in this study were most recently updated in 2008. Placement in the A tier, as opposed to the B tier, serves as the qualitative measure for the present research. Citation and acceptance rates, the quantitative measures, were taken from published sources. All the journals in these lists are peer reviewed; the included journals are shown in Table 2. Although the lists studied may be unique to this school, the process is similar to that used by Howard & Nikolai (1983), Borde, Cheney, & Madura (1999), Coe & Weinstock (1984), Luke & Doke (1987), Albrecht, Thompson, Hoopes, & Rodrigo (2010), and Wicks & Derry (1996).

The alternative measurement schemes studied here are based on quantitative measures, either acceptance rates or citation rates. Each of the three measurement schemes has some drawbacks as noted earlier. Acceptance rates were taken from Cabell’s Directory (2008). Citation rates are 2005 impact factors (Journal-).

This paper compares the subjective classification of journals into A tier and B tier lists based on reputation with their respective acceptance rates and citation rates. Since the dependent variable, A tier or B tier, is nominal, discrete choice regression was used, modeling tier as 1 or 0. See Hosmer & Lemeshow (2000) for a reference on discrete choice regression. Conclusions are drawn about the consistency of the subjective tier listings with acceptance and citation rates, as well as about consistency among departments.
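As an illustration of the kind of discrete choice model described here, the sketch below fits a probit regression of tier on acceptance and citation rates using Python's statsmodels. This is a hedged, minimal sketch: the journal data are invented, statsmodels is assumed as a stand-in for whatever statistical package was actually used, and the output will not reproduce Table I.

```python
# Minimal probit (discrete choice) sketch: tier (1 = A, 0 = B) regressed on
# acceptance rate and citation rate. The data below are hypothetical and do
# not reproduce the study's dataset or the estimates in Table I.
import pandas as pd
import statsmodels.api as sm

journals = pd.DataFrame({
    "tier":       [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "acceptance": [8, 22, 12, 18, 15, 14, 30, 20, 35, 25],              # percent accepted
    "citation":   [3.2, 1.6, 2.4, 1.1, 2.0, 1.8, 0.7, 1.3, 0.5, 1.0],   # impact factor
})

X = sm.add_constant(journals[["acceptance", "citation"]])
model = sm.Probit(journals["tier"], X).fit(disp=False)
print(model.params)    # signs indicate direction of association with A tier status
print(model.pvalues)   # Wald tests analogous to those reported in Table I
```

With only ten hypothetical observations the estimates are of course unstable; the sketch shows only the form of the model.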

Results

Average citation rates by department separated by A tier or B tier list for journals in the study are shown in Figure 1. Within any given department, the average citation rates are consistently higher for journals in the A tier list than for journals in the B tier list. Regression results for the most significant model with both acceptance and citation rates are shown in Table I. Probit regression shows overall correlation of A tier or B tier list with citation rate to be significant at the .1% level (Chi-Square=10.76, Pr>ChiSq=.001). This supports the findings of Chow, Haddad, Singh, & Wu (2008) and Ellis & Durden (1991).

The A tier journals for the different departments, however, do not appear (Figure 1) to have consistent citation rates. This supports MacRoberts & MacRoberts’ (1989) contention that different rates of citation are customary in different fields, making it difficult to compare journals across fields.

Average acceptance rates by department, separated by A tier or B tier list, for the journals in the study are shown in Figure 2. Intuitively, A tier journals would be expected to have lower acceptance rates than B tier journals. This was true for three departments in this study; for the other two departments, the average acceptance rates for the A tier journals were higher than for the B tier journals. The probit regression shows overall acceptance rate was not significant (Chi-Square=1.25, Pr>ChiSq=.2633). Here acceptance rates appear not to be a good predictor of journal ranking either within or between departments. This underscores the difficulty of making across-department comparisons, as some researchers have, in contrast, found acceptance rates to be correlated with journal ranking; see, for example, Coe & Weinstock (1983) and Haensly, Hodges, & Davenport (2009).

An obvious area for further research would be to extend this study to a larger sample including other lists of perceived journal quality. An important follow-up is to test whether each individual journal in a department’s A tier list is consistent with the other “top” quality journals in that list. A related area of inquiry would be to compare the scholarship productivity of institutions that use different methods for scoring publications.

Conclusion

These results suggest that subjectively compiled journal rankings should be used with caution. In this study the A tier and B tier journal rankings were correlated with citation rates but not with acceptance rates. A practical implication is that the subjective journal rankings are imperfect proxies for journal quality, since in theory they should have been correlated with both. Any decisions based on these rankings are, by implication, also imperfect.

More importantly, quantitative measures were not consistent among rank lists of different academic departments. Even if it were reasonable to differentiate between publications in A tier as opposed to B tier journals within a given department, it would be difficult to make similar comparisons among departments. A practical implication is that an incentive system that uses these lists as a mechanism for allocating resources among departments might not lead to consistently rewarding higher quality research.

[insert Figure1 here]

Fig. 1. Average citation rates of A tier and B tier journals, by department.

[insert Figure2 here]

Fig. 2. Average acceptance rates of A tier and B tier journals, by department.

Table I

|Type III Analysis of Effects |
|Effect |DF |Wald Chi-Square |Pr > ChiSq |
|Acceptance |1 |1.2515 |0.2633 |
|Citation |1 |10.758 |0.001 |

|Analysis of Parameter Estimates |
|Parameter |DF |Estimate |Standard Error |95% Confidence Limits |Chi-Square |Pr > ChiSq |
|Intercept |1 |2.4202 |0.9482 |0.5618, 4.2786 |6.52 |0.0107 |
|Acceptance |1 |-0.0438 |0.0391 |-0.1205, 0.0329 |1.25 |0.2633 |
|Citation |1 |-0.6962 |0.2123 |-1.1123, -0.2802 |10.76 |0.001 |

Acknowledgements

The author wishes to acknowledge Stela Stefanova and Victor Glass for their help with statistical analysis.

References

Adler, Nancy J., & Harzing, Anne-Wil. (2009). When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings. Academy of Management Learning & Education, 8(1), 72–95.

Albers, Sönke. (2009). Misleading Rankings of Research in Business. German Economic Review, 10(3), 352-363.

Albrecht, Chad, Thompson, Jeffery A., Hoopes, Jeffrey L., & Rodrigo, Pablo. (2010). Business Ethics Journal Rankings as Perceived by Business Ethics Scholars. Journal of Business Ethics, 95(2), 227–237.

Alexander, Jennifer K., Scherer, Robert F., & Lecoutre, Marc. (2007). A Global Comparison of Business Journal Ranking Systems. Journal of Education for Business, 82(6), 321-327.

Blank, Rebecca M. (1991). The Effects of Double-Blind versus Single-Blind Reviewing: Experimental Evidence from the American Economic Review. The American Economic Review, 81(5), 1041-1067.

Borde, Stephen F., Cheney, John M., & Madura, Jeff. (1999). A Note on Perceptions of Finance Journal Quality. Review of Quantitative Finance and Accounting, 12(1), 89-97.

Barman, Samir, Tersine, Richard J., & Buckley, M. Ronald. (1991). An Empirical Assessment of The Perceived Relevance and Quality of POM-Related Journals by Academicians. Journal of Operations Management, 10(2), 194-212.

Cabell, David W.E. & English, Deborah L. (Eds.). (2008). Cabell's Directory of Publishing Opportunities. Beaumont, Texas: Cabell Pub. Co.

Chandy, P. R., Ganesh, G. K., & Henderson, G. V. (1991). Awareness and Evaluation of Selected Accounting Journals Inside and Outside the Discipline: An Empirical Study. Akron Business and Economic Review, 22(2), 214.

Chow, Chee W., Haddad, Kamal, Singh, Gangaram, & Wu, Anne. (2007). On Using Journal Rank to Proxy for an Article’s Contribution or Value. Issues in Accounting Education, 22(3), 411–427.

Chow, Chee W., Haddad, Kamal, Singh, Gangaram, & Wu, Anne. (2008). A Citation Analysis of the Top-Ranked Management and Financial Accounting Journals. International Journal of Managerial and Financial Accounting, 1(1), 29-44.

Clark, Timothy & Wright, Mike. (2007). Reviewing Journal Rankings and Revisiting Peer Reviews: Editorial Perspectives. Journal of Management Studies, 44(4) 612-621.

Coe, Robert K. & Weinstock, Irwin. (1983). Evaluating the Finance Journals: The Department Chairperson’s Perspective. The Journal of Financial Research, 6(4), 345-349.

Coe, Robert & Weinstock, Irwin. (1984). Evaluating the Management Journals: A Second Look. The Academy of Management Journal, 27(3), 660-666.

Dibb, Sally & Simkin, Lyndon. (2005). Benchmarking the RAE Returns of Marketing Professors' Journal Publications. Journal of Marketing Management, 21(7), 879-896.

Ellis, Larry V. & Durden, Garey C. (1991). Why Economists Rank Their Journals the Way They Do. Journal of Economics and Business. 43(3), 265-270.

Haensly, Paul J., Hodges, Paul E., & Davenport, Shirley A. (2009). Acceptance Rates and Journal Quality: An Analysis of Journals in Economics and Finance. Journal of Business & Finance Librarianship, 14(1), 2-31.

Herron, Terri L., & Hall, Thomas W. (2004). Faculty Perceptions of Journals: Quality and Publishing Feasibility. Journal of Accounting Education, 22(3), 175–210.

Hosmer, David W. & Lemeshow, Stanley. (2000). Applied Logistic Regression (2nd ed.). New York: John Wiley and Sons, Inc.

Howard, T. P. & Nikolai, L. A. (1983). Attitude Measurement and Perceptions of Accounting Faculty Publication Outlets. The Accounting Review, 58(4), 765-776.

Journal- (2005). Retrieved February 2, 2010, from

Liebowitz, S. J. & Palmer, J. P. (1984). Assessing the Relative Impacts of Economics Journals. Journal of Economic Literature, 22(1), 77-88.

Luke, Robert H., & Doke, E. Reed. (1987). Marketing Journal Hierarchies: Faculty Perceptions, 1986-87. Journal of the Academy of Marketing Science. 15(1), 74.

MacRoberts, Michael H. & MacRoberts, Barbara R. (1989). Problems of Citation Analysis: A Critical Review. Journal of the American Society for Information Science. 40 (5), 342-349.

Nash, John F. (1950). Equilibrium Points in N-Person Games. Proceedings of the National Academy of Sciences. 36, 48-49.

Nobes, Christopher W. (1985). International Variations in Perceptions of Accounting Journals. The Accounting Review. 60(4), 702.

Ritzberger, K. (2008). A Ranking of Journals in Economics and Related Fields. German Economic Review, 9(4), 402–430.

Serenko, Alexander & Bontis, Nick. (2009). A Citation-Based Ranking of the Business Ethics Scholarly Journals. International Journal of Business Governance and Ethics, 4(4), 390-399.

Sharplin, Arthur D., & Mabry, Rodney H. (1985). The Relative Importance of Journals Used in Management Research: An Alternative Ranking. Human Relations, 38(2), 139.

Shugan, Steven M. (2003). Journal Rankings: Save the Outlets for Your Research. Marketing Science, 22(4), 437-441.

Singh, Gangaram, Haddad, Kamal M., & Chow, Chee W. (2007). Are Articles in" Top" Management Journals Necessarily of Higher Quality? Journal of Management Inquiry, 16(4), 319-331.

Smith, Stanley D. (2004). Is an Article in a Top Journal a Top Article? Financial Management, 33(4), 133-149.

Stahl, Michael J., Leap, Terry L. & Wei, Zhu Z. (1988). Publication in Leading Management Journals as a Measure of Institutional Research Productivity. The Academy of Management Journal, 31(3), 707-720.

Starbuck, William H. (2005). How Much Better Are the Most-Prestigious Journals? The Statistics of Academic Publication. Organization Science, 16(2), 180-200.

Swanson, Edward P. (2004). Publishing in the Majors: A Comparison of Accounting, Finance, Management and Marketing. Contemporary Accounting Research, 21(1), 223-255.

Truex, Duane, Cuellar, Michael, & Takeda, Hirotoshi. (2009). Assessing Scholarly Influence: Using the Hirsch Indices to Reframe the Discourse. Journal of the Association for Information Systems, 10(7), 560-594.

Van Fleet, David D., McWilliams, Abagail, & Siegel, Donald S. (2000). A Theoretical and Empirical Analysis of Journal Rankings: The Case of Formal Lists. Journal of Management, 26(5), 839–861.

Van Raan, Anthony F. J. (2005). Fatal Attraction: Conceptual and Methodological Problems in the Ranking of Universities by Bibliometric Methods. Scientometrics, 62(1), 133-143.

Weber, Richard P. & Stevenson, W. C. (1981). Evaluations of Accounting Journal and Department Quality. The Accounting Review, 56(3), 596-612.

Wang, J., Xia, J., Hollister, K., & Wang, Y. (2009). Comparative Analysis of International Education Systems. International Journal of Information Systems in the Service Sector, 1(1), 1-14.

Wicks, Andrew C. & Derry, Robbin. (1996). An Evaluation of Journal Quality: The Perspective of Business Ethics Researchers. Business Ethics Quarterly, 6(3), 359-371.

Table 2. A tier and B tier journal lists by department

|Department |A Tier |B Tier |
|Accounting |Accounting Review |Accounting, Organization and Society |
| |Journal of Accounting Research° |Accounting Horizons* |
| |Journal of Accounting and Economics |Auditing: A Journal of Practice and Theory |
| |Contemporary Accounting Research |Journal of Accounting, Auditing and Finance* |
| | |International Journal of Accounting* |
| | |Journal of Management Accounting Research* |
| | |Review of Accounting Studies* |
|Economics |American Economic Review |Journal of International Economics |
| |Econometrica |Journal of Law and Economics |
| |Journal of Political Economy |Journal of Econometrics° |
| |Journal of Economic Theory |Journal of Monetary Economics |
| |Quarterly Journal of Economics |Review of Economics and Statistics |
| |Review of Economic Studies° | |
|Finance |Journal of Finance |Journal of Financial Research* |
| |Journal of Financial Economics |Financial Management |
| |Review of Financial Studies |Journal of Financial Intermediation |
| |Journal of Financial and Quantitative Analysis |Journal of Banking and Finance |
| | |Journal of Futures Markets |
|Marketing |Journal of Marketing |Marketing Science° |
| |Journal of Marketing Research |Journal of Retailing |
| |Journal of the Academy of Marketing Science |Journal of Business Research |
| |Journal of Consumer Research |Journal of Advertising |
| | |Journal of Advertising Research |
| | |Journal of International Marketing |
| | |Psychology & Marketing |
|Management and Management Science |Administrative Sciences Quarterly° |Journal of International Business Studies |
| |Academy of Management Review |Organizational Science |
| |Academy of Management Journal |Organizational Behavior and Human Decision Processes |
| |Journal of Applied Psychology |Personnel Psychology |
| |Management Science |Strategic Management Journal° |
| |Operations Research | |
| |Decision Sciences | |
| |European Journal of Operations Research | |
| |Journal of Operations Management | |
|Law and Taxation |Yale University Law Review*° | |
| |Stanford University Law Review° | |
| |Harvard University Law Review° | |
| |Columbia University Law Review° | |
| |New York University Law Review° | |
| |University of Chicago Law Review° | |
| |University of Michigan-Ann Arbor Law Review° | |
| |University of Virginia Law Review° | |
| |University of California-Berkeley Law Review*° | |
| |Duke University Law Review*° | |
| |National Tax Journal° | |
| |Tax Law Review*° | |
| |The Tax Lawyer*° | |

* missing data for citation rate

° missing data for acceptance rate

