
Automatic summaries of earnings releases: Attributes and effects on investors' judgments

Eddy Cardinaels
Department of Accountancy, Finance and Insurance, KU Leuven
Department of Accountancy, Tilburg University
eddy.cardinaels@kuleuven.be

Stephan Hollander
Department of Accountancy, Tilburg University
s.hollander@tilburguniversity.edu

Brian J. White
Department of Accounting, The University of Texas at Austin
brian.white@mccombs.utexas.edu

September 30, 2018

We thank Sarah Zechman (discussant), David Veenman, and workshop participants at the London Business School Accounting Symposium, the Financial Accounting and Reporting Section (FARS) Midyear Meeting, Radboud University Nijmegen, the University of Texas at Austin, the University of Melbourne, the University of Pittsburgh, the University of Amsterdam, the University of New South Wales, Jinan University, Tilburg University, NHH Norwegian School of Economics, and Texas A&M University for helpful comments. We also thank Arnaud Nicolas for generating expert summaries of earnings releases, and Ties de Kok and Shannon Garavaglia for their capable research assistance. This paper won the 2018 FARS Midyear Meeting Best Paper Award.

Automatic summaries of earnings releases: Attributes and effects on investors' judgments

ABSTRACT

Firms often include summaries with earnings releases. However, manager-generated summaries may be prone to strategic tone and content management compared to the disclosures they summarize. In contrast, computer algorithms can summarize large amounts of text without human intervention, and may provide useful summary information with less bias. We use multiple methods to provide evidence regarding the characteristics of automatic, algorithm-based summaries of earnings releases compared to summaries provided by managers. Results suggest that automatic summaries are generally less positively biased than management summaries, often without sacrificing the extent to which the summaries capture relevant information. We then conduct an experiment to test whether these differing attributes of automatic and management summaries affect individual investors' judgments. We find that investors who receive an earnings release accompanied by an automatic summary arrive at more conservative (i.e., lower) valuation judgments, and are more confident in those judgments. Overall, our results suggest that summaries affect investors' judgments, and that such effects can differ for management and automatic summaries.

Keywords: Management summary; automatic summary; corporate disclosure; individual investors; investor judgment

Data availability: Contact the authors

1. Introduction

Public companies disclose more information than ever before (e.g., KPMG 2011; Loughran and McDonald 2014; Dyer et al. 2017). Given the large volume of disclosure and evidence that individual investors are boundedly rational (Hirshleifer and Teoh 2003; Elliott et al. 2015), firms often provide summaries of key disclosures, including earnings releases. However, rather than presenting a balanced picture of the information disclosed in the underlying document, managers may engage in tone management and/or selectively highlight information that is favorable to the company (Henry 2008; Guillamon-Saorin et al. 2012; Huang et al. 2013, 2014). Against this backdrop, there may be a role for automatic, algorithm-based summarization of earnings releases. Summarization algorithms rely on statistical heuristics for sentence extraction to summarize large amounts of text without human intervention. As such, automatic summaries have the potential to reduce both information overload and bias. In this study, we investigate two questions. First, how do management and automatic summaries of earnings releases compare on a range of attributes, including bias and the extent to which they capture important information in the earnings release? Second, how do automatic and management summaries of earnings releases affect individual investors' judgments?

Investigating these issues is important for several reasons. First, because disclosures have become lengthier (Dyer et al. 2017), regulators and standard setters are starting to explore ways of simplifying financial reports (SEC 2013; FASB 2015), including summarization (SEC 2016). These efforts have led to calls for research on summarization to aid investors and others (Barth 2015). Thus, investigating management and automatic summaries has the potential to provide new insights to financial reporting regulators and accounting standard setters. Second, manager-generated summaries are already part of the financial reporting landscape. Our review of S&P 100 firms' disclosure practices indicates that 81% provided summaries in their fourth quarter 2015 earnings releases. However, there is scant evidence on the attributes of these summaries or whether this practice affects investors' judgments.

We use multiple methods to address our research questions. For our first research question, we start by selecting a summarization algorithm, of which several are widely available (Nenkova and McKeown 2011). To identify an algorithm that is well-suited to summarizing earnings releases, we rely on user evaluation, and ask participants from Amazon's Mechanical Turk (MTurk) platform to evaluate summaries of two earnings releases: one summary provided by management and others generated by computer algorithms. To validate these evaluations, we use a technique from the field of information retrieval, known as Recall-Oriented Understudy for Gisting Evaluation (ROUGE), in which summaries are evaluated against a summary prepared by an experienced Investor Relations Officer (IRO). Based on these analyses, we select a summarization algorithm known as LexRank and compare the tone of management and LexRank summaries for a larger sample of hand-collected earnings releases.1 To address our second research question, we conduct an experiment to investigate the effect of automatic and management summaries on investors' valuation and other investment-related judgments.
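The paper does not report which ROUGE variant underlies this validation, so the following is only a rough sketch of the idea: ROUGE-1 recall measures the share of the reference summary's words that a candidate summary recovers. Here the reference stands in for the IRO-prepared summary; the example texts, tokenizer, and function names are hypothetical.

from collections import Counter
import re

def tokens(text):
    # Lowercase word tokens; a simple regex tokenizer for illustration.
    return re.findall(r"[a-z0-9']+", text.lower())

def rouge1_recall(candidate, reference):
    # Fraction of reference unigrams also appearing in the candidate,
    # with overlap counts clipped at the candidate's own counts.
    cand_counts = Counter(tokens(candidate))
    ref_counts = Counter(tokens(reference))
    overlap = sum(min(count, cand_counts[word]) for word, count in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

# Hypothetical usage: higher recall means the candidate captures more of
# what the reference (IRO) summary treats as important.
iro_summary = "Revenue grew 4 percent while operating margin declined."
mgmt_summary = "Another record year: revenue grew 4 percent."
print(round(rouge1_recall(mgmt_summary, iro_summary), 2))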

In our user evaluation test, MTurk participants compare automatic and management summaries to the underlying text of two earnings releases on several dimensions. Results suggest that automatic summaries reflect the underlying text of the earnings release with less bias (i.e., present a more balanced picture) than management summaries. Participants also rate automatic summaries as no different from management summaries in capturing the important information in the earnings releases, and participants are equally likely to rely on automatic and management summaries. Comparing the various summarization algorithms, we find that LexRank outperforms the other algorithms by producing summaries that are consistently rated as equal or superior to management summaries.2 This finding is supported by the ROUGE evaluation: compared to the management summary and other automatic summaries, LexRank better captures elements of the earnings release that the IRO deems important.

1 LexRank is short for Lexical PageRank; the algorithm adapts the PageRank method developed by Larry Page, one of the founders of Google, to rank sentences within a document.
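To illustrate the idea noted in footnote 1, the sketch below implements a bare-bones extractive summarizer in the LexRank spirit: sentences are scored by a PageRank-style power iteration over a thresholded sentence-similarity graph, and the top-ranked sentences are returned in document order. This is not the authors' implementation (their algorithm choices are detailed in the Online Appendix and Section 2.1); the sentence splitter, TF-IDF weighting, similarity threshold, and damping factor are illustrative assumptions.

import math
import re
from collections import Counter

import numpy as np

def split_sentences(text):
    # Naive sentence splitter, for illustration only.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tf_idf_matrix(sentences):
    # Bag-of-words TF-IDF vectors; vocabulary built from the input itself.
    docs = [Counter(re.findall(r"[a-z0-9']+", s.lower())) for s in sentences]
    vocab = sorted({w for d in docs for w in d})
    n = len(docs)
    idf = {w: math.log(n / sum(1 for d in docs if w in d)) for w in vocab}
    return np.array([[d[w] * idf[w] for w in vocab] for d in docs], dtype=float)

def lexrank_summary(text, k=3, threshold=0.1, damping=0.85, iterations=100):
    sentences = split_sentences(text)
    if len(sentences) <= k:
        return sentences
    vectors = tf_idf_matrix(sentences)
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    similarity = (vectors / norms) @ (vectors / norms).T   # cosine similarities
    adjacency = (similarity >= threshold).astype(float)    # thresholded sentence graph
    row_sums = adjacency.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    transition = adjacency / row_sums                      # row-stochastic matrix
    n = len(sentences)
    scores = np.full(n, 1.0 / n)
    for _ in range(iterations):                            # PageRank power iteration
        scores = (1 - damping) / n + damping * (transition.T @ scores)
    top = sorted(np.argsort(scores)[-k:])                  # keep document order
    return [sentences[i] for i in top]

Applied to the body text of an earnings release, a call such as lexrank_summary(text, k=5) would return the five most central sentences as the automatic summary; the hypothetical function name and parameters are, again, only illustrative.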

We then employ the LexRank algorithm to generate automatic summaries for a hand-collected sample of S&P 100 firms that provide summaries with their fourth quarter 2015 earnings releases. We compare the tone of these automatic summaries to the tone of the management summaries, and compare the tone of both summary types to the tone of the underlying text in the earnings release. Results indicate that management summaries contain more positive tone words, and fewer negative tone words, than automatic summaries. Further, management summaries are more positive, and less negative, in tone compared to the underlying text in the earnings release, whereas automatic summaries are more similar in tone to the underlying text.
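As a rough sketch of how such a word-count tone comparison can be operationalized, the snippet below scales positive and negative word counts by document length. The tiny word lists and the tone function are hypothetical placeholders; work in this area typically relies on much longer financial sentiment dictionaries (e.g., the Loughran-McDonald word lists), and the paper's exact measure may differ.

import re

# Placeholder word lists for illustration; real analyses use far longer
# financial sentiment dictionaries.
POSITIVE = {"strong", "record", "growth", "improved", "gain"}
NEGATIVE = {"decline", "loss", "weak", "impairment", "adverse"}

def tone(text):
    # Positive share, negative share, and net tone per total word count.
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / total, neg / total, (pos - neg) / total

# Comparing a summary's net tone with the full release's net tone gives a
# simple measure of incremental positive (or negative) bias in the summary.
print(tone("Record revenue and strong growth across all segments."))
print(tone("Revenue grew, but margins saw a decline and a loss in Europe."))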

Collectively, these results, obtained with different methods, suggest that management summaries of earnings releases exhibit incremental, positive bias compared to the underlying text of the earnings release, but that automatic summaries largely do not exhibit such bias. Notably, our user evaluation and ROUGE tests suggest that this reduction in bias can be achieved without loss of informativeness.

In our main analysis, we experimentally test whether the differing attributes of automatic and management summaries affect individual investors' valuation and other investment-related judgments.

2 We provide detail on the summarization algorithms used in this study in the Online Appendix. For further details on the LexRank algorithm specifically, we refer the reader to Section 2.1.
