Best Practices for Conducting Online Ad Effectiveness Research

June 2011

Conducted for the IAB by Marissa Gluck, Radar Research

Acknowledgements

The IAB is grateful to Marissa Gluck of Radar Research for developing and writing this best practices document. We would also like to acknowledge the contributions of the IAB Research Advisory Board. In addition, the IAB is grateful to the industry experts who were interviewed for this research effort.

IAB Research Advisory Board:
Dave Coletti
Jim Dravillas - Google
Stephanie Fried, Co-Chair of the IAB Research Council - Vevo
Scott MacDonald - Conde Nast Digital
Dan Murphy - Univision Interactive Media
Susan Nathan - Turner
Bruce Rogers
Beth Uyenco Shatto, Co-Chair of the IAB Research Council - Microsoft Advertising

IAB Contact Information

Sherrill Mane, SVP Industry Services, IAB, 212 380-4702, Sherrill@

IAB Best Practices for Ad Effectiveness Research, June 2011, prepared by Marissa Gluck of Radar Research


Contents

Executive Summary
Background and Methodology
Introduction
Planning
Recruitment
Deployment
Optimization and Analysis
Conclusion
Appendix


Executive Summary

Online ad effectiveness research is an important tool for marketers seeking to understand how their campaigns perform. However, it is challenged by serious methodological limitations. Questions around recruitment, sample bias and deployment are hampering the validity of this research and undermining the industry as a whole. The growth in online advertising spend requires sound measurement and reliable methodologies to prove effectiveness.

By examining each component of online ad effectiveness research, the IAB hopes to identify best practices across a range of methodologies. To develop and articulate these best practices, the IAB commissioned Marissa Gluck of Radar Research. This document provides a set of recommendations so that the industry can conduct more rigorous research and adopt better standard operating procedures while the scientific study of improved methodologies continues.

While the intent is not to endorse one methodology over another, some clear trends are emerging as industry dissatisfaction with live intercept recruitment increases. With live intercepts falling into disfavor due to lower response rates, greater site clutter, and misalignment between campaign delivery and sample, panels are gaining favor with agencies, publishers, and vendors. However, panels are far from an industry panacea today; only if properly validated can they address some of the methodological deficiencies of intercept studies as we know them.

This paper looks across the spectrum of available methodologies to assess best practices in each phase of online effectiveness research, from planning to recruitment, deployment and finally, optimization and analysis. Within each phase, we examine the challenges the industry faces and propose prescriptive remedies for each. The issues we examine include organizational hurdles such as planning and staffing, as well as logistical and technological impediments such as cookie deletion and declining response rates.

Best practices and associated issues covered include:

Planning:
- When to use an ad effectiveness survey
- Optimal timing
- Optimal survey length
- Lack of staff research experience
- Role of digital in the ad ecosystem

Recruitment:
- Declining response rates
- Cookie deletion
- Data integrity
- Representative samples
- Cost of true experimental design

Deployment:
- Survey timing

Optimization:
- Statistical differences between demographic groups


As the supply chain for online advertising has become increasingly complex, it has also opened up possibilities for better methods. As industry frustration grows, there is also an upside: greater willingness to experiment, more innovation, and an investment in developing alternatives.

Background and Methodology

BACKGROUND AND PURPOSE OF THE REPORT

In August 2010, the IAB published a report by Paul J. Lavrakas, PhD, examining the strengths and weaknesses of online ad effectiveness research methodologies. Building on that research, the organization wanted to create a set of IAB best practices for this kind of research: recommendations for agencies, publishers, and vendors to ensure that ad effectiveness research has the greatest possible validity.

This document is not meant to be exhaustive in the scientific aspects of improving the research methodologies. Rather, it is intended as a best practices guide for the marketplace while the study of improvements to ad effectiveness research methodologies occurs. The document builds upon Dr. Lavrakas' previous report, and incorporates newly conducted interviews with relevant stakeholders.

METHODOLOGY USED TO CONDUCT THE EVALUATION STUDY

Multiple interviews with key stakeholders were conducted during February and March 2011. Executives on both the publisher and agency sides were interviewed, as well as vendor executives. Both larger and smaller agencies were represented, and both research and analytics executives participated, providing a more comprehensive view of the landscape and of how data from these studies is incorporated into the overall value chain.

These interviews also led several experts to share confidential print and online materials. Other relevant information was gathered via Internet and academic searches for nonproprietary literature that had been published or presented on how Internet advertising effectiveness is, and should be, measured.

