Monitoring and Evaluation Manual

Prepared for ADRA International Food Security Department

Prepared by TANGO International, Inc.

March 2007

Table of Contents

Acronyms ...... iv
Part I. Overview of Monitoring and Evaluation ...... 1
1. Introduction ...... 1
1.1 Purpose of the Monitoring and Evaluation Manual ...... 1
1.2 Using the Manual ...... 2
2. Guiding Principles and Elements of Monitoring and Evaluation ...... 3
2.1 What is Monitoring and Evaluation? ...... 3
2.2 Why Does a Program Need a Monitoring and Evaluation System? ...... 5
2.3 General Overview of Basic Elements of an M&E System ...... 6
2.4 Monitoring and Evaluation Framework ...... 7
2.5 General Overview of Indicators and Indicator Development ...... 8
2.6 Managing for Impact ...... 10
3. Conceptual Frameworks for Programming ...... 12
3.1 Conceptual Frameworks for Project Design ...... 12
3.2 The USAID/FFP Expanded Conceptual Framework for Food Security ...... 15
Part II. Key Components and Stages of M&E Systems ...... 17
4. Vulnerability or Holistic Assessments ...... 17
4.1 Assessment Preparation ...... 17
4.2 Defining Assessment Objectives ...... 20
4.3 Target Area Selection ...... 20
5. Problem Analysis ...... 21
5.1 Using Cause and Effect Logic in Project Design ...... 22
5.2 Hierarchical Problem Analysis ...... 23
5.3 From Problem Analysis to Project Strategy ...... 24
6. Program Design and Logical Frameworks ...... 27
6.1 Establishing SMART Goals and Objectives ...... 28
6.2 Design Principles for Monitoring and Evaluation ...... 29
6.3 Logical Frameworks ...... 29
6.4 General Logical Frameworks: linking indicators to program design ...... 30
6.5 The Logical Framework Matrix (LogFrame) ...... 31
7. Steps to Set Up a Monitoring System ...... 37
7.1 Six Steps to Set Up an M&E System ...... 37
7.2 Monitoring for Performance and Participation ...... 40
7.3 Participatory Monitoring ...... 41
7.4 Monitoring the Risk and Vulnerability Context ...... 43
8. Establishing Indicators and Performance Targets ...... 45
8.1 What are Indicators and Targets ...... 45
8.2 Types of Indicators ...... 46
8.3 Identifying what and how to measure ...... 47
8.4 Characteristics of Ideal Indicators ...... 48
8.5 Criteria for selection of sound indicators ...... 49
8.6 Outcome and impact indicators ...... 49
8.7 Performance targets and benchmarks ...... 50
8.9 Approaches to establishing/setting targets ...... 51
8.10 Limitations of targets ...... 51
9. Sampling: Key Concepts ...... 52
9.1 Introduction to Sampling ...... 52
9.2 Types of Sampling ...... 52
9.3 Key Definitions ...... 53
9.4 Sampling Methods/Design ...... 54

9.5 Non-Probability Sampling ...... 56
10. Baseline Surveys and Data Management ...... 58
10.1 Initial considerations in conducting a baseline survey ...... 58
10.2 Methods of data collection ...... 59
10.3 Developing a baseline survey ...... 60
10.4 Data collection process ...... 63
10.5 Data management ...... 65
11. Data Analysis ...... 67
11.1 Data Cleaning ...... 67
11.2 Quantitative Data Analysis ...... 68
11.3 Qualitative Data Analysis ...... 70
11.4 Data Interpretation and Presentation ...... 70
12. Information Sharing ...... 72
Part III. M&E Considerations for Sector-Specific Programming ...... 76
13. M&E Considerations for Food Security Programs ...... 76
13.1 Seasonality and Cross-program Comparability ...... 76
13.2 Food Availability ...... 76
13.3 Food Access ...... 77
13.4 Food Utilization ...... 79
13.5 Market Information ...... 80
14. M&E Considerations for Health and Nutrition ...... 80
14.1 Selection of Indicators ...... 80
14.2 Integrated Disease Surveillance and Nutrition Surveillance Systems ...... 82
14.3 Nutrition ...... 83
15. M&E Considerations for Micro Enterprise Programs ...... 84
16. M&E Considerations for Reproductive Health Programs ...... 85
17. Integrating Cross-Cutting Issues into Data Collection ...... 87
17.1 HIV/AIDS ...... 87
17.2 Gender ...... 88
17.3 Conflict ...... 88
References ...... 89
Annexes ...... 91
Annex 1: Components of a Humanitarian Information System ...... 92
Annex 2: Sampling ...... 94
Annex 4: Food Security Weathervane Indicators ...... 106
Annex 5: Potential Indicators for Use in Health and HIV/AIDS ...... 113
Annex 6: Potential Indicators for Use in Micro & Small Enterprise Development ...... 114
Annex 9: Example of Topical Outline for Use in Qualitative Assessments ...... 122
Annex 10: Qualitative Assessment Tools ...... 127
Annex 11: Example of Community Questionnaire ...... 129
Annex 12: Example of Qualitative Data Matrix ...... 140
Annex 13: Preparatory Tasks for Vulnerability Assessments ...... 142
Annex 14: How to Develop M&E Plan ...... 148

List of Figures

Figure 1: Project Hierarchy
Figure 2: Monitoring and Evaluation Framework
Figure 3: Indicators and Targets Along With the Project Hierarchy
Figure 4: Livelihoods Framework
Figure 5: USAID/FFP Expanded Conceptual Framework
Figure 6: Hierarchical Problem Tree
Figure 7: The Project Cycle
Figure 8: Linking the Results Framework to a Logical Framework
Figure 9: A Logical Framework with indicators
Figure 10: Indicators for LogFrame
Figure 11: Partial Results framework for National TB Program
Figure 12: Definition of Results Based Management
Figure 13: Factors Influencing People's Participation in M&E
Figure 14: Key Steps in Creating and Selecting Indicators
Figure 15: Transforming Ideas about Quality into Measures for Which Targets Can Be Set
Figure 16: Various Methods of Random Sampling
Figure 17: Essential Components of a Baseline Survey Plan
Figure 18: TIPS for Questionnaire Design
Figure 19: Sample Frequency Table
Figure 20: Minimum Information Requirements for Food Availability
Figure 21: Minimum Information Requirements for Food Access
Figure 22: Information Requirements for Food Utilization

List of Tables

Table 1: Types of Indicators and Purpose ...... 9
Table 2: Common Sources of Secondary Data ...... 18
Table 3: Descriptive Information Obtained through Secondary Data Analysis ...... 19
Table 4: Generic LogFrame Matrix (WFP) ...... 32
Table 5: Objective Hierarchy Links to Monitoring & Evaluation ...... 38
Table 6: Difference between traditional evaluation and participatory evaluation ...... 42
Table 7: Sharing of Information needs to be based on Information Needs ...... 72

Acronyms

ADRA      Adventist Development and Relief Agency International
AIDS      Acquired Immune Deficiency Syndrome
BCC       Behavior Change Communication
CDC       Centers for Disease Control
CDD       Control of Diarrheal Disease
CHW       Community Health Worker
CORE      Child Survival Collaborations and Resources Group
DD        Diarrheal Disease
DHS       Demographic and Health Survey
DME       Design, Monitoring and Evaluation
DR        Development Relief
FANTA     Food and Nutrition Technical Assistance
FFP       Food for Peace
GAVI      Global Alliance for Vaccines and Immunization
HIF       Hygiene Improvement Framework
HFA       Health Facility Assessment
HFS       Health Facility Survey
HIS       Health Information Survey
HIV       Human Immunodeficiency Virus
HLS       Household Livelihood Security
HMIS      Health Management Information Systems
ICHS      Integrated Child Health Survey
IEC       Information, Education and Communication
IFAD      International Fund for Agricultural Development
IMCI      Integrated Management of Childhood Illnesses
KAP       Knowledge, Attitude, Practice
MCH       Maternal and Child Health
MEASURE   Monitoring and Evaluation to Assess and Use Results
MDG       Millennium Development Goal
M&E       Monitoring and Evaluation
MFI       Micro-finance Institutions
MICS      Multiple Indicator Cluster Survey
MIS       Management Information System
MSE       Micro and Small Enterprise
NGO       Nongovernmental Organization
PMTCT     Prevention of Mother-to-Child Transmission
PHC       Primary Health Care
RH        Reproductive Health
TB        Tuberculosis
UNDP      United Nations Development Programme
UNICEF    United Nations Children's Fund
USAID     United States Agency for International Development
WHO       World Health Organization

Part I. Overview of Monitoring and Evaluation

1. Introduction

The Adventist Development and Relief Agency (ADRA) is an independent humanitarian organization established in 1984 by the Seventh-day Adventist Church for the specific purpose of providing individual and community development and disaster relief. ADRA serves people in over 125 countries regardless of ethnic, political or religious association. ADRA helps those in need, particularly those most vulnerable such as women, children and senior citizens.

ADRA partners with communities, organizations and governments to improve the quality of life for millions around the world through a range of core programming areas including food security, economic development, primary health care, emergency management and basic education. ADRA recognizes that current programming in each of these core areas can be strengthened through the improvement of staff capacity in project monitoring and evaluation.

Monitoring and Evaluation (M&E) has become a leading priority for many development and humanitarian organizations. Recent years have seen advances in measurement approaches, indicators and targets, performance monitoring, and managing for results (impact), all aimed at evaluating progress and program impact more effectively. M&E is essential for designing appropriate, effective and measurable programs and projects, for monitoring implementation consistently, and for evaluating the impact of specific activities on target populations.

In line with this philosophy and in order to better meet programming goals, ADRA is seeking to enhance its program training with the inclusion of a monitoring and evaluation component. Building ADRA's capacity to establish effective M&E systems will improve program design and management, and ensure that new projects develop M&E plans appropriate to the particular programming and vulnerability context.

This manual is directed towards programmers and M&E technical staff within the ADRA network who are responsible for collecting, analyzing and distributing information on programs. It is designed to guide program managers and M&E staff in the establishment and use of monitoring and evaluation systems for large programs, specific program components and small projects alike. It should serve as a guide to improve understanding of M&E in general and increase competency in key aspects of practicing M&E in the field.

1.1 Purpose of the Monitoring and Evaluation Manual

This manual introduces fundamental concepts and components of M&E. It then presents definitions of the basic components of an effective M&E system and offers guidance for adapting each component to local programming contexts. It also provides key considerations for the development of appropriate M&E tools within the primary sectors in which ADRA works. Perhaps most importantly, the manual is intended to contribute to the learning environment within ADRA by describing the ways in which a comprehensive M&E system can be consistently used to inform problem analysis, program design, implementation, monitoring and reporting of evaluation findings.

The M&E Manual is intended to strengthen the following principal competencies:

Understanding conceptual frameworks for program design and planning upon which monitoring and evaluation systems will be based;

Identifying and distinguishing between the key components of monitoring and evaluation systems;

Understanding the synergistic relationships between program design and management, and M&E systems in order to determine the expected impact and objectives and how they will be achieved;

Knowledge of the various tools and frameworks for M&E design, planning and management;

Determining appropriate indicators and targets for both implementation processes as well as project outcomes and impact;

Identifying potential sources and tools for collecting and analyzing information, and tracking progress and impact;

Developing effective, flexible and responsive M&E Plans; and

Recording and sharing information on best practices and lessons learned in M&E throughout the organization.

1.2 Using the Manual

This manual is not intended to serve as a mandatory, "one size fits all" instruction booklet for project monitoring and evaluation. Rather, it offers a set of concepts and tools that will assist individual ADRA program staff to improve current approaches to M&E, thereby facilitating more consistent achievement of program objectives. The concepts and tools within this manual have been thoroughly tested and have been recognized as M&E "better practices" by NGOs and leading development agencies including the International Food Policy Research Institute (IFPRI), the International Fund for Agricultural Development (IFAD), the World Food Programme (WFP) and the Food and Nutrition Technical Assistance (FANTA) Project.

In order to make the material optimally useful for ADRA staff with varying levels of M&E experience, the manual has been divided into three core sections:

Part One of the manual provides a broad overview of M&E as well as supporting conceptual frameworks that form the basis for effective multi-sector programming.

Part Two defines critical components of monitoring and evaluation systems and discusses the tools used in various stages of project M&E.

Part Three underlines the importance of designing M&E systems that are responsive to the various contexts in which ADRA's activities are implemented.

2. Guiding Principles and Elements of Monitoring and Evaluation

Monitoring and Evaluation (M&E) has become an expected and necessary component of any development program or project. The primary purpose of M&E is to measure the degree to which an operational design is implemented as planned and how successfully it achieves its intended results. This section begins with an introduction to M&E and defines the key components and principles needed to follow this guide to establishing and improving M&E systems.

2.1 What is Monitoring and Evaluation?

What is monitoring? Monitoring is a continuous process of collecting and analyzing information to compare how well a project, a program or policy is being implemented against expected results. Monitoring aims at providing managers and major stakeholders with regular feedback and early indications of progress or lack thereof in the achievement of intended results. It generally involves collecting and analyzing data on implementation processes, strategies and results, and recommending corrective measures (IFRCS, 2007).

What is evaluation? Evaluation is the systematic and objective assessment of an ongoing or completed project, program or policy, its design, implementation and results. Evaluation determines the relevance and fulfillment of objectives, efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling incorporation of lessons learned into the decision making process of both recipients and donors (IFRCS, 2007).
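To make the distinction concrete, the following minimal sketch (in Python, using invented indicator names and values that do not come from any ADRA program) contrasts a monitoring-style check of implementation against plan with an evaluation-style comparison of an outcome indicator between baseline and endline.

```python
# Hypothetical figures for illustration only; indicator names and values are invented.

# Monitoring-style check: compare cumulative implementation against the plan to date.
latrines_planned_to_date = 120   # what the workplan says should be completed by now
latrines_built_to_date = 95      # what routine field reports say has been completed
implementation_rate = latrines_built_to_date / latrines_planned_to_date
print(f"Implementation to date: {implementation_rate:.0%} of plan")

# Evaluation-style check: compare an outcome indicator at baseline and endline.
stunting_baseline = 0.38         # proportion of children stunted at the baseline survey
stunting_endline = 0.31          # proportion of children stunted at the endline survey
change = stunting_endline - stunting_baseline
print(f"Change in stunting prevalence: {change:+.1%}")
```

The first comparison supports routine management decisions during implementation; the second addresses whether the intended change occurred, which is the core evaluation question.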

Ultimately, M&E systems are designed to determine the impact of projects and/or programs. However, an M&E system also entails regular, systematic collection and analysis of information to track the progress of project implementation. When appropriately designed and implemented, an M&E system keeps projects on track and provides information to reassess priorities. In order to do so, monitoring and evaluation must be understood as an ongoing activity that ultimately confirms and explains the nature and degree of change a particular development intervention has had on its population.

Results monitoring provides information on progress towards achieving objectives and on the impact the program is having in relation to the expected results. It involves the following (a brief illustrative sketch follows the list):

Relating the work being done to the objectives on a continuous basis in order to provide a measure of progress

Reviewing approaches and strategies in response to changing circumstances without losing the overall direction

Identifying whether there is a need to change the objectives

Identifying further information or research needed for learning purposes
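As a purely illustrative sketch, the snippet below (Python; the indicator, target and reporting schedule are hypothetical assumptions, not taken from this manual) shows the basic logic of results monitoring: comparing each reported value against an explicit expectation derived from the target and flagging when the strategy may need review.

```python
# Illustrative only: hypothetical quarterly reports for a single outcome indicator.
target = 2000              # e.g., households adopting an improved practice by project end
periods_total = 8          # assumed number of reporting periods in the project

reports = [                # (reporting period, cumulative value reported)
    ("Q1", 260),
    ("Q2", 480),
    ("Q3", 610),
]

for i, (period, value) in enumerate(reports, start=1):
    expected = target * i / periods_total   # naive straight-line expectation of progress
    status = "on track" if value >= expected else "review strategy"
    print(f"{period}: reported {value}, expected about {expected:.0f} -> {status}")
```

In practice the expected trajectory would come from the project's own workplan and targets rather than a straight line; the point is that each routine report is judged against a stated expectation rather than simply filed.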

Furthermore, although project monitoring and impact evaluation are both critical and complementary elements of an effective project, there is often a limited understanding of their distinct purposes and roles:

Project monitoring entails the process of routinely and consistently gathering information on the process of project implementation. Monitoring focuses primarily on
