


Improving School Quality in East Africa:

Randomized Evaluation of Policies to Create Local Accountability under Free Primary Education in Kenya and Uganda

RESEARCH PROPOSAL

Presented to

PEP-AusAid Policy Impact Evaluation Research Initiative

By

Prof. Germano Mwabu, Ms. Mumia Phyllis Machio,

Mrs. Racheal Nakhumicha Musitia, and Ms. Alice Muthoni Ng’ang’a

University of Nairobi, KENYA

Mr. Lawrence Bategeka, Ms. Madina Guloba, and Dr. Frederick Mugisha

Economic Policy Research Council, UGANDA

18 April 2008

Abstract

During the last decade, both Kenya and Uganda have introduced free primary education (FPE). While FPE has succeeded in increasing the number of children enrolled in school – dramatically so in Uganda – there is widespread concern that school quality has suffered. There are two primary explanations for this decline: the failure of school budgets and staffing levels to keep pace with enrolment; and the loss of local accountability as school management and funding have become centralized. We propose to conduct a randomized impact evaluation of two interventions designed to address these issues: co-funding of community-hired contract teachers and establishment of a Community-Based Monitoring System organized by the school management committee. This project will be implemented in collaboration with the Ministry of Education in each country, ensuring scalability while simultaneously building in-house capacity for impact evaluation in the ministries.

A. Aims and objectives

A.a. Study overview

This application proposes a randomized controlled trial of institutional interventions in the Kenyan and Ugandan primary education sector, with the aim of empowering local institutions to improve school quality. This project will take place in close collaboration with the Ministries of Education (MoE) in both countries in an effort to offer a learning-by-doing approach to make evidence-based policy making a central feature of the Kenyan and Ugandan educational framework.

The project proposes to build on – and fill in gaps in – work done under the policy of free primary education (FPE) in Kenya and Uganda. In Uganda, the combination of the abolition of school fees and the near doubling of education spending as a share of the budget has brought some impressive accomplishments: the number of primary schools and teachers both increased by more than 70% in the first 7 years of FPE, with a 73% increase in the gross enrollment ratio in the first year alone. However, questions of quality persist. These quality problems are evidenced by low primary completion rates: of the 2,159,850 pupils enrolled in primary one at the time of FPE introduction in 1997, only 22.5% reached primary seven in 2003. The common inability of parents’ associations to raise funds for school meals provides a partial explanation for poor attendance and low performance of enrolled pupils. Pupil/teacher ratios of 54:1 and pupil/classroom ratios of 94:1 as of 2003 remained far above the target of 40:1 for both indicators. Understaffing and under-training of teachers are exacerbated by absenteeism.

In Kenya, increases in enrolment have been more modest, rising by approximately 19% from 2002 to 2004. However, even this relatively modest increase – in combination with a hiring freeze on new teachers implemented prior to FPE – has contributed to a severe teacher shortage in Kenyan classrooms, yielding a pupil-teacher ratio of approximately 50:1. Additionally, the shift to a top-down funding system – replacing Kenya’s previous system of community fundraising built around harambees – has led to a decline in parental involvement in school management.

The research program will focus on two key interventions aimed at improving learning achievements in primary schools. The first is to provide a mechanism that assists communities to raise funds to address shortages of educational inputs according to their needs: hiring local contract teachers to address the shortage of centrally-hired teachers in Kenya, and funding locally determined projects in Uganda. The second is a Community-Based Monitoring System (CBMS) which provides a tool for parents and School Management Committees (SMC) to evaluate their school’s performance and hold teachers and head teachers accountable. These two interventions correspond to the two challenges in implementing FPE identified above – input shortages and lack of parental ownership.

The evaluation will be based on a randomized controlled trial involving an experimental group of approximately 196 schools in Kenya and 100 schools in Uganda spanning multiple districts, to be conducted as follows. The study will employ a “cross-cutting” design, in which each school participates in one, both, or neither intervention. The study will measure impacts by comparing differences across groups in outcomes of student and institutional performance before the intervention and 9-12 months after the start of the intervention.

A.b. Main research questions and core research objectives

The underlying premise of this research project is that the expansion of the primary education sector under FPE has (a) led to an influx of new students creating an acute shortage of teachers, (b) undermined the ability of schools to raise funding locally due to the abolition of fees – in some cases leaving schools worse off than prior to FPE – and (c) weakened community based monitoring of schools. This weakening of local monitoring appears to be linked to the abolition of fees. As parents no longer control the purse strings of the school, their incentive to monitor school performance has diminished.

The tension between the need for additional, external funds in the Kenyan and Ugandan education system, and the potential for outside funding to undermine local ownership of schools is at the center of this research project. The main question posed in this project is whether local accountability requires local fundraising, or whether merely strengthening community-based monitoring systems is sufficient. In so doing, the project will assess whether matching funds programs can provide sufficient incentives to allow local fundraising, even when parents and schools cannot use the threat of exclusion as an enforcement device.

The primary objective of the project is to identify policies to improve the quality of education in Kenyan and Ugandan primary schools. The national scope of the project in each country, as well as close collaboration with the respective Ministries of Education, will ensure maximum potential for scalability. Further, conducting a comparable intervention in diverse districts and across countries will allow us to examine the extent of external validity problems. A secondary objective is to create a unit within the Directorate of Quality Assurance and Standards (DQAS) in Kenya and in the office of the Commissioner for Education Planning in Uganda that will be able to rigorously evaluate future policy innovations.

B. Background and policy relevance

B.a. Literature review directly relevant to main research questions

This research proposal addresses the question of how to improve the quality of service delivery in education in Uganda and Kenya in the wake of the introduction of free primary education. In both countries the introduction of FPE has been accompanied by the provision of centralized funding, a substantial increase in enrollment and a widespread concern that as a consequence the quality of education provision has declined. More specifically, the introduction of FPE has led to higher pupil:teacher ratios, an influx of students from poorer and less educated backgrounds, and a fall in local fundraising and a concomitant decline in local accountability.

A first step towards addressing these issues would be to increase resources and thereby raise the production of educational quality. There is already a large literature that investigates the impact of increasing inputs in the classroom, including material inputs such as textbooks, school buildings, school meals and extra teachers – the latter two interventions are also suggested in this proposal. Overall, the evidence on the effects of increased resources is mixed. Kremer et al. (2002) evaluate a program in which an NGO provided uniforms and classroom construction and find that after 5 years, pupils in treatment schools had completed 15% more schooling. Kremer and Vermeersch (2004) investigate the effects of school meals on participation in early childhood education in a randomized experiment in rural Kenya. Participation in treatment schools was one third higher, at 35.7%. The program also improved test scores, but only for children whose teachers were well trained prior to the program. Banerjee et al. (2006) report results from an intervention in India in which students had access to a computer-assisted maths learning program for two hours per week. In the first year, maths test scores increased significantly by an average of 0.35 standard deviations and the gains persisted over time. However, based on randomized trials in Kenya, Glewwe et al. (2002, 2004) found little evidence that providing additional learning materials (in the form of textbooks and flip charts) increased educational achievement.

In a related vein, a number of papers examine the impact of additional teachers on test scores and learning outcomes. Banerjee et al. (2006) report results from a randomized evaluation in which schools were assigned female substitute teachers from the community who ran remedial classes for the weakest pupils. The remedial education program increased average test scores by 0.14 standard deviations in the first year and 0.28 in the second. Weaker students gained the most, while there was no evidence of spillover effects from reduced class sizes on children who remained in the regular class. Again, there is contrasting evidence which questions the efficacy of additional teachers. A randomized evaluation of a program in India that provided a second teacher to one-teacher schools did not improve test scores, but it did increase the participation of girls (see Banerjee et al. (2002)). Duflo et al. (2007) investigate an intervention in which schools hired an extra teacher on a short-term, highly incentivized contract and then randomly allocated first grade students to the new contract teacher or the existing government teacher. They found that test scores improved markedly for those students taught by the contract teacher. However, government teachers reduced their effort, and test scores remained the same for those students taught by the existing teacher despite class size being reduced by half.

The main lesson from the extensive literature on school inputs is that it is most likely not increases in inputs per se, but the way in which they are used, that has an impact on learning outcomes. In particular, the most important determinants are the contractual conditions under which additional inputs are supplied. This is of course especially important when it comes to the employment of teachers, and as a consequence the recent literature has rightly focused on how to induce teachers to supply quality teaching more effectively.

A common approach to providing incentives for teachers is to base these on test scores or other objective measures of student performance. Glewwe et al. (2002) evaluate a program that provided prizes to teachers based on exam performance and low dropout rates. The authors found that test scores initially improved, but gains did not persist, suggesting ‘teaching to the test’. However, a similar intervention in India (Muralidharan and Sundararaman 2006) seems to have had large effects that were not attributable to ‘gaming’ the test.

Policies based on dynamic incentives (hiring teachers on short-term contracts with an implicit promise that good performance will lead to eventual graduation into tenured civil service employment) have also proved highly effective. Kingdon (2006) finds that within the same school, non-unionized teachers have significantly and substantially higher productivity than those who are employed by the government (and are therefore members of the teachers’ union). Similarly, Duflo et al. (2007) examined the impact of additional locally hired contract teachers in Kenyan schools. They found that students in schools with an extra contract teacher scored 0.16 standard deviations higher than students in the comparison schools, suggesting that the program as a whole was successful. Contract teachers were more likely to be in class and teaching during random visits than the civil service teachers in either the treatment schools or the comparison schools.

An added advantage of short-term contracts for teachers is that they can be combined effectively with increased monitoring activity at either a local or a centralized level. Short-term contracts and the encouragement of local supervision tend to reinforce each other because monitoring is more effective when teachers can be disciplined via monetary penalties or non-renewal of contracts. Duflo (2006) examined a program in rural Kenya to reduce teacher absence. Teachers were given a camera with a tamper-proof date and time function, with the instruction to have one of the children photograph the teacher and other students at the start and the end of the day in order to track teacher attendance. Teacher pay was then calculated as a direct function of attendance. As a result of the program, absence rates were reduced from an average of 42% in comparison schools to 22% in treatment schools. In addition, test scores also improved significantly. Duflo et al. (2007) found that both existing government teachers and newly hired contract teachers performed better in schools where the School Management Committee had been empowered to monitor and train teachers.

While the results from the above interventions are impressive, there is some concern that financial incentives will undermine the intrinsic motivation of teachers (Kreps 1997) or otherwise demoralize certain workers (Fehr and Schmidt 2004). As pointed out by Dixit (1999), ‘hard’ monetary incentives have beneficial effects in some dimensions, but generate dysfunctional reactions in others. More specifically, incentivizing teachers to improve test scores may crowd out other important activities that are less easily measured (see Holmström and Milgrom, 1991). A final set of policy interventions therefore focuses on improving educational quality without necessarily resorting to monetary incentives. Citizen report cards and community-based monitoring systems are based on information and training campaigns that enable communities to measure a set of indicators chosen to capture a number of dimensions of service delivery, and to benchmark local schools, hospitals and other service providers. An experimental evaluation of such an informational intervention is ongoing in India, and Banerjee et al. (2006) have identified considerable scope for intervention to improve the functioning of Village Education Committees (VECs) in Uttar Pradesh, India. Involving the community in monitoring teacher performance increases teachers’ accountability, and the social pressure exerted by the community provides additional incentives to perform well. Again, however, the evidence on this is mixed. Banerjee, Deaton, and Duflo (2004) conducted a randomized experiment in India in which a member of the community was paid to collect information on absenteeism in local health-service provision. Those authors could not reject the hypothesis that engaging the community in monitoring activities alone had no impact on rates of absenteeism. More recent evidence from Uganda is more positive, however: as shown by Björkman et al. (2006) in the context of Ugandan health providers, the introduction of a simple Citizen Report Card substantially improved both the quantity and quality of service provision, even in the absence of de jure authority to incentivize providers.

B.b. Explanation of what are the gaps in this literature

The main gap in the existing literature is the failure to address the issue of scalability and external validity. All of the studies in the education sector reviewed above are restricted to a small number of districts within a country and conducted in conjunction with local NGOs. While the results of the experimental studies are impressive, it is unclear whether any of these policies can be scaled up at a national level with the government as the implementing agency. The strength of our study is that all the policies will be implemented at a nationally representative level by government agencies, which are best placed to scale up the interventions if they are found to be effective. Second, existing studies have not addressed the tension between the need for additional, external funds in the Kenyan and Ugandan education systems and the potential for outside funding to undermine local ownership of schools. Our proposed intervention is unique in assessing the impact of local participation in raising funds for school improvements.

B.c. Explanation of how filling these gaps is relevant to specific country policy issues

All the interventions proposed are based on an ongoing process of collaborative design with the Ministries of Education in Uganda and Kenya in order to address weaknesses in the education sector in the wake of the introduction of FPE. They are therefore highly relevant and directly address policy issues in the education sector in both countries.

C. Methods

C.a. General description of the intervention, population to be studied, outcomes of interest, timing of effects, existing data and/or data to be collected, methods to be used to analyze data.

General description of the intervention. The research program in each country will focus on two key interventions. In Kenya, we are working in collaboration with the Basic Education Directorate of the Kenyan MoE to formulate these two new programs to improve learning achievements in primary schools. The first is to assist communities in hiring local contract teachers to address the shortage of centrally-hired teachers from the national Teacher Service Commission. The second is a Community-Based Monitoring System (CBMS) which provides a tool for parents and School Management Committees (SMC) to evaluate their school’s performance and hold teachers and head teachers accountable. In Uganda, the first intervention will offer matching funds to a set of treatment schools, applicable toward the provision of school lunches or toward other needs selected by the SMC. As in Kenya, the second intervention will implement a CBMS. Both are being refined in consultation with the Commissioner for Education Planning and with district education officers. These two interventions correspond to the two challenges in implementing FPE identified above – shortages of key educational inputs and lack of parental ownership.

Population to be studied. The objective is to sample both treatment and control schools from a large number of districts representing a variety of agro-climatic, socio-economic and political contexts. The Kenyan sample will be nationally representative, building on the sampling frame established by the third round of the Southern and Eastern African Consortium for Monitoring Education Quality survey (SACMEQ) which was conducted in 2007 and spanned 196 schools in 57 districts covering all 8 provinces of Kenya, with a total of 3,299 students.

The Ugandan sample will be representative of rural primary school students in four districts, tentatively selected as Apac, Hoima, Iganga, and Kiboga, which have been purposefully chosen to embody the challenges of poor-performing districts across the four regions of Uganda. Characteristics of these districts are given in Table 1. Apac, for example, had a 2006 PLE failure rate of 21% overall, and 24% among females; the corresponding figures for Kiboga were 22% and 25%. Interestingly, Hoima – in spite of its geographic and economic proximity to Kiboga district, and in spite of the comparable levels of teachers and classrooms – vastly outperforms its neighbor: PLE failure rates in 2006 were 6% overall and 8% among females. Performance improvements in Hoima have been recent and marked: in 2002, the overall failure rate in Hoima was 19%. Thus the contrast between Hoima and Kiboga may provide some insight into the keys to success in rural areas.

Table 1. Characteristics of Study Districts

|District |PLE overall failure rate, 2006 |PLE female failure rate, 2006 |Pupil-teacher ratio, 2005 |Pupil-classroom ratio, 2005 |
|Apac     |20.8%                          |23.7%                         |60.6                      |102.9                       |
|Hoima    |6.2%                           |7.7%                          |52.0                      |69.0                        |
|Iganga   |19.3%                          |22.8%                         |58.2                      |102.8                       |
|Kiboga   |22.1%                          |25.0%                         |52.3                      |61.9                        |

Source: Ministry of Education and Sports.

Outcomes of interest. The primary outcome of interest is student achievement, as measured by performance on national exams as well as custom-made literacy and numeracy exams to be developed in collaboration with the Kenya Institute of Education (KIE), charged with formulating the national curriculum, and the Commissioner for Education Planning and District Education Officers in Uganda. A secondary outcome of interest is the change in enrolment and transition to secondary school over the intervention period in treatment versus control schools.

Timing of Effects. For the primary outcome – exam performance – the impacts of the interventions should be observed at the end of the academic year. Impacts of both the CBMS and the co-financing intervention should continue for subsequent cohorts, contingent on continued funding of the programs. Subsequent survey work will assess whether effects diminish over time.

Data to be used/collected. Our research team has already received approval from the Kenyan Minister and Permanent Secretary of Education and Minister of Planning to access a variety of administrative data sources including: the Education Management Information System (EMIS); Kenya Certificate of Primary Education (KCPE) exam data; DQAS Panel Inspection reports; SACMEQ II and III; and the Kenya Integrated Household Budget Survey (KIHBS). We will also collaborate in the design and implementation of the National Assessment System to Monitor Learning Achievements (NASMLA). District Education Officers in Uganda have agreed to provide Primary Leaving Examination results and administrative data on school funding and district-level inputs. These will be complemented in Uganda by school-level outcome data collected by EPRC researchers in collaboration with District Inspectors of Schools.

Methods for data analysis. The basic methodology will be a difference-in-differences model exploiting the randomized design of the intervention. In simplest form, this approach compares mean changes in academic performance between treatment and control schools.

C.b. The experiment/intervention (experimental projects only)

C.b.i. What experiment/intervention will you do?

The goal of the study is to work with the Ministry of Education in Kenya and Uganda, respectively, to develop and test scalable policy options that could be employed to improve student performance by aiding and strengthening the role of School Management Committees (SMCs). To this end, we propose to pilot and test three broad interventions detailed below.

Scorecards. The first intervention, to be applied in both countries, is the use of a School Performance Scorecard as a community-based monitoring tool. This scorecard would provide a simple framework to guide SMCs in the execution of their statutory monitoring responsibilities; it would be completed by the SMC, as described below. The project would evaluate two ways to use the monitoring tool, differing in their cost and design implications; these are referred to as top-down and bottom-up accountability designs.

The SMC scorecard would provide a forum for the gathering of information on teacher performance and the day-to-day management of the school, as overseen by the Head Teacher. It would be completed on a termly basis by the SMC as a whole, though some dimensions of the information contained on the scorecard would be gathered on a periodic basis throughout the term (e.g., teacher attendance and pedagogical practices). Scorecards would provide a concrete means of monitoring school performance.

Key dimensions of information to be gathered on the scorecard would comprise:

• Teacher performance, including

o Teacher attendance/absenteeism;

o Preparation of schemes and lesson plans; consistency of teaching practices with plans;

o Classroom activities, measured via direct classroom observation and spot checks;

• Financial administration of the school, including

o Correspondence between budgeted and actual expenditures;

o Appropriate input purchases – availability of teaching materials and use of alternatives where appropriate;

o Perceived wastage or inefficiency in budget use;

• School facilities and maintenance, including

o Hygiene and sanitation conditions;

o Classroom conditions and repairs.

It is also suggested that the scorecards be used to provide information to school management committees regarding the performance of other schools, as a means of benchmarking their own performance.

School Meals. The second intervention, unique to Uganda, will provide matching grants for locally funded school lunches. Throughout the initial field visits of team members, problems of organizing school lunches were raised by parents, teachers, and SMC members as a severe constraint to performance in UPE schools. With this motivation, the second intervention explores institutional mechanisms for mobilizing community resources to support school feeding, while ensuring that pupils cannot be excluded from school on the basis of parents’ ability to pay.

We propose to pilot and evaluate a school meals program designed to address these twin obstacles to provision of this public good in schools. Two aspects of the proposed school-meals arm of the intervention are salient:

• Matching grants. In a subset of schools, the program would offer partial co-funding for school meals. Matching grants would be delivered to school accounts to cover expenditure on school meals, conditional on the school’s successful mobilization of funds (or resources in kind) to cover a share of the costs of meal provision. Such school-based incentive schemes have been used effectively by NGOs in Uganda (e.g., World Vision) to engage communities in construction projects. Successful application to school meals would demonstrate a scalable mechanism by which communities can meet recurring costs for key inputs at low cost to the Ministry of Education and Sports and in a manner consistent with the goal of universal primary education.

• Coordinated decision-making within geographic clusters. The mobility of children across schools poses a collective action problem for community mobilization, as indifferent parents switch out of schools attempting to raise funds for their own projects. The academic migration of marginal pupils is purported to have caused the collapse of school fundraising programs, even where the parents themselves have voted to fund school meals. This can be resolved by creating a form of cooperative decision-making across geographically proximate schools. If neighboring schools simultaneously undertake such a program, then the switching of parents across schools will no longer undermine provision of local public goods. Indeed, to the extent that coordination problems impede community provision of resources to schools, it may not be necessary to provide financial incentives (matching grants) to encourage local fundraising. Such a finding would provide a means to delivery of school meals at little cost to MoES; this can be tested explicitly by including in the study a group of schools that receive the forum for coordination of fundraising, with little or no co-financing component.

Contract Teachers. The third intervention, to be piloted in Kenya in coordination with the Ministry's Directorate of Basic Education, is to provide co-funding to SMCs to hire contract teachers. These contract teachers will address two potential gaps in the primary education system. First, providing extra teachers directly addresses the acute teacher shortage in many Kenyan primary schools. The short supply of centrally hired teachers from the Teacher Service Commission (TSC) has left many schools with overcrowded classrooms and an inability to fill teacher vacancies. However, due to the constraints placed on local fundraising by the Free Primary Education system, many schools find it impossible to raise monies to hire local teachers when TSC teachers are unavailable. The subsidized contract teachers would fill this gap.

Second, providing for contract teachers will test the importance of local control over teaching staff in creating accountability and quality service provision. One complaint expressed during field visits by the research team was that District Education Officers and the TSC are unresponsive when communities complain about poor performance by a teacher. Under-performing teachers are seen as invulnerable and, even if parents are successful in having them transferred, will simply be sent to a new school. The hypothesis being tested is that local contract teachers – hired on a fixed-term, renewable contract in which the SMC has control over hiring and firing – will be more responsive to local demands.

C.b.ii. How will this work?

C.b.ii.1 Who are the beneficiaries?

The primary beneficiaries will be primary-school age children in the catchment area of the treatment schools.

C.b.ii.2. How will they benefit?

The main benefit will accrue via improved learning outcomes and better quality of education delivery. This should result in more children graduating from school with better test scores, lower dropout rates, higher transition rates to secondary school and the necessary skills to compete successfully in the labor market.

C.b.ii.3. How do you draw the control group to which you compare the treated group?

Control schools will be chosen randomly and simultaneously with the treatment. Successful randomization will be tested along the following dimensions prior to the interventions: school size, class size, teacher qualification and pre-intervention test scores. In Kenya, the SACMEQ sample stratification will ensure that the treatment and control schools have equal representation from the urban and rural areas in each province. In Uganda, the sample will be drawn from primary data collected at district level. Schools will be drawn to be representative of rural sub-counties, with ‘blocking’ of the sample based on pre-intervention test results to ensure representation of low-performing (typically remote) schools.
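By way of illustration, such balance tests could be implemented along the lines of the following minimal sketch (Python, assuming a school-level pandas data frame; the 'treated' indicator and covariate column names are hypothetical placeholders):

    from scipy import stats

    def balance_check(df, covariates=("school_size", "class_size",
                                      "teacher_qualification", "baseline_score")):
        # df: one row per school, with a 0/1 'treated' indicator and the
        # pre-intervention covariates above (placeholder column names).
        for cov in covariates:
            treated = df.loc[df["treated"] == 1, cov]
            control = df.loc[df["treated"] == 0, cov]
            t_stat, p_val = stats.ttest_ind(treated, control, equal_var=False)
            print(f"{cov}: difference in means = "
                  f"{treated.mean() - control.mean():.3f}, "
                  f"t = {t_stat:.2f}, p = {p_val:.3f}")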

The impact evaluation will use a “cross-cutting” design, in which the treatment group for each intervention is assigned orthogonally: half of the schools are offered the co-funding intervention (contract teachers in Kenya; matching grants in Uganda); half the schools participate in the CBMS intervention; one-quarter participate in both, and one-quarter participate in neither.
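For concreteness, the orthogonal assignment could be drawn within strata (province by rural/urban location in Kenya; region by baseline-performance block in Uganda) along the lines of the following sketch in Python. The school identifiers and stratum labels are placeholders; the final protocol will be agreed with the ministries.

    import random

    # The four cells of the cross-cutting design: (co-funding, CBMS).
    # One quarter of the schools falls in each cell.
    CELLS = [(0, 0), (0, 1), (1, 0), (1, 1)]

    def assign_cross_cutting(schools, stratum_of, seed=2008):
        # schools: list of school identifiers; stratum_of: dict mapping each
        # school to its stratum label. Returns school -> (co_funding, cbms).
        rng = random.Random(seed)
        by_stratum = {}
        for s in schools:
            by_stratum.setdefault(stratum_of[s], []).append(s)
        assignment = {}
        for members in by_stratum.values():
            rng.shuffle(members)
            for i, s in enumerate(members):
                assignment[s] = CELLS[i % 4]  # rotate through the four cells
        return assignment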

C.b.iii. Who will do it?

All the experimental interventions will be conducted with the Ministry of Education and implemented by Ministry staff. In Kenya, we are working with the Directorate of Quality Assurance and Standards (DQAS) and the Directorate of Basic Education (DBE); the DQAS is in charge of all data collection, and the DBE will organize the actual interventions. In Uganda, we are working with the Commissioner for Education Planning and with the District Education Officers in the four districts to carry out the intervention. Our research team will collaborate in the design of the interventions, data collection and supervision in the field.

C.b.iv. What potential problems do you foresee and how will you overcome these?

Working with the Ministry of Education provides enormous scope for scaling up but also brings risks in terms of political uncertainty and bureaucratic inflexibility. Specifically, given the approaching Kenyan elections, it is unclear that the commitments to collaborate made by the Minister and PS will carry over to a new administration. To address this we are working hard to make as many contacts as possible with mid-level administration in the MoE prior to the elections, as these officers will remain in place regardless of the outcome of the election. Equally, we have scheduled additional time in January and February to renew our contacts and secure commitment at the highest level should the election result in a change of government.

A general concern with any intervention of this kind is non-compliance. To ensure compliance in the treatment schools, we will work closely with Ministry staff and embed the implementation and supervision of the interventions in the established monitoring systems of the respective ministries and districts.

C.c. Data collection methods (experimental projects only)

C.c.i. Will baseline data be collected, or will you use existing data for the baseline?

In Kenya, our research team has already received approval from the Minister of Education and Minister of Planning, as well as the Permanent Secretary in the MoE, to access a variety of administrative data sources including:

• Education Management Information System (EMIS): the EMIS database contains information on enrolment, achievement, staffing, absenteeism, etc., collected by the District Education Offices (DEO) at the end of each term for every school in the country.

• Kenya Certificate of Primary Education (KCPE) exam: the database of KCPE exam scores administered by the Kenya National Examinations Council (KNEC) contains scores for every school in the country, disaggregated by gender and age group.

• Panel Inspections: the Directorate of Quality Assurance and Standards periodically conducts detailed inspections of the management of all schools in Kenya, including the conduct of the School Management Committee.

• SACMEQ: this database combines pupil exam scores in literacy and numeracy with detailed data on school resources, teacher characteristics and students’ socioeconomic backgrounds. This data will serve as part of our baseline for the evaluation. For subsequent data collection we will return to the same schools to create a panel of schools.

• Kenya Integrated Household Budget Survey (KIHBS): the KIHBS is a comprehensive household socio-economic survey covering all districts of Kenya.

In Uganda, administrative data available from District Education Officers (already collected for two of the four proposed treatment districts) provide school-level data on Primary Leaving Examinations, teachers, classrooms, and enrollment. These will provide the basis for sample selection and additional controls for the analysis. Additional baseline survey data will be collected to provide characteristics of communities, schools, and SMCs.

C.c.ii. What population will be studied?

The sample will represent the population of students in government-funded primary schools in all provinces of Kenya, and the population of students in rural government-funded schools in four poorly performing districts, spanning all regions of Uganda.

C.c.iii. Sampling design, sample size and statistical power

In Kenya, following the SACMEQ, our sample will be stratified along two dimensions: provinces and rural/urban location. As stated in section (C.a), the SACMEQ survey – which will constitute the baseline for our study – used these two stratifications to randomly sample 196 schools in 57 districts covering all 8 provinces of Kenya. All follow-up testing in the SACMEQ schools will include at least 20 students per school.

Based on previous studies, the SACMEQ research team calculated minimum sample sizes using an intra-school correlation coefficient of 0.4 for academic achievement. Assuming a minimum class size of 20 pupils, this implied that at least 172 schools were required in order to provide 95 percent confidence limits of ±0.1 standard deviations in the school mean of pupil achievement.

In Uganda, a two-tiered selection procedure will be used, first to select treatment and control sub-counties and then to select study schools within each sub-county. A total of 100 schools will be chosen, balanced across regions and – within regions – across the four study groups: pure control, CBMS only, co-funding only, and both interventions. Statistical power will be strengthened by comparable survey instruments that allow pooling of the analysis across countries, and by expansion of the set of treatment schools in a second year.

Power Calculations

Estimates of statistical power for the experiment presented below explicitly take into account the fact that randomization will be done at the school rather than the individual level, as well as the sample size constraints imposed by the cross-cutting design.

The cross-cutting design can be captured in the following model, where Y measures individual exam performance, i indexes individuals and j indexes schools, TEACH is a treatment dummy for receiving an additional teacher and, similarly, CBMS is a dummy denoting participation in the second intervention:

Y_ijt = β_0 + β_T TEACH_j + β_C CBMS_j + β_X (TEACH_j × CBMS_j) + v_jt + w_ijt        (1)

The error term is decomposed into a common group element, v_jt, with variance τ², and an individual-specific component, w_ijt, with variance σ².

Because the primary outcome of interest – pupil exam scores – is measured at the individual level, while randomization will be done at the group level, the basic formula underlying the power calculations will be the minimum detectable effect (MDE) under grouped randomization, as given by Bloom (2005):

MDE = M_{J-2} √[ (τ² + σ²/n) / (P(1−P)J) ],   with M_{J-2} = t_{α/2} + t_{1−κ}        (2)

where n is the number of pupils examined per school, J denotes the number of schools in the sample, P is the proportion of the sample treated, α is the desired significance level, κ is the power of the proposed test, and ρ² = τ²/(τ² + σ²) is the intra-class correlation of test scores. We will also refer to the minimum detectable effect size (MDES), which is the MDE divided by (τ² + σ²)^{1/2} and measures impacts in standard deviations of the outcome.

Armed with this formula, calculating the power of the cross-cutting design hinges primarily on the assumptions one is willing to make. If we are willing to ignore the possibility that there are important interaction effects between the interventions (i.e., to assume β_X = 0), the MDE for the simple treatment effect of either contract teachers or the CBMS will be given by equation (2) with P = 0.5. It is possible to go one step further, however, and test the hypothesis that interactions don't matter by testing the significance of β_X. The MDE for this parameter is also given by equation (2), but with only one-fourth of the sample receiving the dual treatment we have P = 0.25, rendering a larger MDE relative to the simple treatment effect.
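To make the application of equation (2) concrete, the following sketch (Python, using a normal approximation to the multiplier M_{J-2}; the design parameters are those assumed in the power tables below) reproduces the minimum detectable effect sizes reported for the two countries:

    from math import sqrt
    from scipy.stats import norm

    def mdes(J, n, P, rho, alpha=0.05, kappa=0.80):
        # Minimum detectable effect size under grouped randomization (Bloom 2005),
        # where rho = tau^2 / (tau^2 + sigma^2) is the intra-class correlation.
        M = norm.ppf(1 - alpha / 2) + norm.ppf(kappa)  # normal approx. to M_{J-2}
        return M * sqrt((rho + (1 - rho) / n) / (P * (1 - P) * J))

    # Uganda PLE design: 100 schools, ~30 exam-sitters per school, ICC = 0.41.
    print(round(mdes(J=100, n=30, P=0.5, rho=0.41), 2))   # ~0.37 s.d.
    # Kenya SACMEQ design: 196 schools, 28 pupils per school, ICC = 0.45.
    print(round(mdes(J=196, n=28, P=0.5, rho=0.45), 2))   # ~0.27 s.d.
    print(round(mdes(J=196, n=28, P=0.25, rho=0.45), 2))  # ~0.32 s.d.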

The Kenyan intervention will involve an additional cross-cutting dimension: full versus partial co-funding of the contract teachers. The statistical power available to analyze this distinction can be assessed in the same fashion described above.

Uganda

In Uganda, the study proposes to measure student performance outcomes using PLE results and standardized tests to be administered at both upper- and lower-primary levels. Actual PLE results at the student level from 2007 are available to perform power calculations. Two measures of student outcomes in the PLE are considered here: first, the aggregate score (and its standardized transformation, the z-score), and second, the probability that a registered pupil receives a passing mark.

Table 2 presents the estimated MDEs for alternative designs. Column (1) gives the basic design MDEs for PLE outcomes: approximately 30 students sitting the exam per school (parameter n), 100 schools (parameter J), and P=0.5, representing 50% of the sample treated. The MDEs on the subject scores are given first: these are equal to 0.69 and 0.53 points on the English and maths scores, respectively, or roughly 35% of a standard deviation. The MDE for the aggregate score is 2.46 points (on a scale from 4 to 36), which is approximately 37% of a standard deviation. Finally, the MDE for the probability that a registered student receives a passing grade is 0.12, which corresponds to a 12 percentage-point increase in the pass rate, or 30% of a standard deviation. It is important to emphasize that each of these represents a conservative estimate of the MDE for the research design, with pre- and post-intervention data, since individual and group-level control variables can improve efficiency. Pre-intervention standardized test scores and school characteristics are likely to be particularly important in this regard.

Detecting effects in the PLE is made more difficult by the limited number of students who sit this exam. Since PLE results can be made available for the four study districts as a whole, and not just the 100 study schools, we can also compute the MDE when treated schools are compared with a larger random sample of the population of schools in their respective districts. Column (2) presents analogous results for the MDE when the sample comprises all PLE-sitting students in 400 schools across the four districts (these districts have an average of around 200 schools each). Though individual or school-level covariates will not be available to add efficiency to this specification, the MDEs are considerably smaller as a consequence of the larger sample.

Table 2. Minimum detectable effects for alternative outcome measures and designs, Uganda

|                  |(1)               |(2)                |(3)               |
|Design parameters |J=100, n=30, P=.5 |J=400, n=30, P=.125|J=100, n=50, P=.5 |
|Controls          |district, gender  |district, gender   |district, gender  |
|English           |0.69 (0.36)       |0.52 (0.27)        |0.68 (0.35)       |
|Math              |0.53 (0.34)       |0.40 (0.25)        |0.52 (0.33)       |
|Avg score         |2.46 (0.37)       |1.84 (0.28)        |2.43 (0.37)       |
|Pass indicator    |0.12 (0.30)       |0.09 (0.22)        |0.12 (0.29)       |

Note: Minimum detectable effect sizes in parentheses.

Column (3) provides indicative results for the ability to detect student performance impacts with standardized tests. The crucial assumption here is that the “intra-class” correlation – essentially the fraction of the variation in student outcomes that is explained by school-level fixed effects – is the same across both the PLE results and the standardized tests. In the 2007 PLE results, the intra-class correlation coefficient is equal to 0.41, after conditioning on district and gender of the student. (The same coefficient is equal to 0.48 in the raw scores, but some of what might be perceived as school-level variation in these raw scores is explained by district-level factors or student characteristics that will be controlled for with district fixed effects or first differences in our analysis.) Under the assumption that the intra-class correlation is the same for the standardized tests employed, it is possible to plug in the design parameters for the use of standardized tests. Although the raw test scores will be different, the minimum detectable effect sizes, which are standardized, should be comparable. In pooled form, if there are 50 standardized test results per school, then the minimum detectable effect size would be approximately 0.37 for the average score (Column (3), row (3)). The gain from increases in the number of students per school is relatively small due to the high intra-class correlation. However, the existence of baseline data will increase the efficiency of this test as well.
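The intra-class correlation itself can be estimated from pupil-level records with a random-intercept model, as in the following sketch (Python with statsmodels; the data frame and its column names are hypothetical placeholders). A calculation of this kind underlies the conditional intra-class correlation of 0.41 reported above.

    import statsmodels.formula.api as smf

    def intra_class_correlation(df):
        # Random-intercept model of pupil scores on district fixed effects and
        # gender, with a school-level random effect. df has columns 'score',
        # 'district', 'gender' and 'school' (placeholder names).
        m = smf.mixedlm("score ~ C(district) + C(gender)",
                        data=df, groups=df["school"]).fit()
        tau2 = float(m.cov_re.iloc[0, 0])  # between-school variance
        sigma2 = m.scale                   # within-school (pupil) variance
        return tau2 / (tau2 + sigma2)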

Kenya

We can calculate the power of the design based on one of two data sources: the official scores on the Kenya Certificate of Primary Education (KCPE) exam administered by the Kenya National Examinations Council (KNEC) at the end of standard 8, or scores from tests administered to a sample of schools by the Ministry of Education as part of SACMEQ. The advantage of the KCPE scores is that this exam will be one of the outcome indicators used in the proposed study. The disadvantage is that, at present, we only have school-level aggregate exam scores on which to base power calculations, separated by boys and girls. Because the intra-class correlation cannot be estimated from aggregated data, relying on these scores would lead to a severe underestimation of the power of our design. We therefore turn to the SACMEQ data, for which we have pupil-level information; however, the exam differs from that which will be used in the eventual study. (Due to the low correlation between school-level outcomes on the KCPE and SACMEQ exams, these results should be interpreted with caution.)

For the SACMEQ data, we use three outcome measures: standardized scores for reading and mathematics, and the average across subjects. (The latter measure is the simple average of the standardized scores for the subject exams, each given equal weight.) Calculations of the MDES are based on a power of .80 and a significance level of 5%. We take the number of groups J to be 196 and assume a group size (i.e., pupils per school) of 28, the average from the KCPE data.

Table 3. Minimum detectable effect sizes for alternative outcome measures and test parameters, Kenya (SACMEQ)

|Outcome  |Estimated intra-class correlation |MDES (P=.5, J=196, n=28) |MDES (P=.25, J=196, n=28) |
|Reading  |.38                               |.28                      |.32                       |
|Math     |.46                               |.25                      |.29                       |
|Average  |.45                               |.27                      |.32                       |

Table 3 reports the MDESs for the Kenyan case. The third column gives the basic design MDESs for SACMEQ scores, assuming that 28 students sit the exam per school (parameter n), 196 schools (parameter J), and P=0.5, representing 50% of the sample treated. The MDES on scores is about a quarter of a standard deviation. The final column reports the MDES for the cross-cutting design, in which P=0.25; in this case, the MDES rises to as much as 0.32 standard deviations. Again, it is important to emphasize that each of these represents a conservative estimate of the MDES for the research design, given that the availability of baseline data and covariates will substantially increase the precision of our estimates.

C.c.iv. Key data to be collected (and how this will be done)

In addition to the existing administrative data sources which will constitute the primary baseline in Kenya, the MoE has also agreed to collaborate with our research team in the design of the upcoming NASMLA. This will provide an opportunity to administer additional numeracy and literacy tests, covering students in lower grades, which will be repeated as part of a post-intervention follow-up survey. Original baseline data on communities (including demographics, access to services, and proxies for income), schools, and SMC characteristics (demographics and pre-intervention activity levels) will be collected in Uganda by EPRC-supervised research teams.

C.d. Modeling and testing

C.d.i. What model/idea will you test with these methods

The central hypotheses that the research program aims to test are:

1. Hiring additional contract teachers in Kenya will have a significant effect on student learning achievements. This may act simply by lowering teacher-pupil ratios or by changing the nature of the contract.

2. Providing school lunches or meeting other input shortages in Uganda will have a significant effect on student attendance and learning.

3. Giving parents a financial stake in the school (soliciting co-funding to hire contract teachers) will increase local accountability and community monitoring relative to pure top-down funding.

4. Informing school management committees of parents’ rights and teachers’ responsibilities and providing them with tools to monitor school performance will improve school management and student performance.

5. Community-based monitoring systems are particularly effective when parents have a financial stake in the school – i.e., there are complementarities between the interventions.

C.d.ii. What empirical methods will you use to do this testing?

The basic framework for impact evaluation will be a difference-in-differences model exploiting the randomized design of the intervention. In its simplest form, this approach compares the mean change in academic performance between treatment and control schools. We are also interested in the impact of the interventions on specific subpopulations of students within the school – e.g., boys and girls, high and low performers on the baseline exam, etc. – and subsets of schools – e.g., rural and urban, high and low socioeconomic status, high and low class size, etc. These will be estimated by interacting the treatment with the de-meaned measure of the dimension of interest.

The cross-cutting sample design will also allow us to test for complementarities between the two interventions by comparing schools that have received both interventions to those receiving only one or the other.
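In estimable form, the core specification could be implemented as in the sketch below (Python with statsmodels; the variable names are illustrative). The full interaction of the survey-round dummy with the two treatment dummies delivers the simple difference-in-differences effects and the complementarity test in a single regression.

    import statsmodels.formula.api as smf

    def did_estimates(df):
        # df: pupil-level data with columns 'score', 'post' (0/1 survey round),
        # 'teach' and 'cbms' (0/1 school-level treatment dummies), and 'school'
        # (placeholder names). 'post * teach * cbms' expands to all main
        # effects and interactions of the three dummies.
        model = smf.ols("score ~ post * teach * cbms", data=df)
        # Cluster standard errors at the school level, the unit of randomization.
        return model.fit(cov_type="cluster", cov_kwds={"groups": df["school"]})

The coefficients on post:teach and post:cbms are the simple difference-in-differences treatment effects, and the coefficient on post:teach:cbms tests for complementarities between the two interventions.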

C.d.iii. What empirical problems do you foresee?

There are inherent challenges in using impact estimates from a pilot study to predict the impact of a nation-wide program which, if successful and scaled up, would target all schools. During the pilot, it is possible that the additional resources spent on treatment schools will attract students from nearby control schools. The direction of the bias introduced by this endogenous school switching is somewhat ambiguous ex ante. On the one hand, the influx of students will to some extent undo the reduction in class size achieved by hiring a contract teacher, leading us to underestimate the impact of a nationwide program. On the other hand, if these new students are above average in motivation – as evinced by their willingness to travel to a better-financed school – they may bias the estimated treatment effects upward. We can partially overcome this bias by restricting the sample to students enrolled pre-intervention when estimating impacts. However, if the new, above-average students have positive peer effects on the originally designated treatment group, the upward bias will remain. Finally, one way to test whether this bias is important is to compare control schools in the vicinity of treatment schools with a broader cross-section of schools that do not receive treatment (e.g., the population of all schools and test scores from the EMIS database). If this sorting bias matters, proximate control schools should experience a negative externality from the program and under-perform relative to the national average. To address a specific contamination concern that arises from implementing the intervention in conjunction with sub-county governments, treatment and control schools will be clustered within a number of distinct sub-counties.

C.e. Human subjects concerns

C.e.i. Any ethical, social, gender or environmental issues or risks which should be noted

Schools in Uganda have typically ceased raising funds for lunches and other services in the absence of student exclusion as an enforcement mechanism, while schools which hire local contract teachers in Kenya raise funds through nominally voluntary contributions from the parents in the school. Formally, under FPE no head teacher may turn away any student from a primary school. However, there is anecdotal evidence that all parents are expected to contribute in order to enroll their children. While these contributions are only a small fraction of the fees charged prior to FPE, they may nevertheless constitute an obstacle to enrolment for some extremely poor households.

Given this context, there is a potential concern that schools which receive partial funding for a contract teacher will be encouraged to demand additional contributions from all parents – potentially discouraging enrolment in some cases. To overcome this risk, it will be of paramount importance that the intervention is accompanied by clear guidance on guaranteeing unrestricted access to primary school and close monitoring from the Zonal Quality Assurance Officer in Kenya, and the District Inspector of Schools in Uganda, to ensure that the principles of FPE are upheld.

C.e.ii. Explanation of how project will comply with requirements of local ethics review boards (e.g., how will you do informed consent; how will you ensure that no one comes to any harm; how will you ensure confidentiality etc…)

There are two levels of informed consent that will be relevant for this project: consent to treatment, and consent to participate in the survey and examinations. Consent to treatment will be solicited at the school and community level through group meetings with the school management committee and the head teacher. In the case of the contract teacher intervention, any agreement to provide co-funding would require approval of the community at large. Community meetings of this kind are periodically convened by the SMC to take such decisions.

Regarding consent to participate in the survey and examinations, all data collection under the project will be done as part of the MoE’s and districts’ regular monitoring systems and is therefore governed by the legal reporting requirements of schools.

As far as ensuring that no one comes to any harm, the interventions themselves present no downside risk to the affected students. Unlike a clinical drug trial with the risk of adverse side effects or complications, this project will simply add resources to the school’s budget and train SMC members in new skills.

The final data sets for analysis will ensure the complete confidentiality of all students, teachers and SMC members. Each individual will be assigned a unique identifier that can only be linked to their name by MoE staff.

D. Consultation and Dissemination Strategy

D.a. How, in the elaboration and execution of your project, will you consult with policy makers, civil society representatives and other parties interested in the research issues you examine?

The guiding principle behind the development of the current proposal has been that the interventions – and as much as possible, the process of implementing and analyzing them – should be replicable at a national level in Kenya and Uganda, in terms of both financial and political feasibility.

To achieve this goal, we have worked closely with the relevant Ministries of Education in the elaboration of our proposal. Admittedly, the education policymaking process in both Kenya and Uganda is fairly centralized. (Indeed, this is a focus of our project!) Thus many of the key consultation and dissemination partners are at the national level. In Kenya we have met with the Kenya Institute of Education (in charge of curriculum) and the Teacher Service Commission (placement and continuing education for teachers). Both of the interventions to be studied were initially proposed by the MoE’s Directorate of Basic Education, and our data collection efforts are all to be carried out by MoE staff in the Directorate of Quality Assurance and Standards and economists on secondment from the Kenya National Bureau of Statistics. In Uganda, our research team has met on multiple occasions with the Commissioner for Education Planning and has identified a counterpart in his office with whom to collaborate.

Going beyond the national level, both country teams have conducted preliminary field visits to meet with district staff, representatives of religious organizations sponsoring primary schools, parents, teachers, head teachers and other members of School Management Committees.

In Kenya, our research team has already conducted a 2-day training workshop with MoE staff covering program evaluation methodologies – including differences-in-differences, randomized controlled trials, suggestion designs, power calculations, etc. – and doing hands-on work with Stata, to ensure that MoE staff can play an active role in every step of the project.

D.b. How and where research results will be disseminated to academics, policy-makers and the public: publications, policy briefs, seminars, conferences, etc.

We feel strongly that the most effective way to help policymakers appreciate the relevance of rigorous policy analysis is to involve policymaking institutions in the research process. Thus the most important form of dissemination for this project is already underway – through the inclusion of Ministry staff in the fieldwork in Uganda and the formation of a collaborative relationship with the economists in the Ministry in Kenya.

Clearly an additional goal of the project is to produce high-quality, publishable academic research. This research will be disseminated through the working paper series of the University of Nairobi and EPRC, as well as through the African Economic Research Consortium.

Looking beyond the Ministries and academia, creating a consensus behind difficult policy reforms will require support from civil society. In this category, ‘sponsors’ are key players in the primary education system in both countries: these are religious or charitable organizations that contribute time and resources to managing and overseeing public schools. Once the project is underway we propose to make direct contact with representatives from the largest sponsoring organizations (the Catholic and Anglican churches, etc.), and once the follow-up survey is completed, to organize a workshop to share the lessons for school governance emerging from the study.

E. The study team

E.a. Principal investigator; brief bio and explanation as to why they are well suited to lead this project.

Co-Principal Investigator, Kenya: Dr. Germano Mwabu is Professor of Economics at the University of Nairobi. He has previously held teaching or research positions at Yale University, Cornell University and the United Nations University (WIDER), and has been an active organizer of the African Economic Research Consortium (AERC). He has published more than three dozen refereed journal articles and edited three books. In working with the Ministry of Education, he will also draw on a wide network of former students and supervisees now in the civil service.

Co-Principal Investigator, Uganda: Mr. Lawrence Bategeka is a Research Fellow at the Economic Policy Research Centre (EPRC) in Kampala, Uganda. Prior to his research career, Mr. Bategeka was a school teacher and headmaster in the Ugandan school system.

E.b. Other key research staff and their roles. List indicating age (or if they are under 30), sex, prior training and experience in the issues for each of the team members.

Mrs. Racheal Nakhumicha Musitia (female, age 34) is a former secondary school teacher, now pursuing a Master’s degree in economics at the University of Nairobi. Given her experience in Stata and various database management programs, she will play a key role in the analysis of administrative data from the MoE.

Ms. Mumia Phyllis Machio (female, age 24) is also a Master’s student in the School of Economics at the University of Nairobi. She intends to pursue a Ph.D. in Economics, for which participation in this study will provide directly relevant experience.

Ms. Alice Muthoni Ng’ang’a (female, age 26) is a lecturer at Strathmore College and is simultaneously pursuing a Master’s degree in Economics at the University of Nairobi. She is also a fully qualified accountant.

Dr. Frederick Mugisha is a Senior Research Fellow at EPRC. He holds a PhD in Health Economics from the University of Heidelberg. He joined the Centre in July 2007 from the African Population and Health Research Center, where for five years he led research and policy dialogue on the economics of health and education in east and southern Africa. He also has national and international experience, having worked with the Global Programme on Evidence for Health Policy at World Health Organization headquarters in Geneva, the Medical Research Council (UK) programme on HIV/AIDS, and the Expanded Programme on Immunization in Uganda.

Ms. Madina Guloba joined the EPRC in July 2006 as an Assistant Research Fellow. She holds a Master of Arts degree in Economics obtained from the University of Dar Es Salaam under the Collaborative Masters Program of the African Economic Research Consortium.

Mr. Nicholas Kilimani joined the EPRC in July 2006 as an Assistant Research Fellow. He holds a Master of Arts degree in Economics obtained from the University of Botswana under the Collaborative Masters Program of the African Economic Research Consortium.

Mrs. Winnie Nabiddo joined the EPRC in July 2006 as an Assistant Research Fellow. She holds a Master of Arts degree in Economics obtained from the University of Nairobi under the Collaborative Masters Program of the African Economic Research Consortium.

E.c. Collaborators/consortium arrangements

E.c.i. Are there collaborators involved?

The main collaborators are the Ministry of Education in Kenya and the Ministry of Education in Uganda (along with the four district education offices). The research team is also working closely with Prof. Mwangi Kimenyi, who divides his time between the University of Connecticut and research work in Nairobi. The research team also receives assistance from junior researchers based at the Centre for the Study of African Economies (CSAE), University of Oxford – Tessa Bold, Justin Sandefur and Andrew Zeitlin.

E.c.ii. Who does what?

The MoEs will have primary responsibility for implementing the intervention, collecting the data and monitoring compliance. The costs of implementing and evaluating the intervention will be financed by the research project. Prof. Kimenyi has been involved in the design of the research agenda and has facilitated contacts with all the major stakeholders in Kenya. The Oxford collaborators will provide field support during project implementation as well as some technical assistance in the analysis of the project data.

E.c.iii. How will you resolve disputes?

The project aims to prevent disputes by agreeing on clear budgetary allocations in advance. The research budget will be divided equally between the two applying institutions, the University of Nairobi and EPRC, and will be used to fund the interventions and the research costs of the participating researchers at each institution. The Ministries will not be asked to fund any project activities over and above their normal operations (see also section C.b.iv on how we aim to address potential problems in the collaboration with the Ministry). Both Professor Kimenyi and the Oxford collaborators are independently funded by their home institutions. In terms of research collaboration, it is worth noting that Professor Kimenyi and Professor Mwabu have a long-standing relationship via the African Economic Research Consortium, and that EPRC, CSAE and the University of Nairobi have worked together as partners in the DFID-funded research consortium on Institutions for Pro-Poor Growth.

E.d. Past, current or pending projects in related areas involving team members: list with name of funding institution, title of project, list of team members involved.

Professor Mwabu and Mr. Bategeka are part of a consortium funded by DFID, “Improving institutions for pro-poor growth”, directed by Paul Collier. This consortium involves researchers from partner institutions in six developing countries together with CSAE at Oxford; other collaborators from CSAE are Stefan Dercon, Tessa Bold, Justin Sandefur, Roxana Gutierrez and Andrew Zeitlin. Via this consortium, Professor Kimenyi and Professor Mwabu, together with collaborators at CSAE, have been able to lay the groundwork for some of the work proposed here, including basic capacity building on the principles of randomized evaluation. CSAE will be able to fund any assistance to the project provided by its staff, but funding for the actual intervention is currently not available, hence this proposal.

F. Timeline

Timeline for Kenya:

May 2008 – July 2008: Finalize intervention, prepare implementation, training of project staff.

August 2008 – September 2008: Baseline testing, start of intervention in the field.

October 2008 – April 2009: Intervention in the field.

November 2008 – March 2009: Analysis of baseline achievement data and administrative data records. Presentation of baseline results.

May 2009: Post-testing and data collation/collection.

June 2009 – October 2009: Analysis and presentation of results.

November 2009 – January 2010: Dissemination of results.

Timeline for Uganda:

May 2008 – June 2008: Finalize intervention, prepare implementation, training of project and district staff.

June 2008: Baseline testing, start of intervention in the field.

July 2008 – December 2009: Intervention in the field.

November 2008 – January 2009: Analysis of baseline achievement data, Primary Leaving Exam results (December 2008), intermediate outcomes (attendance, funding allocations), and administrative data records. Presentation of baseline results. Initial district dissemination activities.

November 2009 – December 2009: Post-testing and data collection, including the 2009 Primary Leaving Exam.

December 2009 – January 2010: Analysis and presentation of results.

January 2010 – February 2010: Dissemination of results.

G. Budget

See attached PEP_Budget.xls file.

H. Bibliography

Banerjee, A., Banerji, R., Duflo, E., Glennerster, R., and Khemani, S. (2006). “Can Information Campaigns Spark Local Participation and Improve Outcomes? A Study of Primary Education in Uttar Pradesh, India.” World Bank Policy Research Working Paper No. 3967.

Banerjee, A., Cole, S., Duflo, E., and Linden, L. (2006). “Remedying Education: Evidence from Two Randomized Experiments in India.” CEPR Working Paper No. 5446.

Banerjee, A., Deaton, A., and Duflo, E. (2004). “Wealth, Health, and Health Services in Rural Rajasthan.” American Economic Review, 94(2): 326–330.

Banerjee, A. and Duflo, E. (2006). “Addressing Absence.” Journal of Economic Perspectives, 20(1): 117–132.

Bjorkman, M., Reinikka, R., and Svensson, J. (2006). “Local Accountability.” Mimeo, Institute for International Economic Studies, Stockholm.

Bloom, H. S. (2005). “Randomizing Groups to Evaluate Place-Based Programs.” In Learning More from Social Experiments, chap. 4, pp. 115–172. Russell Sage Foundation.

Chaudhury, N., Hammer, J., Kremer, M., Muralidharan, K., and Rogers, F. H. (2006). “Missing in Action: Teacher and Health Worker Absence in Developing Countries.” Journal of Economic Perspectives, 20(1): 91–116.

Duflo, E., Dupas, P., and Kremer, M. (2007). “Peer Effects, Pupil-Teacher Ratios, and Teacher Incentives: Evidence from a Randomized Evaluation in Kenya.” Mimeo, Abdul Latif Jameel Poverty Action Lab (JPAL), Massachusetts Institute of Technology.

Fehr, E. and Goette, L. (2007). “Do Workers Work More if Wages Are High? Evidence from a Randomized Field Experiment.” American Economic Review, 97(1): 298–317.

Glewwe, P., Kremer, M., and Moulin, S. (2002). “Textbooks and Test Scores: Evidence from a Prospective Evaluation.” Mimeo.

Holmström, B. and Milgrom, P. (1991). “Multitask Principal-Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design.” Journal of Law, Economics, and Organization, 7: 24–52.

Kremer, M. and Vermeersch, C. (2004). “School Meals, Educational Achievement, and School Competition: Evidence from a Randomized Evaluation.” World Bank Policy Research Working Paper No. 3523.

Kreps, D. (1997). “Intrinsic Motivation and Extrinsic Incentives.” American Economic Review, 87(2): 359–364.

Muralidharan, K. and Sundararaman, V. (2006). “Teacher Incentives in Developing Countries: Experimental Evidence from India.” Mimeo, Harvard University.
