
From conception to implementation and scale-up: the role of evaluation throughout the program lifecycle 

A blog by Anu Rangarajan, Student Liaison Officer for the Centre for Evaluation.
Image: A male nurse delivering health education to clinic attendees, Uganda

Between 2015 and 2021, approximately $15 billion of development assistance went towards supporting maternal, newborn and child health each year. Various interventions were tested and implemented to reduce maternal and child mortality and morbidity in low-resource settings. Despite this vast effort, close to 300,000 women died during and following pregnancy in 2020 alone, and in the same year about 13,700 children under the age of five died every day. Most of these maternal and child deaths were from preventable and treatable causes.

Every year and with every major health intervention, the story is the same. Billions of dollars are spent on water and sanitation, yet a large share of the world’s population still lacks access to these services; there is increased global focus and spending on nutrition, yet millions of children are affected by stunting or wasting; billions of dollars are spent on malaria control and elimination, yet millions are infected by malaria each year; and the list goes on. Why?

To find an answer, social scientists have spent the past two decades conducting impact evaluations to identify interventions that work. Howard White, Director of Evaluation and Evidence Synthesis at the Global Development Network, describes the four waves of the “evidence” revolution: outcomes monitoring in the 1990s, a shift to impact evaluations during the 2000s and 2010s, and systematic reviews and knowledge brokering as the third and fourth waves, respectively. Unsurprisingly, researchers frequently find that the program under evaluation fails to show impacts. Indeed, “proven programs are the exception, not the rule.”

Impact evaluations, however, don’t explain why so many programs fail, or what can be done to improve the chances of a program’s success. Many social programs, particularly in low-resource settings, attempt to tackle complex, deep-rooted problems in contexts with limited financial resources, overburdened staff and low skill levels, and where attitudes and cultural beliefs make change particularly difficult to facilitate. Solutions are neither easy nor straightforward; they require rigorous testing and iteration to identify promising strategies that can be effective and eventually be implemented at scale. Evaluation approaches that focus on impact alone neglect these stages of program development and rollout.

Evaluation can and should play an important role across the entire “life cycle” of a project or program: it can inform decisions that strengthen the program’s design, rigorously assess its implementation, and, if the program proves effective, guide its scale-up. For instance, developmental evaluations are appropriate for interventions being designed in complex situations, and rapid-cycle evaluations can be used to test programmatic elements during the design phase, or to test alternative approaches to service delivery when implementation is not going according to plan. Similarly, implementation efforts are most effective when they are informed by the factors that may emerge to facilitate or hinder them (also known as determinants). In addition, a number of frameworks are available to assess which interventions or concepts should be considered for scale-up and how to build measurement into the different stages of scale-up. Deliberately building these evaluation elements into a project’s life cycle can increase the chances of a successful program.
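To make the idea of a rapid-cycle test a little more concrete, here is a minimal sketch of how a team might compare two alternative delivery approaches on a single, quickly observable outcome. The approaches, cohort sizes and attendance figures are entirely hypothetical and are not drawn from the handbook or any real program; the example simply illustrates the kind of quick, small-sample comparison a rapid-cycle evaluation relies on.

```python
# Hypothetical rapid-cycle comparison of two service-delivery approaches,
# e.g. SMS reminders (A) versus community health worker visits (B),
# judged on a single binary outcome: attendance at a follow-up clinic visit.
# All figures are invented for illustration only.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

attended = np.array([132, 151])   # attendees under approach A and approach B
enrolled = np.array([200, 200])   # cohort sizes for A and B

# Two-proportion z-test: is the gap in attendance rates larger than
# chance alone would plausibly produce?
stat, p_value = proportions_ztest(attended, enrolled)

for label, k, n in zip(("A", "B"), attended, enrolled):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"Approach {label}: {k / n:.1%} attendance (95% CI {lo:.1%} to {hi:.1%})")
print(f"p-value for the difference: {p_value:.3f}")
```

In a real rapid-cycle evaluation, several such short comparisons would typically be run in succession, with the better-performing variant carried forward and refined in the next cycle rather than a single test settling the question.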

Having worked in the evaluation field for over thirty years, and after conducting rigorous impact evaluations of a range of social programs, many of which showed little or no impact, I have come to appreciate the wide range of evaluation methodologies that exist but are, unfortunately, little used in program design and implementation. Some of these methodologies come from slightly different disciplines or areas of specialization and are not readily accessible to evaluation teams. To bridge this gap, I worked closely with leading experts and eminent scholars on program evaluation to produce an edited Handbook of Program Design and Implementation Evaluation (published by Oxford University Press), which brings these methodologies, applicable across the life cycle of a program, under one roof. They include how to conduct developmental evaluations, perform rapid-cycle evaluations, employ implementation science concepts, measure cost-effectiveness, scale up promising interventions, and evaluate change in systems. Incorporating such approaches at the appropriate stages of a program’s life cycle can help improve its chances of success; doing so consistently across social programs should help improve their impact overall. This much-needed handbook, to be released later this summer, will serve as a valuable resource for social researchers, faculty and students, program practitioners, policy analysts, and funders of social programs and evaluations.

If you would like to get in touch with Anu, please email: Anuradha.Rangarajan1@student.lshtm.ac.uk
