In international development, everyone knows that good intentions are simply not enough. It is critical to agree on appropriate aims and then make sure that these can be achieved efficiently.
There are several different ways to achieve development goals. Take malaria, for example: approaches might include investing in vector control (reducing numbers of malaria-carrying mosquitoes); ensuring that people can access bednets; providing education on how to avoid contracting the disease; making chemoprophylaxis (prevention medication) more accessible; or treating malaria cases with better drugs, to name just a few.
We know that some ways of tackling development challenges such as malaria will be more successful than others. Some approaches will have unintended consequences; they will vary in cost, and some will work in certain places but not in others. So how can those designing interventions decide which approaches to choose?
This is where evaluation studies come in: they aim to help development actors make the best choices. Evaluations can be used to improve programmes as they roll out and/or can try to estimate whether and how particular aims were achieved and whether this was better and more cost effective than other courses of action.
In order to design, run or interpret evaluations, budding development professionals need an understanding of the following.
1. Research study design, outcome measurement and statistical methods
Development programmes are often complex, but this does not mean that scientific methods such as experiments and careful analysis can’t aid a better understanding of whether programmes achieve their desired impact.
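As a toy illustration (the data and groupings here are invented, not drawn from any real programme), the core statistical idea behind many impact evaluations is a simple one: compare average outcomes between a group that received an intervention and a comparable group that did not, and gauge the uncertainty of the difference.

```python
import statistics

def estimate_impact(treatment, control):
    """Difference in mean outcomes between treatment and control groups,
    with a simple standard error. Purely illustrative: real evaluations
    must also address randomisation, sample size and confounding."""
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.variance(treatment) / len(treatment)
          + statistics.variance(control) / len(control)) ** 0.5
    return diff, se

# Hypothetical data: malaria episodes per household over one year
with_bednets = [1, 0, 2, 1, 0, 1, 0, 2]
without_bednets = [2, 3, 1, 2, 4, 2, 3, 2]

effect, se = estimate_impact(with_bednets, without_bednets)
# A negative effect here would suggest fewer episodes among households
# with bednets, subject to the usual caveats about study design.
```

The point of careful design (randomisation, adequate sample sizes) is precisely to make this kind of comparison credible.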
2. Social science methods
Development interventions depend on the complex interaction of multiple stakeholders and institutions. People's goals and incentives differ, power is exercised and resisted in myriad ways, and choices are constrained by poverty or gender inequalities. Social science methods are required to make sense of these complexities and enable more effective implementation.
3. Cost-benefit analysis
When deciding how best to allocate limited resources, those designing interventions must be able to estimate the costs as well as the consequences of different programmes to ensure they get value for money. Cost-benefit analysis can also be used to compare programmes across different sectors, for instance, comparing health and education interventions.
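A minimal sketch of the arithmetic involved, with entirely hypothetical figures: given the same budget, divide each programme's cost by the benefit it delivers and prefer the lower cost per outcome.

```python
def cost_per_outcome(total_cost, outcomes_achieved):
    """Cost-effectiveness ratio: spend per unit of benefit achieved.
    Figures below are illustrative assumptions, not real programme data."""
    return total_cost / outcomes_achieved

# Two hypothetical programmes with the same budget
bednets = cost_per_outcome(total_cost=50_000, outcomes_achieved=400)    # cases averted
education = cost_per_outcome(total_cost=50_000, outcomes_achieved=150)  # cases averted

best = min(("bednets", bednets), ("education", education), key=lambda p: p[1])
# best[0] names the programme with the lower cost per case averted
```

Real cost-benefit analysis is much richer (discounting, indirect costs, benefits measured in common units such as DALYs), but the comparison logic is the same.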
4. Evidence-based decision making
Understanding what is already known is essential to avoid duplication. Synthesising evidence means pulling together all that has been said about a subject, making judgments about what bits of information are most useful, summarising this evidence, and planning new studies that focus on the most important contributions.
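One common way to synthesise quantitative evidence is a fixed-effect meta-analysis: studies are pooled with weights inversely proportional to the square of their standard errors, so more precise studies count for more. The study results below are invented for illustration.

```python
def pooled_estimate(studies):
    """Inverse-variance weighted average of effect estimates from several
    studies (a simple fixed-effect synthesis). Each study is a pair of
    (effect estimate, standard error); the numbers used here are made up."""
    weights = [1 / se ** 2 for _, se in studies]
    total = sum(weights)
    pooled = sum(w * eff for w, (eff, _) in zip(weights, studies)) / total
    pooled_se = (1 / total) ** 0.5
    return pooled, pooled_se

# Hypothetical study results: (effect estimate, standard error)
studies = [(-1.2, 0.4), (-0.8, 0.3), (-1.5, 0.6)]
estimate, se = pooled_estimate(studies)
# Pooling shrinks the standard error below that of any single study,
# which is the statistical payoff of combining evidence.
```

Judging which studies are comparable and trustworthy enough to pool is where the harder, non-mechanical work of evidence synthesis lies.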
Teams of development professionals will need all these skills to varying degrees. For instance, evaluation experts need to be able to design and implement evaluation studies, while programme managers offer the best perspective on what interventions may be feasible and need to know how to commission and interpret evaluations.
But it is not only development workers who need evaluation skills. Evaluation is about accountability, identifying waste and avoiding harmful effects, and so these skills will also be essential to enable civil society, democratic representatives, and government officials to hold NGOs and other development actors to account.
Where can you learn these skills?
Over the past few years, evaluation courses have mushroomed in institutions all over the world, ranging from full degrees to short courses, face-to-face or via distance learning, at various levels of difficulty. Some examples are listed below.
Evaluation skills are also developed and championed within organisations through on-the-job and peer-to-peer learning. It is great to see growing commitment within international development organisations and donor agencies to developing key evaluation skills for their staff. After all, as management consultant Peter Drucker said: “What gets measured gets managed”, and development matters too much to not be properly managed.
Some examples of training courses in impact evaluation – the list is not exhaustive.
- Evaluation for development programmes at London International Development Centre
- Impact evaluation design at Institute of Development Studies
- Impact evaluation for evidence-based policy in development, University of East Anglia
- Planning, monitoring and evaluation for complex development programmes, University of Bologna
- Building skills to evaluate development interventions, International Programme for Development Evaluation Training, Ottawa, Canada
- Impact evaluation collaborative, University of California, Berkeley
- Impact evaluation of interventions addressing social determinants of health, London School of Hygiene & Tropical Medicine
- MSc impact evaluation for international development, University of East Anglia
- Diploma in public policy and programme evaluation, Carleton University
- Graduate certificate in project monitoring and evaluation: course descriptions, American University