Process evaluation

Randomised controlled trials (RCTs) are considered the most rigorous way to evaluate intervention effectiveness. However, it is not enough for an evaluation to report solely on effectiveness. Evaluations should also provide information on the planning, delivery, and uptake of the intervention; the causal pathways through which the intervention is expected to act; and the contextual factors affecting its implementation and outcomes: in short, the process that takes place.

Key Resources for Learning about Process Evaluation

There is a growing call to combine the evaluation of outcomes with the evaluation of process in evaluations of complex interventions. The UK Medical Research Council (MRC) made this recommendation in its updated guidance on the evaluation of complex interventions:

Developing and evaluating complex interventions: the new Medical Research Council guidance
Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.
Summary Article
Full Guidance

They state that including a process evaluation is important to explain discrepancies between expected and observed outcomes, to understand how context influences outcomes, and to provide insights to aid implementation in other contexts.

The UK Medical Research Council (MRC) guidance on process evaluation is a helpful resource to aid the conduct of process evaluations. It offers a comprehensive review of process evaluation theory and a practical guide to the planning, design, conduct, reporting, and appraisal of process evaluations of complex interventions. This was accomplished through an extensive review of the literature, examination of case studies, and consultations with stakeholders.

Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance
Moore G, Audrey S, Barker M, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, Baird J. Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2014.
Summary Article
Full Guidance

The following book is another useful resource. It provides a review of the history and evolution of process evaluation as described in the health education, health promotion, and health behaviour literature. It also identifies and defines the components of process evaluation and provides guidance on their design. The examples included in the book vary and are organised into four categories: community, worksite, school, and national and state. The book may be most useful to those with substantial knowledge of programme evaluation and research.

Process evaluation for public health interventions and research
Linnan, L. and Steckler, A. (2002). Process Evaluation for Public Health Interventions and Research. San Francisco, CA: Jossey-Bass.

In a publication by Oakley et al., the authors use an example from the RIPPLE study (a pupil-led sex education intervention implemented in secondary schools in England) to argue that including a process evaluation would improve the science of many randomised controlled trials, and they outline a framework for using process evaluation as an integral element of RCTs.

Process evaluation in the design of randomised controlled trials of complex interventions
Oakley A, Strange V, Bonell C, Allen E, Stephenson J & the RIPPLE Study Team. Process evaluation in the design of randomised controlled trials of complex interventions. British Medical Journal 2006; 332: 413-416.

A publication by Bonell et al. examines the issue of generalisability, noting the lack of a framework to guide the assessment and reporting of the generalisability of trials to inform policy and practice decisions. The authors suggest that documenting the delivery of an intervention, by embedding an evaluation of process in trials, could aid the assessment of generalisability.

Assessment of generalizability in trials of health interventions: suggested framework and systematic review
Bonell C, Oakley A, Hargreaves J, Strange V, Rees R. Assessment of generalizability in trials of health interventions: suggested framework and systematic review. British Medical Journal 2006; 333: 346-349.

Implementation

What was the intervention? Can success or failure be attributed to the intervention itself or to its implementation?

Process evaluations attempt to document how an intervention is implemented and what was actually delivered, compared with what was intended to be delivered. Implementation can be examined in terms of fidelity, reach, and dose delivered, as well as any unanticipated additional activities or adaptations that had to be made to an intervention in a given context, such as changes to the content.
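
To make these measures concrete, the following is a minimal sketch (in Python) of how fidelity, dose delivered, and reach might be quantified from intervention delivery records. All data, field names, and denominators here are hypothetical; a real process evaluation would define each measure against the specific intervention protocol.

    # A minimal sketch (hypothetical data and field names) of quantifying
    # three implementation measures from session-level delivery records.
    sessions = [  # one record per session actually delivered
        {"components_delivered": 8, "components_planned": 10, "attendees": 25},
        {"components_delivered": 10, "components_planned": 10, "attendees": 18},
        {"components_delivered": 7, "components_planned": 10, "attendees": 30},
    ]
    sessions_planned = 4      # assumed number of sessions intended
    target_population = 120   # assumed size of the eligible target group

    # Fidelity: proportion of planned content actually delivered.
    fidelity = (sum(s["components_delivered"] for s in sessions)
                / sum(s["components_planned"] for s in sessions))

    # Dose delivered: proportion of intended sessions that took place.
    dose_delivered = len(sessions) / sessions_planned

    # Reach: proportion of the target population exposed to the intervention
    # (crudely approximated here by the largest single-session attendance;
    # a real evaluation would count unique participants).
    reach = max(s["attendees"] for s in sessions) / target_population

    print(f"Fidelity {fidelity:.0%}, dose delivered {dose_delivered:.0%}, reach {reach:.0%}")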

In the following article, the authors review 162 outcome studies of primary and early secondary prevention programs published between 1980 and 1994, examining the extent to which fidelity was verified and promoted, and dose documented, in these evaluations. They also discuss the extent to which inconsistencies in fidelity and dosage may compromise the internal validity of evaluations of preventive interventions and the potential effectiveness of these programs.

Program integrity in primary and early secondary prevention: are implementation effects out of control?
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: are implementation effects out of control? Clinical Psychology Review, 18(1), 23-45.

In the following article, Sharon Mihalic of the Blueprints for Violence Prevention Initiative, an initiative to identify and implement model violence prevention and control programs in the State of Colorado, discusses the importance of implementation fidelity in preserving the behaviour change mechanisms that make the original models effective.

The importance of implementation fidelity
Mihalic, S. (2004). The importance of implementation fidelity. Emotional and Behavioral Disorders in Youth 4(4): 83–105.

A different publication, by Durlak and DuPre, examines 500 studies to assess the degree to which the level of implementation affects the outcomes obtained in promotion and prevention programs, and to identify factors that affect the implementation process. The authors conclude that the collection of implementation data is an essential feature of program evaluations, and that more information is needed on which factors influence implementation and how they do so in different community settings.

Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350.

The issue of the appropriate balance between maintaining strict implementation fidelity and enabling local adaptation is addressed in a different way in a paper by Penny Hawe and colleagues. They argue that, in the case of complex interventions, fidelity should be judged not on whether the precise form of delivery is faithful to what was intended, but on fidelity of function: whether what is delivered locally enables the mechanisms of causation described in the intervention’s theory of change.

Complex interventions: how “out of control” can a randomised controlled trial be?
Hawe, P., Shiell, A., et al. (2004). Complex interventions: how “out of control” can a randomised controlled trial be? British Medical Journal, 328, 1561-1563.

The following publication argues that applied research on the implementation of interventions, the focus of the field of ‘Implementation Science’, should be founded on theory that provides a basis for understanding, designing, predicting, and evaluating dynamic implementation processes. The purpose of the paper is to contribute a theoretical framework that characterises and explains implementation processes in terms of the social processes that lead from inception to practice. It does so by building on Normalization Process Theory and integrating it with other social and cognitive theories from sociology and psychology.

Towards a general theory of implementation
May, C. (2013) Towards a general theory of implementation. Implementation Science, 8, 1, 18.

Mixed-method designs are increasingly used in implementation research to develop an understanding of, and to overcome, barriers to implementation. The following literature review examines 22 studies in mental health services research, published between 2005 and 2009, that applied mixed methods in implementation research, and reports on how these methods were used and the reasons for their use.

Mixed method designs in implementation research
Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2011 Jan 1;38(1):44-53.

Mechanisms of Impact

Did the intervention have its intended effects? Can success or failure be attributed to the intended mechanism of change?

It can be helpful to distinguish a focus on ‘mechanisms’ as the way change occurs once an intervention has been initiated, from ‘implementation’ as the initial delivery of the intervention. The study of mechanisms includes a study of participant responses to the intervention, understanding how change is happening, and capturing the unintended consequences that may result from the intervention.

How an intervention is received by participants can be examined in terms of its acceptability and the dose received.

How change occurs and what shapes it is a fundamental question for many academic disciplines, each of which takes a different theory-driven approach to understanding it. The LSHTM Centre for Evaluation brings together researchers from a range of disciplinary backgrounds who seek to understand pathways of change through these approaches:

Theory of Change (ToC) and Diagrammatic Logic Models

The theory of change explains how an intervention is intended to produce the desired effect. It entails making hypotheses about the causal mechanisms by which the components and activities of an intervention will lead to outcomes. The logic model is a diagrammatic presentation of the theory of change.
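
As a simple illustration, the causal chain of a logic model can also be written down explicitly so that each link becomes a testable hypothesis for the process evaluation. The sketch below uses entirely hypothetical content, loosely modelled on a school-based intervention:

    # A toy logic model (hypothetical content) laying out the hypothesised
    # causal chain from inputs through activities to long-term outcomes.
    logic_model = {
        "inputs": ["trained peer educators", "session materials"],
        "activities": ["peer-led classroom sessions"],
        "outputs": ["pupils attend and engage with sessions"],
        "short-term outcomes": ["improved knowledge", "changed attitudes"],
        "long-term outcomes": ["reduced risk behaviour"],
    }

    # Each adjacent pair of stages encodes a causal link that the process
    # evaluation can test against what is observed in practice.
    stages = list(logic_model)
    for earlier, later in zip(stages, stages[1:]):
        print(f"Hypothesised link: {earlier} -> {later}")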

The UK Medical Research Council’s (MRC) guidance on the development and evaluation of complex interventions notes that identifying and developing a theoretical understanding of the likely process of change is a key early task, both when developing a complex intervention and when evaluating one that has already been developed.

Developing and evaluating complex interventions: the new Medical Research Council guidance
Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.
Summary Article
Full Guidance

The ToC approach has been used successfully to design, implement, and evaluate complex community initiatives, and more recently has been applied to complex health interventions, including at LSHTM. A ToC can improve the evaluation of interventions by combining the evaluation of intervention effectiveness with a detailed process evaluation in a single theoretical framework. The ToC approach to evaluation tests the hypothesised causal mechanisms against what is observed to have happened. This allows evaluators to identify which components and activities were strong and effective in achieving the outcomes, and which were weak or absent, resulting in limited effects.

The ToC and logic model are developed in collaboration with stakeholders and draw on various sources of information, including academic theories and evidence from a range of disciplines, programme experience, and the insights of health care providers, service users, and carers.

Evaluations based on a ToC are sometimes referred to as theory-based evaluations. In the following article, the author describes how theory-based evaluation allows an understanding of how programs work by examining how the hypothesised causal mechanisms occur in practice, as well as how successfully implementation is achieved.

Theory-based evaluation: Past, present, and future
Weiss CH. Theory-based evaluation: Past, present, and future. New Directions for Evaluation. 1997;(76):41-55.

In a review by Rogers et al., the authors highlight that this approach of basing an evaluation on a logic, or causal, model is not new and has been recommended by evaluators since the 1960s. They refer to it as ‘program theory evaluation’, describe its historical development and the current variations in theory and practice, and discuss the problems it poses.

Program theory evaluation: Practice, promise, and problems
Rogers, P.J., Petrosino, A., Huebner, T.A. and Hacsi, T.A. (2000) Program theory evaluation: Practice, promise, and problems. New Directions for Evaluation, 87, 5-13.

The following guide provides an orientation to the underlying principles and language of the logic model so it can be effectively used in program planning, implementation, and evaluation. It also offers a range of exercises and examples focused on the development of a logic model that reflects the underlying theory of change.

Logic model development guide
W.K. Kellogg Foundation (2004) Logic model development guide. Battle Creek, MI: W.K. Kellogg Foundation.

The following is another useful guide to developing logic models:

Kirby D. BDI logic models: a useful tool for designing, strengthening and evaluating programs to reduce adolescent sexual risk-taking, pregnancy, HIV and other STDs.

A ToC can also be used to predict whether interventions might have harmful effects, both to mitigate these effects and to evaluate them.

“Dark logic” – theorising the harmful consequences of public health interventions
Bonell C, Jamal F, Melendez-Torres GJ, Cummins S. “Dark logic” – theorising the harmful consequences of public health interventions. Journal of Epidemiology and Community Health 2015;69(1):95-8.

Theory about the mechanisms, and the contextual factors that may affect them, can be quantitatively tested through causal modelling. Mediation analysis is used to examine the hypothesised mechanisms, and analysis of moderation can be used to examine the contextual factors that may influence the effect of the intervention, as illustrated in the sketch after the following reference.

A causal modelling approach to the development of theory-based behaviour change programmes for trial evaluation
Hardeman, W., Sutton, S., Griffin, S., Johnston, M., White, A., Wareham, N.J. and Kinmonth, A.L. (2005) A causal modelling approach to the development of theory-based behaviour change programmes for trial evaluation. Health education research, 20, 6, 676-687.
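
The following is a minimal sketch of how mediation and moderation might be examined with ordinary regression on simulated data. The variable names and effect sizes are invented for illustration; a real analysis would use dedicated causal mediation methods and address confounding of the mediator-outcome relationship.

    # A minimal sketch (simulated data) of mediation and moderation analysis.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    treat = rng.integers(0, 2, n).astype(float)  # randomised intervention arm
    context = rng.normal(size=n)                 # a contextual factor (moderator)
    mediator = 0.5 * treat + rng.normal(size=n)  # hypothesised mechanism
    outcome = 0.8 * mediator + 0.3 * treat * context + rng.normal(size=n)

    # Mediation (product-of-coefficients): effect of treatment on the mediator,
    # times the effect of the mediator on the outcome adjusting for treatment.
    a = sm.OLS(mediator, sm.add_constant(treat)).fit().params[1]
    b = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, mediator]))).fit().params[2]
    print("Estimated indirect (mediated) effect:", a * b)

    # Moderation: a treatment-by-context interaction term tests whether the
    # intervention effect varies with the contextual factor.
    X = sm.add_constant(np.column_stack([treat, context, treat * context]))
    print(sm.OLS(outcome, X).fit().summary())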

Examples

LSHTM’s Centre for Global Mental Health is using Theory of Change in the implementation and evaluation of two major trials:

PRIME (Programme for Improving Mental health carE)

Using workshops to develop Theories of Change in five low and middle income countries: lessons from the Programme for Improving Mental Health Care (PRIME)
Breuer E, De Silva MJ, Fekadu A, Luitel NP, Murhar V, Nakku J, Petersen I, Lund C. Using workshops to develop theories of change in five low and middle income countries: lessons from the programme for improving mental health care (PRIME). International Journal of Mental Health Systems. 2014 Apr 30;8(1):1.

SHARE (South Asian Hub for Advocacy, Research and Education on Mental Health)

Theory of change: a theory-driven approach to the MRC framework for complex interventions
De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, Patel V. Theory of Change: a theory-driven approach to enhance the Medical Research Council’s framework for complex interventions. Trials. 2014 Jul 5;15(1):1.

Realist Evaluation

This approach to evaluation understands change as the result of the actions of social agents operating in a specific context, whereby the actions lead to outcomes by triggering mechanisms. It emphasises that mechanisms are contingent on the context and that outcomes are produced by the interaction of context and mechanisms. Evaluations are therefore based on these Context-Mechanism-Outcome configurations to explain ‘what works for whom in what circumstances and in what respects, and how?’ This is especially useful in exploring complex social phenomena.

Pawson and Tilley founded this approach to evaluation and set it out in the following book:

Realistic evaluation
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.

There is controversy in the field as to whether realist evaluation is an alternative or a complement to the use of randomised controlled trials. The following publications are examples of the differing views:

‘Realist Randomised Controlled Trials’: a new approach to evaluating complex public health interventions
Bonell C, Fletcher A, Morton M, Lorenc T., Moore L. ‘Realist Randomised Controlled Trials’: a new approach to evaluating complex public health interventions. Social Science and Medicine 2012;75(12):2299-306.

Realist RCTs of complex interventions – an oxymoron
Marchal B, Westhorp G, Wong G, Van Belle S, Greenhalgh T, Kegels G, Pawson R. Realist RCTs of complex interventions – an oxymoron. Soc Sci Med. 2013;94:124-8.

Methods don’t make assumptions, researchers do: a response to Marchal et al.
Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Methods don’t make assumptions, researchers do: a response to Marchal et al. Social Science and Medicine 2013; 94: 81-2.

Can “realist” randomised controlled trials be genuinely realist?
Van Belle S, Wong G, Westhorp G, Pearson M, Emmel N, Manzano A, Marchal B. Can “realist” randomised controlled trials be genuinely realist? Trials. 2016;17(1):313.

Examples

Examples of such research at LSHTM include the current work of Sara Van Belle on the strategies, mechanisms, and conditions that ensure public accountability at the local health system level in low and middle income countries, and the work of Chris Bonell and colleagues on realist trials:

Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al.
Bonell C, Warren E, Fletcher A, Viner R. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials 2016 Oct 1;17(1):478.

How to develop a theory-driven evaluation design? Lessons learned from an adolescent sexual and reproductive health programme in West Africa
Van Belle, S., Marchal, B., Dubourg, D., Kegels, G. (2010) How to develop a theory-driven evaluation design? Lessons learned from an adolescent sexual and reproductive health programme in West Africa, BMC Public Health, 10, 741.

Is Realist Evaluation keeping its promise? A literature review of methodological practice in Health Systems Research
Marchal, B., Van Belle, S., Van Olmen, J., Hoeree, T., Kegels, G. (2012) Is Realist Evaluation keeping its promise? A literature review of methodological practice in Health Systems Research. Evaluation, 18, 192-212.

Context

Contextual factors shape the theory of change and affect the implementation, causal mechanisms, and outcomes of an intervention. Process evaluators should capture how context is affected by an intervention, as well as how contextual factors can change an intervention.

Evaluators capture how contextual factors affect implementation by considering which components of the intervention had to be adapted, or modified, to fit the context. Contextual factors may also affect how the target audience receives and reacts to the intervention, thereby influencing the causal mechanisms; hypotheses about these mechanisms should therefore be generated with consideration of how contextual factors might strengthen or weaken the intervention and thereby affect outcomes.

Therefore, contextual factors should be considered across all aspects of a process evaluation.

The following examples stress the importance of considering context in the implementation of interventions:

Health system context and implementation of evidence-based practices—development and validation of the Context Assessment for Community Health (COACH) tool for low- and middle-income settings
Bergstrom A, Skeen S, Duc DM, et al. (2015). Health system context and implementation of evidence-based practices—development and validation of the Context Assessment for Community Health (COACH) tool for low- and middle-income settings. Implementation Science, 10, 120.

Meeting the Challenges of Intervention Research in Health Science: An Argument for a Multimethod Research Approach
Hansen, H.P. & Tjørnhøj-Thomsen, T. Meeting the Challenges of Intervention Research in Health Science: An Argument for a Multimethod Research Approach. Patient (2016) 9: 193. doi:10.1007/s40271-015-0153-9