The Centre for Evaluation (CfE) held its annual retreat on Thursday, December 6, focusing on the theme of “Places, Spaces and Contexts.” Held “off campus” in the Hatton Gardens area, the retreat brought CfE members together for a day of discussions, networking, and musings on any and all issues relating to evaluation theory and practice at LSHTM.
We started the day with a set of 8 “speed talks” in which CfE members presented ongoing evaluation work and described how they accounted for, or were challenged by, context. For example, interventions developed through “human centred design” will by definition vary in each location where they are implemented, yet a standardised evaluation is expected across programme sites. The presenters described similar experiences of needing to “drill down” into what happens at the local level through participatory and process-oriented methods. Yet the methods themselves need to be carefully matched to the social environment in which they are meant to capture key variables. This was illustrated by a study in which girls’ school attendance was an important outcome measure: while researchers initially thought girls might over-report attendance at school, in reality some girls hid from the researchers and marked themselves absent if they were worried they hadn’t completed study-related tasks, potentially leading to under-estimates of their attendance. In all the cases presented, strong formative work and mixed-method process evaluations were highlighted as ways to track the realities of intervention delivery on the ground.
In the next session, three LSHTM researchers gave insights into different “place”-related influences on research practice. First, Chris Grundy’s talk, “Why maps matter,” grounded health evaluation in physical geography. The diseases and social phenomena we try to measure have spatial distributions that can deepen understanding of how they work, and rapidly developing technology provides new opportunities (but also new ethical dilemmas) for what can be mapped and visualised. Unfortunately, while interest in the use of GIS, including open source data and electronic data collection methods, continues to grow, funding to ensure a GIS specialist is involved in health evaluation projects does not reflect this.
Next, Catherine Pitt gave an overview of “Economic Evaluations of Geographically Targeted Interventions,” highlighting the importance of good costing data to help prioritise use of scarce resources. Yet economic costs are highly context-specific, making it difficult to transfer findings from an economic evaluation in one place to targeted programmes elsewhere. She gave examples of how carefully designed cost modelling could be used to tackle the “transferability challenge” by showing the relative merits of different configurations of health packages.
Finally, Kathryn Oliver talked about the gap between evaluation evidence and its uptake and use by policy makers. In her talk entitled “What Makes Evidence Credible?” she highlighted the way that researchers and policymakers speak different languages and value different types of evidence and styles of persuasion. She illustrated how researchers can sometimes become ‘Rapunzel in the Ivory Tower’, feeling uncomfortable with the idea of making health information more anecdotal, emotional or engaging. Using analysis from her work on policymakers’ use of evidence in decision making, she urged researchers to become better at accepting political structures and processes, and make “professional friendships” to bridge the evidence-policy divide.
Following a massive and delicious lunch, Cicely Marston and Chris Bonell ensured we didn’t get too sleepy by engaging in a lively debate about the need for structured process evaluations. Although both admitted they agreed more than they disagreed, they encouraged discussion within the group by pitting the use of pre-defined process evaluation frameworks (such as those developed by the MRC and realist evaluations) against a less structured, less positivist, and more participatory approach. Drawing on examples from their own work, Chris and Cicely discussed the merits of defining and identifying constructs such as context, mechanism, and outcome versus working with the communities most affected by interventions to shape the nature of the research questions. They also highlighted how poorly designed process evaluations risk collecting too much data that never gets analysed properly, or conducting qualitative research superficially without the requisite skills for meaningful analysis. This session prompted discussion among all retreat participants about the timing, design, and use of good process evaluations.