Conference series event

Tackling inequalities and exclusion in statistical research

This symposium aims to examine how results from statistical analysis are affected by the way data arise and by the algorithms we use, and what methodological research on statistical design and analysis is required to identify and eliminate inherent inequalities.

Programme

From left to right: Dr Rohini Mathur, Dr Shakir Mohamed, Dr Sherri Rose and Dr Darshali Vyas
14.00 - 14.05: Welcome and introductions  

Speaker 

Professor Linda Sharples, LSHTM  

14.05 - 14.30: Methodological considerations around the use of routinely collected data to examine health inequalities in the UK  

Chair 

Dr Karla Diaz-Ordaz, LSHTM 

Speaker

Dr Rohini Mathur, LSHTM 

Rohini is an epidemiologist specialising in health equity research using electronic health records and cohort data. Her research focuses on identifying ethnic and social inequalities along the care pathway and the solutions necessary to address them. In addition to stints at the University of Waterloo and McGill University in Canada and at Queen Mary University of London, she completed her PhD at LSHTM, investigating how electronic health records can best be used to examine ethnic inequalities in health outcomes in the UK. A key focus was establishing the usability and completeness of ethnicity recorded in primary and secondary care databases.

Abstract  

This talk will introduce the history, context and meaning of ethnicity in the UK and how this compares with other countries; describe standard coding systems for ethnicity and the quality and completeness of these data; and discuss some of the methodological challenges and considerations for conducting research into ethnic inequalities in a UK context.
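
To give a flavour of the data-quality checks this touches on, here is a minimal sketch of assessing the completeness of an ethnicity field in an EHR extract. The column names, categories and values are hypothetical and not drawn from any particular database or coding system.

```python
import pandas as pd

# Hypothetical EHR extract: the 'ethnicity' column uses a simplified grouping,
# not any specific coding system (e.g. census categories or SNOMED CT codes).
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "ethnicity": ["White British", None, "Indian",
                  "Not stated", "Black Caribbean", None],
})

# Treat both missing values and "Not stated" as incomplete recording.
recorded = records["ethnicity"].notna() & (records["ethnicity"] != "Not stated")
print(f"Ethnicity usable for {recorded.mean():.0%} of patients")

# Distribution among patients with a usable ethnicity record.
print(records.loc[recorded, "ethnicity"].value_counts(normalize=True))
```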

14.30 - 15.10: Fair machine learning for healthcare

Chair

Dr Karla Diaz-Ordaz, LSHTM  

Speaker  

Dr Shakir Mohamed, DeepMind

Shakir works on technical and sociotechnical questions in machine learning and artificial intelligence (AI) research, aspiring to make contributions to machine learning principles, applied problems in healthcare and the environment, and ethics and diversity. Shakir is a research scientist and lead at DeepMind in London, an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, and an Honorary Professor at University College London. Shakir is also a founder and trustee of the Deep Learning Indaba, a grassroots organisation aiming to build pan-African capacity and leadership in artificial intelligence.

Abstract 

As statistical and algorithmic approaches for supporting healthcare are more widely developed, tested and integrated into practice, questions of fair and unbiased decision-making and patient care have become a core factor in assessing any algorithmic decision support in the clinic. Where care has not been taken, there have been several examples of poor outcomes, in opposition to the stated aims of developing such medical AI systems.

In this talk we'll explore the topic of algorithmic fairness and how it relates to broader concerns around power, values and prosperity. We'll look at recent examples from medical imaging, health records and mental health to develop our understanding of how they might affect different types of people and patients, and then explore paths towards fairer machine learning for healthcare.
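
By way of illustration only (it is not taken from the talk), the sketch below computes one common group-level fairness check from this literature: comparing a classifier's true positive rate across patient groups. The predictions, labels and group names are invented.

```python
import numpy as np

# Invented predictions from a diagnostic classifier, with a binary
# patient-group attribute; all values are illustrative.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

def true_positive_rate(truth, pred):
    """Proportion of truly positive cases that the model flags as positive."""
    positives = truth == 1
    return (pred[positives] == 1).mean() if positives.any() else float("nan")

# A large gap in sensitivity between groups is one signal that the model's
# errors fall unevenly on different kinds of patients.
for g in np.unique(group):
    mask = group == g
    print(f"Group {g}: TPR = {true_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```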

15.20 - 16.00: Race in prediction models: Reconsidering race correction   

Chair 

Professor Nick Jewell, LSHTM  

Speaker 

Dr Darshali Vyas, Harvard University  

Darshali is a Resident Physician in Medicine at Massachusetts General Hospital and has published a number of papers and commentaries on the flaws of race-based decision calculators in medicine. In her recent NEJM paper, she explored how diagnostic and prediction algorithms that include race as a factor can perpetuate race-based health inequalities (Hidden in Plain Sight - Reconsidering the Use of Race Correction in Clinical Algorithms. N Engl J Med 2020;383(9):874-882).

Abstract  

The long-overdue reckoning around race-based medicine has gained newfound attention in the past several years. One instance of race-based clinical practice is the process of "race correction," whereby medical algorithms adjust their final outputs based on a patient's assigned race or ethnicity. This practice has drawn scrutiny for its potential to divert resources or attention disproportionately and systematically towards White patients relative to patients from minority groups. In this talk, we will explore the logic behind race correction, its potential consequences, and clinical examples of the practice. We will end with a framework for evaluating race correction when it is encountered in clinical practice.
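
To make the mechanism concrete, the sketch below shows how a race "correction" enters a calculator, using as an illustration the race multiplier from the (since-revised) 2009 CKD-EPI eGFR equation; the function is a simplified teaching example, not a clinical tool.

```python
def egfr_with_race_coefficient(base_egfr: float, recorded_as_black: bool) -> float:
    """Illustrative only: the 2009 CKD-EPI equation multiplied the estimated
    GFR by 1.159 when a patient was recorded as Black; `base_egfr` stands in
    for the rest of the equation (creatinine, age and sex terms)."""
    RACE_MULTIPLIER = 1.159
    return base_egfr * RACE_MULTIPLIER if recorded_as_black else base_egfr

# Two patients with identical laboratory values receive different estimates of
# kidney function, which can change whether they cross a referral threshold.
same_labs = 55.0
print(egfr_with_race_coefficient(same_labs, recorded_as_black=False))  # 55.0
print(egfr_with_race_coefficient(same_labs, recorded_as_black=True))   # ~63.7
```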

16.00 - 16.40: Statistical methods for algorithmic fairness in risk adjustment

Chair 

Professor Nick Jewell, LSHTM 

Speaker  

Dr Sherri Rose, Stanford University 

Sherri is an Associate Professor of Health Policy and Co-Director of the Health Policy Data Science Lab. Her research centres on developing and integrating statistical machine learning approaches to improve human health. Within health policy, she works on risk adjustment, ethical algorithms in health care, comparative effectiveness research and health programme evaluation. Dr Rose is a Fellow of the American Statistical Association and her other honours include the Bernie J. O’Brien New Investigator Award, an NIH Director’s New Innovator Award, and the Mortimer Spiegelman Award. She comes from a low-income background and is committed to increasing justice, equity, diversity, and inclusion in the mathematical and health sciences.

Abstract 

It is well known in health policy that financing changes can lead to improved health outcomes and gains in access to care. More than 50 million people in the US are enrolled in an insurance product that risk-adjusts payments, and this has huge financial implications, amounting to hundreds of billions of dollars.

Unfortunately, current risk adjustment formulas are known to undercompensate payments to health insurers for certain marginalised groups of enrollees (by underpredicting their spending). This incentivizes insurers to discriminate against these groups by designing their plans such that individuals in undercompensated groups will be less likely to enroll, impacting access to health care for these groups. We will discuss new fair statistical machine learning methods for continuous outcomes designed to improve risk adjustment formulas for undercompensated groups. We then combine these tools with other approaches (e.g., leveraging variable selection to reduce health condition upcoding) to simplify and improve the performance of risk adjustment systems while centering fairness.
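
As a rough illustration of the general idea, and not the specific estimators developed in this work, the sketch below fits a spending model with an added penalty on the mean residual for a hypothetical undercompensated group, using simulated data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))
group = rng.integers(0, 2, size=n)        # 1 = hypothetical undercompensated group
# Simulated spending: the group effect is not captured by the formula's covariates.
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.0]) + 3.0 * group + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), X])     # risk adjustment formula (no group term)

def objective(beta, lam):
    resid = y - X1 @ beta
    mse = np.mean(resid ** 2)
    # Penalise systematic under-prediction of spending for the group of interest.
    return mse + lam * np.mean(resid[group == 1]) ** 2

for lam in [0.0, 50.0]:
    beta = minimize(objective, np.zeros(p + 1), args=(lam,)).x
    resid = y - X1 @ beta
    print(f"lambda={lam:>5}: mean residual in group = {np.mean(resid[group == 1]):.3f}")
```

Increasing the penalty weight trades a little overall fit for a smaller systematic shortfall in predicted spending for the penalised group.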

Lastly, we discuss the paucity of methods for identifying marginalised groups in risk adjustment and more broadly in the algorithmic fairness literature, including groups defined by multiple intersectional attributes. Extending the concept of variable importance, we construct a new measure of "group importance" to identify groups defined by multiple attributes. This work provides policy makers with a tool to uncover incentives for selection in insurance markets and a path towards more equitable health coverage.  (Joint work with Anna Zink, Harvard & Tom McGuire, Harvard.) 

16.50 - 17.30: Discussion

Chair

Professor Linda Sharples 

Speakers 

  • Dr Sherri Rose 
  • Dr Darshali Vyas 
  • Dr Shakir Mohamed 
  • Dr Rohini Mathur 

Discussions

  • What are the key issues? 
  • What should statisticians/health data scientists/epidemiologists be aware of when designing studies/analysing data? 
  • What methodological research is required? 

Admission

Follow the webinar link. Free and open to all. No registration required.

Contact

LSHTM Centre for Statistical Methodology