- Analysis of Clinical Trials
Theme Co-ordinators: Neal Alexander, Baptiste Leurent, Clemence Leyrat, Amy Mulick, Stephen Nash, Linda Sharples
Please see here for slides and audio recordings of previous seminars relating to this theme.
Randomised controlled trials (RCTs) are one of the most important tools to estimate effects of medical interventions. However, there are a number of issues relating to the statistical analysis of RCTs over which there is much debate. We aim to consider and raise discussion of a number of these issues, and suggest possible statistical approaches for dealing with these in the analyses of RCTs. We briefly introduce here some of these issues.
- Cluster randomised trials
- Covariate adjustment
- Subgroup analysis
- Missing data
- Sequential trials
- Good practice in trials
Cluster randomised trials
In cluster randomised trials (CRTs), groups of participants, rather than the participants themselves, are randomised to intervention groups1. This design is increasingly used to assess complex interventions, in particular community-level interventions. Many CRTs have been conducted at LSHTM to evaluate the effectiveness of health-care interventions, motivating a wide range of methodological research on their design2,3, analysis4,5 and reporting6 to take into account the specificities of such trials. One such challenge in CRTs arises because participants’ outcome measures are not independent and thus clustering must be accounted for in the analysis. For this reason, CRTs are also a main research area within the Design and Analysis for Dependent Data CSM theme.
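As a concrete illustration of accounting for clustering, one standard approach analyses cluster-level summaries rather than individual observations: each cluster is reduced to a single summary measure, and the two arms are compared with a t-test on those summaries, which respects the within-cluster correlation that an individual-level test would ignore. A minimal Python sketch with simulated data (the cluster sizes, effect size and variance components are illustrative, not taken from any of the cited trials):

```python
import math
import random
import statistics

def cluster_level_ttest(clusters_a, clusters_b):
    """Compare two arms of a CRT using cluster-level summaries.

    Each argument is a list of clusters; each cluster is a list of
    individual outcomes. Reducing each cluster to its mean before
    testing accounts for the non-independence of participants'
    outcomes within a cluster.
    """
    means_a = [statistics.fmean(c) for c in clusters_a]
    means_b = [statistics.fmean(c) for c in clusters_b]
    diff = statistics.fmean(means_a) - statistics.fmean(means_b)
    # Pooled variance of the cluster means (equal-variance t-test)
    na, nb = len(means_a), len(means_b)
    va, vb = statistics.variance(means_a), statistics.variance(means_b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return diff, diff / se, na + nb - 2  # estimate, t statistic, df

# Illustration with simulated clustered data: a shared cluster effect
# induces within-cluster correlation.
random.seed(1)
def simulate_arm(n_clusters, cluster_size, mean):
    arm = []
    for _ in range(n_clusters):
        cluster_effect = random.gauss(0, 1)  # shared within the cluster
        arm.append([mean + cluster_effect + random.gauss(0, 1)
                    for _ in range(cluster_size)])
    return arm

control = simulate_arm(10, 30, mean=0.0)
treated = simulate_arm(10, 30, mean=0.5)
est, t_stat, df = cluster_level_ttest(treated, control)
```

Note that the degrees of freedom are driven by the number of clusters (here 18), not the number of participants, which is why trials with few clusters pose particular analytical challenges.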
Other issues arising from CRTs have recently been studied at the CSM, such as the risk of systematic baseline imbalance7, analysis in the presence of missing data8, and the analysis of alternative cluster designs such as cluster cross-over trials9. Ongoing projects include research into the spatial analysis of CRTs, analysis strategies when only a small number of clusters are randomised, and the estimation of causal effects in CRTs with non-compliance.
- Hayes RJ, Moulton LH. Cluster Randomised Trials. Taylor & Francis; 2009. 338 p.
- Hayes RJ, Alexander ND, Bennett S, Cousens SN. Design and analysis issues in cluster-randomized trials of interventions against infectious diseases. Stat Methods Med Res. 2000 Apr;9(2):95–116.
- Thomson A, Hayes R, Cousens S. Measures of between-cluster variability in cluster randomized trials with binary outcomes. Stat Med. 2009 May 30;28(12):1739–51.
- Gomes M, Díaz-Ordaz K, Grieve R, Kenward MG. Multiple imputation methods for handling missing data in cost-effectiveness analyses that use data from hierarchical studies: an application to cluster randomized trials. Med Decis Mak Int J Soc Med Decis Mak. 2013 Nov;33(8):1051–63.
- Alexander N, Emerson P. Analysis of incidence rates in cluster-randomized trials of interventions against recurrent infections, with an application to trachoma. Stat Med. 2005 Sep 15;24(17):2637–47.
- Campbell MK, Piaggio G, Elbourne DR, Altman DG, for the CONSORT Group. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012 Sep 4;345(sep04 1):e5661–e5661.
- Leyrat C, Caille A, Foucher Y, Giraudeau B. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic. BMC Med Res Methodol. 2015;
- DiazOrdaz K, Kenward MG, Gomes M, Grieve R. Multiple imputation methods for bivariate outcomes in cluster randomised trials. Stat Med. 2016 Sep 10;35(20):3482–96.
- Morgan KE, Forbes AB, Keogh RH, Jairath V, Kahan BC. Choosing appropriate analysis methods for cluster randomised cross-over trials with a binary outcome. Stat Med. 2016 Sep 28;
Covariate adjustment
In RCTs, an unadjusted analysis provides an unbiased estimate of the treatment effect. However, even though randomisation should ensure that baseline characteristics (covariates) are broadly balanced between groups, chance imbalances can occur, especially in smaller trials. Covariate adjustment for important predictors of outcome can be used to allow for such imbalances1. In certain circumstances, adjustment for appropriate covariates can also improve the power (or reduce the required sample size) of an RCT irrespective of any baseline imbalance2.
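To illustrate the power gain from adjustment, the sketch below fits unadjusted and covariate-adjusted linear models by ordinary least squares (implemented from scratch in plain Python) and compares the standard error of the treatment effect. The simulated data and effect sizes are illustrative, not taken from the references:

```python
import math
import random

def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c]
                              for c in range(i + 1, n))) / M[i][i]
    return x

def ols(rows, y):
    """OLS via the normal equations (X'X)b = X'y; returns the
    coefficients and their residual-based standard errors."""
    n, k = len(rows), len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
           for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    resid = [yi - sum(bj * ri for bj, ri in zip(beta, r))
             for r, yi in zip(rows, y)]
    s2 = sum(e * e for e in resid) / (n - k)  # residual variance
    ses = []
    for i in range(k):
        e_i = [1.0 if j == i else 0.0 for j in range(k)]
        ses.append(math.sqrt(s2 * solve(XtX, e_i)[i]))  # s2*(X'X)^-1 diag
    return beta, ses

# Simulated trial: the baseline covariate x is strongly prognostic for y
random.seed(2)
n = 400
treat = [i % 2 for i in range(n)]
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.3 * t + 2.0 * xi + random.gauss(0, 1) for t, xi in zip(treat, x)]

_, se_unadj = ols([[1.0, t] for t in treat], y)                 # y ~ treat
_, se_adj = ols([[1.0, t, xi] for t, xi in zip(treat, x)], y)   # y ~ treat + x
# Adjusting for the prognostic covariate leaves the estimate unbiased
# but shrinks the standard error of the treatment effect, i.e. it
# increases power without relying on any baseline imbalance.
```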
Within the CSM, work has been conducted to look at the impact of covariate adjustment on power in real settings3, and also to investigate alternative strategies to multivariable regression for covariate adjustment, in particular the use of propensity score weighting4. Furthermore, methodological research has been conducted to extend covariate adjustment methods to more challenging randomised trials such as cross-over5 or cluster randomised trials6, in which both chance and systematic imbalance can occur. Recent work also includes the development of recommendations for the implementation of covariate adjustment in practice7.
- Altman DG. Adjustment for Covariate Imbalance. In: Biostatistics in Clinical Trials [Internet]. Chichester, UK: John Wiley & Sons, Ltd; 2001 [cited 2016 Oct 14]. p. 122–7.
- Hernández AV, Steyerberg EW, Habbema JDF. Covariate adjustment in randomized controlled trials with dichotomous outcomes increases statistical power and reduces sample size requirements. J Clin Epidemiol. 2004 May;57(5):454–60.
- Turner EL, Perel P, Clayton T, Edwards P, Hernández AV, Roberts I, et al. Covariate adjustment increased power in randomized controlled trials: an example in traumatic brain injury. J Clin Epidemiol. 2012 May;65(5):474–81.
- Williamson EJ, Forbes A, White IR. Variance reduction in randomised trials by inverse probability weighting using the propensity score. Stat Med. 2014 Feb 28;33(5):721–37.
- Kenward MG, Roger JH. The use of baseline covariates in crossover studies. Biostatistics. 2010 Jan 1;11(1):1–17.
- Gomes M, Grieve R, Nixon R, Ng ES-W, Carpenter J, Thompson SG. Methods for Covariate Adjustment in Cost-Effectiveness Analysis That Use Cluster Randomised Trials. Health Econ. 2012;21(9):1101–1118.
- Pocock SJ, McMurray JJV, Collier TJ. Statistical Controversies in Reporting of Clinical Trials: Part 2 of a 4-Part Series on Statistics for Clinical Trials. J Am Coll Cardiol. 2015 Dec 15;66(23):2648–62.
Subgroup analysis
The analysis of RCTs by subgroups of individuals (e.g. according to age, gender or medical history) remains controversial and is often misunderstood1–3. While a well-conducted subgroup analysis can be justified, deviation from recommended practices can easily result in dubious findings3. It is recognised2,4 that such analyses should be limited to a few key baseline factors specified before any analyses are undertaken. In addition, an appropriate analysis should report effect estimates and confidence intervals within subgroups, together with an overall interaction test, rather than separate p-values within each category of the subgroup. Whatever the findings, it is recommended that they be interpreted with caution until replicated. Sun et al. derived a useful list of criteria to assess the credibility of subgroup analysis findings5.
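For the simple case of two subgroups, the recommended interaction test can be computed directly from the subgroup-specific effect estimates and their standard errors. A minimal Python sketch (the numbers in the usage example are illustrative, not from any of the cited trials):

```python
import math

def interaction_test(est1, se1, est2, se2):
    """Test for a treatment-by-subgroup interaction by comparing the
    treatment effect estimates from two subgroups on a common scale
    (e.g. log hazard ratios), rather than comparing their separate
    within-subgroup p-values."""
    diff = est1 - est2                      # interaction estimate
    se = math.sqrt(se1 ** 2 + se2 ** 2)     # SE of the difference
    z = diff / se
    # Two-sided p-value from the normal approximation,
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return diff, se, p

# Illustrative subgroup effects (log hazard ratios) in two subgroups
diff, se, p = interaction_test(-0.35, 0.15, -0.05, 0.18)
```

A non-significant interaction p-value here would caution against over-interpreting the apparently stronger effect in the first subgroup, even though its own within-subgroup p-value might be small.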
Alternative methods explored at the CSM include analyses based on risk scores, which can improve the power to detect subgroup effects and allow examination of the absolute net benefit in view of patients’ baseline risk4,6,7. A Bayesian approach to subgroup analysis is another area of interest, allowing the clinical plausibility of a subgroup effect to be taken into account in the analysis8.
- Assmann SF, Pocock SJ, Enos LE, Kasten LE. Subgroup analysis and other (mis)uses of baseline data in clinical trials. Lancet. 2000;355(9209):1064-1069.
- Wang R, Lagakos SW, et al. Statistics in Medicine — Reporting of Subgroup Analyses in Clinical Trials. N Engl J Med. 2007;357:2189-2194.
- Wallach JD, Sullivan PG, et al. Evaluation of evidence of statistical support and corroboration of subgroup claims in randomized clinical trials. JAMA Internal Medicine. 2017 Apr 1;177(4):554-60.
- Pocock SJ, McMurray JJ V, Collier TJ. Statistical Controversies in Reporting of Clinical Trials Part 2 of a 4-Part Series on Statistics for Clinical Trials. J Am Coll Cardiol. 2015;66(23):2648-2662.
- Sun X, Briel M, Walter SD, Guyatt GH. Is a subgroup effect believable? Updating criteria to evaluate the credibility of subgroup analyses. BMJ. 2010;340(9209):c117.
- Pocock SJ, Lubsen J. More on Subgroup Analyses in Clinical Trials. 2008;19(8):2076-2077.
- Fox KAA, Poole-Wilson P, Clayton TC, et al. 5-year outcome of an interventional strategy in non-ST-elevation acute coronary syndrome: The British Heart Foundation RITA 3 randomised trial. Lancet. 2005;366(9489):914-920.
- White IR, Pocock SJ, Wang D. Eliciting and using expert opinions about influence of patient characteristics on treatment effects: A Bayesian analysis of the CHARM trials. Stat Med. 2005;24(24):3805-3821.
- Stone GW, Rizvi A, et al. Everolimus-eluting versus paclitaxel-eluting stents in coronary artery disease. N Engl J Med. 2010 May 6;362(18):1663-74.
Missing data
Missing data are a common issue in clinical trials1 and can result in underpowered studies or biased findings. The issue of missing data is an active area of research in the CSM, with a dedicated theme and website, www.missingdata.org.uk.
Several guidelines have been published on missing data in the context of clinical trials2–4. The report from the National Research Council concludes that methods should be chosen according to the plausibility of their underlying assumptions, and discourages simple fixes such as last observation carried forward. Because assumptions cannot be verified with the data at hand, conducting sensitivity analyses under alternative assumptions is also recommended2–4; however, a review by Bell et al. found that this was rarely done in practice1.
Topics of particular interest in the CSM include the use of multiple imputation5, for example in cluster-randomised trials6–8, and the development of practical approaches for sensitivity analysis when data may be missing not at random4,9.
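As a sketch of the multiple-imputation workflow, the following Python example imputes a partially observed outcome from a regression on a fully observed covariate, repeats the imputation several times, and pools the results using the standard combining rules (often called Rubin's rules). It is a deliberately simplified illustration with simulated data — for instance, it ignores the uncertainty in the imputation-model parameters that a fully "proper" imputation would propagate:

```python
import random
import statistics

def impute_once(x_obs, y_obs, x_mis):
    """Draw imputations for missing y from the regression of y on x,
    adding residual noise so imputed values are draws, not predicted
    means. (A fully proper imputation would also draw the regression
    parameters from their posterior.)"""
    mx, my = statistics.fmean(x_obs), statistics.fmean(y_obs)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x_obs, y_obs))
         / sum((xi - mx) ** 2 for xi in x_obs))
    a = my - b * mx
    resid_sd = statistics.stdev(
        [yi - (a + b * xi) for xi, yi in zip(x_obs, y_obs)])
    return [a + b * xi + random.gauss(0, resid_sd) for xi in x_mis]

def rubins_rules(estimates, variances):
    """Pool per-imputation estimates: the total variance combines the
    within-imputation and between-imputation components."""
    m = len(estimates)
    qbar = statistics.fmean(estimates)
    within = statistics.fmean(variances)
    between = statistics.variance(estimates)
    return qbar, within + (1 + 1 / m) * between

# Simulated data: y is missing whenever x > 0 (missing at random
# given the observed covariate x); the true mean of y is 1.0.
random.seed(4)
x = [random.gauss(0, 1) for _ in range(500)]
y = [1.0 + xi + random.gauss(0, 1) for xi in x]
x_obs = [xi for xi in x if xi <= 0]
y_obs = [yi for xi, yi in zip(x, y) if xi <= 0]
x_mis = [xi for xi in x if xi > 0]

ests, vars_ = [], []
for _ in range(10):  # m = 10 imputed datasets
    completed = y_obs + impute_once(x_obs, y_obs, x_mis)
    ests.append(statistics.fmean(completed))
    vars_.append(statistics.variance(completed) / len(completed))
pooled_mean, pooled_var = rubins_rules(ests, vars_)
```

A complete-case analysis here would be badly biased (it discards exactly the subjects with large x, and y depends on x), whereas the imputation model recovers an approximately unbiased pooled estimate under the missing-at-random assumption.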
- Bell ML, Fiero M, Horton NJ, et al. Handling missing data in RCTs; a review of the top medical journals. BMC Med Res Methodol. 2014;14(1):118.
- Burzykowski T, Carpenter J, Coens C, et al. Missing data: Discussion points from the PSI missing data expert group. Pharm Stat. 2010;9(4):288-297.
- Little RJ, D’Agostino R, Cohen ML, et al. The prevention and treatment of missing data in clinical trials. N Engl J Med. 2012;367(14):1355-1360.
- Carpenter JR, Kenward MG. Missing Data in Randomised Controlled Trials — a Practical Guide.; 2007. http://missingdata.lshtm.ac.uk/downloads/rm04_jh17_mk.pdf.
- Carpenter JR, Kenward MG. Multiple Imputation and Its Application. John Wiley & Sons; 2013.
- DiazOrdaz K, Kenward MG, Gomes M, Grieve R. Multiple imputation methods for bivariate outcomes in cluster randomised trials. Stat Med. 2016;(February). doi:10.1002/sim.6935.
- Gomes M, Díaz-Ordaz K, Grieve R, Kenward M. Multiple imputation methods for handling missing data in cost-effectiveness analyses that use data from hierarchical studies: an application to cluster randomized trials. Med Decis Mak. 2013;33(8):1051-1063.
- Caille A, Leyrat C, Giraudeau B. A comparison of imputation strategies in cluster randomized trials with missing binary outcomes. Stat Methods Med Res. April 2014
- Carpenter JR, Roger JH, Kenward MG. Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation. J Biopharm Stat. 2013;23(3):1352-1371.
Intention-to-treat and non-compliance
In RCTs, intention-to-treat (ITT) analysis, in which patients are analysed according to the intervention they were allocated to regardless of what they actually received, is considered the gold standard1. ITT analysis estimates the effectiveness or, following the terminology of Carpenter et al., the de facto estimand, i.e. “what would be the effect seen in practice?”2. However, it is also of interest to estimate the effect of the intervention if patients were to comply with it (the efficacy, or de jure, estimand). Although per-protocol analyses (including only patients who complied with their allocated treatment) have been proposed for this, it is now well recognised that they do not maintain the randomisation and are therefore at risk of bias. Methods such as instrumental variables3 or propensity scores4 have been proposed to estimate a Complier Average Causal Effect (CACE), the causal effect of the intervention amongst participants who actually received it, but several questions require further investigation, including how to estimate the CACE when there is more than one active treatment, or when the required assumptions of the causal inference framework do not hold (see the CSM Causal Inference theme).
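In the simplest setting of one-sided non-compliance, the instrumental-variable approach3 reduces to the Wald estimator: the ITT effect on the outcome divided by the ITT effect on treatment receipt, with randomisation serving as the instrument. A minimal Python sketch with made-up data:

```python
import statistics

def cace_iv(assigned, received, outcome):
    """Wald (instrumental-variable) estimate of the complier average
    causal effect: the ITT effect on the outcome divided by the ITT
    effect on treatment receipt, using randomisation as the
    instrument."""
    y1 = statistics.fmean([y for z, y in zip(assigned, outcome) if z == 1])
    y0 = statistics.fmean([y for z, y in zip(assigned, outcome) if z == 0])
    d1 = statistics.fmean([d for z, d in zip(assigned, received) if z == 1])
    d0 = statistics.fmean([d for z, d in zip(assigned, received) if z == 0])
    return (y1 - y0) / (d1 - d0)

# Toy data: half of the treatment arm complies, the control arm has no
# access to treatment; compliers who receive treatment score 2 points
# higher than they otherwise would.
assigned = [1, 1, 1, 1, 0, 0, 0, 0]
received = [1, 1, 0, 0, 0, 0, 0, 0]
outcome  = [3, 3, 1, 1, 1, 1, 1, 1]
cace = cace_iv(assigned, received, outcome)  # (2 - 1) / (0.5 - 0) = 2.0
```

The ITT estimate here is diluted to 1.0 by the 50% non-compliance, while the Wald estimator recovers the effect of 2.0 among compliers, under the usual instrumental-variable assumptions.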
Ongoing research at the School looks at the use of multiple imputation of the compliance strata to estimate the CACE, and at how to tackle non-compliance in specific study designs, such as cluster randomised trials, in which compliance decisions can occur at both the individual and the cluster level, or non-inferiority trials, in which underestimation of the treatment effect by an ITT analysis can have important consequences in practice.
- Newell DJ. Intention-to-treat analysis: implications for quantitative and qualitative research. Int J Epidemiol. 1992 Oct;21(5):837–41.
- Carpenter JR, Roger JH, Kenward MG. Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation. J Biopharm Stat. 2013;23(6):1352–71.
- Angrist JD, Imbens GW, Rubin DB. Identification of Causal Effects Using Instrumental Variables. J Am Stat Assoc. 1996;91(434):444–55.
- Porcher R, Leyrat C, Baron G, Giraudeau B, Boutron I. Performance of principal scores to estimate the marginal compliers causal effect of an intervention. Stat Med. 2015 Sep 17.
Sequential trials
Sequential designs were originally based on industrial quality control, to monitor whether a process resulted in an abnormally high proportion of defects1. They focus on decision making rather than estimation. The simplest sequential trial designs involve assessing each person as their outcome becomes available, although they can be modified to analyse at regular intervals2-4. These modified designs can be used in a similar way to those based on the alpha-spending group sequential approach5, although the two approaches are based on different philosophies.
One of the simpler sequential designs is the triangular test for a single proportion. This is illustrated in the following figure, which is from a non-comparative trial of three treatments for visceral leishmaniasis in East Africa6.
The outcome of interest is the number of patients who recovered, captured on the vertical axis. The horizontal axis (V) is proportional to the number of people recruited to the trial so far. The trial continues while the findings fall within the triangular region, which is calculated from the acceptable cure rate and the error probabilities. Findings above the upper boundary (as is the case here for the three treatments at the third time-point) indicate a favourable conclusion, while findings below the triangle indicate an unfavourable conclusion.
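The stopping rule described above can be sketched as follows. The boundary constants `a` and `c` in this illustration are placeholders; in a real trial they would be derived from the acceptable cure rate and the chosen error probabilities, as in the triangular designs of Whitehead2:

```python
def triangular_decision(z, v, a, c):
    """Whitehead-style triangular test for a single proportion.

    z : efficient score (here, tracking the number of recoveries)
    v : information, proportional to the number recruited so far
    The trial continues while (z, v) stays inside the triangle formed
    by the upper boundary z = a + c*v and the lower boundary
    z = -a + 3*c*v; the boundaries meet at v = a/c, which forces a
    decision by that point.
    """
    if z >= a + c * v:
        return "stop: favourable"
    if z <= -a + 3 * c * v:
        return "stop: unfavourable"
    return "continue"

# Monitoring a stream of interim looks with illustrative constants
looks = [(1.0, 0.5), (2.0, 1.5), (4.0, 2.5)]  # (z, v) at each look
decisions = [triangular_decision(z, v, a=2.0, c=0.5) for z, v in looks]
```

In this made-up sequence the statistic crosses the upper boundary at the third look, mirroring the favourable early stopping seen in the figure.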
Aspects of sequential designs being researched by members of the CSM and their collaborators include analysis methods for time points subsequent to assessment of the primary endpoint, and development of asymmetrical stopping boundaries e.g. the imposition of a minimum sample size7,8.
- Wald, A. Sequential Analysis. (Wiley, 1947).
- Whitehead, J. The Design and Analysis of Sequential Clinical Trials. 1st edn, (Ellis Horwood, 1983).
- Bellissant, E., Benichou, J. & Chastang, C. Application of the triangular test to phase II cancer clinical trials. Stat Med 9, 907-917 (1990).
- Ranque, S., Badiaga, S., Delmont, J. & Brouqui, P. Triangular test applied to the clinical trial of azithromycin against relapses in Plasmodium vivax infections. Malar J 1, 13 (2002).
- Lan, K. K. G. & DeMets, D. L. Discrete sequential boundaries for clinical trials. Biometrika 70, 659-663 (1983).
- Wasunna, M. et al. Efficacy and Safety of AmBisome in Combination with Sodium Stibogluconate or Miltefosine and Miltefosine Monotherapy for African Visceral Leishmaniasis: Phase II Randomized Trial. PLoS Negl Trop Dis 10, e0004880.
- Omollo, R. et al. Safety and efficacy of miltefosine alone and in combination with sodium stibogluconate and liposomal amphotericin B for the treatment of primary visceral leishmaniasis in East Africa: study protocol for a randomized controlled trial. Trials 12, 166 (2011).
- Allison, A. et al. Generalizing boundaries for triangular designs, and efficacy estimation at extended follow-ups. Trials 16, 522, doi:10.1186/s13063-015-1018-1 (2015).
Good Practice in Trials
Clinical trials need to comply with different regulatory and legal requirements depending on the jurisdiction(s) to which they are subject. In particular, they usually need to comply with Good Clinical Practice (GCP) as defined by the International Conference on Harmonization (www.ich.org).
In addition, some reporting guidelines have become de facto standards due to their adoption by medical journals. For clinical trials the most important is CONSORT1, or CONsolidated Standards of Reporting Trials (www.consort-statement.org/consort-2010), whose use is encouraged by the International Committee of Medical Journal Editors.
As well as contributing to the development of CONSORT and other reporting guidelines, LSHTM researchers have written prominent journal articles on the principles and practice of clinical trials. These include guidance on trial design2,3, writing and interpreting clinical trial reports4, interpreting results from trials which did, or did not, find an effect of the intervention on the primary endpoint5,6, and current statistical controversies in reporting clinical trials7.
- Schulz, K. F., Altman, D. G. & Moher, D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med 8, 18, doi:1741-7015-8-18 (2010).
- Pocock, S. J., Clayton, T. C. & Stone, G. W. Challenging Issues in Clinical Trial Design: Part 4 of a 4-Part Series on Statistics for Clinical Trials. J Am Coll Cardiol 66, 2886-2898 (2015).
- Pocock, S. J., Clayton, T. C. & Stone, G. W. Design of Major Randomized Trials: Part 3 of a 4-Part Series on Statistics for Clinical Trials. J Am Coll Cardiol 66, 2757-2766 (2015).
- Pocock, S. J., McMurray, J. J. & Collier, T. J. Making Sense of Statistics in Clinical Trial Reports: Part 1 of a 4-Part Series on Statistics for Clinical Trials. J Am Coll Cardiol 66, 2536-2549 (2015).
- Pocock, S. J. & Stone, G. W. The Primary Outcome Is Positive – Is That Good Enough? N Engl J Med 375, 971-979 (2016).
- Pocock, S. J. & Stone, G. W. The Primary Outcome Fails – What Next? N Engl J Med 375, 861-870, doi:10.1056/NEJMra1510064 (2016).
- Pocock, S. J., McMurray, J. J. & Collier, T. J. Statistical Controversies in Reporting of Clinical Trials: Part 2 of a 4-Part Series on Statistics for Clinical Trials. J Am Coll Cardiol 66, 2648-2662 (2015).
- Big Data and Machine Learning
Theme Co-ordinators: Elizabeth Williamson, Nuno Sepulveda, Jan van der Meulen
This theme aims to provide a space for sharing methodological developments on big data problems and for disseminating them across the LSHTM research community.
Some of the key methodological issues that members of our theme are working on are:
- Methods for assessing and improving data quality
- Missing and poorly measured data
- Data linkage
- Data mining, multivariate statistics
- Causal inference for big data
- Stochastic models for high throughput technologies
- Machine learning
Some areas of application using big data within LSHTM are:
- Environmental epidemiology
- Health service evaluation
- Health economics studies
- Nutritional epidemiology
- Genomic epidemiology
- ‘Omics integration and systems biology
- Sero-epidemiology of infectious disease
- Analysis of microbiome
Throughout the year, we organise workshops and seminars aimed at bringing together researchers encountering methodological challenges in analysing big data, and methodologists with interests in relevant areas.
See our events page for more details.
Please see here for slides and audio recordings of previous seminars relating to this theme.
- Causal inference
This theme focuses on developing and applying state-of-the-art causal inference techniques to address important questions in public health.
Broadly speaking, the field of causal inference pertains to the identification and interpretation of causal effects from observational or experimental studies, as well as the statistical methods that enable us to draw inferences about these effects.
Selected research interests within the theme include (sorted alphabetically):
- Causal inference with missing and/or censored data
- Double robust statistical/machine learning methods
- Economic evaluations of healthcare programmes
- Electronic health records, genomics and other high-dimensional data
- Heterogeneous treatment effects and optimal treatment regimes
- Propensity score methods
- Target trial emulation
The Centre for Statistical Methodology regularly organises causal inference seminars. Visit the events page or follow the CSM Twitter account (@LSHTMstatmethod) to hear about upcoming seminars and webinars.
Past speakers include Philip Dawid, Vanessa Didelez, Richard Emsley, Miguel Hernán, Erica Moodie, Anders Skrondal, Mark van der Laan, and Stijn Vansteelandt.
In 2021, LSHTM hosted events as part of the virtual European Causal Inference Meeting (formerly the UK Causal Inference Meeting).
In 2016, LSHTM and the London School of Economics co-organised the conference.
- Health economics and policy evaluation
Theme Co-ordinators: Richard Grieve, Stephen O'Neill
This theme focuses on innovations in quantitative methodology that are motivated by health economic studies and policy evaluations.
Health economics is a branch of economics concerned with efficiency, effectiveness, value and behaviour in the production and consumption of health. Policy evaluation applies evaluation principles and methods to examine the impact of a policy. These settings raise new opportunities and challenges for the development of statistical methodology that is "fit for purpose".
Researchers within the Centre are working on the following areas, which overlap with other themes in the CSM and with the application of these methods within the Global Health Economics Centre and the Evaluation Centre:
- Approaches for addressing external validity, non-compliance, clustered and missing data in health economic evaluations that use RCTs
- Causal inference approaches for health economic evaluation, that use observational data, including electronic health records
- Methods for policy evaluations that use longitudinal data (e.g. extensions to difference in difference or synthetic control methods)
- Methods for estimating heterogeneous treatment/policy effects, including machine learning (e.g. causal forests), and instrumental variable methods
Access slides and audio recordings of previous seminars relating to this theme.
- Missing Data and Measurement Error
Theme Co-ordinators: Ruth Keogh, James Carpenter, Karla Diaz-Ordaz, Chris Frost
Please see here for slides and audio recordings of previous seminars relating to this theme.
The problem of missing data is almost ubiquitous in medical research, in both observational studies and randomized trials. Until the advent of sufficiently powerful computers, much of the research in this area focused on the problem of how to handle, in a practicable way, the lack of balance caused by incompleteness. An example of such a development was the key idea of the EM algorithm (Dempster et al 1977). As routine computation became less of a problem, attention moved to the much more subtle issue of the consequences of missing data for the validity of subsequent analyses. The seminal work was Rubin (1976), from which all subsequent work in this area has developed to a greater or lesser degree.
Although the underlying missing data concepts are the same for observational and randomized studies, the emphases differ somewhat in practice between the two areas. However, both are the subject of development within the Centre. From 2002, supported by several grants from the Economic and Social Research Council, an entire programme has been developed around the handling of missing data in observational studies. This includes the development of multiple imputation in a multilevel setting (e.g. Goldstein et al 2009, Carpenter et al 2010), a series of short courses, and the establishment of a leading website devoted to the topic, www.missingdata.org.uk, which contains background material, answers to frequently asked questions, course notes, software, details of upcoming courses and events, a bibliography, and a discussion forum.
A central problem in the clinical trial setting is the appropriate handling of dropout and withdrawal in longitudinal studies. This has been the subject of great debate among academics, trialists and regulators for the last 10-15 years. Members of the centre have had long involvement in this (e.g. Diggle and Kenward 1994, Carpenter et al 2002). A textbook was published by Wiley on the broad subject of missing data in clinical studies (Molenberghs and Kenward 2007). More recently the UK NHS National Co-ordinating Centre for Research on Methodology commissioned a monograph on the subject which was published in 2008 (Carpenter and Kenward 2008). Members of the Centre are also actively involved in current regulatory developments. Two important documents have recently appeared. In the US an FDA commissioned National Research Council Panel on Handling Missing Data in Clinical Trials, chaired by Professor Rod Little, produced in 2010 a report, ‘The Prevention and Treatment of Missing Data in Clinical Trials.’ James Carpenter was one of several experts invited to give a presentation to this panel. Implementation of the guidelines in this report is to be discussed at the 5th Annual FDA/DIA Statistics Forum in April 2011, where Mike Kenward is giving the one day pre-meeting tutorial on missing data methodology. In Europe, again in 2010, the CHMP released their ‘Guideline on Missing Data in Confirmatory Clinical Trials’. James Carpenter, Mike Kenward and James Roger were members of the PSI working party that provided a response to the draft of this document (Burzykowski T et al. 2009).
At the School there continues a broad research programme in both the observational study and randomized trials settings, and there is an active continuing programme of workshops. Missing data is an issue for many of the studies run and analysed within the School and there is much cross-fertilization across different research areas. There are also strong methodological links with other themes, especially causal inference, indeed one recent piece of work explicitly connects the two areas (Daniel et al. 2011).
Those most directly involved in missing data research are:
Jonathan Bartlett, James Carpenter, Mike Kenward, James Roger (honorary), and two research students: Mel Smuk and George Vamvakis.
Many others have an interest in, and have contributed to, the area, including Rhian Daniel, Bianca de Stavola, George Ploubidis, and Stijn Vansteelandt (honorary).
Burzykowski T et al. (2009). Missing data: Discussion points from the PSI missing data expert group. Pharmaceutical Statistics. DOI: 10.1002/pst.391
Carpenter JR, Goldstein H and Kenward MG (2010). REALCOM-IMPUTE software for multilevel multiple imputation with mixed response types. Journal of Statistical Software, to appear.
Carpenter JR and Kenward MG (2008). Missing data in clinical trials – a practical guide. National Health Service Coordinating Centre for Research Methodology: Birmingham. Downloadable from http://www.haps.bham.ac.uk/publichealth/methodology/docs/invitations/Fi….
Carpenter J, Pocock S and Lamm C (2002). Coping with missing values in clinical trials: a model based approach applied to asthma trials Statistics in Medicine, 21, 1043-1066.
Daniel RM, Kenward MG, Cousens S, de Stavola B (2009) Using directed acyclic graphs to guide analysis in missing data problems. Statistical Methods in Medical Research, to appear.
Dempster AP, Laird NM and Rubin DB (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39, 1-38.
Diggle PJ and Kenward MG (1994). Informative dropout in longitudinal data analysis (with discussion). Applied Statistics, 43, 49-94.
Goldstein H, Carpenter JR, Kenward MG and Levin K (2009). Multilevel models with multivariate mixed response types. Statistical Modelling, 9, 173-197.
Molenberghs G and Kenward MG (2007). Missing Data in Clinical Studies. Chichester: Wiley.
Rubin DB (1976). Inference and missing data. Biometrika, 63, 581-592.
The measurement of variables of interest is central to epidemiological studies. Often, the measurements we obtain are noisy, error-prone versions of the underlying quantity of primary interest. Such errors can arise from technical error induced by imperfect measurement instruments and from short-term fluctuations over time. An example is a single measurement of blood pressure, considered as a measure of an individual’s underlying average blood pressure. Variables obtained by asking individuals to answer questions about their behaviour or characteristics are also often subject to error, either because of the individual’s inability to accurately recall the behaviour in question or because of a tendency, for whatever reason, to over-estimate or under-estimate the quantity being requested.
The consequences of measurement error in a variable depend on the variable’s role in the substantive model of interest (Carroll et al). For example, independent error in the continuous outcome variable of a linear regression does not cause bias. In contrast, measurement error in the explanatory variables of regression models does, in general, cause bias. Measurement error in an exposure of interest may distort estimates of the exposure’s effect on the outcome, while error in confounders will lead to imperfect adjustment for confounding and hence biased estimates of the effect of an exposure.
When explanatory variables in regression models are categorical, the analogue of measurement error is misclassification. Unlike measurement errors, which can often plausibly be assumed to be independent of the underlying true level, a misclassification error is never independent of the underlying value of the predictor variable, and so different theory covers the effects of misclassification and measurement error (White et al).
Over the past thirty years a vast array of methods has been developed to accommodate measurement error and misclassification in statistical analysis models. While simple methods such as method-of-moments correction and regression calibration have sometimes been applied in epidemiological research, more sophisticated approaches, such as maximum likelihood (Bartlett et al) and semi-parametric methods (Carroll et al), have received less attention. This is likely due in part to a relative scarcity of implementations in statistical software packages.
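To make the method-of-moments idea concrete, the sketch below corrects a naive regression slope for regression dilution using replicate error-prone measurements of the exposure. The data are simulated, and the assumed error structure (independent errors with common variance across replicates) is an assumption of the illustration:

```python
import random
import statistics

def slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def dilution_corrected_slope(x_rep1, x_rep2, y):
    """Method-of-moments correction for regression dilution.

    The error variance is estimated as half the variance of the
    within-pair differences of the replicates; the attenuation
    (reliability) ratio var(X) / var(X*) then rescales the naive
    slope back towards the slope on the true exposure.
    """
    naive = slope(x_rep1, y)
    err_var = statistics.variance(
        [a - b for a, b in zip(x_rep1, x_rep2)]) / 2
    obs_var = statistics.variance(x_rep1)
    attenuation = (obs_var - err_var) / obs_var  # reliability ratio
    return naive / attenuation

# Simulated example: true exposure x, two noisy replicates, outcome y
random.seed(3)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
w1 = [xi + random.gauss(0, 1) for xi in x]   # error sd comparable to sd(x)
w2 = [xi + random.gauss(0, 1) for xi in x]
y = [xi + random.gauss(0, 0.5) for xi in x]  # true slope on x is 1

naive = slope(w1, y)                          # attenuated towards 0
corrected = dilution_corrected_slope(w1, w2, y)
```

With an error variance equal to the exposure variance, the naive slope is attenuated by roughly a factor of two, and the corrected slope recovers approximately the true value of 1.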
Areas for future research efforts:
- Greater recognition of the effects of measurement error and misclassification in the analysis of epidemiological and clinical studies
- Increasing the accessibility of methods to deal with measurement error, through dissemination of methods and their implementation in statistical software
- Development of methods that allow for the effects of measurement errors in causal models that describe how risk factors, and therefore risks of disease, change over time
Bartlett J. W., De Stavola B. L., Frost C. (2009). Linear mixed models for replication data to efficiently allow for covariate measurement error. Statistics in Medicine; 28: 3158-3178.
Carroll R. J., Ruppert D., Stefanski L. A., Crainiceanu C. M. (2006). Measurement error in nonlinear models. Chapman & Hall/CRC, Boca Raton, FL, US.
Frost C., Thompson S. G. (2000). Correcting for regression dilution bias: comparison of methods for a single predictor variable. Journal of the Royal Statistical Society A; 163: 173-189.
Frost C., White I. R. (2005). The effect of measurement error in risk factors that change over time in cohort studies: do simple methods overcorrect for ‘regression dilution’? International Journal of Epidemiology; 34: 1359-1368.
Gustafson, P. (2003). Measurement Error and Misclassification in Statistics and Epidemiology: Impacts and Bayesian Adjustments. Chapman and Hall/CRC Press.
White I., Frost C., Tokunaga S. (2001). Correcting for measurement error in binary and continuous variables using replicates. Statistics in Medicine; 20: 3441-3457.
Knuiman M. W., Divitini M. L., Buzas J. S., Fitzgerald P. E. B. (1998). Adjustment for regression dilution in epidemiological regression analyses. Annals of Epidemiology; 8: 56-63.
- Survival Analysis
Theme Co-ordinators: Bernard Rachet, Aurelien Belot
Please see here for slides and audio recordings of previous seminars relating to this theme.
Survival analysis is at the core of any study of time to a particular event, such as death, infection, or diagnosis of a particular cancer. It is therefore fundamental to most epidemiological cohort studies, as well as many randomised controlled trials (RCTs).
An important issue in survival analysis is the choice of time scale: this could be, for example, time since entry into the study (or since first treatment in an RCT), time since a particular event (e.g. the Japanese tsunami), or time since birth (i.e. age). The latter is particularly relevant for epidemiological studies of chronic diseases, where age often exerts a substantial confounding effect (see [1], Chapter 6, for a discussion of alternative time scales).
Usually not all participants are followed up until they experience the event of interest, so that their times are ‘censored’. In this case, the available information consists only of a lower bound for the actual event time. It is typically assumed that the process giving rise to censoring is independent of the process determining the time to the event of interest. In contrast to most regression approaches (which typically model the mean of a distribution given explanatory variables), many survival analysis models are defined in terms of the hazard (or rate) of the event of interest. Within this framework, the hazard is expressed as a function of explanatory variables and an underlying ‘baseline’ hazard. Fully parametric models assume a particular form for the baseline hazard, the simplest being that it is constant over time (Poisson regression). Cox’s proportional hazards model, perhaps the most popular model for survival data, makes no parametric assumptions about the baseline hazard. Both the Poisson and Cox regression models assume that hazards are proportional between individuals with different values of the explanatory variables. This assumption can be relaxed, for example through the use of Aalen’s additive hazard model.
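The simplest fully parametric case can be checked numerically: under a constant hazard, the maximum likelihood estimate of the rate is simply the number of events divided by the total person-time at risk, exactly as in a Poisson (intercept-only) regression. A minimal sketch with simulated, administratively censored data (rate and censoring time are illustrative):

```python
import random

random.seed(4)
rate = 0.5        # true constant hazard (illustrative)
cens = 2.0        # administrative censoring at 2 time units
n = 100_000

deaths, person_time = 0, 0.0
for _ in range(n):
    t = random.expovariate(rate)   # exponential event time
    if t <= cens:
        deaths += 1
        person_time += t           # followed until the event
    else:
        person_time += cens        # censored: contributes a lower bound only

# MLE of a constant hazard: events / person-time, despite ~37% censoring
rate_hat = deaths / person_time
print(round(rate_hat, 3))
```

Note that censored individuals still contribute their follow-up time to the denominator, which is why independent censoring does not bias the estimate here.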
Generalisations to deal with repeated episodes of an event of interest, such as infection, are possible through the introduction of random effects that capture the correlation among events occurring in the same individual. Within the survival analysis literature these are referred to as frailty models. → Design and analysis for dependent data
An alternative approach to modelling survival data, more in keeping with most regression techniques, involves modelling the (logarithmically transformed) survival times directly. The log survival times are expressed as a linear function of explanatory variables and an error term, with the choice of distribution for the error term leading to the family of accelerated failure time models. When the survival times are assumed to be exponentially distributed, the accelerated failure time model is equivalent to a Poisson regression model.
Most of our applications of survival analysis models involve various flavours of the models mentioned above. However specific issues arise in certain contexts and are of interest to our group. These are discussed below.
Areas of current interest
Censoring may occur for several reasons. A particular setting in which censoring is not independent of the process governing the event of interest arises when there are competing events. Competing events are events that remove the individual from being at risk of the event of interest; in other words, they preclude its occurrence. This happens, for example, if we study lung cancer mortality while individuals may die of other causes. The termination of follow-up of individuals who die from other causes is not the same as loss to follow-up, because the latter does not preclude the occurrence of the event of interest after the time of censoring.
The issues and methods arising in the analysis of competing events have been discussed in the biostatistical literature since the 1980s (for a review see [2]), but have not really filtered into epidemiological practice, with the notable exception of applications to AIDS research [3]. They are only marginally discussed in the RCT literature, where the problem is usually dealt with by creating composite events. → Analysis of clinical trials
There are two main possible approaches to the analysis of data affected by competing events:
a) Carrying out a so-called ‘cause-specific’ analysis, that is, adopting traditional survival analysis methods in which competing events are treated as censoring events. Note however that ‘cause-specific’ in this context is a misnomer, since the estimated effect depends on the rates generating all the other events (see , page 66). The main issue with this approach is one of interpretation, as all estimated effects are conditional on not having experienced the competing event.
b) Adopting a different focus, that is, modelling the cumulative incidence of the event of interest rather than its hazard (or rate). This approach was first proposed by Fine and Gray [4] but belongs to the broader family of inverse probability weighting (IPW) estimators (e.g. [5]) that have also been proposed in other contexts, notably to deal with informative missingness and selection bias [6-7]. → Causal inference, Missing data and measurement error
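The contrast between the two approaches can be seen in a tiny worked example. Below, a nonparametric cumulative incidence function (in the Aalen-Johansen form) is compared with the naive ‘1 minus Kaplan-Meier’ estimate obtained by censoring competing events; the data are entirely made up for illustration:

```python
# Toy data: (time, cause); cause 0 = censored, 1 = event of interest, 2 = competing event
data = sorted([(1, 2), (2, 1), (3, 2), (4, 1), (5, 0),
               (6, 1), (7, 2), (8, 1), (9, 0), (10, 0)])

n_at_risk = len(data)
S = 1.0        # Kaplan-Meier survival from any event (all causes)
S_naive = 1.0  # 'cause-specific' KM: competing events treated as censoring
cif1 = 0.0     # cumulative incidence of cause 1 (Aalen-Johansen form)

for t, cause in data:           # all event/censoring times are distinct here
    if cause == 1:
        cif1 += S * (1 / n_at_risk)       # S just before t, times cause-1 hazard
        S_naive *= 1 - 1 / n_at_risk
    if cause in (1, 2):
        S *= 1 - 1 / n_at_risk            # any event removes the subject
    n_at_risk -= 1

naive_risk = 1 - S_naive
print(round(cif1, 3), round(naive_risk, 3))  # the naive risk overstates the CIF
```

Because some subjects die of the competing cause before they can experience the event of interest, the naive estimate (here about 0.59) exceeds the cumulative incidence (0.44): treating competing events as ordinary censoring answers a hypothetical question about a world where the competing cause is removed.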
Information on cancer survival is essential for cancer control and has important implications for cancer policy. The primary indicator of interest is net survival, a conceptual metric: the survival that would be observed if the patients were subject only to mortality from the disease of interest, while the mortality rate from that disease remained as it is in the presence of competing causes of death (the only situation which can actually be observed).
Two approaches attempt to estimate net survival: cause-specific survival and relative survival. Relative survival [8] is the standard approach to estimating population-based cancer survival when the actual cause of death is not accurately known. Although widely used in the cancer field, it can be applied to any disease at the population level. Relative survival was originally defined as the ratio of the observed survival probability of the cancer patients to the survival probability that would have been expected if the patients had experienced the same mortality as the general population (background mortality) with similar demographic characteristics, e.g. age, sex and calendar year. Background mortality is derived from life tables stratified at least by age, sex and calendar time.
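The ratio definition above amounts to a two-line calculation once the expected survival has been built up from life-table probabilities. A minimal sketch, with entirely hypothetical numbers:

```python
# Hypothetical 5-year observed survival of a cancer patient cohort
obs_surv = 0.45

# Hypothetical annual survival probabilities of a demographically matched
# general population, as would be read off a life table
expected_annual = [0.98, 0.97, 0.97, 0.96, 0.96]

exp_surv = 1.0
for p in expected_annual:
    exp_surv *= p          # expected 5-year survival under background mortality

rel_surv = obs_surv / exp_surv   # relative survival: observed / expected
print(round(exp_surv, 3), round(rel_surv, 3))
```

Relative survival exceeds observed survival because part of the observed mortality is background mortality the patients would have faced anyway.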
Unbiased estimator of net survival
Both approaches (cause-specific and relative survival) provide biased estimates of net survival because of informative censoring, in particular due to age. An unbiased descriptive estimator of net survival based on the principle of inverse probability weighting has recently been proposed, alongside a modelling approach (Pohar-Perme M, Stare J, Estève J. Biometrics 2011, in review).
Multivariable excess hazard models
Relative survival is the survival analogue of excess mortality. Additive regression models for relative survival estimate the hazard at time t since diagnosis of cancer as the sum of the expected (background) hazard of the general population at time t and the excess hazard due to cancer [9-11]. More flexible models, using splines to model the baseline excess hazard of death as well as non-proportional covariate effects, have recently been developed [12-14]; modelling the log-cumulative excess hazard has also been proposed [15-16]. Alternative approaches have recently been developed [17].
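The additive decomposition underlying these models is worth stating concretely: the observed hazard is the expected (background) hazard plus the excess hazard, so a crude excess mortality rate follows directly from event counts, person-time and a life-table rate. All numbers below are hypothetical:

```python
# Hypothetical patient cohort: 180 deaths over 1000 person-years of follow-up
deaths, person_years = 180, 1000.0

# Hypothetical background mortality rate from a demographically matched life table
expected_rate = 0.02            # deaths per person-year

observed_rate = deaths / person_years            # overall hazard: 0.18
excess_rate = observed_rate - expected_rate      # hazard attributable to the cancer
print(round(excess_rate, 3))
```

The regression models cited above generalise this subtraction, letting the excess component depend on time since diagnosis and on covariates.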
Unbiased estimation of net survival requires the inclusion in the excess hazard models of the main variables driving censoring, variables usually included in the life tables.
Estimation of net survival relies on accurate life tables. Methodology based on multivariable flexible Poisson models has been developed to build complete, smoothed life tables for subpopulations defined by region, deprivation, ethnicity, etc. [19].
Survival on sparse data
In contrast with incidence and mortality, very little has been done on the estimation of survival from sparse data or for small areas [20]. The main challenge for survival is the additional dimension of time since diagnosis. Multilevel modelling and Bayesian approaches are two main possible routes. Ultimately, the presentation of such survival results can easily mislead healthcare policy makers, and methodological work on mapping and funnel plots is needed [21].
Public health relevance
Several indicators (avoidable deaths, population ‘cure’ parameters, crude probability of death, partitioned excess mortality) have been explored to present cancer survival results in ways more relevant for public health and health policy.
Missing data and misclassification
The analysis of routine, population-based data always faces the problem of incomplete data, for which it may be difficult or impossible to obtain the required complementary information. A tutorial paper explored the estimation of relative survival when data are incomplete [22]. Even when complete, tumour stage in particular may be misclassified, compromising comparisons of cancer survival between subpopulations.
Disparities in cancer survival
Inequalities in cancer survival are still not well understood and structural equation modelling appears to be a possible approach to investigate potential causal pathways.
1. Clayton D and Hills M. Statistical Models in Epidemiology. Oxford University Press, 1993, Oxford.
2. Putter, H., Fiocco, M., and Geskus, R. B. Tutorial in biostatistics: Competing risks and multi-state models. Statistics in Medicine. 2007: 26, 2389–2430.
3. CASCADE Collaboration. Effective therapy has altered the spectrum of cause specific mortality following HIV seroconversion. AIDS 2006; 20: 741-749.
4. Fine, JP and Gray R J. A proportional hazards model for the subdistribution of a competing risk. Journal of the American Statistical Association. 1999: 94, 496–509.
5. Klein JP, Andersen PK. Regression Modeling of Competing Risks Data Based on Pseudovalues of the Cumulative Incidence Function. Biometrics 2005: 61, 223–229.
6. Robins JM, et al. Semiparametric regression for repeated outcomes with non-ignorable non-response. Journal of the American Statistical Association. 1998; 93 1321-1339.
7. Hernán MA, Hernandez-Diaz S, Robins JM. A structural approach to selection bias. Epidemiology 2004;15:615-625.
8. Ederer F, Axtell LM, Cutler SJ. The relative survival: a statistical methodology. Natl Cancer Inst Monogr 1961; 6: 101-21.
9. Hakulinen T, Tenkanen L. Regression analysis of relative survival rates. J Roy Stat Soc Ser C 1987; 36: 309-17.
10. Estève J, Benhamou E, Croasdale M, Raymond L. Relative survival and the estimation of net survival: elements for further discussion. Stat Med 1990; 9: 529-38.
11. Dickman PW, Sloggett A, Hills M, Hakulinen T. Regression models for relative survival. Stat Med 2004; 23: 51-64.
12. Bolard P, Quantin C, Abrahamowicz M, Estève J, Giorgi R, Chadha-Boreham H, Binquet C, Faivre J. Assessing time-by-covariate interactions in relative survival models using restrictive cubic spline functions. J Cancer Epidemiol Prev 2002; 7: 113-22.
13. Giorgi R, Abrahamowicz M, Quantin C, Bolard P, Estève J, Gouvernet J, Faivre J. A relative survival regression model using B-spline functions to model non-proportional hazards. Stat Med 2003; 22: 2767-84.
14. Remontet L, Bossard N, Belot A, Estève J, FRANCIM. An overall strategy based on regression models to estimate relative survival and model the effects of prognostic factors in cancer survival studies. Stat Med 2007; 26: 2214-28.
15. Nelson CP, Lambert PC, Squire IB, Jones DR. Flexible parametric models for relative survival, with application in coronary heart disease. Stat Med 2007; 26: 5486-98.
16. Lambert PC, Royston P. Further development of flexible parametric models for survival analysis. Stata J 2009; 9: 265-90.
17. Perme MP, Henderson R, Stare J. An approach to estimation in relative survival regression. Biostatistics 2009; 10: 136-46.
18. Estève J, Benhamou E, Raymond L. Statistical methods in cancer research, volume IV. Descriptive epidemiology. (IARC Scientific Publications No. 128). Lyon: International Agency for Research on Cancer, 1994.
19. Cancer Research UK Cancer Survival Group. Life tables for England and Wales by sex, calendar period, region and deprivation. http://www.lshtm.ac.uk/ncdeu/cancersurvival/tools/, 2004.
20. Quaresma M, Walters S, Gordon E, Carrigan C, Coleman MP, Rachet B. A cancer survival index for Primary Care Trusts. Office for National Statistics, 7 Sep 2010. http://www.statistics.gov.uk/statbase/Product.asp?vlnk=15388
21. Spiegelhalter DJ. Funnel plots for comparing institutional performance. Statistics in Medicine 2005; 24: 1185-202.
22. Nur U, Shack LG, Rachet B, Carpenter JR, Coleman MP. Modelling relative survival in the presence of incomplete data: a tutorial. IJE 2010; 39: 118-28.
- Time Series Regression Analysis
Theme Co-ordinators: Antonio Gasparrini, Ben Armstrong
Please see here for slides and audio recordings of previous seminars relating to this theme. This theme looks at time series regression analysis. A time series may be defined as a sequence of measurements taken at (usually equally spaced) ordered points in time.
Time series designs are increasingly being exploited in biomedical research, owing to the availability of routinely collected series of administrative or medical data, such as mortality or morbidity counts, environmental measures, and changes in socio-economic or demographic indices. Modern methods have extended the application of time series analysis beyond traditional settings, exploiting more complex data structures and modelling techniques.
At CSM, we work on methodological issues such as:
- Case time series for individual-level and small-area data
- Distributed lag linear and non-linear models
- Extended two-stage designs
- Health impact assessments of environmental factors and climate change
- Big data and novel data technologies
- Functional data analysis with time series data
- Interrupted time series designs for public health evaluation
- Software development
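As a concrete illustration of one item in the list above, an interrupted time series analysis can be sketched as a segmented regression with a level change at the intervention, fitted by ordinary least squares. This is a minimal sketch, not the group's own code; the series length, intervention point, effect size and noise level are all hypothetical:

```python
import random

def lstsq(X, y):
    """Least squares via normal equations and Gaussian elimination with pivoting."""
    m, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(m)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(m)) for p in range(k)]
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):
        beta[p] = (b[p] - sum(A[p][c] * beta[c] for c in range(p + 1, k))) / A[p][p]
    return beta

random.seed(5)
t0 = 60                              # hypothetical intervention point (month 60 of 120)
ys, X = [], []
for t in range(120):
    post = 1 if t >= t0 else 0
    mu = 50 + 0.1 * t - 8 * post     # underlying trend plus an abrupt level drop of 8
    ys.append(mu + random.gauss(0, 2))
    X.append([1.0, float(t), float(post)])

b0, b1, b2 = lstsq(X, ys)            # intercept, pre-existing trend, level change
print(round(b2, 1))                  # estimated effect of the intervention
```

In practice the outcome is often a count modelled with Poisson regression, and seasonality and autocorrelation need handling (see the interrupted time series tutorial cited below), but the segmented-design idea is the same.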
Overview of methods
- Armstrong B, Gasparrini A, Tobias A. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis. BMC Medical Research Methodology. 2014;14(1):122.
- Gasparrini A. Modelling lagged associations in environmental time series data: a simulation study. Epidemiology. 2016;27(6):835-842. DOI: 10.1097/EDE.0000000000000533. PMID: 27400064; PMCID: PMC5388182.
- Gasparrini A, …, Armstrong B. Mortality risk attributable to high and low ambient temperature: a multicountry observational study. The Lancet. 2015;386(9991):369-375. DOI: 10.1016/S0140-6736(14)62114-0. PMID: 26003380; PMCID: PMC4521077.
- Gasparrini A, …, Armstrong BG. Temporal variation in heat-mortality associations: a multicountry study. Environmental Health Perspectives. 2015;123(11):1200-1207.
Distributed lag non-linear models
- Gasparrini A, Armstrong B, Kenward MG. Distributed lag non-linear models. Statistics in Medicine. 2010;29(21):2224-34.
- Gasparrini A, Scheipl F, Armstrong B, and Kenward MG. A penalized framework for distributed lag non-linear models. Biometrics. 2017;73(3):938-948. DOI: 10.1111/biom.12645. PMID: 28134978.
- Vicedo-Cabrera AM, Sera F, Gasparrini A. Hands-on tutorial on a modeling framework for projections of climate change impacts on health. Epidemiology. 2019;30(3):321-329. DOI: 10.1097/EDE.0000000000000982. PMID: 30829832; PMCID: PMC6533172.
- Vicedo-Cabrera AM, …, Gasparrini A. The burden of heat-related mortality attributable to recent human-induced climate change. Nature Climate Change. 2021;11(6):492-500. DOI: 10.1038/s41558-021-01058-x. PMID: 34221128; PMCID: PMC7611104.
- Gasparrini A. The case time series design. Epidemiology. 2021;32(6):829-837. DOI: 10.1097/EDE.0000000000001410. PMID: 34432723.
- Gasparrini A. A tutorial on the case time series design for small-area analysis. BMC Medical Research Methodology. 2022;22(1):129. DOI: 10.1186/s12874-022-01612-x. PMID: 35501713.
- Mistry MN, …, Gasparrini A. Comparison of weather station and climate reanalysis data for modelling temperature-related mortality. Scientific Reports. 2022;12(1):5178. DOI: 10.1038/s41598-022-09049-4. PMID: 35338191.
- Schneider R, …, Gasparrini A. A satellite-based spatio-temporal machine learning model to reconstruct daily PM2.5 concentrations across Great Britain. Remote Sensing. 2020;12(22):3803. DOI: 10.3390/rs12223803.
- Sera F, Armstrong B, Blangiardo M, Gasparrini A. An extended mixed-effects framework for meta-analysis. Statistics in Medicine. 2019;38(29):5429-5444. DOI: 10.1002/sim.8362. PMID: 31647135.
- Sera F, Gasparrini A. Extended two-stage designs for environmental research. Environmental Health. 2022;21(1):41. DOI: 10.1186/s12940-022-00853-z. PMID: 35436963; PMCID: PMC9017054.
- Gasparrini A. Distributed lag linear and non-linear models in R: the package dlnm. Journal of Statistical Software. 2011;43(8):1-20.
- Sera F, …, Gasparrini A. How urban characteristics affect vulnerability to heat and cold: a multi-country analysis. International Journal of Epidemiology. 2019;48(4):1101-1112. DOI: 10.1093/ije/dyz008. PMID: 30815699.
- Sera F, …, Gasparrini A. Air conditioning and heat-related mortality: a multi-country longitudinal study. Epidemiology. 2020;31(6):779-787. DOI: 10.1097/EDE.0000000000001241.
- Gasparrini A, Leone M. Attributable risk from distributed lag models. BMC Medical Research Methodology. 2014;14(1):55.
- Liu C, …, Gasparrini A, Kan H. Ambient particulate air pollution and daily mortality in 652 cities. New England Journal of Medicine. 2019;381(8):705-715. DOI 10.1056/NEJMoa1817364. PMID: 31433918; PMCID: PMC7891185.
- Masselot P, …, Gasparrini A. Differential mortality risks associated with PM2.5 components: a multi-country, multi-city study. Epidemiology. 2022;33(2):167-175. DOI: 10.1097/EDE.0000000000001455. PMID: 34907973.
- Gasparrini A, Armstrong B. The impact of heat waves on mortality. Epidemiology. 2011;22(1):68-73
- Hajat S, Armstrong B, …. Impact of high temperatures on mortality: is there an added heat wave effect? Epidemiology. 2006;17(6):632-638.
- Armstrong B, …, Gasparrini A. Longer-term impact of high and low temperature on mortality: an international study to clarify length of mortality displacement. Environmental Health Perspectives. 2017;125(10):107009.
- Lowe R, …, Gasparrini A. Combined effects of hydrometeorological hazards and urbanisation on dengue risk in Brazil: a spatiotemporal modelling study. The Lancet Planetary Health. 2021;5(4):e209-e219. DOI: 10.1016/S2542-5196(20)30292-8.
- Sera F, …, Gasparrini A, Lowe R. A cross-sectional analysis of meteorological factors and SARS-CoV-2 transmission in 409 cities across 26 countries. Nature Communications. 2021;12(1):5968. DOI: 10.1038/s41467-021-25914-8. PMID: 34645794.
- Schneider R, …, Gasparrini A. Differential impact of government lockdown policies on reducing air pollution levels and related mortality in Europe. Scientific Reports. 2022;12(1):726. DOI: 10.1038/s41598-021-04277-6. PMID: 35082316.
- Lopez Bernal J, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. International Journal of Epidemiology. 2017;46(1): 348-355.
- Lopez Bernal J, Soumerai S, Gasparrini A. A methodological framework for model selection in interrupted time series studies. Journal of Clinical Epidemiology. 2018;103:82-91.
- Masselot P, Ouarda T.B.M.J., …. Heat-related mortality prediction using low-frequency climate oscillation indices: Case studies of the cities of Montréal and Québec, Canada. Environmental Epidemiology 2022;6, e206.
- Sera F, …, Cortina-Borja M. Using functional data analysis to understand daily activity levels and patterns in primary school-aged children: Cross-sectional analysis of a UK-wide study. PLOS ONE. 2017;12(11):e0187677.