TMRP Working Group Funding Awards

INITIAL: Involving patients and the public In sTatistIcal Analysis pLans

Lead: Dr Beatriz Goulao, University of Aberdeen

Improving relationships and communication between trialists and public partners has been identified as the top priority for methodological research in patient and public involvement (PPI) in trials. PPI aims to enhance research by improving its relevance and increasing transparency and public trust; in practice, however, it often remains tokenistic. Our recent work has shown that the public believes patient and public involvement in the numerical aspects of clinical trials is important, but that trialists find involving patients in these aspects challenging. Statistical analysis plans contain many important analytical choices, yet patients are rarely involved in them, which may undermine the relevance of trial results. Failing to actively communicate with and involve patients or the public in these decisions can ultimately lead to research waste, for example by answering research questions that are not relevant to patients. This project proposes a first step to address this gap by: 1) developing Plain English Summaries that define and describe relevant items in statistical analysis plans, using a creative workshop methodology; and 2) identifying, through a Delphi consensus study, the most important items of statistical analysis plans in which to involve patients and the public.

Determining the most important areas requiring methodological research for routine data in trials: a consensus study.

Lead: Dr Fiona Lugg-Widger 

Researchers are increasingly utilising routinely collected data to support data collection in clinical trials. The availability of routine data for research has increased, much of it enabled by infrastructure funding. However, challenges remain for data access, storage and sharing. Many of these challenges have been experienced within individual trial teams, but the lessons learned have not necessarily been shared between these disparate silos. Publications drawing on experienced researchers and a limited number of case studies make reference to these and other ongoing challenges. However, the challenges as experienced and perceived by the wider community have not been systematically collected and reported.

This funded work aims to systematically identify these real and perceived, ongoing challenges from the perspective of all relevant stakeholders in the UK. We will carry out a three-step Delphi process consisting of two rounds of anonymous web-based surveys (steps 1 and 2) and a virtual consensus meeting (step 3). Stakeholders will be identified through co-applicant networks and will include trialists, EHR infrastructures (e.g. HDRUK), funders of EHR trials, regulators (HRA, MHRA), data providers and the public. Stakeholders will propose research questions/uncertainties that they believe are of particular importance (step 1), and then rate each proposed question on a 1-5 scale (step 2). Research questions/uncertainties that meet predetermined consensus thresholds will be brought forward to the consensus meeting (step 3) for discussion and ranking with representatives of the stakeholder groups.
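To illustrate the filter between steps 2 and 3, a consensus rule of this kind might be sketched as below. The 70%/15% thresholds and the function name are illustrative assumptions only, not the study's predetermined values:

```python
# Hypothetical sketch of a Delphi consensus rule on 1-5 ratings.
# Thresholds are illustrative, not the study's actual values.

def meets_consensus(ratings, high_frac=0.70, low_frac=0.15):
    """Return True if a question advances to the consensus meeting.

    ratings: integer scores on the 1-5 scale from step 2.
    'Consensus in' here means at least high_frac of stakeholders
    rate the question 4 or 5, and at most low_frac rate it 1 or 2.
    """
    n = len(ratings)
    high = sum(1 for r in ratings if r >= 4) / n
    low = sum(1 for r in ratings if r <= 2) / n
    return high >= high_frac and low <= low_frac

# Example: 8 of 10 stakeholders rate the question 4-5, one rates it 2.
print(meets_consensus([5, 5, 4, 4, 4, 4, 5, 4, 3, 2]))  # True
```

In practice the exact thresholds, and whether a "consensus out" rule is also applied, would be prespecified in the study protocol.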

Ensuring academic trials answer the questions of interest: Implementation of the estimand framework

Lead: Dr Suzie Cro

When evaluating the effect of a treatment in a clinical trial, different questions can be addressed. For example: does the treatment work when it is received as prescribed? Or does the treatment work regardless of how much of it is received? The answers to these questions may lead to different conclusions about treatment benefit. It is therefore important to have a clear understanding of exactly what treatment effect a trial intends to estimate, referred to as the 'estimand'. Trial design, conduct and analysis can then be aligned to address this.
The use of estimands has recently been brought into focus with the publication of the ICH-E9(R1) addendum which describes a framework for how estimands should be defined. However, in a recent review of 50 published trial protocols (92% academic/not-for-profit sponsor), none specified estimands.
There is a need to implement the estimand framework across UKCRC registered Clinical Trials Units (CTUs). We propose a workshop for UKCRC CTU statisticians and clinicians on implementation of the framework. The aim is to show trialists how to use the estimand framework by providing implementation tools, increasing uptake of the framework and ensuring trials are designed and analysed to answer the questions of interest.

Minority ExpeRiences In Trials (MERIT): Understanding why ethnic minority groups are under-represented in trials through a rapid qualitative evidence synthesis, and mapping evidence to find solutions

Lead: Dr Heidi Gardner, University of Aberdeen

Despite growing diversity in the UK’s population, non-White British people are less likely to be represented in clinical trial populations. Poor diversity is a public health issue; if trial participants do not reflect the patients the trial is designed to serve, there is no guarantee that the results will apply to un/under-represented patients. There is also a moral imperative to ensure that everyone has an equal opportunity to participate in trials. This research aims to explore factors that impact on the recruitment of ethnic minority people to trials, and to better understand how those factors differ from those affecting the recruitment of predominantly white populations. Our objectives are to: 1) rapidly review trial recruitment evidence specific to the views and experiences of ethnic minority groups in a qualitative evidence synthesis; 2) compare the factors that impact on trial participation found in objective 1 with the existing Cochrane Recruitment Qualitative Evidence Synthesis, to explore similarities and differences between mainly white participants and people from ethnic minority backgrounds; and 3) analyse the findings from objective 2 to suggest if/how existing recruitment interventions/strategies might or might not work to increase trial representativeness and, where they do not, make recommendations for trialists on designing new interventions/strategies.

Beyond "must speak English": In search of a fairer way to operationalise patient screening for language proficiency in trial recruitment

Lead: Dr Talia Isaacs, University College London

Trial teams set eligibility criteria to select trial participants. One common nonclinical eligibility criterion is that patients must speak English in order to participate. Excluding people who are unfairly or inaccurately judged as unable to speak English means that patients who could benefit from the treatment, or might be harmed by the intervention in its current form, may be left out and their perspectives ignored. This could limit external validity and potentially exacerbate existing health inequalities. Conversely, participants may be included when they are unable to understand the conditions of research participation and, hence, cannot provide truly informed consent. The objective of this study is to better understand how trial teams make language-related gatekeeping decisions during recruitment. First, we will systematically examine NIHR research reports featuring trials targeting two conditions by which ethnic minorities are disproportionately affected: type 2 diabetes and depression. We will investigate how language eligibility is operationalised and the validity of the reported procedures. Then, working with the South Asian Health Foundation and Centre for BME Health, we will seek the views of patient representatives and recruiters on their experiences of language challenges in recruitment and retention, and their views on improving language screening practice.

Using Machine Learning with user feedback to improve ORRCA  

Lead: Anna Kearney

ORRCA/ORRCA2 is an online searchable database aimed at helping identify effective solutions to the two biggest challenges in clinical trial delivery: recruiting and retaining trial participants. It is regularly accessed by users from across the world and has been used to support key methodological projects.

However, it is not clear how users engage with the search function in ORRCA or how relevant the returned results are to the original query. Searches may return large numbers of results, of which only a few may be highly relevant. If searches are not effective, this may affect the use and uptake of the resource. In addition, many of the search fields rely on data that is manually extracted from eligible articles during the review process, which is time-consuming.

This research will assess the utility of the search function by asking a diverse group of users to evaluate the relevance of returned results. Data will also be collected on the search terms used, search aims and general experience of the ORRCA site in order to identify areas for improvement. Machine Learning methods will use the collected data to train and test an improved search algorithm for use in the ORRCA website.
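As one illustration of how user relevance judgements could feed back into a search algorithm, a classic Rocchio-style relevance-feedback update re-weights query terms toward results users marked relevant. This is only a sketch of one candidate technique, not necessarily the method the team will use, and all function and variable names here are hypothetical:

```python
# Hypothetical sketch: re-weighting search terms from user relevance
# judgements (a Rocchio-style update). Names are illustrative;
# ORRCA's actual data model and algorithm are not described here.
from collections import Counter

def tokens(text):
    return text.lower().split()

def updated_query_weights(query, relevant_docs, irrelevant_docs,
                          alpha=1.0, beta=0.75, gamma=0.15):
    """Return per-term weights nudged toward terms appearing in
    results users marked relevant, and away from irrelevant ones."""
    weights = Counter({t: alpha for t in tokens(query)})
    for doc in relevant_docs:
        for t, c in Counter(tokens(doc)).items():
            weights[t] += beta * c / len(relevant_docs)
    for doc in irrelevant_docs:
        for t, c in Counter(tokens(doc)).items():
            weights[t] -= gamma * c / len(irrelevant_docs)
    # Keep only positively weighted terms for the expanded query.
    return {t: w for t, w in weights.items() if w > 0}

w = updated_query_weights(
    "trial recruitment",
    relevant_docs=["recruitment strategies for randomised trials"],
    irrelevant_docs=["retention of staff"])
print(sorted(w, key=w.get, reverse=True)[:3])
```

A learned ranking model trained on the collected feedback data would be a natural next step beyond this kind of hand-tuned re-weighting.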