Where’s the evidence for how we run clinical trials?

No one ever said that doing a clinical trial was easy. Indeed, it often feels like a Sisyphean task when you are faced with obdurate funding committees, or with centres that seem unable to recruit a single patient when, just six months previously, they were inundated with them. Every piece of research has its pain points; however, we sometimes have a tendency to over-complicate things.

This was exactly the message of Shaun Treweek's talk at the 2nd Clinical Trials Methodology Conference in November last year, where he asked whether we were making our own lives more difficult than they needed to be. We tend to do trials the way we do simply because that's the way we have always done them.

Do we really need to collect all those data? Is selecting that site a good idea, and why? Do we know whether our retention strategies are effective and, if not, why are we using them without evaluating them?

Using the analogy of the British cycling team and its 'aggregation of marginal gains', he argued that if we could systematically identify the discrete processes within a trial and improve the efficiency of each by just a few percent, the overall benefit would be huge. This is the ambition of the Trial Forge initiative.
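To see why such small gains matter, consider the arithmetic (with illustrative numbers of our own, not figures from the talk): if a trial pathway involves ten discrete processes and each is made 3% more efficient, the improvements compound rather than simply add:

(1 + 0.03)^10 ≈ 1.34

so ten modest 3% gains amount to roughly a 34% improvement across the whole pathway.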

Following this analogy, the first stage of this ambitious project was to develop a trial pathway onto which evidence could be mapped, and a shared vision for how Trial Forge should work. It was in pursuit of this that we held the first Trial Forge meeting in the Wellcome Rooms at the Royal Society of Edinburgh last week.

The day kicked off with Mike Clarke succinctly capturing our pain: we have dispersed knowledge, gaps in that knowledge and no coordinated effort to fill those gaps. Using the example of recruitment, he identified over 32,000 articles through a PubMed search; however, only 45 of these were eligible for inclusion in the relevant Cochrane review. Similarly, for retention, almost 14,000 studies were identified, with only 38 included in the Cochrane review. This was further emphasised by Paula Williamson, who presented the results of a priority-setting exercise in which research into recruitment, retention and outcome selection was identified as the highest priority.

So how do we, as researchers, know that we are doing more good than harm with the designs we choose for our studies, and how can we ensure that we make well-informed choices about all aspects of randomised controlled trials?

Presenting her work with the Global Health Network and the Process Map they have created, Trudie Lang argued that clinical research involves many steps that are largely similar irrespective of location, disease area or type of study. At this point, we split up into smaller groups to discuss what we felt were the key trial processes that Trial Forge should cover, given the length of the trial 'journey'.

Led by Doug Altman, my group quickly decided that the starting point should be formulating the research question. While recognising the importance of research prioritisation, we felt that there were too many potential local influences that we could not capture. Similarly, dissemination was selected as the endpoint, with the pathway then feeding into other systems for implementation and eventual assessment of impact.

However, at this point, it emerged that the design and conduct of a trial was not a linear process, even when viewed retrospectively. This made separating the overall pathway into discrete, sequential processes impossible. Instead, we split it into seven general steps, many of which were ongoing throughout the research process, and were often re-evaluated based on feedback from later steps.

After feeding back our discussions to the main group, we split off again to discuss how to ensure that Trial Forge is useful to all users. Here we identified the audience as anyone conducting a 'definitive' (phase III/IV) clinical trial, whether novice or expert. This was supported by the other groups; however, it was agreed that users would want different things from the project depending on their level of expertise.

It was felt that more novice researchers would benefit from an evidence-based project management tool, presenting the various options together with the evidence (or lack thereof) for their use, and integrating with existing initiatives such as the EQUATOR Network. Conversely, a register of trial methodology research, analogous to existing study registers, would better serve more expert researchers.

So what next for Trial Forge? The steering committee are pulling together all the notes and scribblings from the meeting and formulating a plan to take this exciting project forward. Make sure you keep checking the Trial Forge website for updates.
