
Lessons to be learnt from unsuccessful clinical trials
  1. Rosalind L Smyth
  1. Correspondence to Professor Rosalind L Smyth, Brough Professor of Paediatric Medicine, Institute of Translational Medicine, University of Liverpool, Alder Hey Children's NHS Foundation Trust, Liverpool L12 2AP, UK; r.l.smyth{at}


In this issue of Thorax, Lenney and colleagues have described their trials and tribulations with the MASCOT study (see page 457), which compared three different treatment options in children with asthma whose condition was not adequately controlled on low-dose inhaled corticosteroids (ICS).1 The study, which initially aimed to recruit 900 children and enter 450 into a randomised comparison, recently had to close because it had fallen considerably short of its recruitment targets and its funding from the NIHR Health Technology Assessment Programme was not extended. The MASCOT trial was coordinated by the NIHR Medicines for Children Research Network (MCRN) Clinical Trials Unit and supported by the NIHR MCRN and Primary Care Research Networks, which provided an infrastructure of research nurses and other staff to support recruitment of children both in the community and from hospital outpatient clinics. The trial addressed a very important clinical question, but despite a great deal of effort by researchers, clinical trial methodologists and support staff, it failed. Why was this and what lessons can be learnt for the future?

The first series of tribulations that the investigators ran into related to the preparation, packaging and supply of the investigational medicinal products (IMPs) and placebos. To design the trial as a double-blind comparison among three treatment arms, each child had to take three preparations: ICS plus montelukast and long-acting beta-agonist (LABA) placebo, or ICS plus LABA and montelukast placebo, or ICS plus LABA placebo and montelukast placebo. The investigators depended on two pharmaceutical companies for the supply of these active drugs and placebos. In their section entitled ‘Study progress’, they chronicle a series of problems, which together delayed study initiation by at least 18 months. To maintain the double-blind design, the investigators were critically dependent on the goodwill of these companies, neither of which was driving the study and whose approach to its overall success may have been ambivalent. The investigators had to request substantial ‘additional commercial funding’ to address these costs, which suggests that these aspects were not addressed in the original application.

Many of the statements in the next two sections (entitled ‘Recruitment issues’ and ‘Other challenges’) are personal opinions and not substantiated with relevant data. This is a pity because presumably these investigators were in a position to provide data to support statements such as ‘children in secondary care were mainly too young for the study (pre-school age) or were already receiving add-on therapy’. A number of the comments express frustrations with the complex system of approvals required to conduct multicentre studies in the UK. These include dealing with requests from the Research Ethics Committee, obtaining approvals from individual Research and Development offices and lack of clarity within NHS Trusts about whether research staff were able to access patient data to determine whether patients were eligible for recruitment to MASCOT. These frustrations are not unique and have recently been addressed by a UK government commissioned review by the Academy of Medical Sciences.2 3 This report recommended the formation of a new Health Research Agency to streamline all the current arrangements for ethical approval and to provide a National Research Governance Service. It also called for clear guidance that researchers should be considered part of a clinical care team and able to access such information about patients to enable them to decide if they are potentially eligible for recruitment to a clinical study.

One year after re-opening, the study was clearly falling badly behind its recruitment targets. Of the 450 participants needed, only 65 had been randomised, and fewer children than expected were progressing from the run-in phase to randomisation. Despite strenuous efforts, and a complex variety of recruitment strategies, the proportion of families who responded to invitations to participate in the study, by letter or phone call, was much less than 10%, which meant that recruitment was considerably more costly than had ever been envisaged. Although the authors have provided some figures about responses to different recruitment approaches, it would have been helpful to readers if they had provided a systematic overview of recruitment strategies and their relative success, to enable something to be learnt from their experience. The funding body, which had been aware of the problems with MASCOT for some time, monitored progress closely during this year and closed the study because of poor recruitment after 13 months.

The investigators clearly feel demoralised about this outcome and are critical of the funding body, the NIHR Health Technology Assessment Programme, for closing the study when it did and apparently for not allowing time for new sites to be brought on board and recruitment strategies to be fully implemented. The funder had already committed over £1 million of public money to the study (more than originally requested) and presumably judged that the recruitment targets either would never be met, or would only be met after considerable further investment. Another trial in childhood asthma, the Magnesium Nebuliser Trial in Children (MAGNETIC), was funded at the same time as MASCOT. MAGNETIC is a randomised, placebo-controlled study of nebulised magnesium in acute severe asthma in children and is also running in the UK, supported by MCRN. MAGNETIC's sample size is 500 children and it is on target to complete in March 2011 after a recruitment period of 28 months. Critically, the MAGNETIC trial was only funded after its investigators had demonstrated, in a feasibility study, that they could recruit patients from accident and emergency departments in the numbers needed to complete the study to time and target.

The authors also compare their experience with that of the recently published BADGER trial, which was run in the USA and also asked the question about the most appropriate step-up therapy in children with asthma whose condition was not adequately controlled on low-dose inhaled steroids.4 They attribute their lack of success, compared with BADGER, to the ‘bureaucratic, communication, governance and recruitment issues’ in the UK. This analysis seems somewhat simplistic; BADGER was a much less ambitious study than MASCOT; it recruited 182 patients and followed them for 16 rather than 48 weeks.

The overwhelming impression created by the ‘Trials and tribulations’ narrative is of a group of investigators who were caught by surprise and did too little too late to remedy things. Some of the problems could not have been anticipated, although costs for IMPs and placebos should have been included in the original proposal. The original assumptions about recruitment should have been tested properly in a feasibility study. This could have assessed where the patients were treated, how best to work in a primary care setting, and the relative success of different recruitment strategies and, most importantly, provided an estimate of how many patients might be recruited from different sites over a reasonable time frame. The requirement for a ‘run-in’ period meant that only half of the potentially eligible patients would enter randomisation. A feasibility study could have assessed how necessary the run-in period was to the study design. Feasibility studies have traditionally been poorly understood and unpopular. They are often regarded as a ‘mini’ version of the ‘real’ trial and difficult to publish. While the latter may be true, the former certainly is not. Feasibility studies for clinical trials are all about testing assumptions, including the importance of the research question and acceptability of trial procedures to patients and clinicians, feasibility and success of recruitment strategies, measurement of outcomes and so on.5

What is the role of research networks that have been established in the UK to provide an infrastructure to support high-quality clinical studies such as MASCOT? Clearly, staff working on a trial, and particularly chief investigators such as Professor Lenney, need support: first to negotiate the complex regulatory framework, to identify potential study sites and local investigators, and to provide training and administrative support to establish those sites. Once the trial is open, the major role of the network is to support recruitment. There is a lot of external evidence that MCRN is achieving this very successfully: over 8000 children per year are currently recruited to MCRN portfolio studies, representing a near doubling in numbers in each of two successive years. The narrative by Lenney and colleagues is, however, a reminder that despite these impressive developments in capacity for undertaking clinical research with children, much work remains to be done to avoid such trials and tribulations and to ensure that important research questions are answered, for the benefit of children.




  • Linked article 156885.

  • Competing interests Professor Smyth is Director of NIHR Medicines for Children Research Network.

  • Provenance and peer review Commissioned; not externally peer reviewed.
