Healthcare quality has received sustained attention since the release of To Err is Human by the US Institute of Medicine in late 1999.1 This report captured widespread interest with the oft-quoted estimate that medical errors annually cause 44 000–98 000 deaths in US hospitals alone. This period also coincided with publication of ‘An organisation with a memory’,2 which described the scale and nature of serious failures in the UK National Health Service.
A widely accepted definition describes quality as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.3 4 This definition further characterises quality in terms of six dimensions: safety, effectiveness, patient-centredness, timeliness, efficiency and equity.
Numerous studies document major shortcomings in each of these dimensions across a range of clinical settings.3 One illustrative study showed that only 55% of Americans with chronic medical conditions received basic aspects of acute, chronic and preventive care.5 For example, only 50% of patients with asthma received chronic inhaled corticosteroids and a similarly low percentage of patients with chronic obstructive pulmonary disease (COPD) received influenza vaccination. These major shortfalls in effective healthcare do not simply reflect access issues, as comparable data from Canada (with universal public healthcare) show that only 56% of patients with COPD had undergone spirometry as recommended by guidelines and only 34% received guideline-concordant treatment.6 Given that COPD is projected to become the third leading cause of death by 20307 and is the only common cause of death for which mortality rates continue to climb, we must improve adherence to evidence-based aspects of COPD management.8
Addressing quality problems
Quality improvement (QI) is a science9 and includes numerous distinct strategies for changing patient and provider behaviour, as well as redesigning systems of care—audit and feedback, case management, support for self-management, patient registries and computerised decision support, to name just a few.10–12 But the single most basic approach involves iterative cycles of outcome measurement, identification of problems, implementation of potential solutions and repeated measurement.13
The positive impact of such cycles of continuous QI in pulmonary medicine has been nowhere as evident as under the direction of the American Cystic Fibrosis Foundation Patient Registry and its Therapeutic Development Network. In this issue, Drs Quon and Goss provide a review of the huge impacts these initiatives have had on the lives of patients with cystic fibrosis.14 The overriding principle has been transparency, with all participating centres committed to reporting their results to clinicians and patients.
The American Cystic Fibrosis Foundation Patient Registry has evolved over 45 years from a few basic measures of the natural history of disease to over 300 variables for some 26 000 patients, detailing aspects of management, pulmonary functional status, laboratory data and clinical outcomes, as well as patients' (or their parents') assessments of the quality of care received. This engagement in transparently measuring and improving care has been associated with continued improvements in outcomes, including an increase in life expectancy from 27 years in 1989 to 36 years in 2009.14
Challenges in reporting improvement efforts
We urgently need more such successful improvement initiatives in pulmonary medicine. That said, reporting the methods and results of QI initiatives differs in important ways from reports of traditional clinical research. QI reports tend to address messier problems, involve more complex interventions and require far greater attention to context (table 1).
The ‘messiness’ of problems in QI reflects their broader scope and focus on routine care, rather than the idealised setting of a clinical trial. For instance, a clinical trial might address the question: Does such-and-such drug improve the following specific clinical outcome for patients with COPD? An improvement project, by contrast, might ask: Can we improve outcomes for patients with COPD by reorganising our referral and scheduling processes to ensure timely access and better coordination between specialists and general practitioners? This example illustrates not just the ‘messiness’ problem, but also the intrinsic complexity of the interventions. When reporting a clinical trial, the intervention typically requires scant description because its components are well understood: a drug with known ingredients, administered according to a specified regimen, with such-and-such processes related to follow-up assessment. By contrast, reporting changes to a clinic's referral and scheduling processes requires detailed description, because none of the changes involve ubiquitous or well-understood ingredients and actions.
Messy as the problems of QI are and complex as the associated interventions can be, the crucial role of context in reporting and interpreting improvement initiatives adds a unique dimension that has received increasing attention.15 Potentially relevant contextual factors include external environmental influences (eg, regulatory requirements, payment systems, media attention) and numerous organisational features, such as resources, technologies, staffing, institutional culture and baseline quality, among others.
In interpreting a clinical trial, we do not need to know the psychological or institutional motivations that gave rise to the trial. (‘My father suffered with COPD for many years and the head of my department encouraged me to focus on this promising new drug.’) We do not require such details because, except in the case of commercial interests, they have no bearing on the conduct or interpretation of the research. With QI, however, stating that ‘our hospital undertook this initiative after media reports of poor outcomes’ and ‘the president of the hospital championed this improvement project’ suggests factors that may have directly affected the project's success—staff motivation, executive support for necessary policy changes and provision of resources.
The general issues illustrated in table 1 encompass numerous specific factors potentially relevant to the interpretation of QI research. The SQUIRE (Standards for QUality Improvement Reporting Excellence) statement provides a checklist of 19 items that authors should consider when reporting QI studies. Most items are common to all scientific reporting, but many have been modified to reflect the unique nature of improvement work.16 For instance, the Introduction should include not just a description of relevant background literature but also an explicit description of the local problem that gave rise to the initiative. And the Methods should include not just the usual sections on study design, outcomes of interest and analytic methods, but also describe planning and implementation of the intervention (eg, why specific components were chosen, how they were expected to work).
The importance of this SQUIRE framework can be seen when applied to the published report of a single centre's experience to improve clinician adherence to best practice guidelines for asthma and COPD.17 The intervention consisted of developing a set of evidence-based performance indicators, use of an electronic medical record to support automated generation of performance reports, discussion of division-level reports at regular faculty meetings and quarterly provision of individual performance reports to each faculty member.
Using the SQUIRE checklist, one would include not just a basic description of the academic respirology division in which the intervention occurred, but also the specific local interest that motivated the effort. One would also want to report some detail about the amount of effort required to use the electronic medical record system to generate usable performance reports. And why choose performance reports as the intervention? Unless the main issues underlying the targeted problems all fell under physicians' control, feeding back performance reports to physicians would serve little purpose. Finally, describing the attitudes of the division's leaders and faculty members would help readers understand their receptivity to the performance reports.
This specific paper17 reports information recommended in SQUIRE to a variable degree. However, our point lies not in critiquing this paper, but rather in pointing out the degree to which using the SQUIRE checklist (available at http://www.squire-statement.org/assets/pdfs/SQUIRE_guidelines_table.pdf) facilitates interpretation of the study's results and informs readers' decisions of whether or not such an intervention might work in their practice settings.15
Like the CONSORT statement for the reporting of randomised trials,18 the goal of SQUIRE lies not just in improved reporting, but also in improved design. One would not want clinical trialists to find out about concealed allocation and blinding only at the stage of consulting CONSORT to write up their results. Similarly, recognising the importance of issues covered in SQUIRE will enhance the success of QI research, not just its publication. For instance, the exhortation to report details such as collaboration with major patient advocacy groups and the focus on transparent, detailed reporting of outcomes, as occurred with initiatives in cystic fibrosis,14 also suggests the importance of considering such features in other QI initiatives for chronic illnesses (eg, COPD, diabetes, congestive heart failure, asthma). These specific components may not prove essential in all cases, but the general model followed in cystic fibrosis should serve as a call to arms for others to improve patient care and SQUIRE provides a framework for enhancing both the rigour and the reporting of all such efforts.
Competing interests None.
Provenance and peer review Commissioned; internally peer reviewed.