
Data on death — the complex task of assessing mortality

By Mindo - 23rd Oct 2018

As the CervicalCheck controversy showed, clinical audit can be a very sensitive issue. The manner in which audit data are collected is not straightforward, with important decisions required about methodology and the allocation of staff time to the process. Audits of mortality are especially sensitive, and the method by which the data are collated can be contentious. Moreover, discussion of mortality data often leads to mention of hospital ‘league tables’ and media sensationalism.

<h3 class="subheadMIstyles">Starting point</h3>

The National Office of Clinical Audit (NOCA) was established to create sustainable clinical audit programmes at national level. In December 2014, NOCA was asked by the HSE Quality Improvement Division to work with the Executive’s Health Intelligence Unit to support the monitoring of a National Audit of Hospital Mortality (NAHM). The audit limited its focus to cardiovascular (acute myocardial infarction; heart failure; ischaemic stroke; haemorrhagic stroke) and respiratory (COPD and pneumonia) conditions. NOCA commenced the deployment of a National Quality Assurance System (NQAIS) to all acute hospitals to facilitate the audit.

<img src="../attachments/be7f9238-4aff-4774-9bc3-200b924a9471.JPG" alt="" />

<strong>Dr Brian Creedon</strong>

Clinical Lead and Chair of the NAHM Governance Committee Dr Brian Creedon told the <strong><em>Medical Independent</em></strong> (<strong><em>MI</em></strong>) that, with some notable exceptions (see panel opposite), nobody was examining mortality in an orchestrated manner before the establishment of the NAHM.

“Now, every hospital, some of them have directorate structures; some of them will look at it under quality and patient safety; some of them will look under individual work groups and individual specialties,” according to Dr Creedon.

“But all these hospitals are looking at this data now, are aware of it, and I think it is very positive.”

<h3 class="subheadMIstyles">The benefit of SMRs?</h3>

NQAIS displays in-hospital mortality patterns and standardised mortality ratios (SMRs) in a national context on a web-based tool, where hospitals have an ongoing view of their mortality data and can produce local reports. The SMR is based on the principal diagnosis, or the primary reason the patient was brought to hospital. In order to ensure that ‘like is compared with like’ across different hospitals, potentially confounding factors are adjusted for in the analysis; for example, the age of the patient and the presence of comorbidities. This risk-adjustment approach, sometimes known as the ‘Dr Foster method’, has been used for over 20 years as an indicator of the quality of a hospital.
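In outline, an SMR compares the deaths a hospital actually recorded with the number that would be expected if reference (national) death rates applied to that hospital’s own case mix. The short sketch below is a minimal illustration of this indirect standardisation, using entirely hypothetical strata, rates and counts; the risk-adjustment models used in practice, such as the ‘Dr Foster method’, rely on statistical models fitted to routinely coded data rather than a simple stratified table.

<pre>
# Minimal, hypothetical sketch of an indirectly standardised mortality ratio (SMR).
# SMR = observed deaths / expected deaths, where expected deaths are obtained by
# applying reference (national) death rates to the hospital's own case mix.
# All strata, rates and counts below are invented purely for illustration.

national_rates = {                      # reference in-hospital death rate per stratum
    ("65-74", "pneumonia"): 0.08,
    ("75+",   "pneumonia"): 0.15,
    ("65-74", "heart failure"): 0.06,
    ("75+",   "heart failure"): 0.12,
}

hospital_cases = {                      # one hospital's case mix: stratum -> (admissions, deaths)
    ("65-74", "pneumonia"): (120, 11),
    ("75+",   "pneumonia"): (80, 14),
    ("65-74", "heart failure"): (150, 8),
    ("75+",   "heart failure"): (90, 12),
}

observed = sum(deaths for _, deaths in hospital_cases.values())
expected = sum(admissions * national_rates[stratum]
               for stratum, (admissions, _) in hospital_cases.items())

smr = observed / expected               # > 1 suggests more deaths than the case mix predicts
print(f"Observed: {observed}, expected: {expected:.1f}, SMR: {smr:.2f}")
</pre>

In practice, the expected count for each admission would come from a model fitted to routinely coded data (HIPE, in the Irish context), and confidence limits would typically be placed around the ratio before a hospital’s figure is flagged as higher than expected.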

For Dr Creedon, SMRs provide a robust method for examining mortality across the acute sector. He said it is a much better method than, for example, crude mortality, as it adjusts for various other factors. He added that it also offers a “wide-screen” view of potential problems in patient care.

“SMRs take a wide-screen view of the entire healthcare system,” Dr Creedon stated, “because you could have a clinician or a hospital that doesn’t realise it has a problem in its care paradigm. There could be delays in getting patients from EDs [emergency departments] to the wards. There could be issues with getting certain aspects of care that they may not even be aware of.”

However, not everyone believes the method represents the best approach to measuring the quality of care. Prof John Browne is the Director of the National Health Services Research Institute of Ireland. He is a health services researcher with a professional background in health outcome measurement. Prof Browne advised NOCA a number of years ago not to employ SMRs. Speaking to <strong><em>MI</em></strong>, he said the variation in deaths between hospitals is extremely small. When variations occur, Prof Browne argued they are more likely to be due to where the hospital is situated and the catchment area it serves, rather than problems with clinical quality.

<img src="../attachments/b867dbc5-ae77-47da-bd84-d192beee06b5.PNG" alt="" />

<strong>Prof John Browne</strong>

 “Hospitals treat different types of patients,” according to Prof Browne.  “And they have different catchment areas. If you are a hospital like Limerick or Beaumont, your catchment area is going to be poorer, it’s going to be in ill health, and it’s going to be coming from areas where people are much more susceptible to disease and find it much more difficult to survive treatments. And that’s the main difference. And this is the thing that has been overlooked time and time again. Most people still die in hospitals and one of the biggest predictors of a hospital’s death rate is the availability of alternative places to die in the surrounding catchment area. If you are in a hospital that has plenty of good nursing homes, that has good standards of care at the end of life, or palliative care centres, or hospices, then your death rate is going to go down.

“And lots of parts of the country don’t have that… and the death rate looks terrible. They try and adjust for that with certain formulas, but the adjustment methods for standardised mortality ratios are notoriously bad. So I pay very little attention to hospital death rates.”

<h3 class="subheadMIstyles">Case reviews vs SMRs</h3>

Recent evidence from the UK has also questioned the use of SMRs as a marker. A review requested by Sir Bruce Keogh, National Medical Director of the NHS Commissioning Board, used hospital-wide SMRs to select acute Hospital Trusts in England for detailed consideration of their quality of care. This review was established in February 2013 in the wake of the second Francis report into the Mid-Staffordshire NHS Foundation Trust.

The 14 Trusts selected had a higher-than-expected hospital-wide SMR for two consecutive years. In July 2013, one of the main recommendations of the review was the need for a study into the relationship between “excess mortality rates” based on SMRs and “actual avoidable deaths”. Avoidable deaths were considered by some to be a more meaningful indicator of care quality, as the measure was based on a detailed review of deaths by clinicians rather than statistical probability derived from routine data. The group that undertook the study had already published a paper (PRISM 1), which stated that the incidence of preventable hospital deaths was much lower than previously estimated.

Retrospective case record reviews (RCRRs) of 1,000 adults who died in 2009 in 10 acute hospitals in England were undertaken. According to the research, published in 2012, some 5.2 per cent of deaths were judged as having a 50 per cent or greater chance of being preventable. In the 2015 study (PRISM 2), only 3.6 per cent of deaths were judged to be avoidable.

In spite of these reviews being viewed as a potentially superior quality measure, the 2015 study found that an “absence of a significant association between hospital-wide SMRs and avoidable death proportions suggests that neither metric is a helpful or informative indicator of the quality of a Trust”. Furthermore, the study stated that it is potentially misleading to the public, clinicians, managers and commissioners to praise or condemn a Trust on the basis of either measure.

Dr Helen Hogan, Associate Professor, Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, UK, was one of the lead authors of the studies. Speaking to <strong><em>MI</em></strong>, she said the case review approach and SMRs are two different things.

“[Case reviews] allow clinicians to review the notes of those patients that have died in a standardised way, with the aim of identifying problems in the quality of the care provided, as well as where it went well,” according to Dr Hogan.

“This is primarily a quality improvement process, with findings being fed into a cycle of learning and change. Using this process to judge whether a death is avoidable is difficult, because you need to have multiple reviewers (some suggest eight or more) to get a reliable judgement and sometimes, not all the information you need to make a judgement is stored in the case notes. SMRs were designed principally as a performance measure for comparing different hospitals. However, given the small numbers of avoidable deaths found by most studies to date, it is not likely that these measures are an indicator of the numbers of avoidable deaths.”

<img src="../attachments/570dcb1e-17cd-4da5-925b-e4a1ae120fc8.JPG" alt="" />

<strong>Dr Helen Hogan</strong>

Dr Hogan said some experts would argue that SMRs are also of limited value as a performance measure for the overall quality of care. This is because issues such as coding practices and the number of deaths occurring in hospital versus outside hospital all have significant impacts on the measure.

“In addition, they are a hospital-wide measure and tell us little about departmental performance, especially those departments with few or no deaths,” Dr Hogan added.

Many criticisms have also been made about the quality of the data produced by the Hospital In-Patient Enquiry (HIPE) scheme in Irish hospitals, which is the basis for the SMRs. The scheme was the subject of recent reports by HIQA, one of which related to governance deficiencies.

Ultimately, Dr Hogan said she does not believe there is a perfect measure for comparing quality of care across hospitals. She also said there is a question mark over the very idea of using deaths as a measure of hospital quality.

“It is best to use a combination and know their limits,” she said. Despite these limitations, the NHS recently launched a new programme on measuring and learning from avoidable deaths (see panel opposite).

<h3 class="subheadMIstyles">The value of mixing methods</h3>

Dr Creedon is aware of criticism of SMRs as a measure, but still believes there is value to the method.

“Do I think it [SMR] is 100 per cent robust for every individual patient? Absolutely not. Do I think SMRs are perfect? Absolutely not,” said Dr Creedon. “Just because you come to a hospital with x, does that mean you die from x, or could you have died from something else during your hospitalisation coincidentally? Absolutely. But I do think it is a structured approach and it is the best we have at the moment, but it is not perfect. And I think when we use it across an island, where there is over a million admissions every year, yes, I am very satisfied that it has statistical significance and it can help to improve care. But it is not a panacea for ultimate quality of care; it is one measurement and you need other quality initiatives, and also [to look] at other outcome measures.”

Dr Creedon said he sees considerable merit in case reviews and has recommended that they be used in University Hospital Waterford, where he is based. The method is used in some hospitals, but not on a systemic basis. For example, the 2016 NAHM report identified a ‘red’ signal relating to ischaemic stroke in Cork University Hospital, with analysis showing significant deviation from expected mortality in the months of February and August. The hospital chose to examine the clinical data and used a ‘structured judgment’ case review approach to determine whether quality of care was a contributing factor to the documented outcomes. No quality-of-care deficits were identified in the review. Dr Creedon said that the underlying issue, which related to data quality, has since been resolved.

“It adds another layer, where people are looking at their own individual cases in mortality, doing an assessment of that, discussing it with their colleagues at a forum and asking, ‘could we have done better?’” according to Dr Creedon.

However, he echoed Dr Hogan’s remarks in saying that the method is highly labour-intensive, especially considering Ireland’s low ratio of consultants per capita.

“They are very labour intensive,” stated Dr Creedon, “particularly if you are mindful that some of our specialties in Ireland look [after] two or three times the patient load that a colleague in the UK would. Therefore, you would have two-to-three times the mortality. So you might have three times the work to do. I think we just need to be mindful of introducing this in a way that it is set up… to succeed rather than to fail.”

<h3 class="subheadMIstyles">Future development</h3>

The work of NOCA complements that undertaken by the various Clinical Programmes and other quality initiatives across the health service. Recently, the HSE’s National Sepsis Report 2017 showed the number of sepsis-associated hospital deaths had fallen by more than 20 per cent over the past four years. The reduction was largely the result of improved clinical practices and heightened awareness brought about by the National Sepsis Programme.

University of Limerick Hospital Group also recently published a report on carbapenemase-producing Enterobacteriaceae (CPE) mortality. The report is based on an investigation into whether CPE had contributed significantly to mortality in University Hospital Limerick, from the first isolated case in 2009 up to May 2017. The investigation found that, during the period examined, 196 CPE patients were detected and 73 of those patients had died. The report concluded that CPE was an associated factor in the deaths of eight patients. “All these patients also had other significant medical problems, and CPE was only one of a number of factors contributing to their deaths,” according to the report. This again highlights the difficulty in determining the cause of death in patients with significant comorbidities.

Dr Creedon said the third NAHM annual report is at draft stage and that, in general, NOCA is examining ways in which the quality of data produced in the audit can be improved.  

Something that is not going to happen in the foreseeable future is the publication of hospital league tables, where hospitals are directly ranked against each other in terms of assessed quality.

“A league table is not appropriate for our data because our data isn’t statistically robust to produce a league table,” Dr Creedon said, “because it won’t determine if care is better in one institution over another. It will show that there are extra deaths. But what it won’t show is that if it is a tertiary centre that takes the most complex case, or a transplant centre, obviously their mortality rate is going to be higher. But it doesn’t take those things into account… So the league table is meaningless, but can cause a lot of hurt and damage and scare the public wrongly, where they avoid somewhere where there is excellent care available.”

Dr Creedon added that the size of the Irish health service, which is smaller than the Greater Manchester Trust, makes comparisons difficult.

In the short term, he said NOCA is examining the possibility of publishing data on a wider range of conditions.

“Although we know that there are challenges with the statistical robustness of that data… we are still publishing that data to show we are measuring it, and offering it to hospitals and allowing them the opportunity to improve.”
