Showing posts with label Pronovost. Show all posts

Tuesday, March 29, 2011

Pronovost

A great piece in today's Wall Street Journal, an interview with Pronovost, well worth reading.

Tuesday, March 22, 2011

Boeing to the Rescue?

A recent piece in JAMA by Pronovost is well worth reading. In it he contrasts the way we design (or rather fail to design) healthcare, especially in relation to equipment (haphazard, no systems thinking, individuals insisting on their preferred piece of technology, etc.), with the way airlines buy planes. Airlines do not buy planes each of which has different toilet seats, lifebelts, and so on; they buy a standard plane, for economic as well as safety reasons (reduced variation). He suggests that, in an American context, what is needed is a systems integrator similar to Boeing. It is interesting that national healthcare systems, despite being in a better position to act in this way, singularly fail to do so.
I have worked in intensive care units where the number of different types of ventilator exceeded the daily census of ventilated children, and where the specific ventilator a child was placed on depended on which physician was on call. This type of variability is hugely damaging: it is expensive, it is a safety risk, and it results in poor training. Compounding this is a perceived need for hospitals to acquire the latest new thing, often leaving insufficient patient numbers for all team members to develop the expertise and experience necessary to deliver the best outcomes.
While I agree with his arguments, I think he fails to draw the logical conclusion: the entire system, inside and outside hospitals, needs to be standardized as much as possible.

Friday, March 18, 2011

Optimizing Patient Flow to enhance productivity and safety, Part 2

Coincidentally, following on from my recent post about patient flow (March 16th) comes a paper which again demonstrates the critical need to optimize patient flow, not just to improve productivity but, more importantly, to reduce mortality.
Just published this week is a very important paper in the NEJM, here. The authors looked at the effect of nurse staffing numbers on in-hospital mortality in a large academic hospital. It shows that peaks in patient flow (turnover) are an even greater driver of mortality than the patient-per-nurse staffing ratio. The authors state,
"We also found that the risk of death among patients increased with increasing exposure to shifts with high turnover of patients. Staffing projection models rarely account for the effect on workload of admissions, discharges, and transfers. Our results suggest that both target and actual staffing should be adjusted to account for the effect of turnover. In light of the potential importance of turnover on patient outcomes, research is needed to improve the management of turnover and institute workflows that mitigate the effect of this fluctuation."
The basis of this is simple. Elective admissions are hugely variable and depend almost entirely on doctor choice. Because these admissions occur without any reference to the other needs of the hospital, they cause huge peaks and troughs in patient numbers; for example, few elective patients will be admitted on a Friday.
One of the world's leading experts in patient safety, Peter Pronovost, has also made clear his view that optimizing patient flow is essential for reducing in-hospital mortality; see this recent paper.
There are huge opportunities to be had.

Monday, March 14, 2011

Progress of the Safety Movement

Some (relatively) disappointing news recently. An analysis of the Safer Patients Initiative was published in the BMJ (here and here). This was a large-scale intervention comprising a large number of components of care. Unfortunately, there was no evidence of any difference in outcomes between control and intervention hospitals. What could the possible reasons be? There are a number of hypotheses:


  1. Hospitals will get safer regardless of interventions, and do not need this type of large-scale change (unlikely)
  2. The interventions were too complex, encompassing 43 different components (probable)
  3. Management and clinician buy-in, expertise, knowledge, and support were insufficient to show a difference (highly likely)
  4. The interventions chosen had insufficient evidence to underpin their use (highly likely in some cases)
  5. Many hospitals, both intervention and control, already had high levels of quality in some domains, so big effects were less likely to be seen (likely)
So what lessons can be drawn? I think there are a number.

  1. Large-scale change is difficult, messy, a long-term commitment, and often fails
  2. Leadership, at both management and clinical level, is critical
  3. Improvement and quality must be seen as the only way to do business, not an optional extra
  4. There must be a better system of measurement; even today, measuring mortality is contentious. The ideal measurement system would measure outcomes from the perspective of the patient, and reimburse the system, not a provider, for optimal outcomes. See Michael Porter's work from Harvard for more on this

Tuesday, November 30, 2010

Diagnostic Error: The Elephant in the Closet?

Diagnostic error as a cause of avoidable harm has received relatively little attention in the quality and safety literature until recently. A diagnostic error is a diagnosis that is missed, delayed, or incorrect. Various estimates suggest that errors of diagnosis account for 40,000 to 80,000 deaths annually in the US. Autopsy studies have shown that in 5% of cases a diagnosis is found which, if known prior to death and treated appropriately, could have averted death. Physician errors are more likely to arise from diagnostic mistakes than from medication errors, are more likely to result in serious harm, and are more likely to result in bigger lawsuit payouts. The causes of diagnostic error are complex and as yet poorly understood. One concerning finding, however, is that a lack of knowledge or expertise is rarely the primary factor; indeed, some researchers suggest that experienced doctors who have "a gut feeling" may be led astray by their experience. For further reading, I would suggest this paper by Pronovost. A talk at the recent Risky Business conference, which discussed the psychological basis of error, is well worth viewing. (Free to view, but registration required.)
There are few data in paediatrics; a recent study reported that 50% of paediatricians had made one to two diagnostic errors in the previous month, and 45% reported diagnostic errors that harmed patients at least once in the previous year.
So we have identified a big problem, conceivably more serious than medication harm; what is the solution? In truth, no one knows. Suggestions include computer-aided diagnostic tools, realistic simulation in training, more training, and reform of tort law. We need to start with the basics and begin to understand the causes of diagnostic error; only then can we begin to introduce solutions. Medicine is messy, and diagnosis, in contrast to treatment, remains an art; we have to make it more of a science.