Sunday, October 18, 2009

Why they have to: Patients and patient safety

Last week, Bob Wachter, a patient safety leader I admire, wrote a post, "Can Patients Help Ensure Their Own Safety? More Importantly, Why Should They Have To?" As the title suggests, Wachter addresses both the utility of patient participation in safe practices and whether patients should have to participate at all.

On occasion, these issues make my own hard drive blink. They did most recently while I was writing From Safe Practices to Safe Patients: The Evolution of a Revolution (published on the Medscape platform last month) and considering how patient involvement squared with the principles used to engineer highly reliable systems. At one point, I considered jettisoning the piece, convinced that allowing variability of the magnitude that patients (humans) necessarily introduce into a system couldn't be defended, let alone operationalized.

Wachter seems close to casting patients overboard, too. He rightly points out that the ability to self-advocate varies both between individuals (who possess differing knowledge, abilities, desire, and social support systems) and within one individual across time (subject to things like severity of illness, level of consciousness, and use of medications). Systems engineers (one is quoted in his post) tell us that variability is the enemy of stability. And finding variability in a system and driving it down is what gets these folks out of bed in the morning.

I've wanted to do this kind of "people parsing" on occasion myself.


Who wouldn't like to eliminate the outliers in the patient population we serve? Hypervigilant, distrustful patients can be problematic. At the other end of the self-advocacy continuum are unconscious Jane Does. They, too, interrupt workflows. But eliminating variability in the measures that inform patient safety risks treating all patients like the lowest common denominator: the "bar" gets set at the level of the anesthetized patient.

And here's the other problem: Neutralizing patient input in patient safety assumes that the system is sound. That is, it produces reliable results if you just sit back and let the system do its thing.

Wachter does something I like to do: compare the experience of being a passenger on a commercial aircraft to that of being a patient. I travel a lot, enjoy flying, and I'm perfectly happy assuming the safety duties expected of every other passenger on board. I wouldn't think of offering to lend a helping hand to those on the flight deck.

A commercial aircraft crashes about 1 time in every 6 million departures. The fitness of the systems used in commercial aviation clearly does not depend on input from me. I'm okay with saying that if I get booked on the unlucky 1-in-6-million flight, "It's my time." But safety leaders in aviation are not. They continually strive to improve the system, to find ways to drive the incidence of error down, further diminishing the likelihood of these one-in-millions events.

A preoccupation with making things safer is what distinguishes aviation (and other high-consequence industries with reliable safety records) from healthcare. There's no doubt that the "alert" signals engineered into aircraft are easier to read than those built into humans. But that does not diminish the effectiveness of an alert.

I've been a nurse for a long time, and I suspect I share many of Dr. Wachter's feelings about what professionals should do for their patients. We have duty and desire, but, at this point in time, we do not have the means. Wachter is right to call for systems that turn intention into outcome.

But the answer to "Why should they have to?" is that safest care won't happen without them.

2 comments:

Mark Graban said...

Rather than focusing on the variation in patients, I agree with Wachter that we should focus on the variation in health care systems (or lack of systems).

The need to ask patients and families to be hyper-vigilant should be considered, at best, a short-term countermeasure to the quality problems in healthcare. The real burden should be on clinicians and administrators to create reliable systems that ensure that the wrong medication never reaches a patient, hence eliminating the need for patient hyper-vigilance.

Barbara L. Olson said...

Thanks for helping keep the demand for safer system design visible, Mark. I'm not an apologist for the troubling outcomes we post, and I think there is much to learn on the journey to safest care. As a fan of strong, tight operations and a person committed to making certain key "deliverables" are attainable by frontline clinicians 24/7, I've been left scratching my head by patient safety on more than a few occasions. (Other algorithm-loving clinicians like me would probably say the same thing.)
I guess I say "uncle" on the issue of patient contributions to safe care because the complexity of humans (and their plans of care) exceeds any model of reliability yet defined. The idea that we'll become highly reliable by treating patients the same way we treat 747s doesn't pass the "smell test" once you begin to operationalize how to make a "never event" never happen. The patient has the potential to add redundancies and most certainly represents the last point at which an error set in motion may be discovered. (Remember, reliable processes are not flawless; they simply catch errors before a harm-causing deviation from the expected outcome occurs.)

I recently read a piece by James Reason in which he pointed out that, in addition to the number of widely divergent high-stakes processes that sit under the healthcare umbrella, we are distinguished from other high-consequence industries by our tight "server" to "served" ratio (making us very different from aviation, where a small crew serves hundreds). This increases the importance of communication in defining and attaining a desired outcome. (Not all people with advanced cancer, for example, will choose the same course of action, and not all women will choose the same pain management plan during labor and birth. Yet these decisions, both of which affect risk profiles, are good and appropriate system variants.) I don't think we can "deliver" over the long haul without patient engagement.

You can find the citation for the Reason article in the Medscape piece referenced in this post. I tried to articulate this in more detail in my original post but stopped for the sake of time and focus. Thanks again for your comment and for giving me the chance to share my thoughts in more detail.

Barb
