Last week, Bob Wachter, a patient safety leader I admire, wrote a post, "Can Patients Help Ensure Their Own Safety? More Importantly, Why Should They Have To?" As the title suggests, Wachter addresses both the utility of patient participation in safe practices and whether patients should have to participate at all.
On occasion, these issues make my own hard drive blink. They did most recently while I was writing "From Safe Practices to Safe Patients: The Evolution of a Revolution" (published on the Medscape platform last month), when I considered how patient involvement squared with the principles used to engineer highly reliable systems. At one point, I considered jettisoning the piece, convinced that allowing variability of the magnitude that patients (humans) necessarily introduce to a system couldn't be defended, let alone operationalized.
Wachter seems close to casting patients overboard, too. He rightly points out that the ability to self-advocate varies both between individuals (who possess differing knowledge, abilities, desire, and social support systems) and within one individual across time (subject to things like severity of illness, level of consciousness, and use of medications). Systems engineers (one is quoted in his post) tell us that variability is the enemy of stability. And finding variability in a system and driving it down is what gets these folks out of bed in the morning.
I've wanted to do this kind of "people parsing" on occasion myself.
Who wouldn't like to eliminate the outliers in the patient population we serve? Hypervigilant, distrustful patients can be problematic. At the other end of the self-advocacy continuum are unconscious Jane Does. They, too, interrupt workflows. But eliminating variability in the measures that inform patient safety risks treating all patients like the lowest common denominator: the "bar" gets set at the level of the anesthetized patient.
And here's the other problem: Neutralizing patient input in patient safety assumes that the system is sound. That is, it produces reliable results if you just sit back and let the system do its thing.
Wachter does something I like to do: compare the experience of being a passenger on a commercial aircraft to that of being a patient. I travel a lot, enjoy flying, and I'm perfectly happy assuming the safety duties expected of every other passenger on board. I wouldn't think of offering to lend a helping hand to those on the flight deck.
A commercial aircraft crashes once in every 6 million departures. The fitness of the systems used in commercial aviation clearly does not depend on input from me. I'm okay with saying that if I get booked on the unlucky 1-in-6-million flight, "It's my time." But safety leaders in aviation are not. They continually strive to improve the system, to find ways to drive the incidence of error down, further diminishing the likelihood of such one-in-millions events.
A preoccupation with making things safer is what distinguishes aviation (and other high-consequence industries with reliable safety records) from healthcare. There's no doubt that the "alert" signals engineered into aircraft are easier to read than those built into humans. But that does not diminish the effectiveness of an alert.
I've been a nurse for a long time, and I suspect I share many of Dr. Wachter's feelings about what professionals should do for their patients. We have duty and desire, but we do not yet have the means. Wachter is right to call for systems that turn intention into outcome.
But the answer to "Why should they have to?" is that the safest care won't happen without them.