Thursday, November 26, 2009

Thanks for speaking up

When I give talks about patient safety, I usually include a slide called "Why Pilots Won't Nurse." It's an attention-getter, one that draws smiles and sometimes fosters an "a-ha" moment for students, seasoned clinicians, and administrators.

I think that pilots won't nurse because, as a group, pilots are knowledgeable enough to reject systems that lack sufficient barriers, redundancies, and opportunities to uncover and rectify potentially lethal errors that have been set in motion. Commercial aviation isn't foolproof, but the industry's 1-in-6-million crash rate shows what can be accomplished in high-stakes domains when adequate barriers, redundancies, and recovery opportunities are in place.

I could add another slide: why pilots don't practice pharmacy. And there's no better place to read why than Bob Wachter's Thanksgiving Day post about the tragic case in Ohio, one in which a little girl lost her life, a family dissolved, and a pharmacist went to jail.

Late last summer, Mike Cohen, the president of the Institute for Safe Medication Practices, published An Injustice Has Been Done about what happened to pharmacist Eric Cropp in the aftermath of little Emily Jerry's death. Bob and Mike talked about what Eric's case means, for professionals and for patient safety, in a CareFusion webinar (the recording is available here). Thanks for speaking up.

Saturday, November 21, 2009

I am thankful!

People who check in at Florence dot com come from all over the world, united by a desire to see better, safer healthcare systems emerge. I'm happy my efforts are contributing to the hard work being done by so many others, and I thought it would be interesting to share where visits to Florence dot com came from last month. (The darker the green, the more visits from that region.)


In this season of thanksgiving, I want to say how grateful I am for the opportunity to share reflections and pass along resources I value with so many of you.

This week, I heard a physician leader in a large multi-system healthcare organization talk about progress her organization has made in patient safety. The gains were substantial, and hard won, coming not from gorging on cheap Happy Meals, but from putting safety and quality at the center of the table where bright, powerful, and connected people in the organization regularly convene. These people not only plan the meal, they're accountable for what's served.

Patient safety, a component of quality healthcare, isn't the same as quality. People struggle to understand the relationship between the two, especially in complex and evolving arenas like healthcare. Safety doesn't prove which chemotherapy regimen is the most efficacious. It's what allows the one selected to be delivered as intended.

Safety may not reveal God's perfect truth. But, done well, safety is what allows humans to facilitate the activities--some miraculous, some mundane--needed to heal. If therapy fails because the chemotherapy regimen selected isn't the best match for a person's genotype or stage of cancer, more work on the quality side of performance improvement is needed. But if a person dies from an accidental chemotherapy overdose or doesn't receive the curative benefits because of less obvious dosing errors, there's work to be done on the safety side.

I learn the most when people who are well into the safety journey talk about where they're stumbling. The physician who shared inspirational data about the reduction of serious, preventable safety events in her organization shook her head when asked about the barriers that prevent further gains. "It's hard," she said, "to make humans perform as flawlessly as the healthcare system needs them to."

This means that even in healthcare organizations where demonstrable gains in patient safety have been made, there's still plenty of work to be done. Improving system design and actively shaping the choices made by people who use the system is how David Marx, a systems engineer, attorney, and the author of the Just Culture™ algorithm, describes the work leaders undertake when they gather to create and sustain a culture of safety.

This year, one of my best reads was Marx's book, Whack-a-Mole: The Price We Pay for Expecting Perfection. It's a resource I'm thankful to have and one I hope you'll find helpful in your journey toward the safest care, no matter where you are (in the world or on your journey).

Oh, and thanks for checking in today and on so many other days this year. Come back soon!

Tuesday, November 17, 2009

Thank you, Grand Rounds!

A Thanksgiving edition of Grand Rounds is up this morning at Colorado Health Insurance Insider. One of my favorite posts comes from Laika's MedLibLog, where a Web 2.0-savvy health librarian shares a list of scientific journals that she follows on Twitter.

Maybe Twitter lists like this speak to me because I remember what accessing quality medical information once looked like: finding time to get to the university library, scrounging around for change, spending countless hours hunting for high-end resources, doing the bump and grind with a recalcitrant copy machine (so that the prized materials might be made readily accessible in places more welcoming than the nasty, drafty, dirty library), and leaving with the nagging sense that I'd probably missed the best things anyway.

People who think that Twitter is a place where bores report the outcome of their children's travel soccer games are missing it. There are bores in Health 2.0. (There are bores everywhere: the 2009 word of the year is "unfriend," and I suspect "unfollow," what you do to bores on Twitter, will pop up next year.)

But the ability to piggyback onto lists used by medical librarians, gaining access to real-time output from 90 (just a start, no doubt) scientific journals a health information practitioner relies on... well, it's enough to make you drop your dimes.

When you let go of old ways of doing things, it's nice to find something as useful as a "follow" button.

Friday, November 13, 2009

Welcome to Lake Wobegon!



Last Monday, over at the Wall Street Journal Health Blog, Jacob Goldstein was not kind to the residents of Lake Wobegon, calling out their leaders for believing that Only 1% of Hospitals are Below Average. Goldstein's piece shares findings from a study by Jha and Epstein published in Health Affairs this month, one that links what not-for-profit board chairs know about clinical quality, and the value they ascribe to it, to the quality measures their organizations post. [link]

Additional findings from Jha and Epstein's survey of 1,000 not-for-profit hospital board chairs between November 2007 and January 2008 include:
  • fewer than half of respondents rated "quality" as one of their top two priorities
  • three-quarters reported their hospitals had "moderate" or "substantial" expertise in quality of care
  • only about one-third had received formal training in clinical quality measures
  • when clinical quality training was included in education provided to the board, the mean amount of instruction time was 4 hours
  • less than 1% rated their hospital's performance as worse or much worse than a typical hospital's performance on standard quality measures (like The Joint Commission's core measures or other publicly reported measures)
When you dive deeper for the take-away lessons in data like these, resist the urge to see the chairs' overly optimistic assessments as nefarious or necessarily careless. Their estimates are, in point of fact, consistent with what people do when asked to evaluate their own performance relative to others. Cognitive psychologists call this bias illusory superiority: the tendency to overestimate the degree to which one possesses desirable qualities relative to others, or to underestimate one's negative qualities. In fact, illusory superiority is often referred to as the "Lake Wobegon Effect."
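To make the mismatch concrete, here's a back-of-the-envelope check, a minimal Python sketch using simulated scores (not the survey's data): on any relative measure, roughly half of hospitals must sit below the median, yet fewer than 1% of chairs placed their own hospital there.

```python
# A minimal sketch with simulated (not real) hospital quality scores:
# by construction, about half fall below the median on any relative measure.
import random

random.seed(0)
scores = [random.gauss(0, 1) for _ in range(1000)]  # hypothetical quality scores
median = sorted(scores)[len(scores) // 2]
share_below = sum(s < median for s in scores) / len(scores)
print(f"share below the median: {share_below:.0%}")  # ~50%, vs. <1% self-rated "worse"
```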

The real take-away lesson here is that the size of the Lake Wobegon Effect varies inversely with the rater's knowledge of the specific skill being rated. If you ask someone who plays ball how well he plays compared to others, he will provide a more accurate assessment than someone who has never touched a ball at all. Wildly optimistic estimates of performance suggest a profound lack of experience.

This means that board chairs often don't know enough about quality to know whether the organizations they oversee reliably deliver quality outcomes. I don't fault them for their "glass half full" outlook (which likely serves them and their organizations well on other fronts). But I do worry about who is in a position to tell the emperor about the problem with his clothes.

Two places where you'll find this being done, albeit a bit more genteelly, are the Institute for Healthcare Improvement's Boards on Board program and creative partnerships, like the one housed at SafetyLeaders.org, that help bring the National Quality Forum's Safe Practices expectations to life through free webinars and web-accessible transcripts.

Closer to home, though, finding a credible champion for quality and patient safety becomes more challenging. What powerful community leaders know and believe likely mirrors the opinion of powerful people in the organization and the community. I'm sympathetic to where board leaders find themselves these days because for most of my career, I've lived in the same "small towns" they govern.

Healthcare culture values processes that rely on knowledge contained in human memory and devalues those that rely on more mundane performance-shaping measures. For a very recent example of how this thinking shapes culture, consider this tweet I picked up from a PSO insider yesterday:
"I had one surgeon tell me that checklists are for the lame and weak"
If the chair of your local hospital's board (or one of her close family members) hasn't been the beneficiary of physicians, nurses, and pharmacists who hold similar opinions, you may indeed be somewhere very good. But it's a very different place from where the average American gives, receives, and oversees care.

Healthcare is a place where "intention" still trumps "outcome." Jha and Epstein reinforce the need for senior decision makers to become familiar with how desirable quality outcomes are fostered, and then measured, in healthcare.

Everyone else in town needs these lessons, too. It's easy to become lost under the standard normal curve out here.

Monday, November 9, 2009

Waiting for Rabbit Redux

Good stories are sometimes told across time, and so it may be with the story of how healthcare gets healed.

I found this interesting interview, Medical Errors, 10 Years Post-Op, with two of the authors of the original IOM report. It's nicely bundled with a short history of the "hospitalist" specialty. (Don't miss the history of events that have informed the evolution of patient safety at the bottom of the piece.)

While we're waiting for Rabbit, here's a link to another snapshot of patient safety-sensitive performance measures: a 2009 report, commissioned by the American College of Healthcare Executives entitled, "Bad Blood: Doctor-Nurse Behavior Problems Impact Patient Care."

Maybe get a chair.

Sunday, November 8, 2009

As always, the big picture counts

Making Health Care Better, a piece by David Leonhardt in today's New York Times magazine, is simply a must-read for understanding the complex relationships that shape healthcare quality.

Here is an illustration, based on Don Berwick's "Level of Interest," that often helps me identify players, understand where they're seated, and anticipate where (and why) to expect push-back.


Berwick wrote the piece this slide is drawn from as a "user's guide" for people who would be leading improvement efforts in the aftermath of the IOM report "Crossing the Quality Chasm."

It's worth considering where the elements (drivers, incentives, methodologies) described and critiqued in the Intermountain system fit into Berwick's original construct. (This is a case where the expression "same stuff, different decade" is not a slam, but rather a chance to see the evolution of welcome change.)

You won't find a better case for a systems approach to healthcare improvement than the one made in the New York Times piece.

Read it. More importantly, learn from it.

Thursday, November 5, 2009

Error Prevention Strategies: It's not "Sophie's Choice," folks

Last week on my Medscape medication safety blog, On Your Meds, I wrote a piece about how nurses in hospitals across the greater San Francisco area improved medication safety. The collaborative is reporting an 88% reduction in the incidence of errors in the administration node of the medication use process over a three-year period.

At the outset, it's worth noting that these results are astonishing, placing them in the "almost too good to be true" category. The study employed "observed error" methodology, a more robust method of error detection than "reported errors" (the methodology most programs and data sources rely on). The rigor of the detection methodology used in this study adds credence to the results.
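For readers who want the arithmetic behind a claim like "88% reduction," here's a minimal sketch in Python. The counts are purely hypothetical (the collaborative's raw numbers aren't given here); it simply shows how an observed-error study turns tallies into a rate and a percent reduction.

```python
# Hypothetical counts only; this illustrates the arithmetic, not the study's data.
def error_rate(errors_observed: int, doses_observed: int) -> float:
    """Errors per 100 observed dose administrations."""
    return 100.0 * errors_observed / doses_observed

baseline = error_rate(errors_observed=120, doses_observed=1000)   # 12.0 per 100
follow_up = error_rate(errors_observed=14, doses_observed=1000)   # 1.4 per 100

reduction = 100.0 * (baseline - follow_up) / baseline
print(f"{baseline:.1f} -> {follow_up:.1f} errors per 100 doses: {reduction:.0f}% reduction")
```

Observed-error studies can report denominators like these because trained observers watch administrations directly, rather than waiting for someone to file a report.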

But it's worth looking a little more closely at the study design to find the most important take-away lessons.

The nurses tested how adherence to six distinct performance elements in their medication administration process impacted accuracy: [link]

1. Compare medication to medical record
2. Keep medication labeled until administration
3. Check two forms of patient identification
4. Immediately record medication administration in chart
5. Explain the medication to the patient
6. Minimize distractions and disruptions during the administration process

From an engineering standpoint, these elements can be predicted to produce a robust medication administration system. Comparing medications to the medical record and checking two forms of patient identification, for example, add redundancy at high-stakes junctures of the process. And "explaining the medication to the patient" creates a recovery opportunity, an engineering control that allows an error that's been set in motion to be detected and remediated before harm occurs. (The practice is also desirable from a participatory-care standpoint and is "the right thing to do" based on a variety of ethical principles.)
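Here's a minimal sketch, in Python and with purely hypothetical catch rates, of the engineering logic at work: if each layer independently catches some fraction of errors already in motion, the chance an error reaches the patient is the product of every layer's "miss" probability.

```python
# Hypothetical catch rates for three of the nurses' performance elements;
# the study did not quantify the layers this way.
catch_rates = {
    "compare medication to medical record": 0.90,
    "check two forms of patient ID": 0.85,
    "explain the medication to the patient": 0.50,  # recovery opportunity
}

p_reaches_patient = 1.0
for p_catch in catch_rates.values():
    p_reaches_patient *= (1.0 - p_catch)  # an error must slip past every layer

print(f"P(error slips past all layers) = {p_reaches_patient:.4f}")  # 0.0075
```

Real layers aren't fully independent, of course, which is arguably one more reason minimizing distractions matters: a distracted check can degrade several layers at once.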

"Minimizing distractions and disruptions during the medication use process" is the performance element that drew the most attention in the lay press, and it's what I focused on the first time I took on the issue at Medscape. Minimizing distractions at high stakes junctures of performance is a technique that high reliability industries employ. (It's why aviation personnel in the flight deck close the door and why they're subject to tighter performance expectations at altitudes less than 10,000 feet.)

What the San Francisco nurses really studied is whether adherence to a system designed to elicit a specific outcome yields the desired outcome more often than a loosely defined, variably employed set of expectations does. Minimizing distractions was an important part of the intervention, but it wasn't the only one. The nurses did not find one "magic bullet"; rather, they moved from an "intention-based" process to a process that was both engineered and adhered to, which helps explain the very favorable results obtained.

Understanding how these results were obtained is also important before leaping into the comparative arena, especially when the discussion is built around a "forced choice" construct that does not and should not exist. That's what I think is happening in a blog post entitled Low Tech solution to Med Admin errors better than BCMA?

Designing the most robust system feasible to accomplish a high-stakes task is how system engineers approach their work. (Risks surrounding medication administration are well documented, and errors at this step in the process remain common.)

Seminal medication safety data show that a substantial portion of errors originate in the administration phase of the medication use process.



Equally important, these data reveal that patient harm is highly likely to occur as a result of errors that originate in the administration node.


It's important to recognize that errors in the administration node are problematic not because nurses are problematic, but because the systems nurses rely on and the downstream position of their work confer risk. Managing that risk has been the focus of medication and patient safety specialists over the past decade. IT solutions, specifically the ability to bar-code patients and their medications and to have key patient, drug, and order information integrated and available at the point of care, represent strategies engineers see as reliable, reproducible, and capable of sustaining change over time.

The San Francisco nurses' study did not rely upon bar code medication administration (BCMA), although it appears BCMA was used in at least some of the study sites. But what must be noted is that key performance measures in the study (namely, "compare medications to the medical record" and "check two forms of patient identification") represent standard medication safety practices that are now part of The Joint Commission's healthcare accreditation standards. While they are important elements in the system design the nurses tested, these elements are not "stand-alones." They would have occurred, on some level and likely with unwelcome variability, in these hospitals during the study period irrespective of whether they were part of an intervention study.

More important to debunking the ill-conceived notion that medication administration accuracy is an "either/or" proposition (pitting low-tech performance measures against tech-mediated ones) is the knowledge that BCMA automates key elements of the performance measures the San Francisco nurses built into the system they tested. These include comparing medication to data in the medical record, immediately recording medication administration in the chart, and checking two forms of patient identification. Additionally, BCMA workflows necessarily foster work processes in which medications remain labeled (often in their original packaging) until the point of medication administration.
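To make that concrete, here's a toy sketch, in Python, of the checks a BCMA workflow automates. The data model and function names are mine for illustration; no vendor's actual system works this way out of the box.

```python
# A toy model of BCMA's automated checks; names and fields are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Order:
    patient_id: str   # matched against the wristband barcode
    drug_code: str    # matched against the medication's barcode
    dose_mg: float

def administer(scanned_wristband: str, scanned_drug: str, scanned_dose_mg: float,
               order: Order, chart: list) -> bool:
    """Return True only when every automated check passes."""
    if scanned_wristband != order.patient_id:
        return False  # wrong patient: stop before the dose is given
    if scanned_drug != order.drug_code or scanned_dose_mg != order.dose_mg:
        return False  # wrong drug or dose: stop
    # Immediate, automatic charting (element 4 in the nurses' list).
    chart.append((datetime.now(), order.patient_id, order.drug_code, scanned_dose_mg))
    return True
```

And because the scan happens at the bedside, the medication necessarily stays labeled, in machine-readable packaging, right up to the point of administration.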

If BCMA has failed to reach its full potential in the medication administration arena, as John Poikonen questions in his RxInformatics post, the reason has less to do with the inherent fitness of the technology than with how user-friendly it is designed to be, how it is incorporated into nurses' workflow, and how it is supported in the aftermath of the initial investment. Most importantly, disappointing results with BCMA likely reflect designs that fail to take into consideration the limits of human performance when carrying out high-stakes tasks. Nurses should be able to rely on automated solutions to accomplish high-stakes work, and they should not be expected to multitask while using them.

Your pilots get to close the cockpit door when they perform tasks that, if carried out incompletely or incorrectly, could kill the people who depend upon them. Pilots also rely on high-tech instrumentation that automates many key performance elements.

Why would you want your nurses to "pick one"?


Note: Representation of the seminal medication error data discussed here was borrowed from similar formats used by the medication safety professionals at the Institute for Safe Medication Practices. I am indebted to them, both for this depiction and the modeling upon which my knowledge of medication safety is based.

Tuesday, November 3, 2009

A Non-Clinical Grand Rounds

Dr. Joseph Kim is hosting Grand Rounds today at a blog devoted to exploring non-clinical medical careers. There's an interesting array of posts over there plus a chance to "shop around" the non-clinical world.

The only thing that made me wince when I took a quick look was that "patient safety" is near the top of the queue. From a blogger's point of view, this is good news, since posts placed high in the Grand Rounds narrative draw more hits to an author's blog. But from a patient safety standpoint, the perception that "patient safety" lives in the non-clinical world is a bad thing.

If you've ever heard the expression, "your restaurant is only as good as the last steak I ate there," you'll understand why. While many interests have a place at the table, the "sweet spot" in patient safety is at the point, often jagged and bleeding, where care is given and received.

There is certainly a science that informs patient safety and legitimate work to be done fostering a culture that recognizes and supports safe care. But if it's not visible at the front lines of care, it's not "patient safety."
 
Creative Commons License
Florence dot com by Barbara Olson is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.