Leading Evidence-Based Innovation at Your Facility, Part 2: Knowing What Success Looks Like
By Kelly Daley, PT, MBA
Welcome to part 2 in a 3-part collection of tips on how to introduce evidence-based systematic change in your facility. In part 1, I discussed getting buy-in from leadership in order to launch the change. Now let's discuss what happens after you get that support.
The next step? Considering the best measures of success.
You really believe in this change, and you've convinced leadership of its value, so now you need to figure out how you're going to know if the change is really working for your facility. This can get tricky, but stay focused and summon up all your analytical abilities, and you'll be fine.
Of course, you'll want to demonstrate that the initiative is actually helping patients improve. At the very least, that means showing incremental improvement over baseline, measured as the difference in scores between the initial evaluation and the end of the episode of care. But you'll also need to make the value case (the "amount of improvement, per dollar spent, in outcomes your patients care about" that I covered in the first post in this series). And even more to the point, you'll need to make that value case for the initiative as compared with the way your facility operated before the change.
For instance, for a low back pain initiative you might use the Oswestry score and a count of visits per episode to track improvement over time, then multiply the approximate per-visit cost by the average number of visits per episode. That gives you not only an understanding of how much improvement patients experienced, but also how much it cost to achieve that level of improvement. From there, you can compare these figures with similar data from your previous approach, and if all goes as expected, you'll be able to demonstrate value.
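To make the arithmetic concrete, here's a minimal sketch of that value calculation in Python. Every number in it (Oswestry scores, visit counts, per-visit cost) is a hypothetical placeholder for illustration, not data from any real initiative:

```python
# Hypothetical value comparison for a low back pain initiative.
# Oswestry Disability Index (ODI): lower is better, so "improvement"
# is the drop in score from evaluation to end of the episode of care.

def value_per_dollar(odi_at_eval, odi_at_discharge, visits, cost_per_visit):
    """Return (improvement, episode cost, improvement per dollar spent)."""
    improvement = odi_at_eval - odi_at_discharge   # points of ODI improvement
    episode_cost = visits * cost_per_visit         # total cost of the episode
    return improvement, episode_cost, improvement / episode_cost

# Averages before the change (hypothetical)
old = value_per_dollar(odi_at_eval=42, odi_at_discharge=24,
                       visits=12, cost_per_visit=150)
# Averages after the change (hypothetical)
new = value_per_dollar(odi_at_eval=42, odi_at_discharge=20,
                       visits=10, cost_per_visit=150)

print(f"Old approach: {old[0]} points over ${old[1]:,} ({old[2]:.4f} points/$)")
print(f"New approach: {new[0]} points over ${new[1]:,} ({new[2]:.4f} points/$)")
```

If the new approach yields more improvement per dollar than the old one, you have the beginnings of a value case.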
If these data are collected and reported by your electronic health record (EHR) or other software, that's great, but there are other ways to get the information. These could include paper collection (simple, cheap, and low-tech, but requiring a lot of manual data entry to get at value), tablet entry to a database outside the EHR (such as in a waiting room), or data collection from wrist fitness bands or other devices. And there's always the possibility of partnering with payers, such as Medicare or private insurers, that offer their own analytics around your care patterns for a given diagnosis.
But having a solid source of data isn't enough by itself. It's how you approach it and what you do with it that matters. So ...
Be deliberate. Take your baseline measurements first, before starting a pilot. Then, with leadership's approval, run your pilot with a few patients (and with multiple therapists, if your facility has more than 1) for a fixed amount of time.
Evaluate. Gather data from the measures used in your pilot, then carefully consider what you've learned. What incremental improvement have you seen? Have you seen indicators of enough incremental improvement to continue to build this initiative? Remember, it's possible that the pilot didn't do what you had hoped—that's important information too.
Refine. Provided that your pilot has produced outcomes that show promise, combine your pilot information with scholarly evidence (such as clinical practice guidelines) to finalize your proposed initiative. A 2008 article in BMJ titled "Translating evidence into practice: a model for large scale knowledge translation" can give you more insight on the process.
Keep the loop going. Initial buy-in is important, but you also want to ensure long-term engagement from both your leadership sponsor and from the team actively involved in implementing the change. Keep up the monitoring and bring back findings on a regular basis, especially as the formal program takes shape.
(A quick tip: if you want to get a better idea as to whether your approach is on target, a good resource is the "Plan-Do-Study-Act" template offered by CMS—it provides clear, easy-to-follow guidance. Just remember that you're looking toward making systematic, clinic-wide change here—maybe bigger than that—and not just looking at what your own patients are doing.)
It's easy to get excited about implementing a change you believe will make a difference in the lives of your patients, but don't let that excitement turn into impatience. The transition from plan formulation to pilot testing to refining the actual program requires a careful approach, but that care only increases the chance that the end result will be what got you so excited in the first place.
Coming up in the third and final installment in this series: a few tips on assembling a great team to make the change happen.
Kelly Daley is clinical informatics program coordinator for Johns Hopkins Hospital, in Baltimore, Maryland.