Using EHR Data and Clinical Decision Support Tools


[… is now being recorded] [All guests have been muted] [Christine Hunter] Welcome to the NIH Adherence
Network Distinguished Speaker Webinar Series. I'm Christine Hunter from the National
Institute of Diabetes and Digestive and Kidney Diseases, NIDDK, and I'm pleased to
give a brief introduction for today's speaker, Dr. Rachel Gold. She
received her master's in public health from Temple University in '98 and a Ph.D. in epidemiology
from the University of Washington in 2003. Dr. Gold is currently an assistant investigator
at the Kaiser Permanente Center for Health Research. And we are so pleased that Dr. Gold accepted
our invitation to speak and are really looking forward to an interesting and timely presentation.
Her talk today is titled Using EHR Data and Clinical Decision Support Tools to Enhance
Guideline-based Prescribing in Safety Net Clinics. So, without taking any further time
for introductions so we can get to her presentation, I will turn it over to Dr. Gold. [Dr. Gold] Thanks, Christine, and thanks, everyone,
for signing on; I see some of you are out there. So, yep, I'm Rachel Gold. I'm
an assistant investigator at the Center for Health Research at Kaiser Permanente Northwest.
I'm going to talk to you today about some preliminary results from a study
where we're trying to get at how we can use data from the health record
to support provider adherence to guideline-based prescribing, in this particular case
within a diabetes population. I have 'adherence' in quotes, and I just wanted
to start the talk by saying that the safety net providers I work with raised their eyebrows
a bit when I showed them this slide deck: "What are you talking about, provider adherence?"
Right? So I put it in quotes to say that I'm not sure
it's quite the right word for provider behavior change. That said: this project
is funded by an R18 from NHLBI, and we thank you very much for that. I want to
point out, real quickly, that the research we're doing is a multi-institutional endeavor.
We have, so I’m at the center for health research. Greg Nichols is the P.I. on this study and
I’m the Project Director. The next three organizations, Richmond Clinic, Virginia Garcia, Multnomah
County are all federally qualified health centers or organization service areas, excuse
me, service organizations that run FQHCs. And OCHIN I’m going to talk about quite a
bit as we get going, but it’s, it is the organization that provides the EHR to all of these clinics. So the general background behind what we’re
looking at is as follows. Organizations like Kaiser Permanente have the resources to develop
terrific tools that harness the EHR data to figure out how to provide clinical decision
support that may enhance provider adherence. And again, I would frame these
as great decision tools to help providers provide guideline-based care. So,
using some of these EHR tools, and I'll talk about them in detail, Kaiser developed
an intervention called the ALL Initiative; it's also sometimes called the A-L-L Initiative.
I'll describe that in some detail as well, but the overall gist of the ALL Initiative
is to improve the rates of guideline-based prescribing of cardioprotective meds for
patients with diabetes. So in this overall study we're really interested in
how we can take the same kinds of tools and strategies that Kaiser used and
see if we can get them to work in the safety net setting. Okay, again, the overall set of questions:
well, what kinds of decision support tools are useful in community health centers? What
do they need? How can we build tools that actually improve providers’ guideline-based
care? Can we translate from other settings to community health centers? And really I’m
very interested in that because, as you all know I’m sure, public clinics do not have
the resources to develop these kind of tools themselves. So places like Kaiser and the
VA, they’re doing that development. So the question, I think the really salient question
is: how do we translate them? Can they be translated? Can that work? And what aspects
of the tools are going to work in community health centers? And what aspects are not going
to work? And what are the boundaries of facilitators, of course? What kind of interface components
are necessary? So we’re in the middle of a 5-year study in which we are translating some
of the same approaches that Kaiser used to implement the ALL Initiative into 11 community
health centers in Portland. And again this is in partnership with OCHIN, and again I’ll
talk about OCHIN specifically in a bit, but I want to point out that this research is
done in partnership with a practice-based research network. All the work has been done
hand in hand with docs from these clinics and other staff as well, and
that's really important to the work. I don't know how else you would
develop tools that effectively serve providers without asking providers what they want. It's
been a lot of fun, actually, and it's been great working with all these
providers. We believe this is the first clinical trial looking at how you translate
an intervention like the ALL Initiative from a private care setting into the CHC setting.
If any of you are aware of other trials that are doing something similar, please email
me and let me know about it. We're keeping an eye on the literature,
but I am not aware of anything else happening like this. So if you do know of something, I would
really appreciate the info. So a little bit about the intervention itself,
the ALL Initiative. There’s plenty of evidence that for a lot of patients with diabetes,
the risk of cardiovascular disease events can be greatly reduced by taking statins and
ACE inhibitors or ARBs, depending on various comorbid factors, age, etcetera; some
combination of meds, depending. ALL at Kaiser stands for Aspirin, Lisinopril, and Lovastatin, though really
any ACE inhibitor and any statin are fine. And there's an asterisk by the word 'Aspirin' because the evidence
for aspirin really shifted right in the middle of our proposal process. Kaiser is, in
fact, not emphasizing aspirin much anymore, and we're not targeting it in our study because
the evidence wasn't solid enough. The overall goal of the intervention is
to improve the rates at which patients with diabetes who are indicated for
these medications, per national evidence-based guidelines, have active prescriptions for them.
I just want to, since we’re talking about adherence, when Kaiser Permanente implemented
this intervention there was also quite a bit of focus on patient adherence as in once you
got the med and the prescription in the patient’s hand, did they pick it up? Did they refill
on time? Follow-up calls, etcetera. We haven’t had the bandwidth to do that in this study
yet. We are just starting to think about how to do that piece to try and do a little bit
more than we’ve done already, but we’re just starting to take that on. It’s, and again,
it’s mostly a bandwidth issue. Kaiser estimated that 3 years after the ALL Initiative was
implemented in all of its regions nationally that there was a 60% reduction in CBD events.
Now this is a projection, it’s a modeled number, but nevertheless it’s a really impressive
impact. I mean if all of our interventions could do this, we’d be in good shape. So I want to talk a little bit about the overall,
at Kaiser, the overall strategies underlying the ALL Initiative are actually fairly simple.
And they are make it easier to identify patients who are missing a medication for which they
are indicated and make it easier to prescribe those meds. Fewer clicks, make it easy. But
since we’re talking about adherence, let’s talk about some of the other tactics that
Kaiser used to improve ‘provider adherence’. One very important piece was that they tied
provider incentives to this: there was actually a financial incentive to improve their
rates, based on monthly feedback. That'll light a fire under your seat. Kaiser
also identified clinician champions for different regions and practices who would
talk to their colleagues and let folks know about the intervention
and why it was being done, and Kaiser provided the support
for these docs to have the time to do that work. The other piece is that
Kaiser Permanente is a fairly top-down organization, so medical leadership
can say: this is our new standard of care, implement it. I think that's important
because we're thinking about what incentivizes adherence. I'll get back to this
in a bit. Okay, so, back one please. Thanks.
At Kaiser Northwest (and let me say this: all the Kaiser regions are somewhat autonomous, so while
the ALL Initiative was implemented at every region, it was implemented differently at
different regions), we worked with the folks who had
implemented it there and used that as the model from which we adapted. In summary, there
is a point-of-care alert, which at Northwest was part
of a point-of-care patient data summary; there's a whole set of care gaps that gets
flagged, and this was just integrated into that set of care gaps. There were also
panel management tools that would support the daily inreach, or 'scrub'. I'm not sure
how familiar everyone is with the term 'scrub', but it's what the folks we work with call
the process of looking over who's coming in today: who are our patients, and what are
they going to need? It's sort of the team huddle; the morning huddle is another term
I've heard for it. And the tools also supported targeted outreach:
there were nurse and pharmacy case managers who were doing outreach to patients
who were indicated for the meds but didn't seem to have an active script for
them. Some other tools we built
in the EHR were preset order sets. Now, at Kaiser there was a real emphasis on there
being a medication bundle: one click gets you all 3 meds. We'll talk
about this a bit more, but as we adapted this intervention to meet the needs of our clinics,
our CHCs, that wasn't of interest to them. So we really are targeting both
meds, but not necessarily thinking about them as a bundle; there was less
interest in having prescribing be so proscribed, as it were. There's also after-visit
summary text: with an easy click you can generate a few paragraphs about
here's what your statin does for you, etcetera, etcetera. And again, there were provider incentives
and feedback. Okay. So I'm going to talk to you a little bit about our study. I should
maybe start right now with a caveat that we’re right in the middle of the study. We’re just
about to end our third year, so the information I'm sharing with you today is really
preliminary results; a lot of it is really just observational. We have a qualitative
data collection team doing a great job collecting an enormous amount of
process data, but we have just started actually analyzing the qualitative data. So again,
I just want to be clear: we are going to talk a lot about qualitative results as
I get into this presentation, but these haven't been coded
and we haven't done the text analysis yet. That's coming. Now, let me back
up a second. The overall study design is a randomized, staggered
implementation. Again this is a real world study in community health centers and it was
simply not feasible, and it wasn't fair, to ask them to participate knowing some of them
weren't going to get this intervention. So the way we set it up was we
randomized 6 of the clinics to start a year before the other 5 clinics; that gives
us at least one year where there's a really clear comparison, and we can still do pre/post
for all concerned. We felt like it was the right design.
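To make that design concrete, here's a minimal sketch of a staggered assignment in Python (clinic names and seed are hypothetical; the study's actual randomization procedure isn't described in the talk):

```python
import random

clinics = [f"clinic_{i:02d}" for i in range(1, 12)]  # the 11 study clinics

rng = random.Random(42)               # fixed seed so the assignment is auditable
early = set(rng.sample(clinics, 6))   # 6 clinics implement a year earlier

arms = {c: ("early" if c in early else "late") for c in clinics}
# Every clinic eventually gets the intervention; the lag year gives a
# concurrent comparison group, and both arms support pre/post analysis.
```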
that’s exciting in the environment in which we’re conducting this study that enabled us
to do this. Now I’m going to talk to you about OCHIN. Now, again, OCHIN is the Oregon, was
the Oregon Community Health Information Network, and is now just OCHIN because there are 12
states involved. And OCHIN basically is a middle man between the Epic, the folks who
make the Epic EHR and community health centers. So OCHIN approaches Epic from, you know, approaches
the EHR from Epic, works with them, has adapted EHR for community health centers and then
all OCHIN member clinics contract with OCHIN to take part in this electronic health record.
Now what’s really cool about this, it means that all of OCHIN’s clinics are on one health
record that is centrally managed at OCHIN with one patient identifier across all sites.
And now there’s over 300 clinics in 12 states that are primary care CHCs where we have all
of their data in one centrally managed location. Made it feasible to do the randomization by
clinic. It made it feasible to turn on the tools differentially by clinic. So we’ve got
a really neat laboratory there that we’re really developing and FYI this is the first
intervention study that has been conducted within the OCHIN membership. Although by no
means is it the last. So we spent the first year adapting, really
talking to our, the providers at the clinics, at the study clinics and saying “well here’s
how Kaiser Northwest implemented this intervention. Here are the tools they used, the strategies
they used. How is this going to work for you?” And so I would go from the providers to the
programmers at OCHIN and say “Well, here’s what the providers want.” And the programmers
would say “Well that’s feasible, but this isn’t feasible. Here’s what we think we can
do.” And I’d go back to the clinics and say “Well, the programmers say we can do this,
can you work it this way?” so we spent a lot of time debating how to define, you know,
hypertension and, you know, we had, we spent a long time coming to an agreement again there
were three service organizations involved here. But it was essential, and I think this
is actually quite important in terms of the adherence piece, it was essential to get the
buy-in of these clinics to gauge them in that process. So I just want to point that out.
So we, then we roll out the intervention in the 6 early clinics. Almost 2 years ago we
rolled out the late clinics about a year ago and we’re going to spend the next couple of
years really evaluating what we’ve, our impact and sort of what has worked and what hasn’t?
We’re also engaging in, we’re calling it ‘hot topics’ and that’s, it’s the best practice-sharing
process, but we’re really trying to brainstorm on some tricky pieces that have come up in
this, in this implementation. For example, women of child-bearing age, these meds are
contraindicated for pregnancy. So how do we deal with that in the data? And we’ve had
to really kind of come up with some work-arounds. It’s been an interesting process. Okay, so I kind of covered this slide already,
but just for a sec: the point is that at the end of that
first year, we had created a menu of tools
that our study clinics could choose from. Some of them we adapted just a little bit;
we basically took the after-visit summary text as a whole, and Kaiser said
that was fine. Some we had to really recreate to fit the available infrastructure. We
know Kaiser and the clinics both use the Epic EHR, but Northwest has this
point-of-care data summary and the clinics didn't. So we had to, well, you'll
see; we'll get to it. Okay, so here's the menu of tools that we ended up with, that
our clinics could choose to use. Because this is a translational study,
we made it very clear to the clinics: see which of these tools work for you.
We were interested specifically in seeing how different clinics would adopt different
tools and how they would work in the workflow. It's been interesting to watch
that, and of course we'll publish all this as soon as we can. There is a point-of-care
alert, a best practice alert, which is what BPA stands for, and it's just a reminder:
"Hey, you've opened up an encounter. Hey, this patient is indicated for a statin." I'm going to talk about the BPA quite a lot in upcoming slides. We did come
up with order sets that, in theory, were going to make it easier
to order these meds, having preset the most
common dosages and particular meds, like lovastatin or whatever. But interestingly,
well, we'll get to it: the idea was to make prescribing easier with these order sets,
but we found that no one used them. We'll come to that. No, I'm sorry, I wasn't ready. We
also had a patient data summary and panel management rosters that serve several
purposes; I'll just summarize this now, but I'll talk about them all in
detail in a moment. There were inreach, or scrubbing, or huddle tools that
were supposed to be kind of another reminder mechanism, so you'd
be reminded both at the huddle and then at the point of care. And then outreach tools
to help identify patients who haven't been seen recently but who are indicated for these
meds and don't seem to be on them, or to do follow-up with patients who have a recent
script; again, we haven't really started using those tools yet, but we built them.
And we have some other tools really targeting patient adherence: exam room
posters, handouts in several languages, and again the after-visit summary text. I
guarantee we'll have time for questions, and I do have screenshots of those at the end of the slide deck if folks are interested
in seeing them. Okay, thanks. So here's our best practice alert. This uses
as a function that’s native to Epic, best practice alert. And that we, but we, you know,
at OCHIN we have Epic programmers and so they built it to our specifications. This patient
has diabetes and is indicated, but does not have an active prescription for an ACEI, ARB,
or a Statin. The BPAs are the algorithm underlying them is designed so that if a patient is indicated
for just one of the meds, then that’s, it’ll just show that. It’ll say this patient is
indicated for a Statin. You’ll see there’s a sentence there, please consider discussing
contraceptive options when prescribing to childbearing age women. That was one way that
we, you know, people don’t actually read the text. In fact, they just sort of see the words
ACEI, ARB, and Statin, but we put the text in there as requested. We do give last LDL.
This is just a sample, but we, in the actual practice that they would show lab results
there. And, okay I’m ready, thanks. The BPA ‘fires’ in an office visit or a phone
encounter, and recently, and this is interesting, OCHIN changed it all across their membership
so that best practice alerts also fire in interim encounters, which folks use to do
some charting. We're really excited about how
this is going to affect our outcomes, because it enables a certain amount of work to
be done outside the encounter. For example, if the doc does not want to be reminded
about a Statin for this patient again, they
have the opportunity to turn off the BPA, and I'll show you that on the next slide,
without doing it in an encounter; they can do it at their leisure.
So that's pretty cool, and we're excited to see how that changes workflow. Again, the BPA fires for
patients who have diabetes, aren't lactating or pregnant, have no anaphylaxis, and are indicated
for the meds but have no active prescription. And just so you know how we're defining this,
because it's imperfect for sure: we have just prescription data, we don't
have dispense data, so we are defining an active prescription as one that was issued within
the last year. This is actually kind of interesting,
I think, in terms of this being about provider adherence. This step is where
we're intervening: the step in the process where the provider decides, here's the script
for you. Beyond that, that piece isn't in place yet. So that's what this
metric gives us.
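Pulling those pieces together, here's a rough sketch of the firing logic just described, in Python. The field names and record structure are hypothetical; the talk doesn't show OCHIN's actual Epic build:

```python
from datetime import date, timedelta

ACTIVE_RX_WINDOW = timedelta(days=365)  # "active" = issued within the last year

def has_active_rx(patient, med_class, today):
    """Prescribing data only (no dispense data), so a recent issue date is the proxy."""
    return any(
        rx["med_class"] == med_class and (today - rx["issued"]) <= ACTIVE_RX_WINDOW
        for rx in patient["prescriptions"]
    )

def bpa_text(patient, today=None):
    """Return the alert text to fire, or None if no alert is due."""
    today = today or date.today()
    if not patient["has_diabetes"]:
        return None
    # Exclusions named in the talk: pregnancy, lactation, anaphylaxis.
    if patient["pregnant"] or patient["lactating"] or patient["anaphylaxis"]:
        return None
    # Show only the med classes the patient is indicated for but missing.
    missing = [
        med for med in ("ACEI/ARB", "Statin")
        if patient["indicated"].get(med) and not has_active_rx(patient, med, today)
    ]
    if not missing:
        return None
    return ("This patient has diabetes and is indicated, but does not have an "
            "active prescription, for: " + ", ".join(missing) + ".")
```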
Okay, so just a little bit about how they can use the best practice alert. They have a couple of different delay options,
and these buttons, the postpone and override options,
are what I think folks are going to start using in the interim encounters. What
we're hoping is that this will make them trust the BPA more when it fires in an
actual encounter, because they'll know: okay, it went through my panel, and if this is firing
for this patient, I want it to. So there's an option to say, I'm just going
to wait for the next visit because this patient has too much going on this visit; or, I'm going
to wait 6 months because this patient says "I'm not talking about any new meds with you
right now"; or there's a permanent override: I don't want to hear about
this for this patient again. We're trying to track which docs hit the permanent override
all the time and talk them out of it. We're finding actually that that's not happening
much, and where it is, it's generally for totally reasonable reasons. The alert also
lets you jump to the allergies field, which actually turned out to be a totally useless
function; it lets you jump to the order set; and it lets you quickly print out the after-visit
summary text.
the order sets can’t be built centrally, they have to be built service area by service area.
I don’t know why. But this shows, here you go, simvastatin 20 milligrams, you know, as
instructed and again there’s an opportunity, you can change, you can do a drop-down and
change the, and change any of this that you want. But the idea is to at least make these,
make the meds easy, easy to prescribe. Okay. And then there’s panel management rosters
and we’ll talk about this quite a lot. I mean, I’m sure everyone listening is very familiar
with the need for tools that allow service areas, clinics, providers to look at what’s
going on with our patient panel. This is especially important for these community health centers
with all the transformation that’s going on now in terms of how health care is delivered
and the kinds of reporting requirements and the meaningful use reporting etcetera, etcetera.
So there’s really a lot of need for this. OCHIN has built a panel management tool, or
a data processing tool, and used, it’s really, it’s kind of like Excel on steroids and based
in the EHR, right. So pull the EHR data out, we can build algorithms that use the EHR data
that then populate solutions rosters. And we’ll show you some of those. But we had to
use OCHINs tool because that’s what we had. We understand that Epic has a tool coming
out, or that it’s available now called ‘Reporting Workbench’ that may, that we don’t have it
yet in our clinics. We’re interested to see whether it will support the complex algorithms
that we need to build underneath it. I hope it will because it would be really nice to
be able to transfer all of these rosters into an Epic-based tool. We’re going to see. So it identified patients, again, who are
indicated for one of these meds but aren't actively prescribed. There are filter functions.
And as we built these tools, our clinics said: "Well, okay, can you also add in the following
information about our patients with diabetes?" So, sure, to some extent we were able
to do that. What I think is interesting about these panel
management tools is that they aren't used by providers directly; they're used by the
staff. The staff alert the doctors to what's going
on with the patients who are coming in today, and they get that information from
these rosters. They'll also be doing the outreach piece as well. So I think that's
an interesting piece of this: different tools are needed for different
jobs. You need different tools to serve different people in the workflow. So
I think that's kind of interesting. Okay, so I'll talk to you a little bit about
our rosters, because I do think this is interesting. The four column headers are what you see
in the yellow boxes at the top, and they say: 'AceArb Indicated', 'On AceArb', and so on. What
you would see in the underlying data would be 'Yes, indicated', 'No, not on',
and you can export these to Excel, sort
through them, and quickly identify which of your patients are indicated but not
active for one of these meds.
several different, again, scrub tools or huddle tools or however you want to call it. For
one of the clinics, we, they said: “Look, just add those four columns to one of the
rosters we’re already using to do chronic disease care management.” And there, the CDCM,
the chronic disease care management was a whole ‘nother quality improvement initiative
that was going on concurrent to our study and we’ll talk about how that effects adherence
actually. It’s quite interesting, I’ll get to that a little bit later. So I just wanted to point that out for the
all daily roster which was a tool we built that, by the request of some of the other
clinics. The idea was that it would be a summary of all their, of here are your patients with
diabetes who are, or here are all of your patients who are coming in today and there’s
a whole bunch of information specifically for the patients with diabetes. And I’ll show
you that roster as well. Then there are the outreach tools, which I briefly mentioned before: no
recent visit, hey, we need to reach out to these patients and bring them in. We're
trying to get the clinics to figure out how they can integrate this into their
existing outreach efforts; it's been a little bit of a struggle.
And then I'll just skip to the bottom one, the recent prescription roster.
We're supposed to be able to use it to follow up with patients, and I think we're going
to move on that soon, but this has been a complicated project and there's
only so much bandwidth. So this is a bit of a repetition:
this is the chronic disease care management roster. I just wanted to point out
the other columns we added to this one, which was kind of interesting. After
we had started using the tools, they said:
"Could we get more information in this roster about, did we override
this patient? Has this patient been overridden in the BPA? Can we see the last time the BPA
fired?" While there is a function in Epic that lets you look at BPA history, our
understanding is that no one uses it, at least in our sites; it's just not part of their workflow.
But this meant that when the staff were going through and using these tools in
the scrub process, they could say: "Oh, well, this patient is indicated but not active,
but Dr. Smith did a permanent override, so I'm not going to remind him about this. Why
would I bug him about something he didn't want to be bugged about again?"
And actually, we are hearing that these tools are being used pretty consistently. So this is just a little bit of a screenshot
of what the daily roster looks like. At the bottom are a number of the columns:
patient name, age, who's your PCP, do you have diabetes or not, and then
the 4 columns most salient to the study. And then, at their request, values
of the last A1C date and result, and BP and LDL, etcetera. You
also have filter options up top. The very top red arrow lets you click
on providers and, uh-oh. I think we've maxed out our natural language processing.
Thanks. Okay, I'm not sure if all of you listening can see this. No, they can't? Okay, we're having
a little bit of a technical thing here. So you could click on that arrow and just
pick which provider you wanted, whose panel you want to be looking at. And
then there's also the arrow to the right that lets you select which day's patients, so you
could look at who's coming in today, or tomorrow, or later in the week. Okay.
We also give the sites monthly data reports so they can see their progress. That has taken
up a lot more bandwidth for our team than we expected; it's been a little
interesting. Whoa, no, not yet, you're giving away
the good stuff. You got the sneak peek; that's too sneaky. The idea behind these reports
is to help the study team figure out which providers and which clinics are making
the most progress and which are not, how we can share information between sites and
learn stuff that way, and it
helps us identify what's not working about the tools. We also kind of thought that
having a little bit of competition would push adherence; I'm not sure if that's happening
or not. Okay, now I'm ready. So, a sneak peek of our
results. These are not published yet, but they're really exciting. I wanted to show you this as an example of one of the things we give them
in their monthly feedback reports. Here's what you're looking at. The blue line is
our 'early clinics', and you can see that their trajectory really heads upward right at the
beginning, which is when they implemented. The red line is the late clinics, and the arrows
show when we started implementation at the late clinics. I just love this
image, because you can see that as soon as we rolled out the implementation at the late
clinics, bam, up we go. You can also see, and this is an interesting piece of the results,
that the early clinics have kind of plateaued. We're still making some gains,
but much, much more slowly than in the first 6 months. So we're really interested in
looking at why that's happening: can we get better than this? Maybe we can't, and
we will figure out why, and/or what the barriers to more improvement are. Okay, so now we're going to
get into the adherence stuff, which I think is really what you guys signed on for: how
and why the tools are used or not used. First, the best practice alert. For
some of our sites, it's the only inreach tool they're using; they're
not using the Solutions rosters. The response buttons aren't actually being
used very often, and we're certainly not having
folks populate the text field that we gave them to tell us what's going on, almost at
all. But anecdotally, what we're hearing even from our doctors who are
co-investigators on the team, which I think is great, is: oh, that actually kind of helped.
Like, oh yeah, I thought I had all my panel appropriately prescribed, but actually
I didn't, and so that's been really helpful. But they're all surprised, and here's
why. Here's a really important piece of this: the providers don't trust the best practice
alerts, they trust themselves to make good choices for their patients. Of course they
do. And so they habitually ignore them. Once you've got a habit of ignoring that yellow
box, that's a pretty well-established habit, and it's really hard to change. We're finding that
a really important barrier. There are a lot of best practice alerts built
into the EHR that these guys use that weren't as carefully tested and vetted when
they were implemented, but came out before ours did. So providers got pretty used to
saying: "Ah, this thing fires at the wrong times; it's just not right." One of the challenges
for our team has been: well, okay, maybe those other best practice alerts weren't as
carefully vetted, but this one has been, and you can talk to Dr. Hill, who was part
of our team and is your clinic supervisor, and he says the tool works. That's an
interesting educational piece, again, in terms of adherence: if you want a doctor to
believe in a best practice alert, you've got to create buy-in around it. I'll share
some ideas a little later about how to make this stuff work better. And of course, you've all heard about alert
fatigue, reminder fatigue. Yeah, that's real, although I'm very interested
in this phenomenon because the electronic health record is not going
away. We're going to develop better tools for helping to provide decision support, so
how do we build them in a way that doesn't create alert fatigue, but instead feels like
"oh great, the health record is helping me do my job," rather than "the health record
is telling me how to do my job"? Next, the panel management rosters. Some of our sites don't
regularly use Solutions; those are the ones that are only using the best
practice alert as their inreach tool. And there are some downsides to the way the
current version of Solutions is set up. Now, OCHIN is developing a much better version
of their panel management tool, and we're going to have that quite soon, so some of these
problems are going to go away. But right now, the way the data get processed, they're
always 36 hours old. So think about this daily tool that's supposed to help you scrub for
who's coming in today. That's great for any patient who didn't make a last-minute
appointment, but if they made a last-minute appointment, they're not going to come up
in your scrub tool. And if they're not coming up in your scrub tool, then you've got to
do the work of going into the health record anyway and cross-checking
what you're getting from this roster against your appointment data,
and then it's actually not really saving time, so why bother with the roster? You also
need to leave the Epic environment to use Solutions. Again, I'm really hoping that
when OCHIN brings out Reporting Workbench we'll see some shift in how these tools are
used. But, as I'm sure you can imagine, there will be a lot of steps involved
between here and there.
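To make that staleness problem concrete, here's a tiny hypothetical illustration (invented field names; the actual Solutions pipeline isn't described beyond the 36-hour lag):

```python
from datetime import datetime, timedelta

# The roster extract is always ~36 hours behind the live schedule.
extract_time = datetime.now() - timedelta(hours=36)

def patients_missed_by_scrub(roster_patients, todays_schedule):
    """Appointments booked after the extract ran won't appear in the roster."""
    return [
        appt for appt in todays_schedule
        if appt["booked_at"] > extract_time and appt["patient"] not in roster_patients
    ]
# Staff end up cross-checking the roster against live appointment data anyway,
# which erases the time savings the scrub tool was supposed to provide.
```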
So, there are limitations to getting folks to use these kinds of panel management tools. There's resistance to changing workflow,
and I think about it like this: I wouldn't like someone to come to my desk and tell me
how to do my job differently. No one does, right? So how do you do this in a way
that doesn't create that kind of pushback and frustration, which, again, I can empathize
with? So some teams are using the tool, some are not. The outreach use, as I said,
is kind of mixed, and we think it works better when the columns are added to existing
rosters that are already being used; at least that's the case in this one clinic. Again,
think about that in terms of habits: if we're already in the habit of checking this
roster, then that's one less change that needs to happen to get these tools
used. And here's something that I think is really
interesting. One of the things we're doing as part of implementing this
intervention in our study sites is that we have hired 2 and 1/2 FTEs of site coordinators,
or 'practice facilitators' as I think the literature would call them, but site coordinators is
what we call them in the study. They are on site helping the clinics with implementation,
and I'll talk about this quite a bit in the next couple of slides. What's happening is
that the roster tools are being used as follows: the providers are saying to the site coordinators,
"Well, can you just create a list of my patients who are indicated but not active for these
meds? And I'll just call them up and follow up on it." We think that's actually
having an enormous impact, but then it's not so much the tool alone; it's
a combination of the tool you provide and the staff person who's there to use
the tool in that way. That's been interesting. We
hear back from the site coordinators that the docs love getting this list, so that's
been very informative in terms of how to build effective tools, which is what
we're thinking about. The order sets: very rarely used. I think this is because, again,
from Kaiser’s perspective, this was really this one-click for all 3 meds and we’re not
quite doing it that way. But even our study co-Is, and this is a quote from one of my
co-investigators: "You know, I'm used to making orders a certain way. This is how I make orders."
And it makes sense, right? Why would you go into one order set to specifically
order this ACE and this Statin when the patient also needs a whole bunch of other stuff and
you're going to have to go into your regular orders environment anyway? It's extra work,
in fact, to have this order set. So in some cases this can be helpful, and in some, not.
Again, the posters and the handouts; this isn't EHR-based, but it just
might be interesting. Some sites don't let you hang posters; some sites do. We had a kind of hilarious struggle when we were asked to make
these materials appropriate for a Russian-speaking population, because some of our clinics have
a large Russian-speaking population, and getting the images to be culturally appropriate was
quite a process; harder than I would have expected, but interesting. And sometimes
what we're hearing is, "Oh, the posters are great. They remind the patients
to ask the docs about the meds." And sometimes we hear that the doctors are
reminded by the posters, which is kind of hilarious, but great; that's great, and very
low-tech. And again, we've got these dot phrases to support
the after-visit summary text, and I'm frankly surprised that these aren't being
used more often. I feel like it's a great resource, but I think this is about changing
habit. We've put a lot of effort already into letting folks know about
the other tools, and in terms of educating the users about this one, that's been less
emphasized. So I'm hoping that's going to change.
Again, it's hard to get people to change habits, but to my mind this is not a really
difficult change. But, you know, that's what we're finding. So, lessons learned. What are the barriers to
improved provider adherence, or let’s say to improving our ability to support providers
providing evidence-based care? Change is hard, again no one wants to be told how to do their
job. There’s also, we really struggled with provider buy-in with the evidence underlying
the meds. Especially with the younger patients. We hear “I don’t want to start a 30-year-old
on a Statin.” Okay, but the evidence actually supports that, but okay, okay. You know, so
that’s an interesting piece. So we’ve had to educate and re-educate and re-educate and
we’ve heard, we had pushback, I mean, this is a bit of an outlier, but we had one provider
said “yeah, I really am more interested in fish oils and niacins around this stuff.”
And we said, okay well here’s our evidence sheet and you can read the articles yourself,
I mean we can’t make you do it. Very, very important part of the challenge is how do
you fit these tools into existing workflows, and into changing workflows? You can't ask
them to change workflow dramatically around one aspect of diabetes care,
so then how do you get it to work within what they're already doing? Which team member is
going to use which tools? Who's going to decide that? Do you have a team-based approach at
your site or a less team-based approach? That's going to make a big difference, again, in
roles: appropriate roles for appropriate tools at appropriate times in the workflow.
And, as I said before, there are multiple and concurrent quality improvement initiatives going
on. There's a lot of practice transformation happening, especially in the safety net,
and frankly, sometimes our intervention is getting a little bit lost in the shuffle. That can
be good and bad, right? It can be good in that it's just part of this
larger body of change, but it can be bad because, "Oh gosh, I haven't
been able to follow up on that piece because I'm so busy doing the care plan intervention
stuff we have to do now." Then there are challenges around the decision support tools themselves. These are very
important. We heard almost immediately: "Yeah, well, your
tools really only support one element of diabetes care rather than all aspects of a whole patient's
care, and so I've got these tools that are helpful for one thing." Okay, fair enough.
It's challenging to create tools that support guidelines that
make a lot of sense at a population level, while also being flexible
enough that when the doc is in there one-on-one with the patient, there's some
flexibility about what happens. I think that's a real challenge: how do you create
that? And again, there are other challenges, like alert fatigue, which we've talked about.
Implementation tips: I'm very interested in this whole
body of research, which is really where organizational theory and adult learning theory
and health information technology and also implementation and dissemination science all
kind of come together around these questions. So I’m going to talk a little bit about implementation
because I’m really excited about the science, and how implementation may affect adherence.
We have found that there’s a really important communication piece, implementing new tools
the, first of all, let’s start at the bottom. You have to have clinic leadership. If you
don’t have clinic leadership it’s not going to happen. And then you have to be, the communication
comedown from the team with the supportive clinic leadership saying “we are trying to,
these tools are supposed to help you, not tell you how to do your job.” And that’s a
fine line. I think it’s really important to have that directly communicated, explicitly
communicated. I think it needs to be communicated what leadership’s expectations are for what
the change is going to be, while still allowing flexibility. A very important
piece of this is that you need to communicate how you built the tools, how you tested the
tools, what evidence underlies the tools, the parameters underlying the tools,
how you are defining each aspect of the tool. That has to be available. We heard back from
docs all the time: "Well, we don't know how you've been defining diabetes." So then
we were able to add a piece to the top of each of our roster columns where, if you clicked
on a little icon, it would show you the underlying algorithm. That was
an important piece, and we didn't think about it at the beginning.
And it makes sense, right? These docs want to provide good care to their patients; that's
their goal. So if you can show them where this is coming from, they
can judge for themselves if it seems valid. I think that's important.
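One lightweight way to implement that kind of transparency is to keep a human-readable definition alongside each column and surface it from the icon. A hypothetical sketch; the definition text here is illustrative, since the study's real definitions were negotiated with the three service organizations and aren't spelled out in the talk:

```python
# Illustrative registry of what the column-header icon might display.
COLUMN_DEFINITIONS = {
    "Diabetes":         "Meets the study team's agreed-on EHR definition of diabetes.",
    "AceArb Indicated": "Meets the guideline-based criteria (age, comorbidities, etc.) "
                        "encoded in the study algorithm for ACEI/ARB therapy.",
    "On AceArb":        "Has an ACEI/ARB prescription issued within the last 365 days "
                        "(prescribing data only; no dispense data).",
}

def definition_for(column: str) -> str:
    """Text shown when a user clicks the icon at the top of a roster column."""
    return COLUMN_DEFINITIONS.get(column, "No definition recorded for this column.")
```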
And again: expectations about which staff, which role, where in the workflow. If possible, it's
really helpful to have a clinician or staff champion, and to give them any kind of support
they need. It makes sense, right? I'm a researcher,
I'm coming into your clinic trying to help you change practice. Who are you? You know?
But if Dr. Munch, your lead clinician, says, "I'm behind this," it's going to make
a big difference. And then again, it's about
organizational change. Involving pharmacists and all members of the care team. We’re really
getting excited about this right now: we're thinking about the role of pharmacists in
terms of doing contraceptive counseling with women of child-bearing age, and also doing
some of the outreach and follow-up. That's been really exciting, and we've been
starting to have some great conversations with the pharmacists from some of these sites.
What’s challenging is that the pharmacy databases and our EHR are not linked, right. At Kaiser
it’s all one integrated data system, but in these clinics that’s not the case. And some
of the patients don’t use the clinic pharmacies, in which case it’s quite hard to do any kind
of follow-up, but we’re working on that as well. We think there’s some technology coming
that will give us some bi-directional data about dispensing, but we don’t have it yet.
I think it’s important and maybe our experience has been, that it’s important to engage the
users in testing, adapting, and implementing the tools and hear what they have to say.
If we have time after we handle some questions we had one major adaptation we were asked
to make about 6 months into the study that we made because we wanted to make the tool
more useful to the providers. It kind of wreaked havoc with our, made a lot of extra work for
our data team, but it was important to do it. Again, provide regular progress updates
as I said, it was quite harder than it seems. And again try to create habits. I mean this
is really, really what this is about I think. I think habits really becomes quite key to
this, but how do you do that? So, I think that was, no not quite. And the other piece
of this it’s not just the tools. I mean it’s one thing to have, to build good tools, but
this incentive piece is really important as I said. At Kaiser there were financial incentives
there were top-down directives. We couldn’t do that. We had site coordinators who were
covered by the grant and so what happens when the project ends? But I think right now there
really, they are, I mean I know they are key to promoting adherence right now. Again, they’re
taking these lists to the providers and they’re saying “what’s working? What’s not?” we’re
in communication with them all the time. And they really provide a link between us and
the clinic staff and they’re there all the time and they build trust and it’s important.
Clinician champions have a little bit of time supported by the grant, again that’s important.
We meet with clinic staff monthly, we talk to site coordinators even more, and again
we engage the staff in adapting the tools. And I think that was a key part to really
increasing the buy-in to the extent that we’ve been able to. So I think, I think that, nope
there’s more. What are next steps? I, sharing best practices again, we’re trying to figure
out how we can increase our gains, and sustainability is a big question. We're going to do a quantitative
evaluation, of course, of what our impact was, and a qualitative evaluation
really looking at the process and the impact on workflow, etcetera. And
our team has a couple of proposals in to study a couple of interesting questions that
have arisen from the first 3 years of this study. The first is dissemination and implementation
methods: can we do this implementation without the kind of hand-holding that we've done?
How would that work? And the other, the R18, is: how can we improve these
point-of-care tools? Because we've already identified a lot of things that need to work
better. Can we translate the full-on point-of-care tool that's been developed at Northwest?
Okay, this is definitely the end. I would just say that it's an absolutely gorgeous
springtime, although rainy, in Portland, and those are my tulips. I just wanted to show
them to you because they're beautiful. And now I assume we're going to have some time
for questions. I'm supposed to tell you to mute your phone if you're not
asking a question. [All guests have been unmuted] So I think you've all been
unmuted, but please go ahead and mute yourself unless you want to ask a question. And you
can tweet to #nihadherence. So I'm happy to take
questions now. Are people just going to pipe in on the phone? Okay. Alright, go ahead, pipe
in. You should unmute yourself to talk. [Christine Hunter] While we're waiting, maybe
I’ll go ahead and ask a question. So I saw a few things that I thought were particularly
interesting, and the data were very compelling as well. But did you think about other incentives?
Because the site coordinator wouldn't necessarily be adaptable in other settings outside
of the research context. So were there other non-financial incentives that you considered?
I don't know what that would look like, but things that would make their lives easier,
but still be cost-efficient for a community health center. [Dr. Gold] Yeah, that’s interesting. So a
couple of thoughts on that. Could folks hear you, do you think? I'm not
sure if that necessarily broadcast, so: the question was, given that the site coordinators are expensive,
and that they're an important part of what really put a fire under the project,
what other incentives could we use? One thing that I think is really interesting
is all of these new federal reporting requirements and the meaningful use requirements.
They're not around these specific pieces of practice change, but that is one way
that's incentivizing, because there is a financial incentive, right? How well can you improve
these metrics? I wouldn't necessarily think these clinics would want to get down to such
a granular level; like, is a patient with the following comorbidities, etcetera,
taking an ACE inhibitor? It's going to get really complicated. But that is one way that
folks are being incentivized. The other piece, I think, really
then becomes about your providers. Your question is about incentives,
and I get it, but I'm not sure how you would do that in the real world. I
think it has to be about belief; I can't see what else there would be, like an
incentive within our study clinics. We could, if nothing else, instill some competition,
maybe. But you really are going to have to get the buy-in
that this is a change we want to make, I think. It's complicated. Yeah. So now, are there
any questions out there? I’d love to take some. I don’t know if anyone’s trying to get
through and not able to. [Christine Hunter] Okay, well. I have another
question. [Dr. Gold] Yeah. Oh, did we get one? Hello?
Oh, they were just signing off. Okay. I'm going to hand the mic over to you, Christine. [Christine Hunter] The site coordinator, the
list that they’re generating individually for the providers, is that something that
the tool could do? [Dr. Gold] It is. They’re using the roster
tools. [Christine Hunter] Oh, they just
want somebody else to pull it and print it for them? [Dr. Gold] That's right. [Christine Hunter] That's interesting. [Dr. Gold] "Can you give me this list? If you're
going to keep asking me about this, can you just give me the list?" Yeah, but they certainly
could; the docs certainly could. [Christine Hunter] But is there a way to do
that, like you said, easier for them, the one-click kind of thing? [Dr. Gold] Yeah, they could. [Christine Hunter] I'm thinking about ways
to think about how to fulfill the site coordinator role once the study is over. [Dr. Gold] Right. So my thought about that
is that this is not necessarily a task that you want to give to
the provider; you would want the clinic overall to say: "We want this
list to be generated, and it's going to go to a panel manager, or we will designate
someone on the staff." What I've been observing, again, is that there really
does need to be a person on-site whose job it is to follow up and make this stuff work. And
again, that's what we're interested in with the R01 that we just submitted; we're specifically
hoping to look at different levels of support and how that works. Does that answer
your question? [Christine Hunter] Yeah. What's going on?
Okay, well then there’s no harm in ending early if there’s no questions we really appreciate
your attendance and thank you Dr. Gold for a wonderful presentation and really I think
it was helpful to focus on this issue of provider adherence. [Dr. Gold] Great, thanks for listening!
