Thursday, February 10, 2011

EMRs and Typewriters: They both have potential

A couple of weeks ago an article came out in the Archives of Internal Medicine which essentially said that "ambulatory EMRs don't improve quality", based on a meta-analysis (a review of multiple studies published in the past few years). Wow - that's like saying 'typewriters don't help create better stories' just a few years after typewriters were invented, because there wasn't a lot of evidence proving that they did.  Clearly I'm not a fan of this article.  Let me break it down as follows:

First, I personally think it is crazy to expect research on individual EMR implementations to mean anything right now - the systems are all immature and evolving quickly, the implementations are all different, and individual usage is all over the place. Any research done at one location at one time is pretty much limited to that place and time. It is not like a drug study, where the drug is made and used the same way every time, so the research will be consistent. It will be a long time before research on any single EMR provides value except to show what the POTENTIAL of EMRs is - and since an EMR is a tool, we already know there is good potential if it is used well, and poor potential if it is used poorly. So what would be much more interesting and relevant is to start by assuming EMRs have the potential to help (since we know some research studies show they can), and focus research dollars on figuring out WHY an EMR did or did not improve quality at a specific time and place - I bet we would really learn from that!

Second, the follow-up discussion in the Archives by Clem McDonald (a true father of medical informatics) highlighted multiple studies that did show benefits and gave a good breakdown of why this meta-analysis was not very valid.  It is certainly worth a read, especially if you are getting asked by your friends at cocktail parties about "that report on CNN which said EMRs don't improve quality"… Now you can have some snippy comebacks like:

• "Sure, if you like meta-analyses which only include medication quality indicators, but I prefer my meta-analyses the way I get my annual physical exams - with vaccines and screening labs."
or
• "Those chumps only looked at single visit outcomes, not multi-visit ones- can you believe that?!?  And umm, pass the wine please."
Or one more provided by my friend and colleague Dr. Bill Galanter:
• "You mean the one that shows that the American healthcare system doesn't deliver reliable, quality care no matter what kind of tools you give them? Since in addition to the physicians, insurance reimbursement, short visits, ill-advised mandatory government regulation, uninsured patients, pharmaceutical advertising, a terrible diet, overly expensive drugs and EMR's, co-pays, donut holes (will come back if republicans get their way) and a trillion other factors are also to blame..."

Or you can quote Dr. McDonald specifically, who wrote:
First, and most important, the current article tells us nothing about which CDS guidelines were implemented in the systems that they studied. Practices and EHRs vary considerably in the number and type of CDS rules that they implement, and we do not know whether the CDS rules implemented by the practices that participated in the surveys addressed any of the 20 quality indicators evaluated by Romano and Stafford.

Second, the current study and Garg and coauthors' review considered very different categories of guidelines. Most of the guidelines (60%) in Romano and Stafford's study concern medication use; none of them deals with immunizations or screening tests, which were the dominant subjects in the studies reviewed by Garg et al. Furthermore, in our experience, care providers are less willing to accept and act on automated reminders about initiating long-term drug therapy than about ordering a single test or an immunization.

The third difference is that the current study examined the outcome of a single visit, while most of the trials reviewed by Garg and colleagues observed the cumulative effect of the CDS system on a patient over many visits.

Finally, the data available from NAMCS/NHAMCS may be limited compared with what is contained in most of the EHRs used for Garg and coauthors' trials. For example, the NAMCS/NHAMCS instruments have room to record only 8 medications, even though at least 17% of individuals older than 65 years take 10 or more medications.

Finally, this whole issue reminds me of what Don Berwick has been preaching for many years… that the way academic researchers study the effect of a new medication or procedure is great for those scenarios, but not so good for studying the process of quality improvement, which usually relies on a combination of factors, including IT, cultural shifts and process changes. In a 2008 JAMA article called "The Science of Improvement", he explains how to improve the measurement of quality improvement programs:

Four changes in the current approach to evidence in health care would help accelerate the improvement of systems of care and practice. First, embrace a wider range of scientific methodologies. To improve care, evaluation should retain and share information on both mechanisms (ie, the ways in which specific social programs actually produce social changes) and contexts (ie, local conditions that could have influenced the outcomes of interest). Evaluators and medical journals will have to recognize that, by itself, the usual OXO experimental paradigm is not up to this task [observe a system (O), introduce a perturbation (X) to some participants but not others, and then observe again (O)]. It is possible to rely on other methods without sacrificing rigor. Many assessment techniques developed in engineering and used in quality improvement—statistical process control, time series analysis, simulations, and factorial experiments—have more power to inform about mechanisms and contexts than do RCTs, as do ethnography, anthropology, and other qualitative methods. For these specific applications, these methods are not compromises in learning how to improve; they are superior.

Second, reconsider thresholds for action on evidence. Embedded in traditional rules of inference (like the canonical threshold P<.05) is a strong aversion to rejecting the null hypothesis when it is true. That is prudent when the risks of change are high and when the status quo warrants some confidence. However, the Institute of Medicine report Crossing the Quality Chasm calls into question the wisdom of favoring the status quo.

Auerbach et al warned against “proceeding largely on the basis of urgency rather than evidence” in trying to improve quality of care. This is a false choice. It is both possible and wise to remain alert and vigilant for problems while testing promising changes very rapidly and with a sense of urgency. A central idea in improvement is to make changes incrementally, learning from experience while doing so: plan-do-study-act.

Third, rethink views about trust and bias. Bias can be a serious threat to valid inference; however, too vigorous an attack on bias can have unanticipated perverse effects. First, methods that seek to eliminate bias can sacrifice local wisdom since many OXO designs intentionally remove knowledge of context and mechanisms. That is wasteful. Almost always, the individuals who are making changes in care systems know more about mechanisms and context than third-party evaluators can learn with randomized trials. Second, injudicious assaults on bias can discourage the required change agents. Insensitive suspicion about biases, no matter how well-intended, can feel like attacks on sincerity, honesty, or intelligence. A better plan is to equip the workforce to study the effects of their efforts, actively and objectively, as part of daily work.

Fourth, be careful about mood, affect, and civility in evaluations. Academicians and frontline caregivers best serve patients and communities when they engage with each other on mutually respectful terms. Practitioners show respect for academic work when they put formal scientific findings into practice rapidly and appropriately. Academicians show respect for clinical work when they want to find out what practitioners know.
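
If "statistical process control" sounds abstract, here is a quick sketch of what a QI team actually does with it. This is my own toy illustration, not anything from Berwick's article - the clinic, the measure, and every number below are invented - but it shows the kind of control chart (an individuals/XmR chart, in Python) that can tell you within months whether a change moved the needle:

# A minimal sketch of an individuals (XmR) control chart, the kind of
# statistical process control Berwick mentions. All data are invented:
# imagine tracking the % of eligible visits where a vaccine was given,
# by month, before and after an EMR reminder is switched on.

percent_vaccinated = [62, 65, 61, 63, 66, 64, 60, 63,   # baseline months
                      71, 74, 73, 76, 72, 75]            # after the reminder

# Center line comes from the baseline period only.
baseline = percent_vaccinated[:8]
center = sum(baseline) / len(baseline)

# The average moving range estimates routine month-to-month variation.
moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR constants: sigma is estimated as mr_bar / 1.128, and the
# control limits sit 3 sigma above and below the center line.
sigma = mr_bar / 1.128
ucl, lcl = center + 3 * sigma, center - 3 * sigma

for month, value in enumerate(percent_vaccinated, start=1):
    flag = "  <-- special cause" if value > ucl or value < lcl else ""
    print(f"month {month:2d}: {value}  (CL={center:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}){flag}")

The chart answers the practical question - did the reminder shift the process beyond its routine month-to-month noise? - in a handful of months, without randomizing anyone, which is exactly Berwick's point about matching the evaluation method to the question being asked.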

Additional Studies/Articles on this subject
* Health Affairs article (March 2011) from Dr. Blumenthal: a meta-analysis of recent studies shows a more positive effect of EHRs on quality (less so on provider satisfaction).
