Friday, February 18, 2011

HIMSS Mania 2011

The big HIMSS conference is here once again (for those not in the field, that is the Healthcare Information and Management Systems Society... the conference runs five days and draws about 30,000 people). 

I'm looking forward to hearing keynote talks from former Secretary of Labor Robert Reich and actor/Parkinson's advocate Michael J. Fox, as well as CMS chief Don Berwick.  And I'm wondering if David Blumenthal, as head of ONCHIT, will give the usual rah-rah talk he has been giving to this audience, or if he will unleash how he really feels, since he is "retiring" this spring. 

I'm also looking forward to catching up with a lot of friends and colleagues, as well as meeting new folks, hearing new ideas and seeing new products. It's a big event and a long haul, but I always walk away with new ideas and inspiration (as well as achy feet).

I've been helping out with a "sub-conference" at HIMSS called HIT X.0.  It is basically a track of "special" educational sessions that highlight innovation and future thinking, with a fun twist.  It will be held in a single auditorium that seats up to 900 people, and I'm moderating/presenting at four of these sessions - so if you are at HIMSS, I hope you can make these!  

FYI, if you registered for the HIT X.0 "sub-conference" separately, you are guaranteed a seat (they limited registrations to around 900). BUT if you didn't register for it, you can just show up a bit early: about 5-10 minutes before each session starts they will open the doors to everyone (since it's safe to assume that all 900 won't show up for every session).
Here is what will be keeping me busy for part of each day:

HIT Geeks Got Talent? Round 1
Monday, February 21, 12:15 PM - 1:15 PM
Description:  HIT X.0 is a multi-media educational series that takes attendees on a trip to the not-too-distant future of healthcare technology. Building on the blockbuster reality show "America's Got Talent", these sessions will use a talent-search format featuring eight contestants demonstrating their latest technologies developed for the healthcare IT space.  The three judges will be:
* Erica Drazen, FHIMSS, Partner, CSC Healthcare Group
* Dave Garets, FHIMSS, Executive Director, Advisory Board Company
* Jonathan Teich, MD, PhD, FHIMSS, FACMI; Chief Medical Information Officer, Elsevier
AND the audience gets to help choose the four finalists!

HIT Geeks Got Talent? Final Round
Tuesday, February 22, 2:15 PM - 3:15 PM
The four finalists vie for the title of top HIT Geek!
Same judges, same audience participation!

Iron Programmer Challenge: Agile Programming for Web and Mobile
Wednesday, February 23, 2:15 PM - 3:15 PM
Description:  Iron Chef meets HIT!  We give two teams the same "ingredients" (specifications for a new tool) and they use "agile software development" (quick, iterative) to create a web or mobile solution.
Objectives:
* Learn about the benefits of agile programming methodologies and how they can be used to create solutions that work in parallel with, or are interfaced with, your EMRs and other IT systems.
* Think about how your own organization can use agile programming techniques to build small, focused tools that result in "quick wins" for your users (see the sketch below).
* See and hear how two teams of agile programmers addressed this challenge and created brand new tools. These tools will be demonstrated at the session.
Check out Healthfinch ("We create easy-to-use medical apps for clinicians.") and their blog to get an idea of what one team is working on for this challenge!
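
To make the "quick win" idea concrete, here is a minimal sketch (in Python) of the kind of small, focused tool an agile team could stand up alongside an EMR in a sprint or two: a checker that flags patients who look overdue for a medication refill. To be clear, the field names, the 30-day grace period, and the toy data below are my own illustrative assumptions, not anything the challenge teams have actually built.

```python
# Hypothetical "quick win" tool: flag patients whose last fill plus days
# supply (plus a grace period) has elapsed. The data fields and the 30-day
# grace period are assumptions for illustration only.
from datetime import date, timedelta

GRACE = timedelta(days=30)  # assumed grace period past the days supply

def overdue_refills(patients, today):
    """Return the patients whose refill window (plus grace) has elapsed."""
    flagged = []
    for p in patients:
        runs_out = p["last_fill"] + timedelta(days=p["days_supply"])
        if today > runs_out + GRACE:
            flagged.append(p)
    return flagged

if __name__ == "__main__":
    # Toy data standing in for a simple extract from the EMR.
    roster = [
        {"name": "Patient A", "drug": "lisinopril",
         "last_fill": date(2010, 9, 1), "days_supply": 90},
        {"name": "Patient B", "drug": "metformin",
         "last_fill": date(2011, 1, 20), "days_supply": 30},
    ]
    for p in overdue_refills(roster, today=date(2011, 2, 18)):
        print(f"{p['name']} may be overdue for a {p['drug']} refill")
```

The point is not the specific rule - it is that a tool this small can be specified, built, and demoed in days, which is exactly what the two teams will be doing live at the session.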

Expensive, Exasperating and Exhausting - EHR the Extormity Way
Thursday, February 24, 11:15 AM - 12:15 PM
Description: Fictional Extormity CEO Brantley Whittington explains how his company combines the principles of extortion and conformity to extract revenues from hospitals and physicians who pay dearly for its proprietary EHR solutions.
Objectives:
* Describe the need for physicians and healthcare executives to suspend disbelief and allocate significant budgets to the purchase and maintenance of an inflexible client-server EHR from Extormity.
* Learn to self-attest to meaningful use in a convincing manner, confidently proclaiming that with the aid of Extormity, you have met all the requirements and there is absolutely no need for an audit.
* Practice endorsing your stimulus checks over to Extormity, as this EHR solution will require every penny of the ARRA funds you receive.
* Prepare for breach notification, as the security protocols embedded in the Extormity EHR will no doubt result in a leak of PHI.
* Learn about Extormity's shackled PHR solution that takes the tethered patient portal model to a new level, turning patients into indentured servants.

Thursday, February 10, 2011

EMRs and Typewriters: They both have potential

A couple of weeks ago an article came out in the Archives of Internal Medicine that essentially said "ambulatory EMRs don't improve quality", based on a meta-analysis (a review of multiple studies published in the past few years). Wow - that's like saying "typewriters don't help create better stories" just a few years after typewriters were invented, because there wasn't yet a lot of evidence proving that they did.  Clearly I'm not a fan of this article.  Let me break it down as follows:

First, I personally think it is crazy to expect research on individual EMR implementations to mean anything right now - the systems are all immature and evolving quickly, the implementations are all different, and individual usage is all over the place. Any research done at one location at one time is pretty much limited to that place and time. It is not like a drug study, where the drug is made and used the same way every time, so the research will be consistent. It will be a long time before research on any single EMR provides any value except to show what the POTENTIAL is for EMRs - and since an EMR is a tool, we already know there is good potential if it is used well, and poor potential if it is used poorly. So it would be much more interesting and relevant to start by assuming EMRs have the potential to help (since some research studies show they can), and to focus research dollars on figuring out WHY an EMR did or did not improve quality at a specific time and place - I bet we would really learn from that!

Second, the follow-up discussion in the Archives by Clem McDonald (a true father of medical informatics) highlighted multiple studies that did show benefits and gave a good breakdown of why this meta-analysis was not very valid.  It is certainly worth a read, especially if your friends at cocktail parties are asking you about "that report on CNN that said EMRs don't improve quality"… Now you can have some snippy comebacks like:

• "Sure, if you like meta-analyses which only include medication quality indicators, but I prefer my meta-analyses the way I get my annual physical exams - with vaccines and screening labs."
or
• "Those chumps only looked at single visit outcomes, not multi-visit ones- can you believe that?!?  And umm, pass the wine please."
Or one more provided by my friend and colleague Dr. Bill Galanter:
• "You mean the one that shows that the American healthcare system doesn't deliver reliable, quality care no matter what kind of tools you give them? Since in addition to the physicians, insurance reimbursement, short visits, ill-advised mandatory government regulation, uninsured patients, pharmaceutical advertising, a terrible diet, overly expensive drugs and EMR's, co-pays, donut holes (will come back if republicans get their way) and a trillion other factors are also to blame..."

Or you can quote Dr. McDonald specifically, who wrote:
First, and most important, the current article tells us nothing about which CDS guidelines were implemented in the systems that they studied. Practices and EHRs vary considerably in the number and type of CDS rules that they implement, and we do not know whether the CDS rules implemented by the practices that participated in the surveys addressed any of the 20 quality indicators evaluated by Romano and Stafford. Second, the current study and Garg and coauthors' review considered very different categories of guidelines. Most of the guidelines (60%) in Romano and Stafford's study concern medication use; none of them deals with immunizations or screening tests, which were the dominant subjects in the studies reviewed by Garg et al. Furthermore, in our experience, care providers are less willing to accept and act on automated reminders about initiating long-term drug therapy than about ordering a single test or an immunization. The third difference is that the current study examined the outcome of a single visit, while most of the trials reviewed by Garg and colleagues observed the cumulative effect of the CDS system on a patient over many visits. Finally, the data available from NAMCS/NHAMCS may be limited compared with what is contained in most of the EHRs used for Garg and coauthors' trials. For example, the NAMCS/NHAMCS instruments have room to record only 8 medications, even though at least 17% of individuals older than 65 years take 10 or more medications.

Finally, this whole issue reminds me of what Don Berwick has been preaching for many years… that the way academic researchers study the effect of a new medication or procedure is great for those scenarios, but not so good for studying the process of quality improvement, which usually relies on a combination of factors, including IT, cultural shifts and process changes. In his 2008 JAMA article, "The Science of Improvement," he explains how to improve the measurement of quality improvement programs:

Four changes in the current approach to evidence in health care would help accelerate the improvement of systems of care and practice. First, embrace a wider range of scientific methodologies. To improve care, evaluation should retain and share information on both mechanisms (ie, the ways in which specific social programs actually produce social changes) and contexts (ie, local conditions that could have influenced the outcomes of interest). Evaluators and medical journals will have to recognize that, by itself, the usual OXO experimental paradigm is not up to this task [observe a system (O), introduce a perturbation (X) to some participants but not others, and then observe again (O)]. It is possible to rely on other methods without sacrificing rigor. Many assessment techniques developed in engineering and used in quality improvement—statistical process control, time series analysis, simulations, and factorial experiments—have more power to inform about mechanisms and contexts than do RCTs, as do ethnography, anthropology, and other qualitative methods. For these specific applications, these methods are not compromises in learning how to improve; they are superior.

Second, reconsider thresholds for action on evidence. Embedded in traditional rules of inference (like the canonical threshold P<.05) is a strong aversion to rejecting the null hypothesis when it is true. That is prudent when the risks of change are high and when the status quo warrants some confidence. However, the Institute of Medicine report Crossing the Quality Chasm calls into question the wisdom of favoring the status quo.

Auerbach et al warned against “proceeding largely on the basis of urgency rather than evidence” in trying to improve quality of care. This is a false choice. It is both possible and wise to remain alert and vigilant for problems while testing promising changes very rapidly and with a sense of urgency. A central idea in improvement is to make changes incrementally, learning from experience while doing so: plan-do-study-act.

Third, rethink views about trust and bias. Bias can be a serious threat to valid inference; however, too vigorous an attack on bias can have unanticipated perverse effects. First, methods that seek to eliminate bias can sacrifice local wisdom since many OXO designs intentionally remove knowledge of context and mechanisms. That is wasteful. Almost always, the individuals who are making changes in care systems know more about mechanisms and context than third-party evaluators can learn with randomized trials. Second, injudicious assaults on bias can discourage the required change agents. Insensitive suspicion about biases, no matter how well-intended, can feel like attacks on sincerity, honesty, or intelligence. A better plan is to equip the workforce to study the effects of their efforts, actively and objectively, as part of daily work.

Fourth, be careful about mood, affect, and civility in evaluations. Academicians and frontline caregivers best serve patients and communities when they engage with each other on mutually respectful terms. Practitioners show respect for academic work when they put formal scientific findings into practice rapidly and appropriately. Academicians show respect for clinical work when they want to find out what practitioners know.
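
As an aside for readers who have not run into statistical process control (one of the engineering-derived methods Berwick lists above), here is a minimal sketch of the core idea in Python: establish how much a quality measure normally bounces around, then ask whether post-change values fall outside that band. The weekly screening-rate numbers and the classic 3-sigma control limits below are illustrative assumptions on my part, not data from Berwick's article.

```python
# Illustrative sketch of a Shewhart-style control chart, one of the
# "statistical process control" methods Berwick mentions. The data and
# the 3-sigma control limits are assumptions for illustration only.
from statistics import mean, stdev

# Hypothetical weekly % of eligible patients receiving a screening test,
# before and after an improvement change is introduced at week 13.
baseline = [62, 58, 65, 61, 59, 63, 60, 64, 62, 57, 61, 63]
after_change = [66, 69, 71, 68, 72, 70]

center = mean(baseline)
sigma = stdev(baseline)
upper, lower = center + 3 * sigma, center - 3 * sigma  # control limits

print(f"baseline mean {center:.1f}%, control limits [{lower:.1f}, {upper:.1f}]")
for week, value in enumerate(after_change, start=len(baseline) + 1):
    verdict = "special cause?" if not (lower <= value <= upper) else "common-cause noise"
    print(f"week {week}: {value}% -> {verdict}")
```

The appeal for improvement work is that the chart separates ordinary week-to-week noise ("common cause" variation) from a real shift ("special cause"), using the system's own history as the comparison rather than a one-shot before/after significance test.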

Additional Studies/Articles on this subject
* Health Affairs article (March 2011) from Dr. Blumenthal: a meta-analysis of recent studies shows a more positive effect of EHRs on quality (less so on provider satisfaction).