Monday, October 12, 2009

Health 2.0 Conference - Review

I wrote up a summary of my day at Health 2.0 last week, posted on HISTalk:
http://histalk2.com/2009/10/12/readers-write-101209/

I talked about three main things:
1. What the big three (Google, Microsoft, WebMD) are doing... all of them seem to want to own a patient's data.
2. Some cool new startups
3. A review of Keas

Hoping to go to the next Health 2.0 in April in Paris...

Sunday, October 11, 2009

Speech Recognition and EMRs (and the holy grail of user interfaces)

I was asked recently about Speech Recognition and EMRs, since the technology has improved in the past few years. Here are my thoughts:

Assuming it works well, the important question then becomes “How are you using it?” We are now seeing two main areas where it can be used in an EMR, and we can make some interesting predictions about the future.

The first option is to simply dictate a note after the visit or procedure. This saves on dictation costs, but one would lose out on (1) the value an EMR can bring with respect to decision support at the time of care and (2) the efficiency of copy/paste when documenting chronic care over many visits. Therefore, this option may be appropriate for things like:
- Documenting procedures (e.g., colonoscopy)
- Specialists or ER doctors who may see a patient only once
- Creating a letter to send to a colleague
- EMR systems that really are just note repositories (i.e., ones that do not have electronic prescribing or other ordering, so decision support is not easily integrated)

A second and growing option is to integrate “hot spot” dictation into the EMR workflow, using it just for the highly complex parts of the note, such as the details of a patient’s “History of the Present Illness” (HPI). More and more EMRs allow for these “hot spots”, which can be dictated either during the visit or afterwards. Some rely on speech recognition, others send the audio to a live transcriptionist to type in, and others use a combination: starting with speech recognition and then sending the draft to a human “correctionist” to make sure it was done right. The final product then needs to be “signed off” by the doctor.
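To make that workflow concrete, here is a minimal Python sketch of how a hypothetical EMR might track a dictated hot spot as it moves from a speech-recognition draft, through human correction, to physician sign-off. The class names, statuses, and sample text are all invented for illustration, not taken from any real system.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class HotSpotStatus(Enum):
    """Lifecycle of a dictated 'hot spot' section of a note (illustrative only)."""
    DRAFT_FROM_SPEECH = auto()      # raw speech-recognition output
    SENT_TO_CORRECTIONIST = auto()  # routed to a human for review
    CORRECTED = auto()
    SIGNED_OFF = auto()


@dataclass
class HotSpot:
    section: str                    # e.g. "HPI"
    text: str
    status: HotSpotStatus = HotSpotStatus.DRAFT_FROM_SPEECH
    history: list = field(default_factory=list)

    def send_for_correction(self):
        self.history.append(self.status)
        self.status = HotSpotStatus.SENT_TO_CORRECTIONIST

    def apply_correction(self, corrected_text: str):
        self.text = corrected_text
        self.history.append(self.status)
        self.status = HotSpotStatus.CORRECTED

    def sign_off(self, physician: str) -> str:
        # Nothing becomes part of the final note until the doctor signs off.
        self.history.append(self.status)
        self.status = HotSpotStatus.SIGNED_OFF
        return f"{self.section} signed off by {physician}: {self.text}"


if __name__ == "__main__":
    hpi = HotSpot("HPI", "pt c/o chest pane for 2 days worse with exertion")
    hpi.send_for_correction()
    hpi.apply_correction("Patient complains of chest pain for 2 days, worse with exertion.")
    print(hpi.sign_off("Dr. Smith"))
```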

However, the more interesting issue is what the future might hold as these systems improve. I predict that within 10-20 years, and maybe sooner, a computer with speech recognition could become an interactive part of the visit experience, and in fact serve as an “assistant” to the physician. Imagine a situation where the doctor could “tell” a computer that he wants to order a chemistry panel and start lisinopril on a patient newly diagnosed with hypertension. The system would warn if there were any drug interactions, and could then input the orders into the correct place, send the prescription to the pharmacy and even offer to print up extra information about the drug and hypertension... all with no typing by the doctor. Even further down the road, perhaps the computer could listen to the doctor and patient talking about the history and create the note based on that input. The future of speech recognition paired with artificial intelligence may indeed be the holy grail for user interfaces.
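Just to illustrate what that kind of “assistant” might look like under the hood, here is a toy Python sketch that picks known orders out of a spoken command and checks a new prescription against a tiny, made-up interaction table before queuing it. Everything here (the command phrasing, the orderable list, the interaction data) is hypothetical; a real system would sit on top of a proper speech-recognition engine, a clinical vocabulary, and a real drug-interaction database.

```python
# Toy "voice assistant" sketch: orderables and interactions are invented for illustration.

# Hypothetical interaction table; a real system would query a drug-interaction database.
INTERACTIONS = {
    frozenset({"lisinopril", "spironolactone"}): "risk of hyperkalemia",
}

# Things the assistant knows how to order, and what kind of order each one is.
ORDERABLES = {"chemistry panel": "lab", "lisinopril": "prescription"}


def parse_command(utterance: str) -> list[dict]:
    """Very naive 'understanding': pick out known orderables mentioned in the utterance."""
    text = utterance.lower()
    return [{"name": name, "type": kind} for name, kind in ORDERABLES.items() if name in text]


def check_interactions(new_drug: str, current_meds: list[str]) -> list[str]:
    """Return warnings for any known interaction between the new drug and current meds."""
    warnings = []
    for med in current_meds:
        note = INTERACTIONS.get(frozenset({new_drug, med}))
        if note:
            warnings.append(f"{new_drug} + {med}: {note}")
    return warnings


if __name__ == "__main__":
    utterance = "Order a chemistry panel and start lisinopril ten milligrams daily"
    current_meds = ["spironolactone"]

    for order in parse_command(utterance):
        if order["type"] == "prescription":
            for warning in check_interactions(order["name"], current_meds):
                print("WARNING:", warning)          # surfaced to the doctor before signing
            print("Queued e-prescription:", order["name"])
        else:
            print("Queued lab order:", order["name"])
```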

Saturday, October 03, 2009

The Mayo Innovation Conference

Mayo had a recent conference on innovation. It was called "Transform: A Collaborative Symposium on Innovations in Health Care Experience and Delivery".
Their web site (http://centerforinnovation.mayo.edu/transform) actually has videos from the whole conference... I wish I had been there, or at least had a day to sit and watch them all (although nothing beats being there in person!). I'm definitely planning to go next year.