Sunday, October 11, 2009

Speech Recognition and EMRs (and the holy grail of user interfaces)

I was asked recently about Speech Recognition and EMRs, since the technology has improved in the past few years. Here are my thoughts:

Assuming it works well, the important question then becomes “How are you using it?” We are now seeing two main areas where speech recognition can be used in an EMR, and we can make some interesting predictions about the future.

The first option is to simply dictate a note after the visit or procedure. This saves on transcription costs, but one would lose out on (1) the value an EMR can bring with respect to decision support at the time of care and (2) the efficiency of copy/paste when documenting chronic care over many visits. Therefore, this option may be appropriate for things like:
- Documenting procedures (e.g., colonoscopy)
- Specialists or ER doctors who may see a patient only once
- Creating a letter to send to a colleague
- EMR systems that really are just note repositories (i.e., ones that do not have electronic prescribing or other ordering, so decision support is not easily integrated)

A second and growing option is to integrate “hot spot” dictation into the EMR workflow by using it just for the highly complex parts of the note, such as the details of a patient’s “History of the Present Illness” (HPI). More and more EMRs allow for these “hot spots”, which can be filled in either during the visit or afterwards. Some rely on speech recognition, others send the audio to a live transcriptionist to type in, and others use a combination: starting with speech recognition and then sending the draft to a human “correctionist” to make sure it was done right. The final product then needs to be “signed off” by the doctor.
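
To make that workflow concrete, here is a minimal sketch in Python of how a single hot-spot field might move from dictation through correction to physician sign-off. Every class and function name here is invented for illustration; no particular EMR, speech engine, or transcription service is assumed.

    # Hypothetical sketch of a "hot spot" dictation pipeline; all names are invented.
    from dataclasses import dataclass

    @dataclass
    class HotSpot:
        section: str          # e.g. "HPI"
        audio_file: str       # audio dictated during or after the visit
        draft_text: str = ""  # raw speech-recognition output
        final_text: str = ""  # text after human correction
        signed_off: bool = False

    def recognize_speech(audio_file):
        """Stand-in for a speech recognition engine (a Dragon-style SDK, for example)."""
        return "[draft transcription of %s]" % audio_file

    def human_correction(draft):
        """Stand-in for the 'correctionist' step: a person reviews and fixes the draft."""
        return draft  # assume no edits were needed in this toy example

    def physician_sign_off(spot):
        # Nothing becomes part of the chart until the doctor reviews and signs it.
        spot.signed_off = True
        return spot

    hpi = HotSpot(section="HPI", audio_file="visit_1234_hpi.wav")
    hpi.draft_text = recognize_speech(hpi.audio_file)
    hpi.final_text = human_correction(hpi.draft_text)
    hpi = physician_sign_off(hpi)
    print(hpi.section, hpi.final_text, "signed:", hpi.signed_off)

The point of the structure is simply that the dictated text passes through one or more review steps, and the doctor remains the last gate before anything lands in the chart.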

However, the more interesting issue is what the future might hold as these systems improve. I predict that within 10-20 years, and maybe sooner, a computer with speech recognition could become an interactive part of the visit experience, and in fact serve as an “assistant” to the physician. Imagine a situation where the doctor could “tell” the computer that he wants to order a chemistry panel and start lisinopril on a patient newly diagnosed with hypertension. The system would warn if there were any drug interactions, and could then enter the orders in the correct place, send the prescription to the pharmacy, and even offer to print extra information about the drug and hypertension... all with no typing by the doctor. Even further down the road, perhaps the computer could listen to the doctor and patient talking through the history and create the note based on that conversation. The future of speech recognition paired with artificial intelligence may indeed be the holy grail of user interfaces.
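
Purely as a thought experiment, here is a rough Python sketch of the steps such an assistant would have to string together: turn the spoken request into a structured order, check it against the patient’s current medications, place the orders, and offer patient education. Everything here, from the drug pairing to the function names, is invented for illustration; real interaction checking and order entry are of course far more involved.

    # Speculative sketch of a voice-driven ordering assistant; every name and drug
    # pairing below is illustrative only.
    KNOWN_INTERACTIONS = {
        ("lisinopril", "spironolactone"): "risk of hyperkalemia",
    }

    def check_interactions(new_drug, current_meds):
        warnings = []
        for med in current_meds:
            key = tuple(sorted((new_drug, med)))
            if key in KNOWN_INTERACTIONS:
                warnings.append("%s + %s: %s" % (new_drug, med, KNOWN_INTERACTIONS[key]))
        return warnings

    def handle_spoken_order(command, current_meds):
        # 'command' stands in for the structured output of a speech/NLP front end,
        # e.g. {"labs": ["chemistry panel"], "prescribe": "lisinopril"}
        for warning in check_interactions(command["prescribe"], current_meds):
            print("WARNING:", warning)
        for lab in command["labs"]:
            print("Order placed:", lab)
        print("Prescription sent to pharmacy:", command["prescribe"])
        print("Offer to print patient handout on", command["prescribe"], "and hypertension")

    handle_spoken_order(
        {"labs": ["chemistry panel"], "prescribe": "lisinopril"},
        current_meds=["spironolactone"],
    )

The hard part, of course, is not stringing these steps together but getting the speech and language understanding reliable enough that the doctor trusts the structured order it produces.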

1 comment:

  1. Hot spot sounds like a Cerner PowerNote reference. We're using the Dragon voice recognition system, integrating it into a templated PowerNote. Still working through the bugs, but overall we see great potential in both the template adjunct and free dictation.
    P.S. This post was done with Dragon.
