Last week, I was pleased to attend the 83rd AHIMA conference in beautiful Salt Lake City. As usual, AHIMA was a great show and Salt Lake City a fantastic host city!  This was the first AHIMA for SpeechMotion, although the 6th for me personally.

The buzz this year was natural language processing (NLP) and the role it will play in transforming healthcare documentation.  From computer-assisted coding (CAC) to data extraction and data mining, NLP has been used for a variety of purposes in healthcare for years.  So why the buzz now?

Major changes in healthcare documentation, like ICD-10 and Meaningful Use, have created momentum in the development of robust NLP technologies.  NLP promises many new advancements, and we'll cover more of those in future posts.  One way NLP can radically change healthcare documentation is its potential for eliminating (or at least decreasing) the amount of pointing and clicking physicians do today in an EMR interface.

When I speak with physicians, and in much of what I've read on the subject, I hear time and again how much they hate to point and click in an EMR interface.  It's intrusive to the patient/doctor relationship, because the doctor is staring at a computer screen rather than talking to the patient.  Pointing and clicking is also a cumbersome and restrictive way of encoding data.

That's the reason for all the pointing and clicking: it tags data elements in a way that lets the EMR system report on the data.  Here is a simplified example.  When an EMR user types in a field box, selects an option from a drop-down menu, or chooses a radio button, he is encoding data.  For example, by typing the word penicillin into a field box marked "allergies," the physician has just tagged <allergies=penicillin>.  This process makes the data readable by computer software.

If data is going to be encoded via an interface, then the data elements to be encoded have to be predicted at development time.  When developing an EMR, you have to decide which data elements you want to track and encode so you can build the interface around them.  That means if you later decide you want to encode a new data element, you have to re-develop your interface, which is time consuming and expensive.  Plus, only data entered after the new interface is released will be encoded properly.  Past data is effectively lost.

NLP, on the other hand, works outside of an interface.  Instead, it looks at free-form text and extracts data elements automatically.  For example, let's say the following sentence appeared in a patient record:

The patient states he is allergic to penicillin and has never been hospitalized.

The NLP software will extract 'allergic to penicillin' and encode it as a positive allergy, <allergies=penicillin>.  To the computer, it's the same as if a human being had typed it into a field box.  So in a case like this, the physician would not have needed to interact with the EMR at all.  Instead, he could have picked up his iPhone and dictated comfortably, quickly, and efficiently, with no intrusive interface getting in the way of his relationship with the patient.  Plus, if we later wanted to track a patient's history of hospitalizations, we could modify the NLP to also look for phrases like 'never been hospitalized,' which would be encoded as <hospitalizations=none>.  If we wanted to track that information with an EMR interface, we would have to add a new field box or drop-down menu, further cluttering the display of the EMR.
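A toy version of that extraction step can be sketched with simple pattern matching. Real NLP engines use far more sophisticated linguistic analysis (handling negation, synonyms, and context), so the patterns and tag format below are purely illustrative assumptions:

```python
import re

# A deliberately simple sketch of NLP-style extraction: scan free text
# for known phrases and emit the same tags an EMR form would produce.
# Real clinical NLP is much more sophisticated; this is illustrative only.

def extract_elements(text: str) -> list[str]:
    tags = []
    # "allergic to penicillin" -> <allergies=penicillin>
    match = re.search(r"allergic to (\w+)", text, re.IGNORECASE)
    if match:
        tags.append(f"<allergies={match.group(1)}>")
    # "never been hospitalized" -> <hospitalizations=none>
    if re.search(r"never been hospitalized", text, re.IGNORECASE):
        tags.append("<hospitalizations=none>")
    return tags

note = ("The patient states he is allergic to penicillin "
        "and has never been hospitalized.")
print(extract_elements(note))
# -> ['<allergies=penicillin>', '<hospitalizations=none>']
```

Notice that adding the hospitalization element meant adding one pattern to the extractor, not redesigning a form, and the new pattern can be run over past records as well.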

Many years ago, I started graduate research in NLP and conversational speech recognition.  I left that program after being recruited by a start-up company and have been more or less away from NLP development ever since.  Things have certainly progressed since then, and I can't say enough about how excited I am by the potential for NLP to help physicians comfortably complete patient stories while still generating data that can be reliably and routinely extracted to improve patient care.