© unsplash/@nci


Information extraction for electronic health records

Use Case
Healthcare & Pharma

Context

The digitization of the healthcare sector is a crucial task for improving time-sensitive medical decision processes and relieving medical personnel of documentation duties. However, many patient records are still handwritten and non-standardized, which motivates a machine learning model capable of automatically producing digitized and standardized patient records.

Challenges

For handwritten records, a major challenge lies in digitizing the documents and choosing the right OCR tool to obtain accurate digital representations of the health records. Handwriting remains a major challenge for OCR due to the diversity of handwriting styles, so low-quality OCR output would prevent all further processing.

If the record is digital, doctors enter a lot of information in unstructured formats such as free text. Moreover, abbreviations are very common but are not used consistently across physicians.

To analyze and visualize the information about cases, these free text entries need to be converted to structured data.

Potential solution approaches

For text extraction from handwritten documents, the Google Cloud Vision OCR tool is currently the only viable option. If the OCR output is of sufficient quality, the recognized text can be analyzed like the digitized free text entries.

To analyze the free text entries from both sources, the text input needs to be contextualized to the case and matched against common terminology and abbreviations ("dictionaries") to produce structured, machine-readable data. For this, a labeled dataset needs to be created based on expert knowledge from physicians.
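The dictionary matching step can be sketched as a simple lookup that expands abbreviations into canonical terms. The abbreviation dictionary below is purely illustrative, not a real clinical vocabulary; in practice it would be built from the physicians' expert knowledge mentioned above.

```python
import re

# Illustrative abbreviation dictionary (hypothetical entries, not a
# validated clinical terminology).
ABBREVIATIONS = {
    "htn": "hypertension",
    "dm2": "type 2 diabetes mellitus",
    "sob": "shortness of breath",
    "bp": "blood pressure",
}

def normalize_entry(text: str) -> str:
    """Lowercase a free-text entry and expand known abbreviations."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return " ".join(ABBREVIATIONS.get(tok, tok) for tok in tokens)

print(normalize_entry("Pt with HTN and SOB"))
# prints "pt with hypertension and shortness of breath"
```

A production pipeline would additionally need context to disambiguate abbreviations that expand differently depending on the specialty, which is where the labeled dataset comes in.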

The machine learning model extracts the relevant information from the text data using Natural Language Processing (NLP) techniques such as word embeddings, naive Bayes classifiers and TF-IDF weighting. Trained on the labeled text data, these methods allow the model to capture the relations between words and sentences and their underlying meaning.
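As a minimal sketch of one of these techniques, TF-IDF weights can be computed over a toy corpus of record snippets with the standard library alone. The corpus is invented for illustration; a real pipeline would use a library such as scikit-learn and feed the weights into a classifier.

```python
import math
from collections import Counter

# Toy corpus of free-text record snippets (illustrative only).
corpus = [
    "patient reports chest pain",
    "chest x ray ordered",
    "patient discharged home",
]

docs = [doc.split() for doc in corpus]
n_docs = len(docs)

def idf(term: str) -> float:
    """Inverse document frequency: terms in fewer documents weigh more."""
    df = sum(1 for doc in docs if term in doc)
    return math.log(n_docs / df)

def tf_idf(doc_index: int) -> dict:
    """Term frequency times IDF for one document."""
    counts = Counter(docs[doc_index])
    total = sum(counts.values())
    return {term: (count / total) * idf(term) for term, count in counts.items()}

weights = tf_idf(0)
# "chest" occurs in two documents and "pain" in only one,
# so "pain" receives a higher weight than "chest".
```

This down-weighting of ubiquitous terms is what lets a naive Bayes classifier focus on the words that actually distinguish one case category from another.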

Once the free text input data is structured, it can be visualized, explored with exploratory data analysis, or used to map relationships, e.g. with graph neural networks.
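To illustrate the relationship-mapping step, structured records can be turned into a graph whose nodes are patients and diagnoses. The field names and values are hypothetical; in practice a graph library or a graph neural network framework would operate on this structure.

```python
from collections import defaultdict

# Hypothetical structured records produced by the extraction pipeline.
records = [
    {"patient": "P1", "diagnosis": "hypertension"},
    {"patient": "P1", "diagnosis": "type 2 diabetes mellitus"},
    {"patient": "P2", "diagnosis": "hypertension"},
]

# Bipartite patient–diagnosis graph as a plain adjacency mapping.
graph = defaultdict(set)
for rec in records:
    graph[rec["patient"]].add(rec["diagnosis"])
    graph[rec["diagnosis"]].add(rec["patient"])

# Patients sharing a diagnosis are now two hops apart:
shared = graph["hypertension"]
# shared contains both "P1" and "P2"
```

Such a graph is the natural input representation for graph neural networks, which can then learn from the connectivity between cases rather than from each record in isolation.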

