Welcome to EHRmonize, a Python package to abstract medical concepts using large language models.
Matos, J., Gallifant, J., Pei, J., & Wong, A. I. (2024). EHRmonize: A framework for medical concept abstraction from electronic health records using large language models. arXiv. https://arxiv.org/abs/2407.00242
```bibtex
@article{matos2024ehrmonize,
  title={EHRmonize: A Framework for Medical Concept Abstraction from Electronic Health Records using Large Language Models},
  author={João Matos and Jack Gallifant and Jian Pei and A. Ian Wong},
  year={2024},
  journal={arXiv preprint arXiv:2407.00242v1 [cs.CL]},
  url={https://arxiv.org/abs/2407.00242},
}
```
For documentation, please see: https://ehrmonize.readthedocs.io/.
A demo can be found in this Google Colaboratory notebook.
Processing and harmonizing the vast amounts of data captured in complex electronic health records (EHRs) is a challenging and costly task that requires clinical expertise. Large language models (LLMs) have shown promise in various healthcare-related tasks. We herein introduce EHRmonize, a framework designed to abstract EHR medical concepts using LLMs.
EHRmonize is designed with two main components: a corpus generation step and an LLM inference pipeline. The first step entails querying EHR databases to extract the text/concepts across various data domains that need categorization. The second step employs LLM few-shot prompting across different tasks. The objective is to leverage the vast medical text exposure of LLMs to convert raw input medication data into useful, predefined classes.
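To make the few-shot prompting step concrete, here is a minimal, self-contained sketch of how such a prompt could be assembled for a generic-drug-name task. The example entries, labels, and instruction wording are illustrative assumptions, not EHRmonize's actual prompts.

```python
# Illustrative few-shot prompt construction for abstracting a raw EHR
# medication string to its generic drug name. The in-context examples
# below are hypothetical, not taken from EHRmonize's prompt templates.

FEW_SHOT_EXAMPLES = [
    ("aspirin 81 mg tablet", "aspirin"),
    ("metoprolol tartrate 25 mg po bid", "metoprolol"),
]

def build_prompt(raw_entry: str) -> str:
    """Assemble a few-shot prompt: an instruction, labeled examples,
    and the new entry awaiting a completion."""
    lines = ["Return only the generic drug name for each entry."]
    for raw, generic in FEW_SHOT_EXAMPLES:
        lines.append(f"Entry: {raw}\nGeneric: {generic}")
    lines.append(f"Entry: {raw_entry}\nGeneric:")
    return "\n\n".join(lines)

print(build_prompt("warfarin sodium 5 mg oral tablet"))
```

The resulting string would then be sent to whichever LLM backend is configured; the model's completion is expected to be just the generic name.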
Our curated and labeled dataset is accessible on HuggingFace.
| Type | Task |
|---|---|
| Free-text | `task_generic_drug` |
| | `task_generic_route` |
| Multiclass | `task_multiclass_drug` |
| Binary | `task_binary_drug` |
| Custom | `task_custom` |
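Tasks with predefined classes (the multiclass and binary rows above) require mapping a model's free-text completion onto one of those classes. A minimal sketch of that normalization, with illustrative class names that are assumptions rather than EHRmonize's actual labels:

```python
# Hedged sketch: coercing a raw LLM completion into one of a task's
# predefined classes. The class set is hypothetical, for illustration.

CLASSES = {"antibiotic", "anticoagulant", "vasopressor", "other"}

def normalize_answer(raw_answer: str) -> str:
    """Strip whitespace, case, and trailing punctuation from a model
    completion, falling back to 'other' when nothing matches."""
    answer = raw_answer.strip().lower().rstrip(".")
    return answer if answer in CLASSES else "other"

print(normalize_answer("  Vasopressor. "))   # -> vasopressor
print(normalize_answer("unknown compound"))  # -> other
```

A fallback class keeps the pipeline's output closed over the predefined label set even when the model answers off-script.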
| API | `model_id` |
|---|---|
| OpenAI | `gpt-4` |
| | `gpt-4o` |
| | `gpt-3.5-turbo` (discouraged!) |
| AWS Bedrock | `anthropic.claude-3-5-sonnet-20240620-v1:0` |
| | `meta.llama3-70b-instruct-v1:0` |
| | `mistral.mixtral-8x7b-instruct-v0:1` |