Enhancing Medical Guideline Retrieval with Language Model-Augmented Retrieval (LMAR): A Comparative Study with Traditional RAG
Poster Number: P182
Presentation Date: Monday (11/11)
Presentation Time: 05:00 PM - 06:30 PM
Room: Grand Ballroom (Posters)
Abstract Keywords: Clinical Guidelines, Clinical Decision Support, Large Language Models (LLMs)
Primary Track: Applications
Programmatic Theme: Clinical Informatics
The development of retrieval-augmented generation (RAG) methods marked a significant advancement in natural language processing, particularly for clinical decision support systems. However, traditional RAG methods often struggle to retrieve contextually relevant information from medical guidelines. Our study introduces Language Model-Augmented Retrieval (LMAR), an approach that uses large language models to improve both retrieval and generation, with the goal of providing accurate and actionable guidance.
We conducted a comparative study using a dataset of 42 clinical guidelines. The study compared traditional RAG, which uses cosine similarity for retrieval, with LMAR, which employs a prompted large language model to rank the relevance of each guideline based on metadata and the query. The effectiveness of each method was evaluated based on its ability to accurately retrieve dosing information for pharmacological treatments in response to preselected clinical queries.
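For illustration only (this is a minimal sketch, not the study's implementation), the Python code below contrasts the two retrieval strategies described above: cosine-similarity ranking over precomputed embeddings versus prompting a large language model to score each guideline's relevance from its metadata and the query. The `llm` callable, the metadata format, and the 0-10 scoring prompt are assumptions; any embedding model and any chat-completion API could be substituted.

# Hypothetical sketch of the two retrieval strategies; not the study's code.
from typing import Callable, Sequence
import numpy as np

def cosine_retrieve(query_vec: np.ndarray, guideline_vecs: np.ndarray, top_k: int = 3):
    """Traditional RAG: rank guideline chunks by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    G = guideline_vecs / np.linalg.norm(guideline_vecs, axis=1, keepdims=True)
    scores = G @ q                      # cosine similarity of each chunk to the query
    return np.argsort(scores)[::-1][:top_k]

def lmar_retrieve(query: str, guideline_metadata: Sequence[str],
                  llm: Callable[[str], str], top_k: int = 3):
    """LMAR-style retrieval: prompt an LLM to score each guideline's relevance
    from its metadata (e.g., title, section headers) and the clinical query."""
    scored = []
    for i, meta in enumerate(guideline_metadata):
        prompt = (
            "On a scale of 0-10, how relevant is this guideline to the query?\n"
            f"Query: {query}\nGuideline metadata: {meta}\n"
            "Answer with a single number."
        )
        try:
            score = float(llm(prompt).strip())
        except ValueError:
            score = 0.0                 # fall back if the model returns non-numeric text
        scored.append((score, i))
    return [i for _, i in sorted(scored, reverse=True)[:top_k]]

In this sketch, LMAR issues one model call per guideline and query, trading extra inference cost for relevance judgments that can take metadata such as section titles and tables into account, whereas cosine retrieval relies solely on embedding geometry.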
Traditional RAG demonstrated limitations, often retrieving non-actionable information. In contrast, LMAR was able to capture medication dosing information in 100% of cases, accurately retrieving information from sections and tables within the guidelines. Our study demonstrates that LMAR significantly outperforms traditional RAG in extracting actionable guidance from medical guidelines.
Speaker(s):
Joongheum Park, MD
Chobanian & Avedisian School of Medicine, Boston University