11/18/2025 | 9:45 AM – 11:00 AM | Room 7
S70: Beyond the Buzzword: Practical Applications and Critical Considerations of LLMs in Health
Presentation Type: Oral Presentations
Leveraging Large Language Models for Cancer Vaccine Adjuvant Name Extraction from Biomedical Literature
Presentation Time: 09:45 AM - 09:57 AM
Abstract Keywords: Large Language Models (LLMs), Natural Language Processing, Information Extraction, Artificial Intelligence, Controlled Terminologies, Ontologies, and Vocabularies, Information Retrieval
Primary Track: Foundations
Programmatic Theme: Translational Bioinformatics
This study explores the automated recognition of cancer vaccine adjuvant names using Large Language Models (LLMs), specifically Generative Pretrained Transformers (GPT) and Large Language Model Meta AI (Llama). The models were tested in zero- and few-shot learning paradigms using the AdjuvareDB and Vaccine Adjuvant Compendium (VAC) datasets. Prompts were designed to extract adjuvant names and assess the impact of contextual details. Notably, Llama-3.2 3B achieved a recall of up to 68.7% (72.5% with manual validation) on the VAC dataset with 4 shots, although its precision and F1-score were lower. In contrast, GPT-4o, when provided with additional contextual details, achieved a precision of 65.9%, recall of 79.7%, and F1-score of 69.8% on the AdjuvareDB dataset. Moreover, both LLMs outperformed BioBERT, a model widely used for biomedical text mining, highlighting the potential of general-purpose LLMs for automatic vaccine adjuvant name extraction and contributing to advancements in vaccine research.
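A minimal sketch of the few-shot extraction setup the abstract describes: assemble a prompt with k labeled examples, then parse the model's comma-separated reply into adjuvant names. The example texts, adjuvant names, and output format below are illustrative assumptions, not the study's actual prompts or data.

```python
# Invented few-shot examples for illustration only.
FEW_SHOT_EXAMPLES = [
    ("The vaccine was formulated with aluminum hydroxide as adjuvant.",
     "aluminum hydroxide"),
    ("Mice received the antigen together with CpG ODN 1826 and MPLA.",
     "CpG ODN 1826, MPLA"),
]

def build_prompt(text: str, shots: int = 2) -> str:
    """Assemble a zero-/few-shot prompt for adjuvant name extraction."""
    lines = ["Extract all vaccine adjuvant names from the text. "
             "Answer with a comma-separated list, or 'none'."]
    for example_text, names in FEW_SHOT_EXAMPLES[:shots]:
        lines.append(f"Text: {example_text}\nAdjuvants: {names}")
    lines.append(f"Text: {text}\nAdjuvants:")
    return "\n\n".join(lines)

def parse_answer(answer: str) -> list[str]:
    """Split a model's comma-separated reply into normalized names."""
    answer = answer.strip()
    if answer.lower() == "none":
        return []
    return [name.strip() for name in answer.split(",") if name.strip()]
```

Setting `shots=0` gives the zero-shot variant; the abstract's "contextual details" would be extra sentences prepended to the instruction line.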
Speaker:
Hasin Rehana, MS - University of North Dakota
Authors:
Hasin Rehana, MS - University of North Dakota;
Jie Zheng, Ph.D. - University of Michigan;
Feng-Yu Yeh, Masters Degree - University of Michigan Medical School - He Lab;
Benu Bansal, Biomedical Engineering - University of North Dakota;
Nur Bengisu Çam, Master of Science - Bogazici University;
Christianah Jemiyo, Masters - University of North Dakota;
Brett McGregor, Ph.D. - University of North Dakota;
Arzucan Özgür, Ph.D. - Bogazici University;
Yongqun He, PhD - University of Michigan;
Junguk Hur, Ph.D. - University of North Dakota
‘Just Give me a Number’: Comparing Human and Large Language Model Interpretations of Verbal Probability Terms in Healthcare Communication
Presentation Time: 09:57 AM - 10:09 AM
Abstract Keywords: Patient Engagement and Preferences, Delivering Health Information and Knowledge to the Public, Large Language Models (LLMs)
Primary Track: Applications
Prior work found that patients have varied numerical interpretations of verbal probability terms (e.g., ‘rare’). Patients may turn to large language models (LLMs) for clarification. We evaluated commercial LLMs for their interpretation of these terms. LLMs often abstained from interpreting verbal probability terms, but when forced to answer through prompting, LLMs aligned closely with patient interpretations. As a result, incongruence between how patients and clinicians interpret verbal probabilities may be exacerbated by LLMs, potentially worsening health communication.
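The comparison the abstract describes can be sketched as mapping each verbal probability term to a numeric interpretation and measuring how far a model's numbers sit from patients' medians. All values below are invented placeholders, not the study's data.

```python
# Placeholder numeric interpretations (probabilities), invented for illustration.
PATIENT_MEDIANS = {"rare": 0.05, "unlikely": 0.20, "likely": 0.70}
MODEL_ESTIMATES = {"rare": 0.03, "unlikely": 0.25, "likely": 0.75}

def mean_absolute_gap(a: dict[str, float], b: dict[str, float]) -> float:
    """Average absolute difference over the terms both sources rate."""
    shared = a.keys() & b.keys()
    return sum(abs(a[t] - b[t]) for t in shared) / len(shared)
```

A small gap would correspond to the abstract's finding that forced LLM answers align closely with patient interpretations.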
Speaker:
Nicholas Jackson, PhD Student Biomedical Informatics - Vanderbilt University
Authors:
Nicholas Jackson, PhD Student Biomedical Informatics - Vanderbilt University;
Katerina Andreadis, MS - NYU Grossman School of Medicine;
Jessica Ancker, MPH, PhD, FACMI - Vanderbilt University Medical Center
Integrating Rule-Based NLP and Large Language Models for Statin Information Extraction from Clinical Notes
Presentation Time: 10:09 AM - 10:21 AM
Abstract Keywords: Large Language Models (LLMs), Artificial Intelligence, Data Mining
Primary Track: Applications
This study developed and evaluated a hybrid AI framework for extracting statin-related information from clinical notes at Vanderbilt University Medical Center. The framework combined a rule-based NLP filter, an LLM-based refinement filter, and a multi-category classifier to efficiently exclude irrelevant notes and accurately classify intolerance, contraindications, and patient refusal. Results support its scalability and potential to enhance clinical decision support and improve patient outcomes.
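The rule-based first stage of the hybrid framework can be sketched as a keyword filter that passes only notes mentioning a statin, so downstream LLM calls are spent on relevant text. The statin lexicon and note snippets below are illustrative, not the study's actual rules.

```python
import re

# Illustrative statin keyword pattern; a production lexicon would be broader.
STATIN_PATTERN = re.compile(
    r"\b(atorvastatin|simvastatin|rosuvastatin|pravastatin|lovastatin"
    r"|pitavastatin|fluvastatin|statins?)\b",
    re.IGNORECASE,
)

def mentions_statin(note: str) -> bool:
    """Return True if the note contains any statin-related keyword."""
    return bool(STATIN_PATTERN.search(note))

def filter_notes(notes: list[str]) -> list[str]:
    """Keep only notes that pass the rule-based filter."""
    return [n for n in notes if mentions_statin(n)]
```

In the framework described above, notes passing this cheap filter would then go to the LLM-based refinement filter and, finally, the multi-category classifier (intolerance, contraindication, patient refusal).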
Speaker:
Siru Liu, PhD - Vanderbilt University Medical Center
Authors:
Allison McCoy, PhD, ACHIP, FACMI, FAMIA - Vanderbilt University Medical Center;
Qingyu Chen, PhD - Yale University;
Siru Liu, PhD - Vanderbilt University Medical Center
Large Language Models for Extracting and Inferring Functional Performance in Cancer Care
Presentation Time: 10:21 AM - 10:33 AM
Abstract Keywords: Nursing Informatics, Chronic Care Management, Natural Language Processing, Real-World Evidence Generation, Clinical Decision Support, Healthcare Quality
Working Group: Nursing Informatics Working Group
Primary Track: Applications
Programmatic Theme: Clinical Research Informatics
FuncStatAI leverages large language models to extract and infer functional performance status from clinical narratives in cancer care. Using structured prompting, our system achieved 73% accuracy in identifying performance scales and 70–78% in value accuracy. F1-scores across severity categories ranged from 74% to 91%. The system provides explainable outputs with confidence scores and supporting evidence. While performance on explicitly stated scores was strong, inferring status from implicit descriptions remains challenging but shows promising results in severity classification (68%).
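The explainable output the abstract describes might be handled as below: the model is asked to return structured JSON with the performance scale, value, confidence score, and supporting evidence, and low-confidence answers are flagged for human review. The field names and threshold are assumptions, not FuncStatAI's actual schema.

```python
import json

def parse_functional_status(raw: str, threshold: float = 0.7) -> dict:
    """Parse a structured JSON reply; flag it for review if confidence is low."""
    record = json.loads(raw)
    record["needs_review"] = record.get("confidence", 0.0) < threshold
    return record

# Hypothetical model reply with confidence score and supporting evidence.
reply = '''{"scale": "ECOG", "value": 2,
            "confidence": 0.85,
            "evidence": "ambulatory, up >50% of waking hours"}'''
```

Keeping the evidence span alongside the score is what makes the output auditable: a reviewer can check the quoted narrative text against the assigned value.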
Speaker:
Alaa Albashayreh, PhD, MSHI, RN - University of Iowa
Authors:
Hafiza Akter Munira, BS - University of Iowa;
Joonwoo Park, MS - University of Iowa;
Avinash Reddy Mudireddy, MS - University of Iowa;
Alaa Albashayreh, PhD, MSHI, RN - University of Iowa
Unregulated Large Language Models Produce Medical Device-Like Output
Presentation Time: 10:33 AM - 10:45 AM
Abstract Keywords: Artificial Intelligence, Clinical Decision Support, Large Language Models (LLMs), Policy
Primary Track: Policy
Programmatic Theme: Clinical Informatics
Large language models (LLMs) show considerable promise for clinical decision support (CDS) but none is currently authorized by the Food and Drug Administration (FDA) as a CDS device. We evaluated whether two popular LLMs could be induced to provide device-like CDS output. We found that LLM output readily produced device-like decision support across a range of scenarios, suggesting a need for regulation if LLMs are formally deployed for clinical use in the future.
Speaker:
Gary Weissman, MD, MSHP - University of Pennsylvania
Authors:
Gary Weissman, MD, MSHP - University of Pennsylvania;
Toni Mankowitz, BS - USC Schaeffer Center;
Genevieve P. Kanter, PhD - Department of Health Policy and Management, Sol Price School of Public Policy, University of Southern California, Los Angeles, California, USA
Extracting Social Determinants of Health from Clinical Notes Using Open-source Large Language Models
Presentation Time: 10:45 AM - 10:57 AM
Abstract Keywords: Large Language Models (LLMs), Surveys and Needs Analysis, Artificial Intelligence, Evaluation
Primary Track: Applications
Programmatic Theme: Clinical Research Informatics
Social determinants of health (SDOH) generally account for 80% of modifiable health factors. Current practice reports most SDOH in free-text clinical notes. We performed supervised finetuning of two open-source large language models (LLaMA-3-70B and Mixtral-8x7B) to extract SDOH from the 2022 n2c2 shared task clinical notes. We evaluated the LLMs on a hold-out dataset (n=188 notes). The LLaMA model had the best F1 score of 0.927 (95% C.I., 0.926-0.928) compared to the Mixtral model, P<0.05.
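A confidence interval around an F1 score, like the one reported above, is commonly obtained by bootstrap resampling. The sketch below resamples (gold, predicted) label pairs with replacement and recomputes F1 each time; the data and the percentile-bootstrap choice are illustrative assumptions, not the study's method.

```python
import random

def f1(tp: int, fp: int, fn: int) -> float:
    """F1 from true-positive, false-positive, and false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def bootstrap_f1_ci(pairs, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for F1 over (gold, predicted) binary pairs."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        tp = sum(1 for g, p in sample if g and p)
        fp = sum(1 for g, p in sample if not g and p)
        fn = sum(1 for g, p in sample if g and not p)
        scores.append(f1(tp, fp, fn))
    scores.sort()
    lo = scores[int(alpha / 2 * n_boot)]
    hi = scores[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The very narrow interval reported (0.926-0.928) is consistent with resampling over many extracted entity instances rather than over the 188 notes.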
Speaker:
Fuchiang (Rich) Tsui, PhD, FAMIA, IEEE Senior Member - Children's Hospital of Philadelphia and University of Pennsylvania
Authors:
Sifei Han, PhD - Children's Hospital of Philadelphia;
Charles Jin - University of Pennsylvania;
Leah Ning - University of Pennsylvania;
Fuchiang (Rich) Tsui, PhD, FAMIA, IEEE Senior Member - Children's Hospital of Philadelphia and University of Pennsylvania