Times are displayed in (UTC-06:00) Mountain Time (US & Canada).
Tuesday, 5/19/2026 |
3:30 PM – 4:45 PM |
Pikes Peak - 555 Building, 2nd Floor
TRI24: Seeing Is Believing: Imaging & Vision AI (Oral Presentation)
Presentation Type: Oral Presentations
Session Credits: 1.25
Adaptive Integration of Incomplete Multimodal 3D Neuroimaging for Alzheimer’s Prediction and Biomarker Discovery
Presentation Type: Paper - Student
Student Paper Competition Nominee
Presentation Time: 03:30 PM - 03:42 PM
Primary Track: Translational Bioinformatics/Precision Medicine
Alzheimer’s disease (AD) diagnosis requires analysis of diverse data types to capture the heterogeneous factors underlying its development and progression. Magnetic resonance imaging (MRI) and positron emission tomography (PET) noninvasively measure brain structure and neuronal activity, respectively, and can serve as early indicators of AD onset and future progression. We propose V3D-MMoE, an interpretable framework to adaptively integrate incomplete multimodal 3D neuroimaging for AD diagnosis prediction and biomarker discovery. It goes beyond prior approaches by leveraging (1) a sparse mixture-of-experts formulation to account for variation in the importance of different modality combinations, (2) modality alignment to enhance cross-modal learning, and (3) cross-encoders to dynamically handle missing modalities. When applied to MRI and PET scans to predict two-year AD diagnosis, V3D-MMoE outperformed state-of-the-art multimodal 3D neuroimaging methods. Interpretability analyses revealed subject-specific MRI and PET biomarkers consistent with the known biology of AD. Ablation experiments demonstrated the benefit of leveraging multimodal neuroimaging.
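The sparse mixture-of-experts idea the abstract leans on can be sketched generically: a gate scores every expert, only the top-k are evaluated, and their outputs are mixed with renormalized weights. The NumPy sketch below is a minimal, hypothetical illustration (the names `sparse_moe` and `gate_w` and the toy linear experts are ours, not from V3D-MMoE).

```python
import numpy as np

def sparse_moe(x, experts, gate_w, k=2):
    """Route input x through the top-k experts by gate score.

    Generic sparse mixture-of-experts step: compute gate logits,
    keep only the k highest-scoring experts, renormalize their
    weights with a softmax, and mix the selected expert outputs.
    """
    logits = gate_w @ x                       # one score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                              # softmax over selected experts only
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# toy example: three "experts" are independent random linear maps
rng = np.random.default_rng(0)
experts = [lambda x, A=rng.normal(size=(4, 4)): A @ x for _ in range(3)]
gate_w = rng.normal(size=(3, 4))
y = sparse_moe(rng.normal(size=4), experts, gate_w, k=2)
print(y.shape)  # (4,)
```

Because only k experts run per input, compute stays flat as the number of experts (here, modality combinations) grows.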
Speaker(s):
Jenna Ballard, MA
University of Pennsylvania
Author(s):
Jenna Ballard, MA - University of Pennsylvania; Li Shen, Ph.D., FAIMBE, FACMI, FAMIA - University of Pennsylvania; Qi Long, Ph.D. - University of Pennsylvania;
Vision Foundry: A System for Training Foundational Vision AI Models
Presentation Type: Paper - Regular
Presentation Time: 03:42 PM - 03:54 PM
Primary Track: Data Science/Artificial Intelligence
Self-supervised learning (SSL) leverages vast unannotated medical datasets, yet steep technical barriers limit adoption by clinical researchers. We introduce Vision Foundry, a code-free, HIPAA-compliant platform that democratizes pre-training, adaptation, and deployment of foundational vision models. The system integrates the DINO-MX framework, abstracting distributed infrastructure complexities while implementing specialized strategies like Magnification-Aware Distillation (MAD) and Parameter-Efficient Fine-Tuning (PEFT). We validate the platform across domains, including neuropathology segmentation, lung cellularity estimation, and coronary calcium scoring. Our experiments demonstrate that models trained via Vision Foundry significantly outperform generic baselines in segmentation fidelity and regression accuracy, while exhibiting robust zero-shot generalization across imaging protocols. By bridging the gap between advanced representation learning and practical application, Vision Foundry enables domain experts to develop state-of-the-art clinical AI tools with minimal annotation overhead, shifting focus from engineering optimization to clinical discovery.
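DINO-family pre-training, which Vision Foundry builds on via DINO-MX, trains a student network to match a momentum teacher's centered, temperature-sharpened output. The following is a schematic of that self-distillation objective under our own simplifications (single view, logits passed in directly, no network update); it is not the DINO-MX implementation.

```python
import numpy as np

def dino_step(student_out, teacher_out, center, t_s=0.1, t_t=0.04, m=0.9):
    """One DINO-style self-distillation loss step (schematic).

    The teacher's logits are centered and sharpened with a low
    temperature; the student is trained to match them via
    cross-entropy. The running center is an EMA of teacher logits,
    which discourages collapse onto a single output dimension.
    """
    p_t = np.exp((teacher_out - center) / t_t)
    p_t /= p_t.sum()                                   # sharpened teacher distribution
    log_p_s = (student_out / t_s) - np.log(np.exp(student_out / t_s).sum())
    loss = -(p_t * log_p_s).sum()                      # cross-entropy(teacher, student)
    new_center = m * center + (1 - m) * teacher_out    # EMA center update
    return loss, new_center

s = np.array([1.0, 2.0, 0.5])
t = np.array([1.5, 1.0, 0.2])
loss, center = dino_step(s, t, np.zeros(3))
print(loss >= 0.0)  # True: cross-entropy against log-softmax is non-negative
```

No labels appear anywhere in the objective, which is what lets platforms like this exploit large unannotated archives.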
Speaker(s):
Mitchell Klusty, B.S. Computer Science
University of Kentucky
Author(s):
Mahmut Gokmen, M.S. - University of Kentucky; Mitchell Klusty, B.S. Computer Science - University of Kentucky; Evan Damron, B.S. - University of Kentucky; William Logan, B.S. in Computer Engineering - UKY; Aaron Mullen, M.S. - University of Kentucky; Caroline Leach, B.S. in Physics - University of Kentucky; Emily Collier, MSLS - University of Kentucky; Samuel Armstrong, MS - University of Kentucky; Cody Bumgardner, PhD - University of Kentucky;
A Framework for Cross-Domain Generalization in Coronary Artery Calcium Scoring Across Gated and Non-Gated Computed Tomography
Presentation Type: Paper - Regular
Presentation Time: 03:54 PM - 04:06 PM
Primary Track: Data Science/Artificial Intelligence
Coronary artery calcium (CAC) scoring is a key predictor of cardiovascular risk, but it relies on ECG-gated CT scans, restricting its use to specialized cardiac imaging settings. We introduce an automated framework for CAC detection and lesion-specific Agatston scoring that works across gated and non-gated CT scans. At its core is CARD-ViT, a self-supervised Vision Transformer trained exclusively on gated CT data using DINO. Without any non-gated training data, our framework achieves 0.707 accuracy and a Cohen’s kappa of 0.528 on the Stanford non-gated dataset, matching models trained directly on non-gated scans. On gated test sets, the framework achieves 0.910 accuracy with Cohen’s kappa scores of 0.871 and 0.874 across independent datasets, demonstrating reliable risk stratification. Cardiologists confirmed robustness against artifacts and anatomical confounders. Results demonstrate the feasibility of cross-domain CAC scoring from gated to non-gated domains, supporting scalable cardiovascular screening in routine chest imaging without additional scans or annotations.
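The Agatston score itself is a simple rule: each calcified lesion contributes its area (mm²) multiplied by a density weight derived from its peak attenuation (1 for 130–199 HU, 2 for 200–299, 3 for 300–399, 4 for ≥400 HU). A minimal sketch with hypothetical function names, omitting the per-slice bookkeeping a real implementation needs:

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0          # below the calcium detection threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Total score: sum over lesions of area (mm^2) times density weight."""
    return sum(area * agatston_weight(peak) for area, peak in lesions)

# two lesions: 10 mm^2 peaking at 250 HU, 4 mm^2 peaking at 450 HU
print(agatston_score([(10.0, 250), (4.0, 450)]))  # 10*2 + 4*4 = 36.0
```

The learned part of the framework is finding and delineating the lesions; the scoring rule itself is deterministic.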
Speaker(s):
Mahmut Gokmen, M.S.
University of Kentucky
Author(s):
Mahmut Gokmen, M.S. - University of Kentucky; Moneera Haque, MD - University of Kentucky; Steve Leung, MD - University of Kentucky; Caroline Leach, B.S. in Physics - University of Kentucky; Seth Parker, PhD - University of Kentucky; Stephen Hobbs, MD - University of Kentucky; Vincent Sorrell, MD - University of Kentucky; Brent Seales, PhD - University of Kentucky; Cody Bumgardner, PhD - University of Kentucky;
Development and Validation of an LLM-Based Pipeline for Transcranial Doppler Interpretation in Pediatric Sickle Cell Disease
Presentation Type: Podium Abstract
Presentation Time: 04:06 PM - 04:18 PM
Primary Track: Data Science/Artificial Intelligence
Background:
Up to 10% of US children with sickle cell disease (SCD) develop abnormal transcranial Doppler (TCD) velocities, indicating high stroke risk. Understanding how real-world care practices impact TCD trajectories requires large-scale analysis, but TCD results exist as unstructured clinical notes with substantial heterogeneity in format and documentation across institutions and time periods, making traditional rule-based extraction unreliable. We aimed to develop and validate an LLM-based pipeline to automatically extract and classify TCD velocities from unstructured clinical notes.
Methods:
We developed a two-stage LLM-based classification algorithm to extract TCD velocities from 6,802 reports from two PEDSnet institutions (2009–2025). Five open-source LLMs were benchmarked on 200 manually validated reports containing 2,103 unique velocities. Extracted velocities were then classified using STOP trial thresholds. Performance was validated against 4,164 manually classified notes.
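The STOP trial cut-points referenced above map a time-averaged mean maximum velocity (TAMMV) to a risk category: below 170 cm/s is normal, 170–199 cm/s conditional, and 200 cm/s or above abnormal. A one-function sketch of that final classification step (our naming, not the paper's pipeline):

```python
def stop_category(tammv_cm_s):
    """Classify a TCD time-averaged mean maximum velocity (cm/s)
    using the STOP trial cut-points."""
    if tammv_cm_s >= 200:
        return "abnormal"
    if tammv_cm_s >= 170:
        return "conditional"
    return "normal"

print(stop_category(185))  # conditional
```

The hard part the LLM handles is recovering `tammv_cm_s` reliably from free-text reports; once velocities are structured, the classification is a lookup.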
Results:
The best-performing model (gpt-oss-120b) achieved 99% exact-match accuracy for velocity extraction; the second best, gpt-oss-20b, achieved 98%.
For outcome classification, the algorithm demonstrated 100% sensitivity, 99% specificity, 92% PPV, 100% NPV, and F1 score of 0.96 for distinguishing abnormal versus not-abnormal TCDs.
Findings were similar for classification of other outcomes, with a sensitivity of 92%, specificity of 97.5%, PPV of 94%, NPV of 99% and F1 score of 0.93.
Conclusion:
Our LLM-based pipeline accurately classifies TCD results despite substantial inter-institutional variability. The extracted data enables real-world analysis of critical SCD outcomes and comparative effectiveness studies. This generalizable approach can be deployed for other use-cases within centers or learning health systems.
Speaker(s):
Sahal Master, MPH
Children's Hospital of Philadelphia
Author(s):
Aleksandra Dain, MD, MSCE - Nemours Children's Hospital; Hanieh Razzaghi, PhD - Children's Hospital of Philadelphia; Charles Bailey;
Robust AI-ECG for Predicting Left Ventricular Systolic Dysfunction in Pediatric Congenital Heart Disease
Presentation Type: Paper - Regular
Presentation Time: 04:18 PM - 04:30 PM
Primary Track: Data Science/Artificial Intelligence
Artificial intelligence-enhanced electrocardiogram (AI-ECG) has shown promise as an inexpensive, ubiquitous, and non-invasive screening tool to detect left ventricular systolic dysfunction in pediatric congenital heart disease. However, current approaches rely heavily on large-scale labeled datasets, which poses a major obstacle to the democratization of AI in hospitals where only limited pediatric ECG data are available. In this work, we propose a robust training framework to improve AI-ECG performance under low-resource conditions. Specifically, we introduce an on-manifold adversarial perturbation strategy for pediatric ECGs to generate synthetic samples that better reflect real-world signal variations. Building on this, we develop an uncertainty-aware adversarial training algorithm that is architecture-agnostic and enhances model robustness. Internal and external evaluation on real-world pediatric (n=178,495) and adult (n=100,000) datasets demonstrates that our method enables low-cost and reliable detection of left ventricular systolic dysfunction, highlighting its potential for deployment in resource-limited clinical settings.
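Adversarial training of the kind described above perturbs each input in the direction that most increases the loss, then trains on the perturbed sample. The paper's perturbations live on a learned data manifold; as a simpler stand-in, here is a classic input-space FGSM step for a logistic-regression model (the model and all names are our own illustration, not the paper's method):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Fast-gradient-sign perturbation for logistic regression:
    move x by eps in the sign of the gradient of the binary
    cross-entropy loss with respect to the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability
    grad_x = (p - y) * w                     # d(BCE loss)/dx for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w, b = rng.normal(size=8), 0.0
x = rng.normal(size=8)
x_adv = fgsm_perturb(x, 1.0, w, b)
print(np.abs(x_adv - x).max())  # each coordinate moves by at most eps
```

Input-space perturbations like this can wander off the data distribution; constraining them to a learned manifold, as the abstract describes, keeps the synthetic ECGs closer to real-world signal variation.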
Speaker(s):
Yuting Yang, PhD
Boston Children's Hospital
Author(s):
Yuting Yang, PhD - Boston Children's Hospital; Lorenzo Peracchio, MS - University of Pavia; Joshua Mayourian, PhD, MD - Boston Children’s Hospital/Harvard Medical School; John K. Triedman, MD - Boston Children’s Hospital/Harvard Medical School; Tim Miller, PhD - Children's Hospital Boston/Harvard Medical School; William La Cava, PhD - Boston Children's Hospital / Harvard Medical School;
Estimation of Left Ventricular Systolic Function in Pediatric and Congenital Heart Disease from Serial Electrocardiograms
Presentation Type: Paper - Regular
Presentation Time: 04:30 PM - 04:42 PM
Primary Track: Data Science/Artificial Intelligence
Estimating left ventricular ejection fraction (LVEF) from electrocardiograms (ECGs) is a useful task enabled by artificial intelligence (AI-ECG). Prior work focuses on predicting LVEF from single ECGs. Here, we investigate whether a patient’s history of ECGs can improve LVEF prediction. We study a longitudinal cohort from Boston Children’s Hospital, deriving LVEF from echocardiograms conducted within 2 days of ECGs (n=178,495 ECGs from 70,226 patients; median age 10.6 years). We propose a sequential AI-ECG approach using convolutional layers to represent single ECGs and a sequential neural network to reason over longitudinal ECGs. We build and test several sequential architectures. For predictions with at least 5 previous ECGs, sequential AI-ECG improved median AUROC (IQR) by 3.4 points (1.4, 4.4). When predicting the LVEF value, sequential AI-ECG improves Pearson R by 0.08 (0.02, 0.13). Results suggest that patients’ longitudinal ECG history contains valuable information for improving AI-ECG risk stratification beyond current snapshot-based models.
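The sequential architecture described above can be caricatured as: encode each ECG to a fixed-size embedding, fold the embeddings through a recurrent state in chronological order, and read out an LVEF estimate from the final state. A toy NumPy sketch under those assumptions (the linear "encoder" stands in for the paper's convolutional layers, and all weights here are random):

```python
import numpy as np

def encode_ecg(ecg, W_enc):
    """Stand-in per-ECG encoder: a nonlinear map of the raw signal
    to a fixed-size embedding (the paper uses convolutional layers)."""
    return np.tanh(W_enc @ ecg)

def sequential_lvef(ecgs, W_enc, W_h, W_x, w_out):
    """Fold a patient's ECG history through a simple recurrent update,
    oldest to newest, then read out a scalar LVEF estimate."""
    h = np.zeros(W_h.shape[0])
    for ecg in ecgs:
        h = np.tanh(W_h @ h + W_x @ encode_ecg(ecg, W_enc))
    return float(w_out @ h)

rng = np.random.default_rng(2)
W_enc = rng.normal(size=(16, 32)) * 0.1
W_h = rng.normal(size=(16, 16)) * 0.1
W_x = rng.normal(size=(16, 16)) * 0.1
w_out = rng.normal(size=16)
history = [rng.normal(size=32) for _ in range(6)]   # six serial ECGs
print(sequential_lvef(history, W_enc, W_h, W_x, w_out))
```

Because the recurrent state carries forward everything seen so far, the final prediction can exploit trends across visits that a single-snapshot model cannot see.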
Speaker(s):
Platon Lukyanenko, PhD
Boston Children's Hospital
Author(s):
Platon Lukyanenko, PhD - Boston Children's Hospital; Sunil Ghelani, MD - Boston Children's Hospital / Harvard Medical School; John Triedman, MD - Boston Children's Hospital / Harvard Medical School; Joshua Mayourian, MD PhD - Boston Children's Hospital / Harvard Medical School; William La Cava, PhD - Boston Children's Hospital / Harvard Medical School;