- S68: The Visual Frontier: Navigating Biomedical Knowledge Through Different Lenses
Times are displayed in Eastern Time (US & Canada, UTC-04:00).
11/18/2025 | 9:45 AM – 11:00 AM | Room 5
S68: The Visual Frontier: Navigating Biomedical Knowledge Through Different Lenses
Presentation Type: Oral Presentations
Unified Resource Browser: An Interactive and Reconfigurable Web Framework for Biomedical Metadata Exploration, Visualization, and Management
Presentation Time: 09:45 AM - 09:57 AM
Abstract Keywords: Informatics Implementation, User-centered Design Methods, Usability, Information Visualization
Primary Track: Applications
Programmatic Theme: Clinical Research Informatics
We present the Unified Resource Browser, a web-based framework designed to optimize the configuration, exploration, visualization, and dissemination of complex biomedical metadata. As a core component of the Neuroanatomy-Anchored Information Management Platform (NIMP), the Resource Browser enables researchers to efficiently access donor information, brain tissue samples, and sequencing data through structured resource tables and advanced search and filtering capabilities. The platform allows users to customize data views, share configurations via direct links, and seamlessly export data for further analysis. Integrated visualization tools offer immediate insights through customizable charts, enhancing data interpretation. By improving the accessibility and usability of biomedical data resources, the Unified Resource Browser fosters collaborative research and advances discoveries in brain structure and function.
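As a minimal sketch of the shareable-view idea described above (assuming a JSON view configuration encoded into a URL parameter; the field names and base URL below are hypothetical, not the Resource Browser's actual schema):

```python
import base64
import json
from urllib.parse import urlencode

# Hypothetical sketch: serialize a table-view configuration (columns, filters,
# sort order) into a compact token that can be shared as a direct link.
def build_share_link(base_url, view_config):
    payload = json.dumps(view_config, separators=(",", ":")).encode("utf-8")
    token = base64.urlsafe_b64encode(payload).decode("ascii")
    return f"{base_url}?{urlencode({'view': token})}"

def parse_share_token(token):
    # Recover the view configuration on the receiving end.
    return json.loads(base64.urlsafe_b64decode(token.encode("ascii")))

config = {
    "resource": "brain_tissue_samples",          # illustrative resource name
    "columns": ["donor_id", "region", "assay"],
    "filters": {"assay": "10x snRNA-seq"},
    "sort": [["donor_id", "asc"]],
}
print(build_share_link("https://example.org/resource-browser", config))
```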
Speaker:
Shiwei Lin, MS
The University of Texas Health Science Center at Houston
Authors:
Shiwei Lin, MS - The University of Texas Health Science Center at Houston; Ling Tong, PhD - The University of Texas Health Science Center at Houston; Kimberly Smith, BS - Allen Institute for Brain Science; Lydia Ng, PhD - Allen Institute for Brain Science; Tyler Mollenkopf, MS - Allen Institute for Brain Science; Yan Huang, PhD - UT Health Science Center; Rashmie Abeysinghe, PhD - The University of Texas Health Science Center at Houston; Wei-Chun Chou, MS; Antarr Byrd, MS - The University of Texas Health Science Center at Houston; Licong Cui, PhD - The University of Texas Health Science Center at Houston (UTHealth Houston); GQ Zhang, PhD - The University of Texas Health Science Center at Houston; Xiaojin Li, PhD - University of Texas Health Science Center at Houston; Shiqiang Tao, PhD - The University of Texas Health Science Center at Houston
Relational Database-Based Resource-Provenance Visualization Engine: with an application to BICAN data
Presentation Time: 09:57 AM - 10:09 AM
Abstract Keywords: Information Visualization, Informatics Implementation, Omics (genomics, metabolomics, proteomics, transcriptomics, etc.) and Integrative Analyses, Usability, User-centered Design Methods
Primary Track: Applications
Programmatic Theme: Clinical Research Informatics
Provenance tracking ensures data integrity, security, and accountability in healthcare and biomedical research. As biomedical data grows in complexity, comprehensive tracking mechanisms are needed to maintain reproducibility, transparency, and compliance with regulatory standards such as HIPAA and GDPR. Traditional log-based and ontology-based approaches capture and standardize data lineage, while cryptographic and blockchain-based methods enhance security and verifiability. However, challenges remain in scalability, security, and usability. To address these, we introduce the Resource-Provenance Visualization Engine (RPVE), an advanced system integrating data lineage tracking and interactive visualization. RPVE employs the Randomized N-gram Hashing Identifier (NHash ID) to establish precise data links within the BRAIN Initiative Cell Atlas Network (BICAN) and features an interactive Sankey visualization engine for seamless data exploration. The system enhances provenance tracking by improving data retrieval efficiency, ensuring reliable verification processes, and maintaining data integrity.
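As an illustration of the kind of interactive Sankey rendering described above, the following sketch draws a short provenance chain with Plotly; the node names, edge counts, and output file are hypothetical and are not BICAN data or the RPVE implementation:

```python
import plotly.graph_objects as go

# Illustrative provenance chain: donor -> tissue block -> library -> sequencing run.
nodes = ["Donor D001", "Tissue block TB-12", "Library L-88", "Sequencing run SR-3"]
edges = [(0, 1, 4), (1, 2, 4), (2, 3, 4)]  # (source index, target index, count)

fig = go.Figure(go.Sankey(
    node=dict(label=nodes, pad=20, thickness=15),
    link=dict(
        source=[s for s, _, _ in edges],
        target=[t for _, t, _ in edges],
        value=[v for _, _, v in edges],
    ),
))
fig.write_html("provenance_sankey.html")  # open in a browser to explore interactively
```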
Speaker:
Xiaojin Li, Ph.D.
UTHealth
Authors:
Xiaojin Li, PhD - University of Texas Health Science Center at Houston; Yan Huang, PhD - UT Health Science Center; Lydia Ng, PhD - Allen Institute for Brain Science; Kimberly Smith, BS - Allen Institute for Brain Science; Wei-Chun Chou, MS; Rashmie Abeysinghe, PhD - The University of Texas Health Science Center at Houston; Ling Tong, PhD - University of Texas Health Science Center at Houston; Shiwei Lin, MS - The University of Texas Health Science Center at Houston; Licong Cui, PhD - The University of Texas Health Science Center at Houston (UTHealth Houston); Shiqiang Tao, PhD - The University of Texas Health Science Center at Houston; GQ Zhang, PhD - The University of Texas Health Science Center at Houston
MedViz: Illuminating Biomedical Literature using Agentic AI and Visual Analytics
Presentation Time: 10:09 AM - 10:21 AM
Abstract Keywords: Information Visualization, Information Extraction, Artificial Intelligence
Primary Track: Applications
Programmatic Theme: Translational Bioinformatics
In the rapidly evolving field of biomedicine, researchers are inundated with an ever-growing corpus of publications. Current search engines typically do not provide a global landscape of the knowledge space of interest, making it challenging to answer complex research questions. To address these challenges, we developed a new tool called MedViz, which leverages LLM-based agents and visual analytics technologies to provide a new way for biomedical researchers to explore the semantic space of biomedical literature.
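One plausible building block of such a literature landscape is embedding abstracts and projecting them to two dimensions for plotting. The sketch below illustrates that idea only; the model choice, libraries, and example texts are assumptions, not MedViz's actual pipeline:

```python
from sentence_transformers import SentenceTransformer
from sklearn.manifold import TSNE

# Toy corpus standing in for retrieved abstracts.
abstracts = [
    "CRISPR screening identifies regulators of T cell exhaustion.",
    "A transformer model predicts protein structure from sequence.",
    "Deep learning detects diabetic retinopathy in fundus photographs.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(abstracts)                      # one vector per abstract
coords = TSNE(n_components=2, perplexity=2).fit_transform(embeddings)
for text, (x, y) in zip(abstracts, coords):
    print(f"({x:6.1f}, {y:6.1f})  {text[:50]}")           # coordinates for a 2D map
```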
Speaker:
Huan He, Ph.D.
Yale University
Authors:
Huan He, PhD - Yale University; Xueqing Peng, PhD - Yale University; Yutong Xie, BE - University of Michigan; Qijia Liu, BE - University of Michigan; Chia-Hsuan Chang, PhD - Yale BIDS; Lingfei Qian, PhD - Yale University; Brian Ondov, PhD - Yale School of Medicine; Qiaozhu Mei, PhD - University of Michigan; Hua Xu, PhD - Yale University
Unpacking Situational Awareness in Emergency Medical Services: An Eye-Tracking Study of Visual Attention
Presentation Time: 10:21 AM - 10:33 AM
Abstract Keywords: Workflow, Clinical Decision Support, Critical Care
Primary Track: Foundations
Programmatic Theme: Clinical Informatics
Situational awareness (SA) is critical for Emergency Medical Services (EMS) providers as they operate in high-stakes, dynamic environments requiring rapid information processing and decision-making. While prior research has explored SA challenges in EMS, little is known about how visual attention patterns influence SA and clinical performance. This study employs eye-tracking technology to objectively assess how EMS providers allocate their visual attention during simulated pediatric emergency scenarios in urban and rural settings. We investigate variations in visual attention across experience levels, team structures, and task roles and examine differences between high- and low-performing teams. Results reveal that high-performing teams demonstrate more frequent and evenly distributed visual scanning, whereas lower-performing teams exhibit a narrowed focus, increasing the risk of missing critical cues. Our findings underscore the need for training interventions and technology solutions to enhance SA and optimize EMS performance.
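A common way to quantify how evenly visual attention is spread, offered here only as an illustration and not as the study's actual metric, is the normalized Shannon entropy of dwell time across areas of interest (AOIs); the AOI names and dwell times below are hypothetical:

```python
import math

def scanning_evenness(dwell_seconds):
    """Normalized entropy of dwell time over AOIs: 1.0 = perfectly even scanning,
    values near 0 = attention narrowed onto a single AOI."""
    total = sum(dwell_seconds.values())
    probs = [t / total for t in dwell_seconds.values() if t > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy / math.log2(len(dwell_seconds))

high_performing = {"patient": 40, "monitor": 35, "team": 30, "equipment": 25}
low_performing = {"patient": 95, "monitor": 20, "team": 5, "equipment": 2}
print(scanning_evenness(high_performing))  # close to 1.0 (even scanning)
print(scanning_evenness(low_performing))   # noticeably lower (narrowed focus)
```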
Speaker:
Enze Bai, PhD Candidate
Pace University
Authors:
Zhan Zhang, PhD; Kathleen Adelgais, MD - University of Colorado School of Medicine; Mustafa Ozkaynak, PhD - University of Colorado-Denver | Anschutz Medical Campus;
A Clinically-Informed Framework for Evaluating Vision-Language Models in Radiology Report Generation: Taxonomy of Errors and Risk-Aware Metric
Presentation Time: 10:33 AM - 10:45 AM
Abstract Keywords: Artificial Intelligence, Evaluation, Large Language Models (LLMs), Imaging Informatics
Primary Track: Applications
Recent advances in vision-language models (VLMs) have enabled automatic radiology report generation, yet current evaluation methods remain limited to general-purpose NLP metrics or coarse classification-based clinical scores. In this study, we propose a clinically informed evaluation framework for VLM-generated radiology reports that goes beyond traditional performance measures. We define a taxonomy of 12 radiology-specific error types, each annotated with clinical risk levels (low, medium, high) in collaboration with physicians. Using this framework, we conduct a comprehensive error analysis of three representative VLMs, i.e., DeepSeek VL2, CXR-LLaVA, and CheXagent, on 685 gold-standard, expert-annotated MIMIC-CXR cases. We further introduce a risk-aware evaluation metric, the Clinical Risk-weighted Error Score for Text-generation (CREST), to quantify safety impact. Our findings reveal critical model vulnerabilities, common error patterns, and condition-specific risk profiles, offering actionable insights for model development and deployment. This work establishes a safety-centric foundation for evaluating and improving medical report generation models.
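The abstract does not spell out the CREST formula, so the sketch below shows only the general idea of a risk-weighted error score; the weights, error types, and counts are assumptions chosen to illustrate penalizing high-risk errors more heavily:

```python
# Assumed weights for illustration; not the published CREST parameters.
RISK_WEIGHTS = {"low": 1.0, "medium": 2.0, "high": 4.0}

def risk_weighted_score(error_counts, n_reports):
    """error_counts: {(error_type, risk_level): count} aggregated over a report set."""
    penalty = sum(RISK_WEIGHTS[risk] * count
                  for (_error_type, risk), count in error_counts.items())
    return penalty / n_reports  # average weighted penalty per report

errors = {
    ("omitted finding", "high"): 12,
    ("laterality error", "medium"): 7,
    ("style/verbosity", "low"): 30,
}
print(risk_weighted_score(errors, n_reports=100))  # 0.92 weighted errors per report
```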
Speaker:
Hao Guan, PhD
Harvard Medical School/Brigham and Women's Hospital
Authors:
Peter Hou, MD - Brigham and Women's Hospital, Harvard Medical School; Pengyu Hong, PhD - Brandeis University; Liqin Wang, PhD - Brigham and Women's Hospital; Wenyu Zhang, PhD in Quantitative Biomedical Science - Dartmouth College; Xinsong Du, Ph.D. - Brigham and Women's Hospital/Harvard Medical School; Zhengyang Zhou, MS - Brandeis University; Li Zhou, MD, PhD, FACMI, FIAHSI, FAMIA - Brigham and Women's Hospital, Harvard Medical School;
LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Vision-Language Models
Presentation Time: 10:45 AM - 10:57 AM
Abstract Keywords: Bioinformatics, Artificial Intelligence, Evaluation
Primary Track: Applications
Programmatic Theme: Clinical Informatics
This study introduces LMOD, a large-scale multimodal ophthalmology benchmark that enables systematic evaluation of large vision-language models (LVLMs) in ophthalmology-specific applications. Comprising 21,993 instances across five ophthalmic imaging modalities, LMOD includes optical coherence tomography, color fundus photographs, scanning laser ophthalmoscopy, lens photographs, and surgical scenes, with annotations for anatomical recognition and disease diagnosis. We benchmarked 13 state-of-the-art LVLMs, revealing significant performance limitations with an average F1 score of only 0.2189 for anatomical recognition and near-random accuracy for diagnostic tasks. Error analysis identified six major failure modes: misclassification, failure to abstain, inconsistent reasoning, hallucination, assertions without justification, and lack of domain-specific knowledge. In contrast, supervised neural networks achieved high performance on the same tasks, confirming that the challenges are specific to current LVLMs. The data is available at: https://kfzyqin.github.io/lmod/.
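For context on the reported anatomical-recognition F1, a macro-averaged F1 can be computed as in the sketch below; the labels and predictions are made up for illustration and are not LMOD data:

```python
from sklearn.metrics import f1_score

# Toy anatomical labels: ground truth vs. a model's predictions.
y_true = ["optic disc", "macula", "fovea", "optic disc", "macula", "fovea"]
y_pred = ["optic disc", "fovea",  "fovea", "macula",     "macula", "optic disc"]

# Macro averaging computes F1 per class, then takes the unweighted mean.
print(f1_score(y_true, y_pred, average="macro"))
```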
Speaker:
Qingyu Chen, PhD
Yale University
Authors:
Zhenyue Qin, PhD - Yale University; Yu Yin, Master - Imperial College London; Dylan Campbell, PhD - Australian National University; Xuansheng Wu, Master - University of Georgia; Ke Zou, PhD - National University of Singapore; Yih-Chung Tham, PhD - National University of Singapore; Ninghao Liu, PhD - University of Georgia; Xiuzhen Zhang, PhD - RMIT University; Qingyu Chen, PhD - Yale University;