Evaluating the Performance of Instruction Tuned Large Language Models on Biomedical Entity Recognition
Presentation Time: 11:30 AM - 11:45 AM
Abstract Keywords: Large Language Models (LLMs), Natural Language Processing, Information Extraction
Primary Track: Applications
This study proposes a paradigm based on instruction-tuning Large Language Models (LLMs) that transforms biomedical NER from a sequence labeling task into a generation task. The paradigm repurposes existing NER datasets to develop BioNER-LLaMA using LLaMA2-7B. For the first time, we show a general-domain LLM achieving performance comparable to fine-tuned PubMedBERT models and better performance than a biomedical-specific LLM (PMC-LLaMA). The findings underscore the paradigm's potential for developing LLMs that rival state-of-the-art (SOTA) performance in biomedical applications.
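To make the reformulation concrete, the sketch below shows one plausible way to repurpose a BIO-tagged NER record as an instruction/response pair for generative fine-tuning. The function names, prompt wording, and output format are illustrative assumptions for this sketch, not the authors' actual BioNER-LLaMA template.

# Minimal sketch (assumed format): convert a BIO-tagged sentence into an
# instruction-tuning example so a generative LLM emits the entities as text.
from typing import List, Tuple

def bio_to_entities(tokens: List[str], tags: List[str]) -> List[Tuple[str, str]]:
    """Collect (entity text, entity type) spans from BIO tags."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

def to_instruction_example(tokens: List[str], tags: List[str]) -> dict:
    """Repurpose one sequence-labeling record as a generation-style training pair (hypothetical format)."""
    sentence = " ".join(tokens)
    entities = bio_to_entities(tokens, tags)
    instruction = ("Extract all disease mentions from the following sentence "
                   "and list them, one per line.")
    response = "\n".join(text for text, _ in entities) or "None"
    return {"instruction": instruction, "input": sentence, "output": response}

# Example usage with a toy BC5CDR-style record:
tokens = ["Aspirin", "reduced", "the", "risk", "of", "myocardial", "infarction", "."]
tags   = ["O",       "O",       "O",   "O",    "O",  "B-Disease",  "I-Disease",  "O"]
print(to_instruction_example(tokens, tags))

Pairs of this form could then be used to instruction-tune a general-domain model such as LLaMA2-7B, with evaluation done by parsing the generated entity list back into spans.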
Speaker(s):
Vipina K. Keloth, PhD
Yale University
Category
Podium Abstract
Description
Date: Monday (11/11)
Time: 11:30 AM to 11:45 AM
Room: Continental Ballroom 8-9