Improving Biomedical Fact Checking of Large Language Models via Instruction Tuning
Poster Number: P171
Presentation Time: -
Abstract Keywords: Large Language Models (LLMs), Natural Language Processing, Machine Learning
Primary Track: Applications
Despite the remarkable advances in natural language generation by large language models (LLMs), their application to biomedical fact checking, especially through fine-tuning, remains largely unexplored. This study evaluates five publicly available language models on biomedical fact checking, using fine-tuning to assess their performance. Central to our methodology is fine-tuning with LoRA (Low-Rank Adaptation), a parameter-efficient adaptation method. Given the daily surge of medical information and misinformation, we compile a comprehensive joint dataset from multiple existing medical datasets, augmented with precise instructions. We then apply instruction tuning to these language models to improve their effectiveness on medical fact-checking tasks. Our experiments show that this approach significantly improves model performance across all evaluated datasets, highlighting the value of fine-tuning and instruction tuning for accurate biomedical fact checking. We make the data and code publicly available at https://github.com/qingyu-qc/medical_fact_checking.
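The sketch below illustrates the kind of LoRA-based instruction tuning the abstract describes, using the Hugging Face transformers and peft libraries. The base model name, prompt template, claim/evidence example, and hyperparameters are illustrative assumptions, not the authors' exact setup; see the linked repository for the actual data and code.

```python
# Minimal sketch: LoRA instruction tuning for biomedical fact checking.
# Model name, prompt format, and hyperparameters are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices instead of the full weights.
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights

# One instruction-style training example: claim + evidence -> verdict.
prompt = (
    "Instruction: Decide whether the evidence supports or refutes the claim. "
    "Answer SUPPORTS or REFUTES.\n"
    "Claim: Vitamin D supplementation prevents influenza in all adults.\n"
    "Evidence: Trials report mixed results, with no consistent protective "
    "effect across adult populations.\n"
    "Answer:"
)
target = " REFUTES"

# Tokenize prompt + target; in practice such pairs would be batched and fed to
# a standard causal-LM training loop (e.g. the transformers Trainer).
inputs = tokenizer(prompt + target, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
print(float(outputs.loss))  # loss for a single training step on this example
```

In this scheme, only the adapter parameters are updated during instruction tuning, which keeps memory and compute costs low while adapting the model to the fact-checking instruction format.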
Speaker(s):
Qingyu Chen, PhD
Yale University
Author(s):
Wenhan Han, MSc - Eindhoven University of Technology; Mykola Pechenizkiy, PhD - Eindhoven University of Technology; Meng Fang, PhD - The University of Liverpool; Qingyu Chen, PhD - Yale University;
Category: Poster Invite