Using Large Language Models for sentiment analysis of health-related social media data: empirical evaluation and practical tips
Presentation Time: 02:00 PM - 02:15 PM
Abstract Keywords: Social Media and Connected Health, Natural Language Processing, Large Language Models (LLMs), Evaluation
Primary Track: Applications
Programmatic Theme: Consumer Health Informatics
Health-related social media data generated by patients and the public provide valuable insights into patient experiences and opinions toward health issues such as vaccination and medical treatments. Using Natural Language Processing (NLP) methods to analyze such data, however, often requires high-quality annotations that are difficult to obtain. Recently emerged Large Language Models (LLMs), such as the Generative Pre-trained Transformers (GPTs), have shown promising performance on a variety of NLP tasks in the health domain with little to no annotated data. However, their potential for analyzing health-related social media data remains underexplored. In this paper, we report empirical evaluations of LLMs (GPT-3.5-Turbo, FLAN-T5, and BERT-based models) on a common NLP task for health-related social media data: sentiment analysis for identifying opinions toward health issues. We explored how different prompting and fine-tuning strategies affect the performance of LLMs on social media datasets across diverse health topics, including healthcare reform, vaccination, mask wearing, and healthcare service quality. We found that LLMs outperformed VADER, a widely used off-the-shelf sentiment analysis tool, but fell short of producing accurate sentiment labels. However, their performance can be improved by data-specific prompts that include information about the context, task, and targets. The highest-performing models were BERT-based models fine-tuned on aggregated data. We provide practical tips for researchers applying LLMs to health-related social media data for optimal outcomes. We also discuss future work needed to continue improving the performance of LLMs for analyzing health-related social media data with minimal annotations.
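To illustrate the kind of "data-specific" prompt the abstract describes, the sketch below assembles a sentiment prompt that states the context, the task, and the target. The wording, function name, and example post are hypothetical, not the authors' actual prompts:

```python
# Hypothetical sketch of a data-specific sentiment prompt: it supplies
# the data context, the task definition, and the sentiment target,
# rather than asking generically for "the sentiment of this text".

def build_prompt(post: str, context: str, target: str) -> str:
    """Assemble a prompt that tells the model where the post comes from
    (context), what to do (task), and whose sentiment to judge (target)."""
    return (
        f"Context: The following post was collected from {context}.\n"
        f"Task: Classify the author's sentiment toward {target} as "
        "positive, negative, or neutral. Answer with one word.\n"
        f"Post: {post}\n"
        "Sentiment:"
    )

prompt = build_prompt(
    post="Finally got my booster today, feeling relieved.",
    context="Twitter discussions about COVID-19 vaccination",
    target="vaccination",
)
print(prompt)
```

A string like this would be sent as the user message to a chat-style LLM (e.g., GPT-3.5-Turbo); the same template can be re-instantiated per dataset by swapping the context and target.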
Speaker(s):
Lu He, PhD
University of Wisconsin-Milwaukee
Author(s):
Lu He, PhD - University of Wisconsin-Milwaukee; Sammie Omranian - University of Wisconsin-Milwaukee; Susan McRoy, PhD - University of Wisconsin-Milwaukee; Kai Zheng, PhD - University of California, Irvine
Category
Paper - Regular