Toward Large Language Models as a Therapeutic Tool: Comparing Prompting Techniques to Improve GPT-Delivered Problem-Solving Therapy
Presentation Time: 08:30 AM - 08:45 AM
Abstract Keywords: Large Language Models (LLMs), Evaluation, Behavioral Change, Usability
Primary Track: Applications
While Large Language Models (LLMs) are being rapidly adapted to many domains, including healthcare, many of their
strengths and pitfalls remain under-explored. In our study, we examine the effects of employing in-context learning to
guide LLMs in delivering parts of a Problem-Solving Therapy (PST) session, particularly
during the symptom identification and assessment phase for personalized goal setting. We present an evaluation of the
model’s performance using automatic metrics and ratings from experienced medical professionals. We demonstrate that the model’s
capability to deliver protocolized therapy can be improved with the proper use of prompt engineering methods, albeit
with limitations. To our knowledge, this study is among the first to assess the effects of various prompting techniques on
enhancing a model’s ability to deliver psychotherapy, focusing on overall quality, consistency, and empathy. Exploring
LLMs’ potential in delivering psychotherapy holds promise amid the current shortage of mental health professionals
and significant unmet needs, enhancing the potential utility of AI-based or AI-supported care services.
Speaker(s):
Daniil Filienko, PhD Student
University of Washington Tacoma
Author(s):
Daniil Filienko, PhD in Computer Science and Systems - University of Washington Tacoma; Yinzhou Wang; Caroline El Jazmi, BS - University of Washington; Serena Jinchen Xie, MS - Biomedical Informatics and Medical Education, University of Washington; Trevor Cohen, MBChB, PhD - Biomedical Informatics and Medical Education, University of Washington; Martine De Cock, PhD - University of Washington Tacoma; Weichao Yuwen, PhD, RN - University of Washington Tacoma
Category
Paper - Regular