How To Put The Missing Human Values Back Into AI
Presentation Time: 01:45 PM - 03:15 PM
Moderator: Isaac Kohane, MD, PhD
Harvard Medical School
Abstract
When we meet our doctor, we can reasonably assume some alignment between our goals as patients and those of the doctor. From the decision-theoretic perspective, we assume that it is the patient's utilities that are being maximized. Whether implicitly or explicitly, those values are embedded in AI models. In practice, the utilities/values may be driven by patient preferences, provider preferences, or those of third parties. In this panel, we will discuss different approaches to embedding those values, where they meet our expectations, and, where they fall short, what should be done. The panel will be highly interactive, eliciting the audience's own views and suggestions on this important topic.
Introduction
The integration of artificial intelligence (AI) into healthcare has brought unprecedented capabilities and challenges. As highlighted in our recent review in the New England Journal of Medicine (Yu et al., 2024), predictive models, from simple clinical equations to large language models (LLMs) and other advanced AI systems, inevitably encode human values at every stage of their development and deployment. This panel will explore the critical issue of aligning AI systems with human values in medical decision-making.
The rapid advancement of AI in medicine, exemplified by models that can pass professional competency examinations and generate empathetic patient communications, has raised concerns about the embedded values in these systems. We will discuss how these values enter AI models through choices in training data, model development, and model use.
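To make the decision-theoretic framing concrete, the following is a minimal illustrative sketch in Python (not drawn from the panel materials; all function names and numbers are hypothetical) of how the utilities built into a decision rule, rather than the model's prediction itself, determine the recommendation a patient receives.

# Minimal sketch: the same predicted probability, filtered through different utilities.
# Everything below is hypothetical and for illustration only.

def treatment_threshold(benefit_if_sick: float, harm_if_healthy: float) -> float:
    """Classic decision-analytic threshold: treat when the predicted
    probability of disease exceeds harm / (harm + benefit)."""
    return harm_if_healthy / (harm_if_healthy + benefit_if_sick)

def recommend(p_disease: float, benefit_if_sick: float, harm_if_healthy: float) -> str:
    """Convert a model's predicted probability into a recommendation
    under the stated utilities; changing the utilities changes the answer."""
    if p_disease > treatment_threshold(benefit_if_sick, harm_if_healthy):
        return "treat"
    return "do not treat"

p = 0.15  # hypothetical predicted probability of disease from some model

# A patient who weighs a missed diagnosis far more heavily than overtreatment:
print(recommend(p, benefit_if_sick=10.0, harm_if_healthy=1.0))  # -> "treat"

# A third party who weighs the harms or costs of overtreatment more heavily:
print(recommend(p, benefit_if_sick=2.0, harm_if_healthy=1.0))   # -> "do not treat"

The same prediction yields opposite recommendations depending on whose utilities set the threshold; that choice of utilities is one concrete way values enter the systems discussed above.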
Key questions to be addressed include:
1. Whose values should be encoded in medical AI systems?
2. How can we ensure AI models reflect patient preferences and goals?
3. What are the implications of dataset shift on the alignment of AI with human values?
4. How can we adapt lessons from medical decision analysis to modern AI systems?
This panel aims to foster a critical discussion on putting human values back into AI, ensuring that as we advance technologically, we do not lose sight of the fundamental human aspects of healthcare.
Speaker(s):
Deborah Raji, PhD Candidate
Tiffani Bright, PhD
Cedars-Sinai Medical Center
Chris Longhurst, MD
UC San Diego Health
Category: Panel