Multi-Adversarial Debiasing in Clinical Artificial Intelligence
Presentation Time: 03:00 PM - 03:12 PM
Abstract Keywords: Fairness and Elimination of Bias, Artificial Intelligence, Health Equity
Primary Track: Applications
Programmatic Theme: Clinical Informatics
While multiple types of bias can occur in clinical machine learning, the status quo in algorithmic debiasing is to optimize a single fairness metric. We propose a multi-adversarial debiasing framework that builds on the established technique of adversarial debiasing to jointly optimize two or more fairness definitions. Our experiments use two adversaries, corresponding to demographic parity (DP) and equalized mistreatment (EM). Evaluating on four datasets, including two clinical datasets (UCI Heart Disease and a Parkinson’s Disease digital health dataset) and two algorithmic fairness benchmarks (COMPAS and Adult Income), we find that our multi-adversarial approach reduces DP by 0.03-0.22 and EM by 0.02-0.13 while maintaining an F1 score within 0-16% of the baseline models. Analyzing these performance variations, we find that adversarial debiasing is most effective on datasets with adequate representation of positive and negative labels across protected attribute values, and that its effectiveness declines when this is not the case.
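The core idea, jointly training a predictor against multiple fairness adversaries, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: a logistic predictor is trained on (X, y) while two logistic adversaries try to recover the protected attribute a, one from the prediction alone (targeting DP) and one from the prediction together with the true label (targeting EM). The predictor subtracts both adversaries' gradients, so it learns representations from which a is hard to infer. All variable names and the toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: protected attribute a leaks into the features
n, d = 400, 5
a = rng.integers(0, 2, n).astype(float)          # protected attribute
X = rng.normal(size=(n, d)) + 0.5 * a[:, None]   # features correlated with a
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(d)       # predictor weights
u_dp = np.zeros(1)    # DP adversary: sees only the prediction yhat
u_em = np.zeros(2)    # EM adversary: sees yhat and the true label y
lr, lam_dp, lam_em = 0.1, 0.5, 0.5

for step in range(200):
    yhat = sigmoid(X @ w)
    # Adversary forward passes (logistic regression on the predictor output)
    p_dp = sigmoid(yhat * u_dp[0])
    p_em = sigmoid(yhat * u_em[0] + y * u_em[1])
    # Adversaries descend on their own cross-entropy loss (d loss/d z = p - a)
    u_dp[0] -= lr * np.mean((p_dp - a) * yhat)
    u_em[0] -= lr * np.mean((p_em - a) * yhat)
    u_em[1] -= lr * np.mean((p_em - a) * y)
    # Predictor: descend on task loss, ascend on both adversary losses
    s = yhat * (1 - yhat)                         # d yhat / d logit
    g_task = X.T @ (yhat - y) / n
    g_dp = X.T @ ((p_dp - a) * u_dp[0] * s) / n
    g_em = X.T @ ((p_em - a) * u_em[0] * s) / n
    w -= lr * (g_task - lam_dp * g_dp - lam_em * g_em)

pred = (sigmoid(X @ w) > 0.5).astype(float)
dp_gap = abs(pred[a == 1].mean() - pred[a == 0].mean())
print(f"demographic parity gap: {dp_gap:.3f}")
```

The weights lam_dp and lam_em trade off the two fairness objectives against task accuracy; in practice the adversaries would be small neural networks and the updates handled by an autodiff framework rather than the hand-derived gradients shown here.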
Speaker(s):
Md Rahat Shahriar Zawad, MS Student
University of Hawaii at Manoa
Author(s):
Md Rahat Shahriar Zawad, MS Student - University of Hawaii at Manoa; Irene Y Chen, PhD - University of California, Berkeley; Peter Washington, PhD - University of California, San Francisco
Category
Paper - Student