Awardee Organization(s): University of Virginia | University of Pennsylvania
Principal Investigator(s): Aidong Zhang, PhD | Carol Manning, PhD | Li Shen, PhD | Mary Regina Boland, PhD, MPhil
Official Project Title: Fairness and Robust Interpretability of Prediction Approaches for Aging and Alzheimer’s Disease
AITC Partner: PennAITech
Website(s):
https://engineering.virginia.edu
https://www.cs.virginia.edu/~az9eg/website/home.html
https://www.med.upenn.edu

Machine learning (ML) approaches are increasingly used to support clinical decision-making in Alzheimer’s Disease (AD) and AD-related dementias (ADRD). However, recent research has shown that existing ML techniques are prone to unintentional bias with respect to protected attributes such as age, race, sex, gender, and/or ethnicity. Moreover, although deep learning (DL) models have achieved great success in many applications, including AD/ADRD prediction, they are usually not expressed in an interpretable form. ML approaches applied to health data can therefore raise ethical and trustworthiness concerns that may result in the unfair treatment of patients. As decision-making systems for aging and AD/ADRD become more widespread, a major challenge is how to integrate AI/ML methods ethically into people's lives, given that existing methods often violate ethical principles. This has become an important issue for both the ML community and the AD/ADRD community. In addition, ML approaches that are not transparent can repeat discriminatory patterns present in prior data or generate new ones from biased learned patterns. This project develops electronic health record (EHR)-based ML methods for Penn Medicine AD/ADRD datasets that are fair, generalizable, and interpretable, helping to inform clinicians in AD/ADRD diagnosis and care management. We focus on fairness and interpretability, two factors critical to making AI methods trustworthy, particularly during deployment. We study how bias affects our prediction models and will develop explainable methods to increase clinical interpretability.
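
To illustrate the kind of bias audit the paragraph above refers to, the Python sketch below computes two standard group-fairness measures (demographic parity gap and equal-opportunity gap) for binary predictions stratified by a protected attribute. This is a generic illustration, not the project's actual methodology; the column names (y_true, y_pred, sex) and the toy data are hypothetical placeholders, not Penn Medicine EHR fields.

    # Minimal group-fairness audit sketch for binary predictions.
    # Assumes a DataFrame with true labels, predicted labels, and a
    # protected-attribute column; all names here are hypothetical.
    import pandas as pd

    def fairness_audit(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
        """Per-group selection rate and true-positive rate."""
        rows = []
        for group, g in df.groupby(group_col):
            positives = g[g["y_true"] == 1]
            rows.append({
                group_col: group,
                # P(y_pred = 1 | group): basis of demographic parity
                "selection_rate": g["y_pred"].mean(),
                # P(y_pred = 1 | y_true = 1, group): basis of equal opportunity
                "tpr": positives["y_pred"].mean() if len(positives) else float("nan"),
            })
        audit = pd.DataFrame(rows)
        # Gaps: max cross-group difference in each rate (0 = perfectly fair
        # by that criterion on this sample).
        print("demographic parity gap:",
              audit["selection_rate"].max() - audit["selection_rate"].min())
        print("equal opportunity gap:",
              audit["tpr"].max() - audit["tpr"].min())
        return audit

    # Toy example with synthetic labels and predictions.
    df = pd.DataFrame({
        "sex":    ["F", "F", "F", "M", "M", "M"],
        "y_true": [1, 0, 1, 1, 0, 0],
        "y_pred": [1, 0, 0, 1, 1, 0],
    })
    print(fairness_audit(df, "sex"))

In practice such an audit would be run for each protected attribute of interest (age, race, sex, gender, ethnicity) on held-out predictions, with the gaps tracked alongside overall accuracy.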
