Awardee Organization(s): DreamFace Technologies LLC
Principal Investigator(s): Mohammad H. Mahoor, PhD
Official Project Title: Building Deep Digital Twins for Prediction of AD/ADR/MCI in Older Adults
AITC Partner: PennAITech
Website(s): https://dreamfacetech.com/
The Alzheimer’s Association projects that the number of Americans aged 65 and older with Alzheimer’s disease and related dementias (ADRD) will exceed 12 million by 2050. ADRD often starts with mild cognitive impairment (MCI), which is characterized by challenges in memory, language, and thinking skills. Early MCI detection is vital for identifying those at risk of dementia and for offering support, advice, and ongoing monitoring. Currently, older adults with MCI are diagnosed clinically; however, their daily challenges often go unnoticed by people who encounter them only irregularly. Artificial Intelligence (AI) holds promise for early detection of cognitive impairment, but many AI studies focus on expensive clinical assessments and medical scans such as positron emission tomography (PET) and MRI. There is a pressing need for research into innovative, cost-effective, and accessible approaches for early detection and prediction of AD and MCI.

Human digital twins are at the forefront of aging and longevity research. They are personalized AI models that comprehensively simulate an individual’s behavioral, biological, physical, mental, and socio-emotional attributes using health and medical records. These models hold the potential to revolutionize our understanding, prediction, and management of the aging process, offering personalized healthcare solutions.

This pilot project investigates AI techniques that leverage multi-modal audio-visual data, along with other available data modalities, to develop human digital twins for aging research and, more specifically, for predicting MCI and the early onset of AD/ADRD. We design and implement a Deep Digital Twin (DDT) model using Conditional Variational Autoencoders (CVAEs) suited to heterogeneous multi-modal data, including speech, transcribed speech, and facial videos. We then evaluate the efficacy of the proposed model on publicly available datasets such as I-CONECT and ADReSS, which contain multi-modal data and other metadata suitable for our project. We hypothesize that DDTs trained on comprehensive multi-modal data can predict MCI/AD with higher fidelity and accuracy than models trained on uni-modal data. We compare the proposed DDT with state-of-the-art models from the literature and assess model performance across diverse data to ensure the models remain unbiased.

The expected outcomes of this research are new knowledge and prototyped Deep Digital Twins capable of assessing and predicting MCI/AD conditions in older adults. The DDTs are expected both to generate longitudinal trajectories sampled from the data and to predict a subject’s future condition.
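The abstract does not detail the architecture, so the sketch below is only a minimal illustration of the general CVAE pattern it describes: fusing heterogeneous modalities (speech, text, and facial embeddings, assumed here to be precomputed), conditioning on subject metadata, and attaching a diagnostic head. All names, feature dimensions, and the early-fusion strategy are hypothetical, not the project’s actual design.

```python
# Illustrative multi-modal Conditional VAE with an MCI/AD classification head.
# Feature dims, fusion strategy, and hyperparameters are assumptions for the sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalCVAE(nn.Module):
    def __init__(self, speech_dim=128, text_dim=768, face_dim=256,
                 cond_dim=8, latent_dim=32, hidden_dim=256, n_classes=2):
        super().__init__()
        in_dim = speech_dim + text_dim + face_dim
        # Encoder: fused features + condition (e.g., demographic metadata) -> latent
        self.encoder = nn.Sequential(nn.Linear(in_dim + cond_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent + condition -> reconstructed fused features
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim))
        # Auxiliary head: predict cognitive status from the latent code
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, speech, text, face, cond):
        x = torch.cat([speech, text, face], dim=-1)   # simple early fusion
        h = self.encoder(torch.cat([x, cond], dim=-1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon, self.classifier(z), mu, logvar

def cvae_loss(recon, x, logits, labels, mu, logvar, beta=1.0):
    rec = F.mse_loss(recon, x)                                       # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL divergence
    clf = F.cross_entropy(logits, labels)                            # diagnosis head
    return rec + beta * kld + clf

if __name__ == "__main__":
    model = MultiModalCVAE()
    speech, text = torch.randn(4, 128), torch.randn(4, 768)
    face, cond = torch.randn(4, 256), torch.randn(4, 8)
    labels = torch.randint(0, 2, (4,))
    recon, logits, mu, logvar = model(speech, text, face, cond)
    loss = cvae_loss(recon, torch.cat([speech, text, face], dim=-1),
                     logits, labels, mu, logvar)
    # Generative use: decode from the prior, conditioned on metadata, to
    # sample synthetic observations (a building block for trajectory generation).
    z = torch.randn(1, 32)
    synthetic = model.decoder(torch.cat([z, cond[:1]], dim=-1))
```

The auxiliary classification head couples the generative model to the prediction task so the latent space is shaped by diagnostic signal as well as reconstruction; early fusion by concatenation is just one simple choice, and per-modality encoders or attention-based fusion would be natural alternatives.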