

Alzheimer's disease detection using data fusion with a deep supervised encoder.

Author Information

Trinh Minh, Shahbaba Ryan, Stark Craig, Ren Yueqi

Author Affiliations

Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States.

Sage Hill School, Newport Beach, CA, United States.

Publication Information

Front Dement. 2024;3. doi: 10.3389/frdem.2024.1332928. Epub 2024 Feb 12.

Abstract

Alzheimer's disease (AD) is affecting a growing number of individuals. As a result, there is a pressing need for accurate and early diagnosis methods. This study aims to achieve this goal by developing an optimal data analysis strategy to enhance computational diagnosis. Although various modalities of AD diagnostic data are collected, past research on computational methods of AD diagnosis has mainly focused on using single-modal inputs. We hypothesize that integrating, or "fusing," various data modalities as inputs to prediction models could enhance diagnostic accuracy by offering a more comprehensive view of an individual's health profile. However, a potential challenge arises as this fusion of multiple modalities may result in significantly higher dimensional data. We hypothesize that employing suitable dimensionality reduction methods across heterogeneous modalities would not only help diagnosis models extract latent information but also enhance accuracy. Therefore, it is imperative to identify optimal strategies for both data fusion and dimensionality reduction. In this paper, we have conducted a comprehensive comparison of over 80 statistical machine learning methods, considering various classifiers, dimensionality reduction techniques, and data fusion strategies to assess our hypotheses. Specifically, we have explored three primary strategies: (1) Simple data fusion, which involves straightforward concatenation (fusion) of datasets before inputting them into a classifier; (2) Early data fusion, in which datasets are concatenated first, and then a dimensionality reduction technique is applied before feeding the resulting data into a classifier; and (3) Intermediate data fusion, in which dimensionality reduction methods are applied individually to each dataset before concatenating them to construct a classifier. For dimensionality reduction, we have explored several commonly-used techniques such as principal component analysis (PCA), autoencoder (AE), and LASSO. Additionally, we have implemented a new dimensionality-reduction method called the supervised encoder (SE), which involves slight modifications to standard deep neural networks. Our results show that SE substantially improves prediction accuracy compared to PCA, AE, and LASSO, especially in combination with intermediate fusion for multiclass diagnosis prediction.
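
The three fusion strategies compared in the abstract can be made concrete with a minimal sketch. The Python example below uses scikit-learn, with PCA standing in for the dimensionality-reduction step and logistic regression for the classifier; the modality names (X_mri, X_clinical), feature counts, and random data are hypothetical placeholders, not the paper's actual inputs or pipeline.

```python
# Illustrative sketch of the three fusion strategies (simple, early, intermediate).
# Data, feature dimensions, and component counts are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X_mri = rng.normal(size=(n, 500))       # hypothetical imaging-derived features
X_clinical = rng.normal(size=(n, 40))   # hypothetical clinical/cognitive features
y = rng.integers(0, 3, size=n)          # hypothetical 3-class diagnosis labels

# (1) Simple fusion: concatenate modalities and feed them directly to the classifier.
X_simple = np.hstack([X_mri, X_clinical])
clf_simple = LogisticRegression(max_iter=1000).fit(X_simple, y)

# (2) Early fusion: concatenate first, then reduce dimensionality on the joint data.
X_early = PCA(n_components=20).fit_transform(np.hstack([X_mri, X_clinical]))
clf_early = LogisticRegression(max_iter=1000).fit(X_early, y)

# (3) Intermediate fusion: reduce each modality separately, then concatenate the codes.
Z_mri = PCA(n_components=15).fit_transform(X_mri)
Z_clinical = PCA(n_components=5).fit_transform(X_clinical)
X_inter = np.hstack([Z_mri, Z_clinical])
clf_inter = LogisticRegression(max_iter=1000).fit(X_inter, y)
```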

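The supervised encoder (SE) itself is described only as a slight modification of a standard deep neural network. One plausible reading, sketched below under that assumption, is a feed-forward classifier trained on the diagnosis labels whose low-dimensional hidden layer is then reused as the learned representation for a modality (for example, before intermediate fusion). Layer sizes, the training loop, and all data here are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a "supervised encoder": a small classifier whose bottleneck
# hidden layer provides the reduced representation. Not the paper's exact model.
import torch
import torch.nn as nn

class SupervisedEncoder(nn.Module):
    def __init__(self, in_dim, code_dim=16, n_classes=3):
        super().__init__()
        # Encoder maps the high-dimensional modality features to a small code vector.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim), nn.ReLU(),
        )
        # Classification head supervises the code with diagnosis labels.
        self.head = nn.Linear(code_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), z  # logits for training, code for downstream fusion

# Train on labels (cross-entropy), then keep only encoder outputs as reduced features.
x = torch.randn(200, 500)            # hypothetical single-modality features
y = torch.randint(0, 3, (200,))      # hypothetical 3-class diagnosis labels
model = SupervisedEncoder(in_dim=500)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):
    opt.zero_grad()
    logits, _ = model(x)
    loss_fn(logits, y).backward()
    opt.step()
with torch.no_grad():
    _, codes = model(x)              # low-dimensional features to concatenate across modalities
```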

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b4d/11285614/105c0adeb2b0/frdem-03-1332928-g0001.jpg
