Digital detection of Alzheimer's disease using smiles and conversations with a chatbot.

Affiliations

Department of Neurology, Faculty of Medicine, Juntendo University, 2-1-1 Hongo, Bunkyo-Ku, Tokyo, 113-8421, Japan.

Department of Neurology, Faculty of Medicine, Juntendo University Koshigaya Hospital, Saitama, Japan.

Publication Information

Sci Rep. 2024 Nov 1;14(1):26309. doi: 10.1038/s41598-024-77220-0.

Abstract

In super-aged societies, dementia has become a critical issue, underscoring the urgent need for tools to assess cognitive status effectively in various sectors, including financial and business settings. Facial and speech features have been explored as cost-effective biomarkers of dementia, including Alzheimer's disease (AD). We aimed to establish an easy, automatic, and extensive screening tool for AD using a chatbot and artificial intelligence. Smile images and visual and auditory data from natural conversations with a chatbot, collected from 99 healthy controls (HCs) and 93 individuals with AD or mild cognitive impairment due to AD (PwA), were analyzed using machine learning. A subset of 8 facial and 21 sound features successfully distinguished PwA from HCs, with a high area under the receiver operating characteristic curve of 0.94 ± 0.05. Another subset of 8 facial and 20 sound features predicted the cognitive test scores, with a mean absolute error as low as 5.78 ± 0.08. These results were superior to those obtained from facial or auditory data alone, or from conventional image depiction tasks. Thus, by combining spontaneous sound and facial data obtained through conversations with a chatbot, the proposed model can be put to practical use in real-life scenarios.
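
The abstract describes a multimodal machine-learning pipeline: facial and sound features extracted from chatbot conversations are combined to (a) classify PwA versus HCs, evaluated by the area under the ROC curve, and (b) predict cognitive test scores, evaluated by mean absolute error. The sketch below illustrates that kind of setup in Python with scikit-learn. It is a minimal sketch only: the synthetic data, the random-forest estimators, and the cross-validation settings are assumptions for illustration, since the paper's exact features and models are not reproduced here.

```python
# Illustrative sketch only: the study combines facial and sound features with
# machine learning, but the specific features, estimators, and evaluation
# protocol below are stand-in assumptions, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins sized like the reported cohort and feature subsets:
# 99 HCs + 93 PwA, with 8 facial and 21 sound features per participant.
n_hc, n_pwa = 99, 93
facial = rng.normal(size=(n_hc + n_pwa, 8))
sound = rng.normal(size=(n_hc + n_pwa, 21))
X = np.hstack([facial, sound])          # combined multimodal feature vector
y = np.array([0] * n_hc + [1] * n_pwa)  # 0 = healthy control, 1 = PwA

cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Classification: distinguish PwA from HCs, scored by ROC AUC.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"ROC AUC: {auc.mean():.2f} +/- {auc.std():.2f}")

# Regression: predict a cognitive test score, scored by mean absolute error.
# The target here is a hypothetical MMSE-like score, purely for illustration.
scores = rng.normal(loc=24, scale=5, size=n_hc + n_pwa)
reg = RandomForestRegressor(n_estimators=200, random_state=0)
mae = -cross_val_score(reg, X, scores, cv=cv, scoring="neg_mean_absolute_error")
print(f"MAE: {mae.mean():.2f} +/- {mae.std():.2f}")
```

Concatenating the facial and sound feature vectors before fitting mirrors the paper's reported gain from combining modalities over using facial or auditory data alone; a real replication would substitute the actual extracted features and the authors' chosen models.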

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0fa0/11530557/b9a551b04d4e/41598_2024_77220_Fig1_HTML.jpg
