

Explainable AI in early autism detection: a literature review of interpretable machine learning approaches.

Author information

Agrawal Renuka, Agrawal Rucha

Affiliations

Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India.

Marathwada Mitra Mandal's Institute of Technology, Lohgaon, Pune, India.

Publication information

Discov Ment Health. 2025 Jul 1;5(1):98. doi: 10.1007/s44192-025-00232-3.

Abstract

Autism spectrum disorder (ASD) is a neurodevelopmental condition with a strong hereditary basis that typically emerges in childhood, making early diagnosis challenging. Early detection of ASD enables individualized treatment programs that can improve social interaction, cognitive development, and communication skills, thereby reducing the long-term difficulties associated with the disorder. It also allows timely therapeutic interventions that help children acquire critical skills and lessen symptom severity. Despite their remarkable predictive power, machine learning models are often less accepted in critical domains such as healthcare because of their opaque nature, which makes it difficult for practitioners to understand the decision-making process. Explainable AI (XAI) has emerged in response to the trust, accountability, and transparency concerns raised by opaque AI models, especially deep learning, and aims to make AI decision-making more understandable and reliable. The present study surveys the applications of XAI across diverse fields, including healthcare, emphasizing its significance in ensuring the ethical and dependable implementation of AI. It then provides a focused assessment of AI and XAI applications in ASD research, showing how XAI can offer vital insights into identifying, diagnosing, and treating autism.

