Alam Mohammad Shafiul, Rashid Muhammad Mahbubur, Jazlan Ahmad, Alahi Md Eshrat E, Kchaou Mohamed, Alharthi Khalid Ayed B
Department of Mechatronics Engineering, International Islamic University Malaysia, Kuala Lumpur 50728, Malaysia.
Department of Electrical and Electronic Engineering, Northern University Bangladesh (NUB), Dhaka 1230, Bangladesh.
Diagnostics (Basel). 2025 Jun 24;15(13):1601. doi: 10.3390/diagnostics15131601.
Artificial intelligence (AI) is revolutionising healthcare for people with disabilities, including those with autism spectrum disorder (ASD). This work addresses the challenges posed by inconsistent data from heterogeneous sources by developing and evaluating a robust deep ensemble learning system for the accurate and reliable classification of ASD from facial images. The system learns from two publicly available ASD image datasets (Kaggle and YTUIA), each with distinct demographics and image characteristics. Using a weighted ensemble strategy (FPPR), the proposed ASD-UANet ensemble combines the Xception and ResNet50V2 models to maximise each model's contribution. The methodology was tested extensively on groups stratified by age and gender, including a critical assessment on an unseen, real-time dataset (UIFID) to evaluate how well it generalises to new domains. The ASD-UANet ensemble consistently outperformed the individual transfer learning models (e.g., Xception alone achieved 83% accuracy on T1+T2), reaching 96.0% accuracy and an AUC of 0.990 on the combined-domain dataset (T1+T2). Notably, it also generalised well to the unseen real-time dataset (T3), achieving 90.6% accuracy and an AUC of 0.930, demonstrating robustness to new data distributions. These findings indicate significant potential for widespread, equitable, and clinically useful ASD screening with this cost-effective, non-invasive approach. By integrating heterogeneous data sources and combining deep learning models, this study lays the foundation for more precise diagnosis and greater inclusion of people with ASD.
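The sketch below illustrates, under stated assumptions, the kind of weighted two-model fusion the abstract describes: separate Xception and ResNet50V2 transfer-learning branches whose ASD probabilities are combined by a weighted average. The input resolution, classification head, frozen-backbone strategy, and the fusion weights W_XCEPTION / W_RESNET are illustrative assumptions; the paper's actual FPPR weighting scheme, preprocessing, and training details are not reproduced here.

# Minimal sketch of a weighted two-model ensemble for binary ASD classification
# from facial images, loosely following the abstract's description of ASD-UANet
# (Xception + ResNet50V2 with weighted fusion). Fusion weights and head layers
# are assumptions, not the paper's FPPR values.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception, ResNet50V2

IMG_SIZE = (224, 224)            # assumed input resolution
W_XCEPTION, W_RESNET = 0.6, 0.4  # assumed fusion weights (FPPR weights not given)

def build_branch(backbone_cls, preprocess, name):
    """Build one transfer-learning branch: frozen ImageNet backbone + sigmoid head."""
    inputs = layers.Input(shape=(*IMG_SIZE, 3))
    x = preprocess(inputs)
    backbone = backbone_cls(weights="imagenet", include_top=False, pooling="avg")
    backbone.trainable = False   # freezing the backbone is an assumption
    x = backbone(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return Model(inputs, outputs, name=name)

xception_branch = build_branch(
    Xception, tf.keras.applications.xception.preprocess_input, "xception_branch")
resnet_branch = build_branch(
    ResNet50V2, tf.keras.applications.resnet_v2.preprocess_input, "resnet50v2_branch")

def ensemble_predict(images):
    """Weighted average of the two branches' predicted ASD probabilities."""
    p_x = xception_branch.predict(images, verbose=0)
    p_r = resnet_branch.predict(images, verbose=0)
    return W_XCEPTION * p_x + W_RESNET * p_r

if __name__ == "__main__":
    # Dummy batch of face images stands in for the Kaggle/YTUIA data.
    batch = np.random.rand(4, *IMG_SIZE, 3).astype("float32") * 255.0
    probs = ensemble_predict(batch)
    print((probs >= 0.5).astype(int).ravel())  # 1 = ASD, 0 = non-ASD

In practice the two branches would first be fine-tuned on the ASD image data and the fusion weights derived from validation performance; fixed weights are used here only to keep the sketch self-contained.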