
Decoding student cognitive abilities: a comparative study of explainable AI algorithms in educational data mining.

Authors

Niu Tianyue, Liu Ting, Luo Yiming Taclis, Pang Patrick Cheong-Iao, Huang Shuaishuai, Xiang Ao

Affiliations

Xiamen Academy of Arts and Design, Fuzhou University, Xiamen, 361021, China.

Faculty of Applied Sciences, Macao Polytechnic University, Macao, 999078, Macao, China.

Publication

Sci Rep. 2025 Jul 24;15(1):26862. doi: 10.1038/s41598-025-12514-5.

Abstract

Exploring students' cognitive abilities has long been an important topic in education. This study employs data-driven artificial intelligence (AI) models, supported by explainability algorithms and propensity score matching (PSM) causal inference, to investigate the factors influencing students' cognitive abilities, and delves into the differences that arise when various explainability algorithms are used to analyze educational data mining models. Five AI models were used to model the educational data. Four interpretability algorithms (feature importance, Morris sensitivity analysis, SHAP, and LIME) were then used to interpret the results globally, and PSM causal tests were performed on the factors that affect students' cognitive abilities. The results reveal that, across all algorithms, self-perception and parental expectations have a certain impact on students' cognitive abilities. Our work also uncovers that different explainability algorithms exhibit varying preferences and inclinations when interpreting the model, as evidenced by discrepancies in the top ten features highlighted by each algorithm. Morris sensitivity analysis presents a more balanced perspective, SHAP and feature importance reflect the diversity of interpretable algorithms, and LIME offers a unique perspective. These observations highlight the practical contribution of interpretable AI algorithms in educational data mining, paving the way for more refined applications and deeper insights in future research.
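The divergence between attribution methods that the abstract reports can be illustrated with a minimal sketch. This is not the study's data or pipeline: it uses a synthetic dataset and compares two of the simpler global attribution methods (impurity-based feature importance versus model-agnostic permutation importance) to show how the same fitted model can yield different feature rankings depending on the explanation algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for survey features such as self-perception
# or parental expectations (purely illustrative, not the study's data).
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=4, random_state=0)
names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global attribution 1: impurity-based feature importance (model-specific).
imp_rank = np.argsort(model.feature_importances_)[::-1]

# Global attribution 2: permutation importance (model-agnostic).
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
perm_rank = np.argsort(perm.importances_mean)[::-1]

# The two rankings need not agree, which mirrors the paper's observation
# about discrepancies among the top features highlighted by each algorithm.
print("Impurity ranking:   ", [names[i] for i in imp_rank[:5]])
print("Permutation ranking:", [names[i] for i in perm_rank[:5]])
```

SHAP and LIME would slot into the same comparison loop, but each requires its own package; the point here is only that ranking disagreement is an expected property of explanation algorithms, not a defect of any single one.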


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ecee/12287387/0014b37f9b19/41598_2025_12514_Fig1_HTML.jpg
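The PSM causal-inference step mentioned in the abstract can also be sketched in outline. The snippet below is a hypothetical illustration on simulated data (a binary treatment standing in for something like "high parental expectations" and a continuous cognitive-ability proxy), not the authors' procedure: it estimates propensity scores with logistic regression, performs 1-to-1 nearest-neighbor matching, and computes the average treatment effect on the treated (ATT).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
covariates = rng.normal(size=(n, 4))                              # background covariates
treat = (covariates[:, 0] + rng.normal(size=n) > 0).astype(int)   # confounded binary treatment
outcome = 0.5 * treat + covariates[:, 0] + rng.normal(size=n)     # true simulated effect is 0.5

# Step 1: estimate propensity scores P(treatment | covariates).
ps = LogisticRegression().fit(covariates, treat).predict_proba(covariates)[:, 1]

# Step 2: match each treated unit to the control with the nearest propensity score.
treated = np.where(treat == 1)[0]
control = np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Step 3: ATT as the mean outcome difference across matched pairs.
att = (outcome[treated] - outcome[matched_control]).mean()
print(f"Estimated ATT: {att:.2f}")
```

Because treatment assignment here depends on the first covariate, a naive difference in means would be biased upward; matching on the propensity score removes most of that bias, which is exactly the role PSM plays as a robustness check on the explainability results.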
