
Artificial intelligence in clinical decision support and the prediction of adverse events.

Authors

Oei S P, Bakkes T H G F, Mischi M, Bouwman R A, van Sloun R J G, Turco S

Affiliations

Biomedical Diagnostics Lab, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands.

Anesthesiology, Catharina Hospital, Eindhoven, Netherlands.

Publication

Front Digit Health. 2025 May 30;7:1403047. doi: 10.3389/fdgth.2025.1403047. eCollection 2025.

Abstract

This review focuses on integrating artificial intelligence (AI) into healthcare, particularly for predicting adverse events, which holds potential in clinical decision support (CDS) but also presents significant challenges. Biases in data acquisition, such as population shifts and data scarcity, threaten the generalizability of AI-based CDS algorithms across different healthcare centers. Techniques like resampling and data augmentation are crucial for addressing biases, along with external validation to mitigate population bias. Moreover, biases can emerge during AI training, leading to underfitting or overfitting, necessitating regularization techniques for balancing model complexity and generalizability. The lack of interpretability in AI models poses trust and transparency issues, advocating for transparent algorithms and requiring rigorous testing on specific hospital populations before implementation. Additionally, emphasizing human judgment alongside AI integration is essential to mitigate the risks of deskilling healthcare practitioners. Ongoing evaluation processes and adjustments to regulatory frameworks are crucial for ensuring the ethical, safe, and effective use of AI in CDS, highlighting the need for meticulous attention to data quality, preprocessing, model training, interpretability, and ethical considerations.
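The abstract names two of the mitigation techniques in passing: resampling to counter class imbalance (adverse events are typically rare relative to normal cases) and regularization to balance model complexity against generalizability. As an illustrative sketch only (not from the review itself; the toy data, function names, and hyperparameters are invented for demonstration), the two ideas can be combined like this:

```python
import math
import random

# Toy dataset: adverse events (label 1) are rare relative to normal
# cases (label 0), mimicking the class imbalance typical of clinical
# adverse-event prediction. Features and sizes are made up.
random.seed(0)
majority = [([random.gauss(0.0, 1.0)], 0) for _ in range(90)]
minority = [([random.gauss(2.0, 1.0)], 1) for _ in range(10)]

def oversample(samples, target_size):
    """Random oversampling: duplicate minority examples until the class
    reaches target_size -- a simple form of resampling to reduce bias."""
    return samples + [random.choice(samples) for _ in range(target_size - len(samples))]

balanced = majority + oversample(minority, len(majority))

def fit_logistic_l2(data, lam=0.1, lr=0.1, epochs=200):
    """Logistic regression trained by gradient descent with an L2 penalty
    (weight decay), a common regularizer against overfitting."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        gw, gb = 0.0, 0.0
        for (x,), y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        # The L2 term shrinks the weight toward zero, trading a little
        # training fit for better generalizability across centers.
        w -= lr * (gw / n + lam * w)
        b -= lr * (gb / n)
    return w, b

w, b = fit_logistic_l2(balanced)
print(f"weight={w:.2f} bias={b:.2f}")
```

In practice one would reach for established tooling (e.g. class weighting or dedicated resampling libraries) rather than hand-rolled gradient descent, but the sketch shows how the two bias-mitigation steps the abstract mentions slot together in a training pipeline.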

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5341/12162700/34259ae5239c/fdgth-07-1403047-g001.jpg
