

XAI-XGBoost: an innovative explainable intrusion detection approach for securing internet of medical things systems.

Author Information

Hosain Yousif, Çakmak Muhammet

Affiliations

Department of Computer Engineering, Karabuk University, Karabuk, 78050, Turkey.

Faculty of Engineering and Architecture, Sinop University, Sinop, Turkey.

Publication Information

Sci Rep. 2025 Jul 1;15(1):22278. doi: 10.1038/s41598-025-07790-0.


DOI: 10.1038/s41598-025-07790-0
PMID: 40594692
Abstract

The Internet of Medical Things (IoMT) has transformed healthcare delivery but faces critical challenges, including cybersecurity threats that endanger patient safety and data integrity. Intrusion Detection Systems (IDS) are essential for protecting IoMT networks, yet conventional models often struggle with class imbalance, lack interpretability, and are unsuitable for real-world deployment in sensitive healthcare settings. This study aims to develop an innovative, explainable IDS framework tailored for IoMT systems that ensures both high detection accuracy and model transparency. The proposed approach integrates a hybrid random sampling technique to mitigate class imbalance, Recursive Feature Elimination (RFE) for feature selection, and an optimized XGBoost classifier for robust attack detection. Explainable AI techniques, namely SHAP and LIME, are employed to provide global and local insights into model predictions, enhancing interpretability and trustworthiness. The system was evaluated using the WUSTL-EHMS-2020 dataset, which contains network flow and biometric data, achieving outstanding performance: 99.22% accuracy, 98.35% precision, 99.91% recall, 99.12% F1-score, and 100% ROC-AUC. The proposed framework outperforms several traditional Machine Learning (ML) models and state-of-the-art IDS approaches, demonstrating its robustness and suitability for practical healthcare environments. By integrating advanced ML with explainable AI, this work addresses the critical need for secure, interpretable, and high-performing IDS solutions in IoMT systems. The study concludes that explainability is not an optional feature but a fundamental requirement in healthcare cybersecurity, and the proposed framework represents a significant step towards safer and more accountable AI-driven security solutions for the IoMT ecosystem.


Similar Articles

[1]
XAI-XGBoost: an innovative explainable intrusion detection approach for securing internet of medical things systems.

Sci Rep. 2025 Jul 1

[2]
Enhancing IDS for the IoMT based on advanced features selection and deep learning methods to increase the model trustworthiness.

PLoS One. 2025 Jul 2

[3]
Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.

Sci Rep. 2025 Jul 1

[4]
An explainable federated blockchain framework with privacy-preserving AI optimization for securing healthcare data.

Sci Rep. 2025 Jul 1

[5]
A deep dive into artificial intelligence with enhanced optimization-based security breach detection in internet of health things enabled smart city environment.

Sci Rep. 2025 Jul 2

[6]
Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights.

Comput Methods Programs Biomed. 2025 Jun 21

[7]
Interpretable Machine Learning for Serum-Based Metabolomics in Breast Cancer Diagnostics: Insights from Multi-Objective Feature Selection-Driven LightGBM-SHAP Models.

Medicina (Kaunas). 2025 Jun 19

[8]
Enhancing remote patient monitoring with AI-driven IoMT and cloud computing technologies.

Sci Rep. 2025 Jul 5

[9]
Supervised Machine Learning Models for Predicting Sepsis-Associated Liver Injury in Patients With Sepsis: Development and Validation Study Based on a Multicenter Cohort Study.

J Med Internet Res. 2025 May 26

[10]
An explainable AI-based hybrid machine learning model for interpretability and enhanced crop yield prediction.

MethodsX. 2025 Jun 17

