Saied Mohamed, Guirguis Shawkat
Institute of Graduate Studies & Research, Alexandria University, 832, Elhorrya Road, Alexandria, 21526, Egypt.
Sci Rep. 2025 Mar 4;15(1):7632. doi: 10.1038/s41598-025-90420-6.
The proliferation of internet of things (IoT) devices has led to unprecedented connectivity and convenience. However, this increased interconnectivity has also introduced significant security challenges, particularly concerning the detection and mitigation of botnet attacks. Detecting botnet activity in IoT environments is challenging due to the diverse nature of IoT devices and the large volumes of data they generate. Approaches based on artificial intelligence and machine learning have shown great potential for IoT botnet detection. However, as these approaches advance and grow more complex, new questions arise about how such models reach their decisions. Integrating an explainability layer into these models can increase their trustworthiness and transparency. This paper proposes the use of explainable artificial intelligence (XAI) techniques to improve the interpretability and transparency of the botnet detection process. It analyzes the impact of incorporating XAI into botnet detection, including enhanced model interpretability, trustworthiness, and the potential for early detection of emerging botnet attack patterns. Three XAI-based techniques are presented: rule extraction and distillation, local interpretable model-agnostic explanations (LIME), and Shapley additive explanations (SHAP). The experimental results demonstrate the effectiveness of the proposed approach, providing valuable insights into the inner workings of the detection model and facilitating the development of robust defense mechanisms against IoT botnet attacks. The findings of this study contribute to the growing body of research on XAI in cybersecurity and offer practical guidance for securing IoT ecosystems against botnet threats.
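As a rough illustration of how LIME and SHAP can be attached to a black-box botnet detector of the kind the abstract describes, the Python sketch below trains a stand-in random-forest classifier on synthetic flow features and queries both explainers. The dataset, feature names, and model choice are assumptions made for illustration only and do not reflect the paper's actual pipeline or results.

```python
# Illustrative sketch (not the authors' code): querying SHAP and LIME
# explanations from a generic IoT botnet-traffic classifier. The random
# forest and the synthetic features stand in for the paper's detector/data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for flow-level IoT traffic features (benign vs. botnet).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
feature_names = [f"flow_feat_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box detector whose decisions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: additive per-feature attributions for each prediction of a tree model.
tree_explainer = shap.TreeExplainer(model)
shap_values = tree_explainer.shap_values(X_test)  # feature attributions per sample/class

# LIME: local surrogate explanation for one flagged flow.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["benign", "botnet"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                           num_features=5)
print(lime_exp.as_list())  # top features driving this single prediction
```

In this kind of setup, SHAP is typically used for global and per-sample feature attribution, while LIME gives a quick local surrogate view of a single flagged flow; how the paper combines these with rule extraction and distillation is described in the article itself.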