Mohale Vincent Zibi, Obagbuwa Ibidun Christiana
Faculty of Natural and Applied Sciences, Department of Computer Science and Information Technology, Sol Plaatje University, Kimberley, South Africa.
Front Artif Intell. 2025 Jan 28;8:1526221. doi: 10.3389/frai.2025.1526221. eCollection 2025.
The rise of sophisticated cyber threats has spurred advancements in Intrusion Detection Systems (IDS), which are crucial for identifying and mitigating security breaches in real time. Traditional IDS often rely on complex machine learning algorithms that, despite their high accuracy, lack transparency, creating a "black box" effect that can hinder analysts' understanding of their decision-making processes. Explainable Artificial Intelligence (XAI) offers a promising solution by providing interpretability and transparency, enabling security professionals to better understand, trust, and optimize IDS models. This paper presents a systematic review of the integration of XAI into IDS, focusing on enhancing transparency and interpretability in cybersecurity. Through a comprehensive analysis of recent studies, the review identifies commonly used XAI techniques, evaluates their effectiveness within IDS frameworks, and examines their benefits and limitations. Findings indicate that rule-based and tree-based XAI models are preferred for their interpretability, though the trade-off with detection accuracy remains a challenge. Furthermore, the review highlights critical gaps in standardization and scalability, emphasizing the need for hybrid models and real-time explainability. The paper concludes with recommendations for future research, including XAI techniques tailored for IDS, standardized evaluation metrics, and ethical frameworks that prioritize security and transparency. This review aims to inform researchers and practitioners about current trends and future opportunities in leveraging XAI to enhance IDS effectiveness, fostering a more transparent and resilient cybersecurity landscape.
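To make the finding on rule- and tree-based interpretability concrete, the following minimal sketch (illustrative only, not drawn from the reviewed studies; the feature names, thresholds, and synthetic labels are all invented for demonstration) trains a depth-limited decision tree on synthetic network-flow features and prints its learned rules, showing how every alert maps to an explicit, human-readable rule path:

    # Illustrative sketch (not from the paper): a shallow decision tree on
    # synthetic "network flow" features, with its learned rules printed verbatim.
    # Feature names and thresholds are hypothetical, for demonstration only.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    feature_names = ["duration", "bytes_sent", "failed_logins"]

    # Synthetic flows: label a flow as an attack when failed logins are high
    # or an unusually large volume of bytes is sent.
    X = rng.random((500, 3)) * [60, 1e6, 10]
    y = ((X[:, 2] > 6) | (X[:, 1] > 8e5)).astype(int)

    # A depth-limited tree stays human-readable: the entire rule set fits on
    # a screen, so each prediction is traceable to concrete threshold tests.
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(clf, feature_names=feature_names))

Because the full rule set can be inspected directly, an analyst can trace any prediction to specific threshold comparisons; this auditability is the transparency property the review associates with rule- and tree-based explainers, and the depth limit illustrates the accompanying trade-off against detection accuracy.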