Hong Chaerim, Oh Taeyeon
Seoul AI School, aSSIST University, Seoul, 03767, South Korea.
Sci Rep. 2025 Jul 2;15(1):22768. doi: 10.1038/s41598-025-05182-y.
With the development of AI technology, cyber security threats that exploit it are increasing rapidly, making it urgent to build effective security threat detection systems in response. Research on AI-based security tools for detecting and responding to such threats is active. This study explores how heterogeneous data, such as signs of security attacks in security threat news and weaknesses in source code, can be analyzed in an integrated manner in both ML-model and LLM environments. We applied scaling and normalization techniques to the post-news data to reduce bias, and we combined syntax analysis, semantic analysis, and data-flow information into an integrated analysis of the source code to improve detection performance. By systematizing the data labeling and data formats, the approach is designed to apply to both ML models and LLMs. The results showed that the constructed learning models performed well in both text analysis and source code analysis. On the post-news data, the ML-based models XGBoost, SVM, and Random Forest all achieved F1-scores of 0.96 to 0.97, while the LLM-based models ST5-xxl, XLNet, BERT, CodeBERT, and GraphCodeBERT all achieved 0.97. On the C/C++ weakness-code detection data, the LLM-series models achieved 0.9999 (ST5-xxl), 0.9999 (XLNet), 0.9037 (BERT), 0.9999 (CodeBERT), and 0.9999 (GraphCodeBERT). Among the ML-based models with the TF-IDF embedding method, XGBoost achieved an accuracy of 0.9999, SVM 0.9699, and Random Forest 0.9493; all three models performed better with TF-IDF embeddings than with Word2Vec embeddings. This study proposed an integrated ML and LLM framework that can effectively detect source code vulnerabilities using abstract syntax trees (ASTs).
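The TF-IDF embedding step that the ML-based models relied on can be illustrated with a minimal pure-Python sketch (the corpus, tokenizer, and weighting details below are illustrative assumptions, not the paper's actual pipeline, which would typically use a library vectorizer over the full post-news dataset):

```python
# Minimal TF-IDF sketch illustrating the embedding step used before the
# ML classifiers (XGBoost, SVM, Random Forest). Pure stdlib; the corpus
# and whitespace tokenization are illustrative assumptions.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: weight} mapping per document."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        # Weight = term frequency * inverse document frequency.
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

docs = [
    "buffer overflow reported in legacy parser",
    "sql injection attack detected in web service",
    "buffer overflow patched in parser update",
]
vecs = tfidf_vectors(docs)
# "in" occurs in every document, so its idf is log(3/3) = 0 and it
# contributes nothing; rarer terms like "attack" get positive weight.
```

This down-weighting of ubiquitous terms is one plausible reason TF-IDF outperformed Word2Vec here: threat-news posts are short, and discriminative rare terms matter more than dense semantic similarity.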
This framework overcame the limitations of existing static analysis tools and improved detection accuracy by considering the structural characteristics and the semantic context of the code simultaneously. In particular, combining AST-based feature extraction with the natural language understanding capabilities of LLMs improved generalization to new types of vulnerabilities and significantly reduced false positives.
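The core idea of AST-based feature extraction is to turn code structure, rather than raw text, into model input. The paper targets C/C++, but the idea can be sketched with Python's stdlib `ast` module standing in as an assumed parser; the node-type-count fingerprint below is an illustrative simplification, not the paper's feature set:

```python
# Illustrative AST-based feature extraction. The study analyzes C/C++;
# Python's stdlib ast module is used here only to demonstrate the idea
# of converting parsed code structure into countable model features.
import ast
from collections import Counter

def ast_features(source: str) -> Counter:
    """Count AST node types: a simple structural fingerprint of the code."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

snippet = """
def copy_buf(dst, src, n):
    for i in range(n):
        dst[i] = src[i]
"""
feats = ast_features(snippet)
# Loop-plus-indexed-write patterns like this (one For node, two Subscript
# nodes) are the kind of structure a detector can weigh as a feature,
# independent of identifier names.
```

In practice such structural features would be combined with token embeddings of the same code, which is the complementarity the framework exploits: the AST supplies structure, the LLM supplies semantic context.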