A lightweight transformer-based multi-task learning model with dynamic weight allocation for improved vulnerability prediction.

Authors

Liu Lan, Hui Zhanfa, Chen Guiming, Cai Tingfeng, Zhou Chiyu

Affiliation

School of Electronic and Information Engineering, Guangdong Polytechnic Normal University, Guangzhou, 510655, Guangdong, China.

Publication

Sci Rep. 2025 Aug 1;15(1):28176. doi: 10.1038/s41598-025-10650-6.

DOI:10.1038/s41598-025-10650-6
PMID:40750962
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12317127/
Abstract

Accurate vulnerability prediction is crucial for identifying potential security risks in software, especially in the context of imbalanced and complex real-world datasets. Traditional methods, such as single-task learning and ensemble approaches, often struggle with these challenges, particularly in detecting rare but critical vulnerabilities. To address this, we propose the MTLPT: Multi-Task Learning with Position Encoding and Lightweight Transformer for Vulnerability Prediction, a novel multi-task learning framework that leverages custom lightweight Transformer blocks and position encoding layers to effectively capture long-range dependencies and complex patterns in source code. The MTLPT model improves sensitivity to rare vulnerabilities and incorporates a dynamic weight loss function to adjust for imbalanced data. Our experiments on real-world vulnerability datasets demonstrate that MTLPT outperforms traditional methods in key performance metrics such as recall, F1-score, AUC, and MCC. Ablation studies further validate the contributions of the lightweight Transformer blocks, position encoding layers, and dynamic weight loss function, confirming their role in enhancing the model's predictive accuracy and efficiency.
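The abstract says MTLPT uses a dynamic weight loss function to compensate for class imbalance, but does not give the formula. A common approach this likely resembles — sketched here as an assumption, not the authors' implementation — is to weight each class's loss contribution inversely to its frequency, so rare vulnerability classes are up-weighted:

```python
import math
from collections import Counter

def dynamic_class_weights(labels):
    """Inverse-frequency class weights: weight_c = N / (n_classes * count_c).

    Rare classes (e.g. seldom-seen vulnerability types) receive weights
    above 1, common classes receive weights below 1.
    """
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {c: total / (n_classes * counts[c]) for c in counts}

def weighted_cross_entropy(probs, labels, weights):
    """Class-weighted negative log-likelihood averaged over a batch.

    probs:  list of per-class probability vectors (one per sample)
    labels: list of integer class labels
    """
    loss = 0.0
    for p, y in zip(probs, labels):
        # Clamp to avoid log(0) on confident wrong predictions.
        loss += -weights[y] * math.log(max(p[y], 1e-12))
    return loss / len(labels)
```

With a 9:1 imbalanced batch, `dynamic_class_weights` assigns the minority class a weight of 5.0 versus about 0.56 for the majority class, so misclassifying a rare vulnerability costs roughly nine times more than misclassifying a common sample. In the paper the weights are described as dynamic, so they would presumably be recomputed per batch or per epoch rather than fixed once.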


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5823/12317127/22e3b1c9a72d/41598_2025_10650_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5823/12317127/99da2f67d0bc/41598_2025_10650_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5823/12317127/2b74ffad65fd/41598_2025_10650_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5823/12317127/95fc3f0d6ade/41598_2025_10650_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5823/12317127/42cb1a8ab471/41598_2025_10650_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5823/12317127/c651f64429c8/41598_2025_10650_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5823/12317127/f744e6a36490/41598_2025_10650_Fig5_HTML.jpg

Similar Articles

1
A lightweight transformer-based multi-task learning model with dynamic weight allocation for improved vulnerability prediction.
Sci Rep. 2025 Aug 1;15(1):28176. doi: 10.1038/s41598-025-10650-6.
2
Trajectory-Ordered Objectives for Self-Supervised Representation Learning of Temporal Healthcare Data Using Transformers: Model Development and Evaluation Study.
JMIR Med Inform. 2025 Jun 4;13:e68138. doi: 10.2196/68138.
3
Are Current Survival Prediction Tools Useful When Treating Subsequent Skeletal-related Events From Bone Metastases?
Clin Orthop Relat Res. 2024 Sep 1;482(9):1710-1721. doi: 10.1097/CORR.0000000000003030. Epub 2024 Mar 22.
4
A deep learning model for predicting systemic lupus erythematosus-associated epitopes.
BMC Med Inform Decis Mak. 2025 Jul 1;25(1):230. doi: 10.1186/s12911-025-03056-x.
5
Long-term care plan recommendation for older adults with disabilities: a bipartite graph transformer and self-supervised approach.
J Am Med Inform Assoc. 2025 Apr 1;32(4):689-701. doi: 10.1093/jamia/ocae327.
6
Video swin-CLSTM transformer: Enhancing human action recognition with optical flow and long-term dependencies.
PLoS One. 2025 Jul 7;20(7):e0327717. doi: 10.1371/journal.pone.0327717. eCollection 2025.
7
A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases.
Br J Dermatol. 2024 Jul 16;191(2):261-266. doi: 10.1093/bjd/ljae142.
8
Short-Term Memory Impairment
9
Multi-level channel-spatial attention and light-weight scale-fusion network (MCSLF-Net): multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.
Quant Imaging Med Surg. 2025 Jul 1;15(7):6301-6325. doi: 10.21037/qims-2025-354. Epub 2025 Jun 30.
10
Comparative analysis of convolutional neural networks and transformer architectures for breast cancer histopathological image classification.
Front Med (Lausanne). 2025 Jun 17;12:1606336. doi: 10.3389/fmed.2025.1606336. eCollection 2025.

References Cited in This Article

1
Asymptotic Properties of Matthews Correlation Coefficient.
Stat Med. 2025 Jan 15;44(1-2):e10303. doi: 10.1002/sim.10303. Epub 2024 Dec 16.
2
A Novel Smart Contract Vulnerability Detection Method Based on Information Graph and Ensemble Learning.
Sensors (Basel). 2022 May 8;22(9):3581. doi: 10.3390/s22093581.
3
Smart Contract Vulnerability Detection Model Based on Multi-Task Learning.
Sensors (Basel). 2022 Feb 25;22(5):1829. doi: 10.3390/s22051829.
4
On the Vulnerability of CNN Classifiers in EEG-Based BCIs.
IEEE Trans Neural Syst Rehabil Eng. 2019 May;27(5):814-825. doi: 10.1109/TNSRE.2019.2908955. Epub 2019 Apr 2.