Suppr 超能文献

A lightweight Deeplab V3+ network integrating deep transitive transfer learning and attention mechanism for burned area identification.

Authors

Liu Lizhi, Guo Ying, Chen Erxue, Li Zengyuan, Li Yu, Liu Yang, Zhang Qiang, Wang Bing

Affiliations

College of Horticulture and Forestry, Tarim University, Alar, Xinjiang, 843300, China.

Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing, 100091, China.

Publication

Sci Rep. 2025 May 7;15(1):15969. doi: 10.1038/s41598-024-66060-7.

DOI:10.1038/s41598-024-66060-7
PMID:40335537
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12059163/
Abstract

Complete and accurate burned area map data are needed to document spatial and temporal patterns of fires, to quantify their drivers, and to assess the impacts on human and natural systems. To identify burned areas accurately and efficiently from remote sensing images, a lightweight deep learning model is proposed based on Deeplab V3+, which combines an attention mechanism with a deep transitive transfer learning (DTTL) strategy. The lightweight MobileNet V2 network integrated with the Convolutional Block Attention Module (CBAM) is designed as the backbone network to replace the traditional, time-consuming Xception backbone of Deeplab V3+. The attention mechanism is introduced to enhance the recognition ability of the proposed model, and the deep transitive transfer learning strategy is adopted to address the incorrect identification of burned areas and the discontinuous edge details caused by insufficient sample size during extraction. In the DTTL process, the improved Deeplab V3+ network was first pre-trained on ImageNet. Subsequently, the WorldView-2 and Sentinel-2 datasets were used to train the proposed network starting from the ImageNet pre-trained weights. Experiments on extracting burned areas from remote sensing images with the trained model show that the proposed methodology improves extraction accuracy, with an overall accuracy (OA) of 92.97% and a Kappa of 0.819, higher than the comparative methods, while also reducing training time. We applied this methodology to identify the burned area in the Western Attica region of Greece, and a satisfactory result was achieved, with an OA of 93.58% and a Kappa of 0.8265. This study demonstrates the effectiveness of the improved Deeplab V3+ in identifying forest burned areas, which can provide valuable information for forest protection and monitoring.
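The abstract's key architectural change is inserting CBAM into the MobileNet V2 backbone. As a rough illustration of what the channel-attention half of a CBAM-style block computes (average- and max-pooled channel descriptors pushed through a shared MLP, summed, squashed by a sigmoid, and used to rescale each channel), here is a pure-Python sketch; the weight matrices, feature sizes, and reduction ratio below are illustrative assumptions, not the authors' implementation, and the paper's module also includes a spatial-attention branch not shown here.

```python
import math

def channel_attention(feature, w1, w2):
    """CBAM-style channel attention on a feature map given as [C][H][W]
    nested lists. w1 ([hidden][C]) and w2 ([C][hidden]) form the shared
    two-layer MLP applied to both pooled descriptors."""
    C = len(feature)

    def mlp(vec):
        # shared MLP: ReLU after the first layer, linear second layer
        hidden = [max(0.0, sum(w1[h][c] * vec[c] for c in range(C)))
                  for h in range(len(w1))]
        return [sum(w2[c][h] * hidden[h] for h in range(len(w1)))
                for c in range(C)]

    # global average and max pooling over each channel's spatial dims
    avg = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature]
    mx = [max(max(row) for row in ch) for ch in feature]
    a, m = mlp(avg), mlp(mx)
    # sigmoid of the summed MLP outputs gives a per-channel gate in (0, 1)
    gate = [1.0 / (1.0 + math.exp(-(a[c] + m[c]))) for c in range(C)]
    return [[[gate[c] * v for v in row] for row in feature[c]] for c in range(C)]

# tiny illustrative input: 2 channels of 2x2 features, one hidden unit
feat = [[[1.0, 2.0], [3.0, 4.0]],
        [[0.5, 0.5], [0.5, 0.5]]]
out = channel_attention(feat, w1=[[0.1, 0.1]], w2=[[1.0], [1.0]])
```

Each output channel is the input channel multiplied by its learned gate, so informative channels can be emphasized and uninformative ones suppressed before the decoder sees them.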

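The reported OA and Kappa are standard map-accuracy measures derived from a confusion matrix. A minimal sketch for the binary (burned / unburned) case follows; the counts are made up for illustration and are not the paper's data:

```python
def oa_and_kappa(tp, fp, fn, tn):
    """Overall accuracy and Cohen's kappa from a binary confusion matrix."""
    n = tp + fp + fn + tn
    oa = (tp + tn) / n  # observed agreement
    # expected chance agreement, from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# hypothetical counts: 420 true burned, 30 false burned,
# 25 missed burned, 525 true unburned
oa, kappa = oa_and_kappa(tp=420, fp=30, fn=25, tn=525)
print(round(oa, 4), round(kappa, 4))  # prints: 0.945 0.8888
```

Kappa discounts agreement expected by chance, which is why it is reported alongside OA when one class (here, unburned land) dominates the scene.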

[Figures 1-14: full-resolution images available in the PMC article, https://pmc.ncbi.nlm.nih.gov/articles/PMC12059163/]

Similar Articles

1. A lightweight Deeplab V3+ network integrating deep transitive transfer learning and attention mechanism for burned area identification.
   Sci Rep. 2025 May 7;15(1):15969. doi: 10.1038/s41598-024-66060-7.
2. Deep-agriNet: a lightweight attention-based encoder-decoder framework for crop identification using multispectral images.
   Front Plant Sci. 2023 Apr 18;14:1124939. doi: 10.3389/fpls.2023.1124939. eCollection 2023.
3. An Improved DeepLab v3+ Deep Learning Network Applied to the Segmentation of Grape Leaf Black Rot Spots.
   Front Plant Sci. 2022 Feb 15;13:795410. doi: 10.3389/fpls.2022.795410. eCollection 2022.
4. Environment Understanding Algorithm for Substation Inspection Robot Based on Improved DeepLab V3.
   J Imaging. 2022 Sep 21;8(10):257. doi: 10.3390/jimaging8100257.
5. Fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer on MRI using deep learning.
   Eur Radiol. 2023 Sep;33(9):6548-6556. doi: 10.1007/s00330-023-09827-2. Epub 2023 Jun 20.
6. MOB-CBAM: A dual-channel attention-based deep learning generalizable model for breast cancer molecular subtypes prediction using mammograms.
   Comput Methods Programs Biomed. 2024 May;248:108121. doi: 10.1016/j.cmpb.2024.108121. Epub 2024 Mar 10.
7. A precise model for skin cancer diagnosis using hybrid U-Net and improved MobileNet-V3 with hyperparameters optimization.
   Sci Rep. 2024 Feb 21;14(1):4299. doi: 10.1038/s41598-024-54212-8.
8. Tumor Segmentation in Intraoperative Fluorescence Images Based on Transfer Learning and Convolutional Neural Networks.
   Surg Innov. 2024 Jun;31(3):291-306. doi: 10.1177/15533506241246576. Epub 2024 Apr 15.
9. Lightweight Deep Neural Network Method for Water Body Extraction from High-Resolution Remote Sensing Images with Multisensors.
   Sensors (Basel). 2021 Nov 7;21(21):7397. doi: 10.3390/s21217397.
10. Medical image recognition and segmentation of pathological slices of gastric cancer based on Deeplab v3+ neural network.
   Comput Methods Programs Biomed. 2021 Aug;207:106210. doi: 10.1016/j.cmpb.2021.106210. Epub 2021 May 29.
