

Affiliated Fusion Conditional Random Field for Urban UAV Image Semantic Segmentation.

Authors

Kong Yingying, Zhang Bowen, Yan Biyuan, Liu Yanjuan, Leung Henry, Peng Xiangyang

Affiliations

Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China.

Nanjing Research Institute of Electronics Engineering, Nanjing 210007, China.

Publication

Sensors (Basel). 2020 Feb 12;20(4):993. doi: 10.3390/s20040993.

DOI: 10.3390/s20040993
PMID: 32059557
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7070791/
Abstract

Unmanned aerial vehicles (UAVs) have made significant progress in the last decade and are now applied in many fields, owing to advances in aerial image processing and their ability to reach areas that people cannot. Still, as the basis of further applications such as object tracking and terrain classification, semantic image segmentation remains one of the most difficult challenges in computer vision. In this paper, we propose a method for semantic segmentation of urban UAV images that exploits the geographical information of the region of interest in the form of a digital surface model (DSM). We introduce an Affiliated Fusion Conditional Random Field (AF-CRF), which combines the information of visual images and the DSM, together with a multi-scale attention strategy, to improve segmentation results. Experiments show that the proposed structure outperforms state-of-the-art networks on multiple metrics.
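The abstract's central idea — letting elevation agreement from the DSM reinforce colour similarity when a CRF smooths labels between pixels — can be illustrated with a toy mean-field update over per-pixel label distributions. This is not the paper's AF-CRF formulation; every name and parameter below (`mean_field_step`, `w_rgb`, `w_dsm`, the Gaussian kernels) is an illustrative assumption, and the O(N²) loop is for clarity, not efficiency.

```python
import numpy as np

def gaussian_kernel(a, b, sigma):
    # Gaussian similarity between two feature vectors.
    d2 = float(np.sum((a - b) ** 2))
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mean_field_step(unary, rgb, dsm, sigma_rgb=0.5, sigma_dsm=1.0,
                    w_rgb=1.0, w_dsm=1.0):
    """One naive mean-field update over per-pixel label distributions.
    Pairwise affinity is a weighted sum of a colour kernel (rgb) and an
    elevation kernel (dsm), so pixels that look alike AND sit at the
    same height pull each other's labels together."""
    n, k = unary.shape
    q = np.exp(-unary)                      # initial Q from unary energies
    q /= q.sum(axis=1, keepdims=True)
    msg = np.zeros_like(q)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            aff = (w_rgb * gaussian_kernel(rgb[i], rgb[j], sigma_rgb)
                   + w_dsm * gaussian_kernel(dsm[i], dsm[j], sigma_dsm))
            msg[i] += aff * q[j]            # messages from similar pixels
    q_new = np.exp(-unary + msg)            # reward label agreement
    q_new /= q_new.sum(axis=1, keepdims=True)
    return q_new

# Three "pixels": two confidently class 0, one ambiguous; all three share
# the same colour and the same DSM elevation.
unary = np.array([[0.0, 3.0],
                  [0.0, 3.0],
                  [1.0, 1.0]])
rgb = np.array([[0.1, 0.1, 0.1],
                [0.1, 0.1, 0.1],
                [0.1, 0.1, 0.1]])
dsm = np.array([[10.0], [10.0], [10.0]])
q = mean_field_step(unary, rgb, dsm)
```

After one step, the ambiguous third pixel is pulled strongly toward class 0 by its lookalike, same-elevation neighbours; raising `w_dsm` relative to `w_rgb` is the knob that would make elevation dominate the fusion.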

Figures
Figures 1–12 of the article (sensors-20-00993-g001 through g012) are available via PMC7070791.

Similar Articles

1. Affiliated Fusion Conditional Random Field for Urban UAV Image Semantic Segmentation.
   Sensors (Basel). 2020 Feb 12;20(4):993. doi: 10.3390/s20040993.
2. A Novel Network Framework on Simultaneous Road Segmentation and Vehicle Detection for UAV Aerial Traffic Images.
   Sensors (Basel). 2024 Jun 3;24(11):3606. doi: 10.3390/s24113606.
3. Using Deep Learning and Low-Cost RGB and Thermal Cameras to Detect Pedestrians in Aerial Images Captured by Multirotor UAV.
   Sensors (Basel). 2018 Jul 12;18(7):2244. doi: 10.3390/s18072244.
4. LOANet: a lightweight network using object attention for extracting buildings and roads from UAV aerial remote sensing images.
   PeerJ Comput Sci. 2023 Jul 11;9:e1467. doi: 10.7717/peerj-cs.1467. eCollection 2023.
5. A Real-Time Semantic Segmentation Method Based on STDC-CT for Recognizing UAV Emergency Landing Zones.
   Sensors (Basel). 2023 Jul 19;23(14):6514. doi: 10.3390/s23146514.
6. Deep Convolutional Neural Network for Flood Extent Mapping Using Unmanned Aerial Vehicles Data.
   Sensors (Basel). 2019 Mar 27;19(7):1486. doi: 10.3390/s19071486.
7. Optimal segmentation scale selection and evaluation of cultivated land objects based on high-resolution remote sensing images with spectral and texture features.
   Environ Sci Pollut Res Int. 2021 Jun;28(21):27067-27083. doi: 10.1007/s11356-021-12552-2. Epub 2021 Jan 27.
8. Applying Fully Convolutional Architectures for Semantic Segmentation of a Single Tree Species in Urban Environment on High Resolution UAV Optical Imagery.
   Sensors (Basel). 2020 Jan 20;20(2):563. doi: 10.3390/s20020563.
9. Identifying the Branch of Kiwifruit Based on Unmanned Aerial Vehicle (UAV) Images Using Deep Learning Method.
   Sensors (Basel). 2021 Jun 29;21(13):4442. doi: 10.3390/s21134442.
10. An Effective Image Denoising Method for UAV Images via Improved Generative Adversarial Networks.
    Sensors (Basel). 2018 Jun 21;18(7):1985. doi: 10.3390/s18071985.

References Cited in This Article

1. Fully Convolutional Networks for Semantic Segmentation.
   IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):640-651. doi: 10.1109/TPAMI.2016.2572683. Epub 2016 May 24.
2. P3 & beyond: move making algorithms for solving higher order functions.
   IEEE Trans Pattern Anal Mach Intell. 2009 Sep;31(9):1645-56. doi: 10.1109/TPAMI.2008.217.
3. Global discriminative learning for higher-accuracy computational gene prediction.
   PLoS Comput Biol. 2007 Mar 16;3(3):e54. doi: 10.1371/journal.pcbi.0030054. Epub 2007 Feb 2.