


Three-Dimensional Landing Zone Segmentation in Urbanized Aerial Images from Depth Information Using a Deep Neural Network-Superpixel Approach.

Authors

Morales-Navarro N A, Osuna-Coutiño J A de Jesús, Pérez-Patricio Madaín, Camas-Anzueto J L, Velázquez-González J Renán, Aguilar-González Abiel, Ocaña-Valenzuela Ernesto Alonso, Ibarra-de-la-Garza Juan-Belisario

Affiliations

Department of Science, Tecnológico Nacional de México/IT de Tuxtla Gutiérrez, Carr. Panamericana Km. 1080, Tuxtla Gutiérrez 29050, Chiapas, Mexico.

Department of Computer Science, Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro No. 1, Santa María Tonantzintla 72840, Puebla, Mexico.

Publication

Sensors (Basel). 2025 Apr 17;25(8):2517. doi: 10.3390/s25082517.

DOI: 10.3390/s25082517
PMID: 40285207
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12031426/
Abstract

Landing zone detection of autonomous aerial vehicles is crucial for locating suitable landing areas. Currently, landing zone localization predominantly relies on methods that use RGB cameras. These sensors offer the advantage of integration into the majority of autonomous vehicles. However, they lack depth perception, which can lead to the suggestion of non-viable landing zones, as they only assess an area using RGB information. They do not consider if the surface is irregular or accessible for a user (easily accessible to a person on foot). An alternative approach is to utilize 3D information extracted from depth images, but this introduces the challenge of correctly interpreting depth ambiguity. Motivated by the latter, we propose a methodology for 3D landing zone segmentation using a DNN-Superpixel approach. This methodology consists of three steps: First, the proposal involves clustering depth information using superpixels to segment, locate, and delimit zones within the scene. Second, we propose feature extraction from adjacent objects through a bounding box of the analyzed area. Finally, this methodology uses a Deep Neural Network (DNN) to segment a 3D area as landable or non-landable, considering its accessibility. The experimental results are feasible and promising. For example, the landing zone detection achieved an average recall of 0.953, meaning that this approach identified 95.3% of the pixels according to the ground truth. In addition, we have an average precision of 0.949, meaning that this approach segments 94.9% of the landing zones correctly.
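The three-step pipeline in the abstract (superpixel clustering of depth, bounding-box feature extraction around each region, DNN classification as landable/non-landable, evaluated with pixel-wise recall and precision) can be sketched end to end. The toy below is not the authors' implementation: it substitutes a fixed grid for proper superpixels, uses three hand-picked depth features, and scores each region with a one-hidden-layer network whose weights are hand-set rather than learned. All function names, features, and thresholds are hypothetical illustrations of the idea.

```python
import numpy as np

def grid_superpixels(depth, cell=8):
    """Step 1 (simplified): partition the depth map into square cells
    as a stand-in for SLIC-style superpixel clustering."""
    h, w = depth.shape
    labels = np.zeros((h, w), dtype=int)
    n = 0
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            labels[i:i + cell, j:j + cell] = n
            n += 1
    return labels, n

def superpixel_features(depth, labels, n, pad=4):
    """Step 2: per-region features from a padded bounding box, so that
    adjacent objects influence the decision: mean depth, in-region
    roughness (std), and neighbourhood relief (max-min in the box)."""
    feats = np.zeros((n, 3))
    for k in range(n):
        ys, xs = np.nonzero(labels == k)
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        box = depth[max(0, y0 - pad):y1 + pad, max(0, x0 - pad):x1 + pad]
        region = depth[labels == k]
        feats[k] = [region.mean(), region.std(), box.max() - box.min()]
    return feats

def tiny_dnn(feats, w1, b1, w2, b2):
    """Step 3: one-hidden-layer network scoring each region as landable
    (score > 0.5) or not. In the paper the weights are learned; here
    they are hand-set for illustration."""
    h = np.maximum(feats @ w1 + b1, 0.0)            # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))     # sigmoid output

# Toy scene: flat ground at depth 10 with one raised obstacle block.
depth = np.full((32, 32), 10.0)
depth[8:20, 8:20] = 4.0                              # obstacle, closer to camera

labels, n = grid_superpixels(depth)
feats = superpixel_features(depth, labels, n)

# Hand-set weights that penalise roughness and neighbourhood relief.
w1 = np.array([[0.0, 0.0], [-2.0, -2.0], [-2.0, -2.0]])
b1 = np.array([1.0, 1.0])
w2 = np.array([4.0, 4.0])
b2 = -3.0
scores = tiny_dnn(feats, w1, b1, w2, b2)
landable_mask = (scores > 0.5)[labels]               # back-project to pixels

# Pixel-wise metrics against ground truth, as in the abstract's
# recall/precision evaluation (obstacle pixels are non-landable).
gt = np.ones_like(landable_mask)
gt[8:20, 8:20] = False
tp = (landable_mask & gt).sum()
recall = tp / gt.sum()
precision = tp / max(landable_mask.sum(), 1)
```

With these hand-set weights, flat regions far from the obstacle score as landable, while regions that are rough or that have a tall neighbour inside their padded bounding box are rejected, which mirrors the paper's accessibility criterion; recall stays below 1.0 because flat cells bordering the obstacle are conservatively rejected.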


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/5634cb5111c7/sensors-25-02517-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/128c8f822960/sensors-25-02517-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/a1ca6da561ee/sensors-25-02517-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/3e4feda92288/sensors-25-02517-g0A3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/7f8c13494e03/sensors-25-02517-g0A4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/04c6c7e39c22/sensors-25-02517-g0A5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/3b4553118f8a/sensors-25-02517-g0A6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/0b7d0d1044e1/sensors-25-02517-g0A7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/3836f8e16ac9/sensors-25-02517-g0A8a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/ebb74f170882/sensors-25-02517-g0A9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/bbe2a22df58c/sensors-25-02517-g0A10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/2babc4520b01/sensors-25-02517-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/b0572a76a358/sensors-25-02517-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/e53d686bf1cf/sensors-25-02517-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/6d0c95b23bc0/sensors-25-02517-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/b54a5609b43f/sensors-25-02517-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/f9ae87072481/sensors-25-02517-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/919074b9b3fc/sensors-25-02517-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16b/12031426/1fcf57f0288f/sensors-25-02517-g008.jpg

Similar Articles

1. Three-Dimensional Landing Zone Segmentation in Urbanized Aerial Images from Depth Information Using a Deep Neural Network-Superpixel Approach.
   Sensors (Basel). 2025 Apr 17;25(8):2517. doi: 10.3390/s25082517.
2. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.
   Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
3. Deep superpixel generation and clustering for weakly supervised segmentation of brain tumors in MR images.
   BMC Med Imaging. 2024 Dec 18;24(1):335. doi: 10.1186/s12880-024-01523-x.
4. A Real-Time Semantic Segmentation Method Based on STDC-CT for Recognizing UAV Emergency Landing Zones.
   Sensors (Basel). 2023 Jul 19;23(14):6514. doi: 10.3390/s23146514.
5. A Semi-Automated Algorithm for Segmentation of the Left Atrial Appendage Landing Zone: Application in Left Atrial Appendage Occlusion Procedures.
   J Biomed Phys Eng. 2020 Apr 1;10(2):205-214. doi: 10.31661/jbpe.v0i0.1912-1019. eCollection 2020 Apr.
6. Saliency Detection Based on Multiple-Level Feature Learning.
   Entropy (Basel). 2024 Apr 30;26(5):383. doi: 10.3390/e26050383.
7. FusionVision: A Comprehensive Approach of 3D Object Reconstruction and Segmentation from RGB-D Cameras Using YOLO and Fast Segment Anything.
   Sensors (Basel). 2024 Apr 30;24(9):2889. doi: 10.3390/s24092889.
8. Superpixel Embedding Network.
   IEEE Trans Image Process. 2019 Dec 11. doi: 10.1109/TIP.2019.2957937.
9. Plant Stress Detection Using a Three-Dimensional Analysis from a Single RGB Image.
   Sensors (Basel). 2024 Dec 9;24(23):7860. doi: 10.3390/s24237860.
10. Multiscale superpixel depth feature extraction for hyperspectral image classification.
   Sci Rep. 2025 Apr 19;15(1):13529. doi: 10.1038/s41598-025-90228-4.

References Cited in This Article

1. Sensors and Measurements for UAV Safety: An Overview.
   Sensors (Basel). 2021 Dec 10;21(24):8253. doi: 10.3390/s21248253.
2. Real-Time Monocular Vision System for UAV Autonomous Landing in Outdoor Low-Illumination Environments.
   Sensors (Basel). 2021 Sep 16;21(18):6226. doi: 10.3390/s21186226.
3. LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone.
   Sensors (Basel). 2018 May 24;18(6):1703. doi: 10.3390/s18061703.
4. Logistic regression.
   Circulation. 2008 May 6;117(18):2395-9. doi: 10.1161/CIRCULATIONAHA.106.682658.