Morales-Navarro NA, de Jesús Osuna-Coutiño JA, Pérez-Patricio M, Camas-Anzueto JL, Velázquez-González JR, Aguilar-González A, Ocaña-Valenzuela EA, Ibarra-de-la-Garza JB
Department of Science, Tecnológico Nacional de México/IT de Tuxtla Gutiérrez, Carr. Panamericana Km. 1080, Tuxtla Gutiérrez 29050, Chiapas, Mexico.
Department of Computer Science, Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro No. 1, Santa María Tonantzintla 72840, Puebla, Mexico.
Sensors (Basel). 2025 Apr 17;25(8):2517. doi: 10.3390/s25082517.
Landing zone detection is crucial for autonomous aerial vehicles to locate suitable landing areas. Currently, landing zone localization relies predominantly on methods that use RGB cameras. These sensors have the advantage that they can be integrated into most autonomous vehicles. However, they lack depth perception, which can lead to non-viable landing zones being suggested, since an area is assessed using RGB information alone: such methods do not consider whether the surface is irregular or whether it is accessible to a user (i.e., easily reachable by a person on foot). An alternative approach is to use 3D information extracted from depth images, but this introduces the challenge of correctly interpreting depth ambiguity. Motivated by the latter, we propose a methodology for 3D landing zone segmentation using a DNN-Superpixel approach. This methodology consists of three steps. First, depth information is clustered using superpixels to segment, locate, and delimit zones within the scene. Second, features are extracted from adjacent objects through a bounding box around the analyzed area. Finally, a Deep Neural Network (DNN) classifies each 3D area as landable or non-landable, taking its accessibility into account. The experimental results are promising. For example, landing zone detection achieved an average recall of 0.953, meaning that this approach recovered 95.3% of the landing-zone pixels in the ground truth. In addition, it achieved an average precision of 0.949, meaning that 94.9% of the pixels segmented as landing zones were correct.
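The abstract does not give implementation details, so the following Python sketch only illustrates the three-step pipeline under stated assumptions: SLIC (scikit-image) stands in for the superpixel clustering, a handful of depth statistics stand in for the bounding-box features, and scikit-learn's MLPClassifier stands in for the DNN. The pixel-level recall and precision quoted above can be computed the same way from a predicted mask and a ground-truth mask.

# Illustrative sketch only; the paper's actual superpixel algorithm,
# feature set, and network architecture are not specified in this abstract.
import numpy as np
from skimage.segmentation import slic                      # assumed superpixel method
from sklearn.neural_network import MLPClassifier           # stand-in for the DNN
from sklearn.metrics import precision_score, recall_score

def superpixel_labels(depth, n_segments=200):
    # Step 1: cluster the depth image into superpixels that delimit zones.
    # channel_axis=None tells SLIC the input is a single-channel 2-D image.
    return slic(depth, n_segments=n_segments, compactness=0.1, channel_axis=None)

def region_features(depth, labels, margin=5):
    # Step 2: per-superpixel features from a bounding box around the region,
    # so depth statistics of adjacent objects enter the descriptor.
    feats = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        box = depth[max(y0 - margin, 0):y1 + margin,
                    max(x0 - margin, 0):x1 + margin]        # box plus margin
        region = depth[labels == lab]
        feats.append([region.mean(), region.std(),          # flatness cues
                      box.mean(), box.std(),                # neighborhood cues
                      (y1 - y0) * (x1 - x0) / depth.size])  # relative size
    return np.asarray(feats)

# Step 3: a small neural network labels each superpixel as landable (1)
# or non-landable (0); X_train/y_train would come from ground-truth masks.
dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)

# Pixel-level evaluation, matching the metrics quoted in the abstract:
# recall = fraction of ground-truth landing pixels recovered,
# precision = fraction of predicted landing pixels that are correct.
# recall = recall_score(gt_mask.ravel(), pred_mask.ravel())
# precision = precision_score(gt_mask.ravel(), pred_mask.ravel())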