Normandie University, UNIROUEN, ESIGELEC, IRSEEM, 76000 Rouen, France.
Sensors (Basel). 2022 Jul 13;22(14):5241. doi: 10.3390/s22145241.
Real-time segmentation of sidewalk environments is critical to autonomous navigation for robotic wheelchairs in urban settings. Robust, real-time video semantic segmentation offers a suitable solution for advanced visual perception in such complex domains. The key to this approach is a method that combines lightweight flow estimation with reliable feature extraction. We address this by building on recent trends in video segmentation. Although these approaches achieve efficient, cost-effective segmentation in cross-domain settings, they require additional procedures before their strengths can be put to practical use. We apply our method to develop a visual perception technique for robotic wheelchairs operating in urban sidewalk environments. To train and validate the approach, we generate a collection of synthetic scenes that blend into the target distribution. Experimental results show that our method improves prediction accuracy on our benchmark with a tolerable loss of speed and no additional overhead. Overall, our technique serves as a reference for transferring and developing perception algorithms for cross-domain visual perception applications with minimal downtime.
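The abstract's core efficiency idea, pairing a lightweight flow estimator with a reliable feature extractor, is a common pattern in recent video segmentation work: run the heavy backbone only on keyframes and warp its features to intermediate frames using estimated optical flow. The sketch below illustrates that general pattern in PyTorch; the module choices, keyframe interval, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of keyframe feature propagation for video semantic
# segmentation. All networks here are stand-in assumptions; the paper's
# actual backbone, flow estimator, and head are not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_features(feat, flow):
    """Warp keyframe features to the current frame with a flow field.

    feat: (B, C, H, W) features computed at the last keyframe.
    flow: (B, 2, H, W) per-pixel displacements (dx, dy) in pixels,
          assumed to be predicted at the feature resolution.
    """
    B, _, H, W = feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=feat.device, dtype=feat.dtype),
        torch.arange(W, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # (B, H, W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize to [-1, 1], the coordinate range grid_sample expects.
    grid = torch.stack(
        (2.0 * grid_x / (W - 1) - 1.0, 2.0 * grid_y / (H - 1) - 1.0), dim=-1
    )
    return F.grid_sample(feat, grid, align_corners=True)


class KeyframeSegmenter(nn.Module):
    """Run the expensive extractor only on keyframes; warp features otherwise."""

    def __init__(self, backbone, flow_net, head, key_interval=5):
        super().__init__()
        self.backbone, self.flow_net, self.head = backbone, flow_net, head
        self.key_interval = key_interval  # assumed fixed interval
        self._step, self._key_frame, self._key_feat = 0, None, None

    def forward(self, frame):
        if self._step % self.key_interval == 0:
            # Expensive path: full feature extraction on the keyframe.
            self._key_feat = self.backbone(frame)
            self._key_frame = frame
            feat = self._key_feat
        else:
            # Cheap path: estimate flow keyframe -> current frame,
            # then warp the cached keyframe features forward.
            flow = self.flow_net(torch.cat((self._key_frame, frame), dim=1))
            feat = warp_features(self._key_feat, flow)
        self._step += 1
        return self.head(feat)


# Toy usage with shape-compatible stand-ins (real networks would replace these).
backbone = nn.Conv2d(3, 16, 3, padding=1)
flow_net = nn.Conv2d(6, 2, 3, padding=1)  # predicts (dx, dy) per pixel
head = nn.Conv2d(16, 19, 1)               # e.g. 19 urban scene classes
model = KeyframeSegmenter(backbone, flow_net, head)
for t in range(3):
    logits = model(torch.randn(1, 3, 64, 64))
```

The design trade-off this sketch exposes matches the abstract's reported result: accuracy on intermediate frames depends on flow quality, so propagation trades a tolerable loss of accuracy or speed for a large reduction in per-frame compute.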