Azimi David, Hoseinnezhad Reza
School of Information Technology, Deakin University, Victoria 3125, Australia.
School of Engineering, RMIT University, Victoria 3082, Australia.
Sensors (Basel). 2025 Mar 4;25(5):1565. doi: 10.3390/s25051565.
This study introduces a hierarchical reinforcement learning (RL) framework tailored to object manipulation tasks by quadrupedal robots, with an emphasis on real-world deployment. The proposed approach adopts a sensor-driven control structure capable of addressing the challenges of dense, cluttered environments filled with walls and obstacles. A novel reward function is central to the method, incorporating sensor-based obstacle observations to optimize decision-making. This design minimizes computational demands while maintaining adaptability and robust functionality. Simulated trials conducted in NVIDIA Isaac Sim, utilizing ANYbotics quadrupedal robots, demonstrated high manipulation accuracy, with a mean positioning error of 11 cm across object-target distances of up to 10 m. Furthermore, the RL framework effectively integrates path planning in complex environments, achieving energy-efficient and stable operation. These findings establish the framework as a promising approach for advanced robotics applications requiring versatility, efficiency, and practical deployability.