Galatolo Alessio, Winkle Katie
Department of Information Technology, Uppsala University, Uppsala, Sweden.
Department of Women and Children's Health, Uppsala University, Uppsala, Sweden.
Front Robot AI. 2025 May 16;12:1581024. doi: 10.3389/frobt.2025.1581024. eCollection 2025.
As social robots gain advanced communication capabilities, users increasingly expect coherent verbal and non-verbal behaviours. Recent work has shown that Large Language Models (LLMs) can support autonomous generation of such multimodal behaviours. However, current LLM-based approaches to non-verbal behaviour often involve multi-step reasoning with large, closed-source models, resulting in significant computational overhead and limiting their feasibility in low-resource or privacy-constrained environments.
To address these limitations, we propose a novel method for simultaneous generation of text and gestures with minimal computational overhead compared to plain text generation. Our system does not produce low-level joint trajectories, but instead predicts high-level communicative intentions, which are mapped to platform-specific expressions. Central to our approach is the introduction of lightweight, robot-specific "gesture heads" derived from the LLM's architecture, requiring no pose-based datasets and enabling generalisability across platforms.
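The gesture-head idea described above can be illustrated with a minimal sketch: a small classifier over an LLM hidden state predicts a high-level communicative intention, which a per-robot lookup table maps to a concrete expression. All names, dimensions, labels, and mappings below are illustrative assumptions, not the paper's actual implementation or API.

```python
import random

# Toy settings (real LLM hidden sizes are in the thousands).
HIDDEN_DIM = 8
INTENTIONS = ["neutral", "emphasis", "agreement", "surprise"]

random.seed(0)
# Hypothetical gesture-head weights: one weight vector per intention label.
# In the real system this head would be trained; here it is random.
W = [[random.gauss(0, 1) for _ in range(HIDDEN_DIM)] for _ in INTENTIONS]

# Assumed platform-specific mappings from abstract intention to behaviour:
# facial expressions for Furhat, bodily gestures for Pepper.
FURHAT_MAP = {"neutral": "gaze_ahead", "emphasis": "brow_raise",
              "agreement": "nod", "surprise": "eyes_widen"}
PEPPER_MAP = {"neutral": "rest_pose", "emphasis": "beat_gesture",
              "agreement": "head_nod", "surprise": "arms_open"}

def gesture_head(hidden_state):
    """Predict a high-level intention from one token's hidden state.

    A single matrix-vector product plus argmax, so the overhead on top of
    plain text generation is negligible and can run in parallel with the
    language-model head.
    """
    logits = [sum(w_i * h_i for w_i, h_i in zip(w, hidden_state)) for w in W]
    return INTENTIONS[max(range(len(logits)), key=logits.__getitem__)]

# Simulate a hidden state for one generated token; in the real system this
# would come from the LLM's decoder.
hidden_state = [random.gauss(0, 1) for _ in range(HIDDEN_DIM)]
intention = gesture_head(hidden_state)
print(intention, "->", FURHAT_MAP[intention], "/", PEPPER_MAP[intention])
```

Because the head outputs abstract intentions rather than joint trajectories, the same head can serve both platforms; only the final mapping table is robot-specific.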
We evaluate our method on two distinct robot platforms: Furhat (facial expressions) and Pepper (bodily gestures). Experimental results demonstrate that our method maintains behavioural quality while introducing negligible computational and memory overhead. Furthermore, the gesture heads operate in parallel with the language generation component, ensuring scalability and responsiveness even on small or locally deployed models.
Our approach supports the use of Small Language Models for multimodal generation, offering an effective alternative to existing high-resource methods. By abstracting gesture generation and eliminating reliance on platform-specific motion data, we enable broader applicability in real-world, low-resource, and privacy-sensitive HRI settings.