Xu Fumei, Xia Yu, Wu Xiaorun
School of Music, Jiangxi Normal University, Nanchang, Jiangxi, China.
School of Aviation Services and Music, Nanchang Hangkong University, Nanchang, Jiangxi, China.
Front Neurorobot. 2023 Oct 9;17:1270652. doi: 10.3389/fnbot.2023.1270652. eCollection 2023.
Currently, most robot dances are pre-compiled; manually adjusting the relevant parameters and meta-actions to adapt a dance to a different type of music greatly limits their usefulness. To overcome this limitation, this study proposes a dance composition model for mobile robots based on multimodal information. The model consists of three parts. (1) Extraction of multimodal information. The temporal-structure feature method of a structure analysis framework divides the audio music file into music structures; a hierarchical emotion detection framework then extracts information (rhythm, emotion, tension, etc.) for each segmented structure; the safety of the moving robot with respect to surrounding objects is computed; finally, the stage color at the robot's location is extracted and mapped to a corresponding atmosphere emotion. (2) Initialization of the dance library. Dance compositions are divided into four categories according to the classification of music emotions, and each category is further divided into skilled and general compositions. (3) Dance generation and tracking. The total path length is obtained by combining the multimodal information according to emotion, initial speed, and the period of each music structure; target-point planning is then carried out for the selected dance composition. An adaptive control framework based on a Cerebellar Model Articulation Controller (CMAC) and a compensation controller tracks the target-point trajectory, finally producing the selected dance composition. Mobile robot dance composition provides a new method and concept for humanoid robot dance composition.
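As an illustration of parts (2) and (3), the sketch below shows one way the emotion-to-dance-library selection and the total-path-length computation could be organized. All names (MusicSegment, EMOTION_TO_CATEGORY, plan_dance), the four category labels, and the tempo-scaling rule are hypothetical stand-ins; the abstract does not specify the paper's actual mappings or formulas.

```python
from dataclasses import dataclass

# Hypothetical four-way mapping from detected music emotion to a dance
# category; the abstract states there are four categories but not their names.
EMOTION_TO_CATEGORY = {
    "happy": "lively", "sad": "gentle", "tense": "sharp", "calm": "smooth",
}

@dataclass
class MusicSegment:
    duration_s: float   # period of one segmented music structure
    emotion: str        # label from the hierarchical emotion detector
    tempo_bpm: float    # rhythm information for the segment

def plan_dance(segments, initial_speed_mps, skilled=True):
    """Pick a composition per segment and derive its total path length."""
    plan = []
    for seg in segments:
        category = EMOTION_TO_CATEGORY.get(seg.emotion, "smooth")
        library = "skilled" if skilled else "general"
        # Illustrative rule: speed scales with tempo relative to 120 BPM,
        # and total path length is speed integrated over the segment period.
        speed = initial_speed_mps * seg.tempo_bpm / 120.0
        plan.append({
            "category": category,
            "library": library,
            "path_length_m": speed * seg.duration_s,
        })
    return plan

# One 16 s "happy" segment at 128 BPM, starting from 0.3 m/s.
plan = plan_dance([MusicSegment(16.0, "happy", 128.0)], initial_speed_mps=0.3)
print(plan)
```

The total path length here is simply speed integrated over the segment period; the paper combines emotion, initial speed, and music-structure period, so its actual rule will differ in detail.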
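The trajectory tracker named in the abstract combines a CMAC with a compensation controller. Below is a minimal, generic table-based CMAC with a proportional feedback term standing in for the compensation controller; the class, its parameters, and the toy first-order plant are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class CMAC:
    """Minimal 1-D table-based CMAC: several offset tilings map an input
    to a sparse set of cells whose summed weights form the output."""

    def __init__(self, n_tilings=8, n_bins=32, x_min=0.0, x_max=1.0, lr=0.1):
        self.n_tilings, self.n_bins = n_tilings, n_bins
        self.x_min, self.x_max, self.lr = x_min, x_max, lr
        # one weight row per tiling; the +1 column absorbs the tiling offsets
        self.w = np.zeros((n_tilings, n_bins + 1))

    def _active_cells(self, x):
        # normalize the input to [0, n_bins], then shift each tiling slightly
        s = (x - self.x_min) / (self.x_max - self.x_min) * self.n_bins
        return [(t, int(s + t / self.n_tilings)) for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, c] for t, c in self._active_cells(x))

    def train(self, x, target):
        # LMS update spread evenly over the active cells
        err = target - self.predict(x)
        for t, c in self._active_cells(x):
            self.w[t, c] += self.lr * err / self.n_tilings
        return err

# Track a sinusoidal target-point trajectory with CMAC feedforward plus a
# proportional compensator (stand-in for the paper's compensation controller).
cmac = CMAC(x_min=0.0, x_max=2 * np.pi)
kp, pos = 0.5, 0.0
for step in range(2000):
    phase = (step % 200) / 200 * 2 * np.pi
    ref = np.sin(phase)                          # desired position at this phase
    u = cmac.predict(phase) + kp * (ref - pos)   # feedforward + compensation
    pos += 0.1 * (u - pos)                       # toy first-order plant
    cmac.train(phase, u)                         # CMAC absorbs the feedback term
```

Because the CMAC is trained on the total control signal, its prediction error equals the feedback term, so the learned feedforward gradually takes over the compensator's work (feedback-error learning), one common arrangement for CMAC-plus-compensator schemes.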