

Using Dynamic Bayesian Optimization to Induce Desired Effects in the Presence of Motor Learning: a Simulation Study.

Authors

Kim GilHwan, Chishty Haider A, Sergi Fabrizio

Affiliations

Department of Mechanical Engineering, University of Delaware, Newark, DE 19716, USA.

Department of Biomedical Engineering, University of Delaware, Newark, DE 19713, USA.

Publication Information

bioRxiv. 2024 Aug 16:2024.08.13.607783. doi: 10.1101/2024.08.13.607783.

Abstract

Human-in-the-loop (HIL) optimization is a control paradigm used for tuning the control parameters of human-interacting devices while accounting for variability among individuals. A limitation of state-of-the-art HIL optimization algorithms such as Bayesian Optimization (BO) is that they assume that the relationship between control parameters and user response does not change over time. BO can be modified to account for the dynamics of the user response by incorporating time into the kernel function, a method known as Dynamic Bayesian Optimization (DBO). However, it is unknown whether DBO outperforms BO when the human response is characterized by models of human motor learning. In this work, we simulated runs of HIL optimization using BO and DBO to establish whether DBO is a suitable paradigm for HIL optimization in the presence of motor learning. Simulations were conducted assuming either purely time-dependent participant responses or responses arising from state-space models of motor learning capable of describing both adaptation and use-dependent learning behavior. Statistical comparisons indicated that DBO was never inferior to BO and, after a certain number of iterations, generally outperformed BO in convergence to optimal inputs and outputs. The number of iterations beyond which DBO was superior to BO was smaller when the input-output relationship of the simulated responses was more dynamic. Our results suggest that DBO may improve the performance of HIL optimization over BO when a sufficient number of iterations can be evaluated to accurately distinguish between unstructured variability (noise) and learning.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/59bd/11343104/901ba56c8a66/nihpp-2024.08.13.607783v1-f0001.jpg
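To make the mechanism described in the abstract concrete, the sketch below implements one version of the idea: a Gaussian-process surrogate whose kernel is the product of a radial-basis-function (RBF) term over the control parameter and an RBF term over iteration time, queried through an upper-confidence-bound (UCB) acquisition at the current iteration. It is a minimal illustration under assumed choices (the kernel form, hyperparameter values, UCB rule, and a drifting quadratic response standing in for the motor-learning models are all assumptions), not the kernel, acquisition function, or participant models used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Product kernel: RBF over the control parameter x times RBF over iteration time t.
def kernel(X1, T1, X2, T2, len_x=0.3, len_t=10.0, var=1.0):
    dx = (X1[:, None] - X2[None, :]) / len_x
    dt = (T1[:, None] - T2[None, :]) / len_t
    return var * np.exp(-0.5 * dx ** 2) * np.exp(-0.5 * dt ** 2)

# Gaussian-process posterior mean/std at query (x, t) pairs given observed (x, t, y) triples.
def gp_posterior(Xq, Tq, Xo, To, Yo, noise=0.05):
    K = kernel(Xo, To, Xo, To) + noise ** 2 * np.eye(len(Xo))
    Ks = kernel(Xq, Tq, Xo, To)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Yo))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(kernel(Xq, Tq, Xq, Tq).diagonal() - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# Hypothetical time-varying response: the optimal input drifts as "learning" proceeds.
# This is a stand-in for the motor-learning models used in the study, not one of them.
def simulated_response(x, t, noise=0.05):
    x_opt = 0.3 + 0.4 * (1.0 - np.exp(-t / 30.0))
    return -(x - x_opt) ** 2 + noise * rng.standard_normal()

candidates = np.linspace(0.0, 1.0, 101)   # discretized control-parameter space
X_obs, T_obs, Y_obs = [], [], []
beta = 2.0                                # exploration weight in the UCB acquisition

for t in range(50):
    if t < 3:
        x_next = rng.uniform(0.0, 1.0)    # a few random evaluations to initialize the GP
    else:
        # DBO step: condition on all past (x, t, y) and score candidates at the current
        # time, so the surrogate tracks the drift instead of averaging over it as a
        # time-agnostic BO kernel would.
        mu, sd = gp_posterior(candidates, np.full(candidates.shape, float(t)),
                              np.array(X_obs), np.array(T_obs), np.array(Y_obs))
        x_next = candidates[np.argmax(mu + beta * sd)]
    X_obs.append(x_next)
    T_obs.append(float(t))
    Y_obs.append(simulated_response(x_next, t))

print(f"input selected at the final iteration: {X_obs[-1]:.3f}")
```

Conditioning the surrogate on time is what separates this loop from standard BO: past measurements are discounted according to how long ago they were taken, so the acquisition can follow a drifting optimum instead of treating early and late observations as exchangeable, which is the stationarity assumption the abstract identifies as the limitation of BO.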
