IEEE Int Conf Rehabil Robot. 2023 Sep;2023:1-6. doi: 10.1109/ICORR58425.2023.10328385.
Brightness-mode (B-mode) ultrasound has been used to measure in vivo muscle dynamics for assistive devices. Estimation of fascicle length from B-mode images has transitioned from time-consuming manual processes to automatic methods, but these methods fail to reach pixel-wise accuracy across extended locomotion. In this work, we address this challenge by combining a U-net architecture, with its proven segmentation ability, and an LSTM component that exploits temporal information to improve validation accuracy in the prediction of fascicle length. Using 64,849 ultrasound frames of the medial gastrocnemius, we semi-manually generated ground truth for training the proposed U-net-LSTM. Compared with a traditional U-net and a CNN-LSTM configuration, the proposed U-net-LSTM achieves better validation accuracy, mean square error (MSE), and mean absolute error (MAE) (91.4%, MSE = 0.1 ± 0.03 mm, MAE = 0.2 ± 0.05 mm). The proposed framework could be used for real-time, closed-loop wearable control during real-world locomotion.
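To illustrate the general idea of coupling a U-net segmenter with an LSTM over the frame sequence, the sketch below shows one plausible minimal arrangement in PyTorch. It is an assumption for illustration only, not the paper's implementation: layer sizes, the use of a single encoder/decoder level, and the choice to carry temporal state by gating the bottleneck features with an LSTM are all hypothetical.

```python
import torch
import torch.nn as nn

class UNetLSTM(nn.Module):
    """Minimal, hypothetical U-net-LSTM sketch: a one-level U-net whose
    bottleneck features are modulated by an LSTM that carries state
    across ultrasound frames (temporal information)."""
    def __init__(self, ch=8, hidden=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, padding=1), nn.ReLU())
        # LSTM over globally pooled bottleneck features, one step per frame
        self.lstm = nn.LSTM(input_size=2 * ch, hidden_size=hidden, batch_first=True)
        self.proj = nn.Conv2d(hidden, 2 * ch, 1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        # decoder sees upsampled bottleneck (2*ch) concatenated with skip (ch)
        self.dec1 = nn.Sequential(nn.Conv2d(3 * ch, ch, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        # x: (batch, time, 1, H, W) with H, W divisible by 2
        B, T, C, H, W = x.shape
        masks, state = [], None
        for t in range(T):
            f1 = self.enc1(x[:, t])              # (B, ch, H, W)
            f2 = self.enc2(self.pool(f1))        # (B, 2ch, H/2, W/2)
            v = f2.mean(dim=(2, 3)).unsqueeze(1) # (B, 1, 2ch) pooled summary
            o, state = self.lstm(v, state)       # recurrent state across frames
            gate = self.proj(o[:, 0, :, None, None])   # (B, 2ch, 1, 1)
            f2 = f2 * torch.sigmoid(gate)        # temporally gated bottleneck
            d = self.dec1(torch.cat([self.up(f2), f1], dim=1))
            masks.append(torch.sigmoid(self.out(d)))   # per-pixel probability
        return torch.stack(masks, dim=1)         # (B, T, 1, H, W)

model = UNetLSTM()
frames = torch.zeros(2, 4, 1, 32, 32)  # 2 clips of 4 frames each
probs = model(frames)
```

Fascicle length would then be derived from each predicted mask (e.g., by fitting the segmented fascicle) in a downstream step; that step is not shown here.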