Śliwowski Maciej, Martin Matthieu, Souloumiac Antoine, Blanchart Pierre, Aksenova Tetiana
Université Grenoble Alpes, CEA, LETI, Clinatec, Grenoble, France.
Université Paris-Saclay, CEA, List, Palaiseau, France.
Front Hum Neurosci. 2023 Mar 16;17:1111645. doi: 10.3389/fnhum.2023.1111645. eCollection 2023.
In brain-computer interface (BCI) research, recording data is time-consuming and expensive, which limits access to big datasets. This may affect BCI system performance, as machine learning methods depend strongly on the training dataset size. Important questions arise: given the characteristics of neuronal signals (e.g., non-stationarity), can we achieve higher decoding performance by training decoders on more data? What is the prospect of further improvement over time in long-term BCI studies? In this study, we investigated the impact of long-term recordings on motor imagery decoding from two main perspectives: model requirements regarding dataset size and the potential for patient adaptation.
We evaluated a multilinear model and two deep learning (DL) models on a dataset from the long-term BCI & Tetraplegia clinical trial (ClinicalTrials.gov identifier: NCT02550522), containing 43 sessions of ECoG recordings from a tetraplegic patient. In the experiment, the participant performed 3D virtual hand translation using motor imagery patterns. We designed multiple computational experiments in which the training datasets were enlarged or shifted in time to investigate the relationship between the models' performance and different factors influencing the recordings.
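For concreteness, the two session-wise dataset manipulations described above (an expanding training set vs. a fixed-size training window shifted in time) can be sketched as follows. This is a minimal, hypothetical illustration: the function names, window sizes, and session-level split interface are assumptions, not the study's actual code.

```python
# Sketch of the two training-set manipulations (hypothetical API):
# growing the training set session by session, and sliding a
# fixed-size training window through time.
from typing import Callable, Sequence


def growing_window(sessions: Sequence, min_train: int = 5):
    """Yield (train, test) splits with an expanding training set."""
    for end in range(min_train, len(sessions)):
        yield sessions[:end], sessions[end]


def sliding_window(sessions: Sequence, width: int = 5):
    """Yield (train, test) splits with a fixed-size window shifted in time."""
    for start in range(len(sessions) - width):
        yield sessions[start:start + width], sessions[start + width]


def evaluate(fit: Callable, score: Callable, splits) -> list:
    """Fit a decoder on each training split, score on the held-out session."""
    return [score(fit(train), test) for train, test in splits]
```

Comparing decoding performance across the two split families separates the effect of training set size (growing window) from the effect of recording time, i.e., signal non-stationarity and patient adaptation (sliding window).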
Our results showed that the DL decoders had dataset size requirements similar to those of the multilinear model while achieving higher decoding performance. Moreover, high decoding performance was obtained with relatively small datasets recorded later in the experiment, suggesting improvement of the motor imagery patterns and patient adaptation over the course of the long-term experiment. Finally, we proposed UMAP embeddings and local intrinsic dimensionality as a way to visualize the data and potentially evaluate data quality.
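As an illustration of this kind of visualization, a minimal sketch combining a UMAP embedding with a per-sample Levina-Bickel MLE estimate of local intrinsic dimensionality might look as follows. The feature matrix shape, the neighborhood size k, and the random stand-in data are assumptions; the abstract does not specify the paper's exact pipeline.

```python
import numpy as np
import umap  # from the umap-learn package
from sklearn.neighbors import NearestNeighbors


def local_id_mle(X, k=20):
    """Levina-Bickel MLE estimate of local intrinsic dimensionality,
    computed per sample from its k nearest-neighbor distances."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)   # column 0 is each point's self-distance (0)
    dists = dists[:, 1:]          # keep the k true neighbor distances
    # m(x) = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1}
    log_ratios = np.log(dists[:, -1:] / dists[:, :-1])
    return (k - 1) / log_ratios.sum(axis=1)


# X: (n_epochs, n_features) ECoG feature matrix, e.g. flattened
# time-frequency descriptors per epoch (hypothetical shape).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))    # random stand-in for real features

embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
lid = local_id_mle(X, k=20)       # one LID value per epoch
# Coloring the 2D embedding by LID (e.g., matplotlib
# scatter(embedding[:, 0], embedding[:, 1], c=lid)) gives a quick
# visual check of data structure and potential quality issues.
```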
DL-based decoding is a promising approach in BCI that may be applied efficiently with real-life dataset sizes. Patient-decoder co-adaptation is an important factor to consider in long-term clinical BCI.