State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300401, P. R. China.
Int J Neural Syst. 2020 Mar;30(3):2050009. doi: 10.1142/S0129065720500094.
Traditional training methods must collect a large amount of data from every subject to train a subject-specific classifier, which causes subject fatigue and imposes a heavy training burden. This study proposes a novel training method, TrAdaBoost based on cross-validation and an adaptive threshold (CV-T-TAB), which reduces the amount of data required for training by selecting and combining multiple existing subjects' classifiers that perform well on a new subject. The method adopts cross-validation to extend the new subject's training data and sets an adaptive threshold to select the optimal combination of classifiers. Twenty-five subjects participated in an N200- and P300-based brain-computer interface experiment. CV-T-TAB is compared against five traditional training methods, with each used to train a support vector machine. Accuracy, information transfer rate, area under the curve, recall and precision are used to evaluate performance under nine conditions with different amounts of training data. CV-T-TAB outperforms the other methods and retains high accuracy even when the amount of data is reduced to one-third of the original. The results imply that CV-T-TAB effectively improves the performance of a subject-specific classifier trained on a small amount of data by exploiting multiple subjects' classifiers, thereby reducing the training cost.
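The core idea — score each source subject's pre-trained classifier on the new subject's small labeled set via cross-validation, keep only those above an adaptive threshold, and combine the survivors — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy 1-D threshold classifiers, the deterministic data, and the "mean score" threshold rule are all assumptions standing in for the paper's SVMs and its exact TrAdaBoost-based procedure.

```python
from statistics import mean

def make_threshold_clf(t):
    # Hypothetical stand-in for a classifier pre-trained on a source
    # subject: labels a 1-D feature as class 1 iff it exceeds t.
    return lambda x: int(x > t)

source_clfs = [make_threshold_clf(t) for t in (0.2, 0.5, 0.8, 1.5)]

# Small deterministic labeled set from the "new subject"
# (true rule: class 1 iff x > 0.5).
X = [i / 30 for i in range(30)]
y = [int(x > 0.5) for x in X]

def cv_score(clf, X, y, k=5):
    """k-fold cross-validated accuracy of a classifier on (X, y).
    For an already-trained classifier this reduces to held-out accuracy,
    but the fold structure mirrors the cross-validation step of CV-T-TAB."""
    fold_accs = []
    for i in range(k):
        fold = range(i, len(X), k)  # every k-th trial forms one fold
        fold_accs.append(mean(clf(X[j]) == y[j] for j in fold))
    return mean(fold_accs)

# Adaptive threshold: keep source classifiers scoring at least the pool
# mean (one plausible rule; the paper's adaptive rule differs).
scores = [cv_score(c, X, y) for c in source_clfs]
tau = mean(scores)
selected = [c for c, s in zip(source_clfs, scores) if s >= tau]

def ensemble(x):
    # Majority vote over the selected source classifiers.
    votes = [c(x) for c in selected]
    return int(2 * sum(votes) >= len(votes))
```

On this toy data only the classifier matching the new subject's true rule clears the adaptive threshold, so the ensemble reduces to that single well-matched source classifier — the intended behavior when one source subject transfers well.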