IEEE Trans Neural Syst Rehabil Eng. 2024;32:2893-2904. doi: 10.1109/TNSRE.2024.3438610. Epub 2024 Aug 14.
Accurate sleep stage classification is essential for sleep health assessment. In recent years, several machine-learning-based sleep staging algorithms have been developed; in particular, deep-learning-based algorithms have achieved performance on par with human annotation. Despite this improved performance, a limitation of most deep-learning-based algorithms is their black-box behavior, which has limited their use in clinical settings. Here, we propose the cross-modal transformer, a transformer-based method for sleep stage classification. The proposed cross-modal transformer consists of a cross-modal transformer encoder architecture together with a multi-scale one-dimensional convolutional neural network for automatic representation learning. Our method performs on par with state-of-the-art methods and mitigates the black-box behavior of deep-learning models by exploiting the interpretability of its attention modules. Furthermore, our method provides considerable reductions in the number of parameters and training time compared to state-of-the-art methods. Our code is available at https://github.com/Jathurshan0330/Cross-Modal-Transformer. A demo of our work can be found at https://bit.ly/Cross_modal_transformer_demo.
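The abstract names two architectural ingredients: a multi-scale one-dimensional CNN for representation learning and cross-modal attention between signal modalities. The sketch below illustrates both in PyTorch under assumed shapes and hyperparameters (kernel sizes, channel counts, signal lengths, and the modality pairing are illustrative, not the paper's actual configuration; see the linked repository for the authors' implementation):

```python
import torch
import torch.nn as nn

class MultiScaleConv1d(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes,
    concatenated along the channel axis (multi-scale features)."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):  # x: (batch, in_ch, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

class CrossModalBlock(nn.Module):
    """Attention where queries come from one modality and
    keys/values from another; the returned attention weights
    are what makes the model inspectable rather than black-box."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, q_tokens, kv_tokens):
        out, weights = self.attn(q_tokens, kv_tokens, kv_tokens)
        return out, weights

# Usage with hypothetical inputs: two modalities (e.g. EEG and EOG epochs).
eeg = torch.randn(2, 1, 300)           # (batch, channels, time) -- lengths assumed
eog = torch.randn(2, 1, 300)
enc = MultiScaleConv1d(1, 16)          # 3 branches * 16 -> 48 feature channels
eeg_tok = enc(eeg).transpose(1, 2)     # (2, 300, 48) token sequence
eog_tok = enc(eog).transpose(1, 2)
block = CrossModalBlock(48)
fused, attn_w = block(eeg_tok, eog_tok)
```

Here `attn_w` has shape `(batch, query_len, key_len)` and can be visualized to show which parts of one signal the model attends to in the other, the interpretability mechanism the abstract refers to.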