State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institutes of Brain Science, Institute for Medical and Engineering Innovation, Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai 200032, China.
Biosensors (Basel). 2024 Aug 22;14(8):406. doi: 10.3390/bios14080406.
Over the past decades, feature-based statistical machine learning and deep neural networks have been extensively used for automatic sleep stage classification (ASSC). Feature-based approaches offer clear insight into sleep characteristics and require little computational power, but they often fail to capture the spatial-temporal context of the data. In contrast, deep neural networks can process raw sleep signals directly and deliver superior performance. However, overfitting, inconsistent accuracy, and computational cost are the primary drawbacks that limit their end-user acceptance. To address these challenges, we developed a novel neural network model, MLS-Net, which combines the strengths of neural networks and feature extraction for automated sleep staging in mice. MLS-Net takes temporal and spectral features from multimodal signals, such as EEG, EMG, and eye movements (EMs), as inputs and incorporates a bidirectional Long Short-Term Memory network (bi-LSTM) to effectively capture the spatial-temporal nonlinear characteristics inherent in sleep signals. Our studies demonstrate that MLS-Net achieves an overall classification accuracy of 90.4% and, for the REM state, a precision of 91.1%, a sensitivity of 84.7%, and an F1-score of 87.5% in mice, outperforming other neural network and feature-based algorithms on our multimodal dataset.
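The abstract does not specify which spectral features MLS-Net extracts, but a common choice for sleep-stage pipelines is per-epoch EEG band power (delta/theta/alpha/beta). The sketch below is an illustrative, stdlib-only example of that kind of feature extraction, not the authors' implementation; the function names and band boundaries are assumptions.

```python
import math

def dft_power(signal):
    """Naive DFT power spectrum |X_k|^2 for a real signal.

    O(n^2), so only suitable for short epochs; illustrative only.
    Returns power at the n//2 + 1 non-negative frequency bins.
    """
    n = len(signal)
    power = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(-2.0 * math.pi * k * i / n)
                 for i, x in enumerate(signal))
        im = sum(x * math.sin(-2.0 * math.pi * k * i / n)
                 for i, x in enumerate(signal))
        power.append(re * re + im * im)
    return power

def band_power(signal, fs, f_lo, f_hi):
    """Sum spectral power in [f_lo, f_hi) Hz for one epoch."""
    n = len(signal)
    total = 0.0
    for k, p in enumerate(dft_power(signal)):
        freq = k * fs / n  # bin k corresponds to k*fs/n Hz
        if f_lo <= freq < f_hi:
            total += p
    return total

def epoch_features(eeg_epoch, fs):
    """Hypothetical per-epoch feature vector: classic EEG band powers."""
    bands = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
             "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}
    return {name: band_power(eeg_epoch, fs, lo, hi)
            for name, (lo, hi) in bands.items()}
```

In a pipeline like the one described, such feature vectors (together with EMG and EM features) would be computed per epoch and fed as a sequence into the bi-LSTM, which then classifies each epoch as wake, NREM, or REM.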