Encoding temporal information in deep convolution neural network.

Authors

Singh Avinash Kumar, Bianchi Luigi

Affiliations

School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia.

Department of Civil Engineering and Computer Science Engineering, Tor Vergata University, Rome, Italy.

Publication information

Front Neuroergon. 2024 Jun 19;5:1287794. doi: 10.3389/fnrgo.2024.1287794. eCollection 2024.

Abstract

Recent developments in deep learning techniques have drawn attention to the decoding and classification of electroencephalogram (EEG) signals. Despite several efforts to exploit different features of EEG signals, a significant research challenge remains: using time-dependent features in combination with local and global features. Several attempts have been made to remodel deep learning convolutional neural networks (CNNs) to capture time-dependency information. These approaches typically rely either on handcrafted features, such as power ratios, or on splitting the data into smaller windows tied to specific properties, such as a peak at 300 ms. However, while these approaches partially solve the problem, they simultaneously hinder the CNN's ability to learn from unknown information that might be present in the data. Other approaches, such as recurrent neural networks, are well suited to learning time-dependent information from EEG signals even in the presence of unrelated sequential data. To address this, we propose the encoding kernel (EnK), a novel time-encoding approach that introduces time-decomposition information during the vertical convolution operation in CNNs. The encoded information lets CNNs learn time-dependent features in addition to local and global features. We performed extensive experiments on several EEG data sets: physical human-robot collaboration, P300 visual-evoked potentials, motor imagery, movement-related cortical potentials, and the Dataset for Emotion Analysis Using Physiological Signals. The EnK outperforms the state of the art, achieving up to a 6.5% reduction in mean squared error (MSE) and a 9.5% improvement in F1-score over the base models, averaged across all data sets. These results support our approach and show high potential for improving performance on physiological and non-physiological data. Moreover, the EnK can be applied to virtually any deep learning architecture with minimal effort.
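The abstract describes the EnK as additively injecting time information into CNN feature maps during convolution, so that later time steps carry a distinct, learnable-against offset. As a minimal illustrative sketch only (the paper's exact EnK formulation is not given in the abstract; the linear-ramp encoding, function names, and scale parameter below are all assumptions), one could add a monotonic time ramp to a channels-by-time feature map via broadcasting:

```python
import numpy as np

def time_encoding(n_steps, scale=0.1):
    # Hypothetical encoding: a linear ramp from 0 to `scale` across the
    # time axis. The actual EnK may use a different decomposition.
    return scale * np.arange(n_steps) / max(n_steps - 1, 1)

def apply_enk(feature_maps, scale=0.1):
    # feature_maps: (channels, time) output of a temporal convolution.
    # The 1-D encoding broadcasts across the channel axis, so every
    # channel receives the same time-dependent offset.
    enc = time_encoding(feature_maps.shape[-1], scale)
    return feature_maps + enc

# Toy EEG-like feature map: 4 channels x 10 time steps, all zeros,
# so the output exposes the encoding directly.
x = np.zeros((4, 10))
y = apply_enk(x, scale=0.1)
```

Because the offset is added rather than concatenated, the tensor shape is unchanged, which is consistent with the claim that the EnK can be dropped into virtually any architecture with minimal effort.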


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/abbd/11220250/1aa1d8dbb5e9/fnrgo-05-1287794-g0001.jpg
