
A hierarchical multimodal system for motion analysis in patients with epilepsy.

Author Information

Ahmedt-Aristizabal David, Fookes Clinton, Denman Simon, Nguyen Kien, Fernando Tharindu, Sridharan Sridha, Dionisio Sasha

Affiliations

The Speech, Audio, Image and Video Technologies (SAIVT) research group, School of Electrical Engineering & Computer Science, Queensland University of Technology, Australia.

Publication Information

Epilepsy Behav. 2018 Oct;87:46-58. doi: 10.1016/j.yebeh.2018.07.028. Epub 2018 Aug 31.

Abstract

During seizures, a myriad of clinical manifestations may occur. The analysis of these signs, known as seizure semiology, gives clues to the underlying cerebral networks involved. When patients with drug-resistant epilepsy are monitored to assess their suitability for epilepsy surgery, semiology is a vital component to the presurgical evaluation. Specific patterns of facial movements, head motions, limb posturing and articulations, and hand and finger automatisms may be useful in distinguishing between mesial temporal lobe epilepsy (MTLE) and extratemporal lobe epilepsy (ETLE). However, this analysis is time-consuming and dependent on clinical experience and training. Given this limitation, an automated analysis of semiological patterns, i.e., detection, quantification, and recognition of body movement patterns, has the potential to help increase the diagnostic precision of localization. While a few single modal quantitative approaches are available to assess seizure semiology, the automated quantification of patients' behavior across multiple modalities has seen limited advances in the literature. This is largely due to multiple complicated variables commonly encountered in the clinical setting, such as analyzing subtle physical movements when the patient is covered or room lighting is inadequate. Semiology encompasses the stepwise/temporal progression of signs that is reflective of the integration of connected neuronal networks. Thus, single signs in isolation are far less informative. Taking this into account, here, we describe a novel modular, hierarchical, multimodal system that aims to detect and quantify semiologic signs recorded in 2D monitoring videos. Our approach can jointly learn semiologic features from facial, body, and hand motions based on computer vision and deep learning architectures. 
A dataset collected from an Australian quaternary referral epilepsy unit analyzing 161 seizures arising from the temporal (n = 90) and extratemporal (n = 71) brain regions has been used in our system to quantitatively classify these types of epilepsy according to the semiology detected. A leave-one-subject-out (LOSO) cross-validation of semiological patterns from the face, body, and hands reached classification accuracies ranging between 12% and 83.4%, 41.2% and 80.1%, and 32.8% and 69.3%, respectively. The proposed hierarchical multimodal system is a potential stepping-stone towards developing a fully automated semiology analysis system to support the assessment of epilepsy.
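The leave-one-subject-out (LOSO) protocol described above can be sketched as follows. This is a minimal illustration, not the paper's system: the 32-dimensional feature vectors, the subject/seizure counts, and the logistic-regression classifier are all synthetic stand-ins for the deep multimodal features and architectures the authors actually use. The point is only the evaluation structure, in which every seizure from one patient is held out per fold so the classifier is never tested on a subject it was trained on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Synthetic stand-in data: 10 subjects with 16 seizures each, and a
# 32-dim feature vector per seizure (e.g., pooled face/body/hand
# descriptors). Labels: 0 = MTLE, 1 = ETLE.
n_subjects, seizures_per_subject, n_features = 10, 16, 32
X = rng.normal(size=(n_subjects * seizures_per_subject, n_features))
y = rng.integers(0, 2, size=len(X))
groups = np.repeat(np.arange(n_subjects), seizures_per_subject)

# One fold per subject: all of that subject's seizures form the test set.
logo = LeaveOneGroupOut()
fold_accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    fold_accuracies.append(
        accuracy_score(y[test_idx], clf.predict(X[test_idx]))
    )

print(f"{logo.get_n_splits(groups=groups)} folds, "
      f"mean accuracy {np.mean(fold_accuracies):.3f}")
```

On random labels the mean accuracy hovers near chance; the value of LOSO is that any accuracy it reports reflects generalization to unseen patients rather than memorization of subject-specific appearance.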

