

A Preprocessing Pipeline for Pupillometry Signal from Multimodal iMotion Data.

Author Information

Ong Jingxiang, He Wenjing, Maglanque Princess, Jiang Xianta, Gillman Lawrence M, Vergis Ashley, Hardy Krista

Affiliations

Department of Surgery, University of Manitoba Max Rady College of Medicine, Winnipeg, MB R3E 0W2, Canada.

Department of Computer Science, Memorial University of Newfoundland, St. John's, NL A1B 3X7, Canada.

Publication Information

Sensors (Basel). 2025 Jul 31;25(15):4737. doi: 10.3390/s25154737.

Abstract

Pupillometry is commonly used to evaluate cognitive effort, attention, and facial expression response, offering valuable insights into human performance. The combination of eye tracking and facial expression data within the iMotions platform provides great opportunities for multimodal research. However, there is a lack of standardized pipelines for managing pupillometry data on a multimodal platform. Preprocessing pupil data in multimodal platforms poses challenges such as timestamp misalignment, missing data, and inconsistencies across multiple data sources. To address these challenges, the authors introduced a systematic preprocessing pipeline for pupil diameter measurements collected using iMotions 10 (version 10.1.38911.4) during an endoscopy simulation task. The pipeline involves artifact removal, outlier detection using the Median Absolute Deviation (MAD) and Moving Average (MA) filtering methods, interpolation of missing data using the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), and mean pupil diameter calculation through linear regression, as well as normalization of the mean pupil diameter and integration of the pupil diameter dataset with facial expression data. By following these steps, the pipeline enhances data quality, reduces noise, and facilitates the seamless integration of pupillometry with other multimodal datasets. In conclusion, this pipeline provides a detailed and organized preprocessing method that improves data reliability while preserving important information for further analysis.
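
To illustrate the preprocessing steps summarized in the abstract, the following is a minimal sketch in Python of MAD-based outlier flagging, moving-average smoothing, PCHIP interpolation of missing samples, normalization, and a nearest-timestamp merge with the facial expression stream. The column names ("Timestamp", "PupilDiameter"), the artifact thresholds, and the window size are illustrative assumptions, not the authors' exact parameters or the iMotions export schema.

import numpy as np
import pandas as pd
from scipy.interpolate import PchipInterpolator

def mad_outlier_mask(x: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Flag samples whose distance from the median exceeds `threshold` scaled MADs."""
    median = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - median))
    # 1.4826 rescales the MAD so it is comparable with a standard deviation for normal data.
    return np.abs(x - median) > threshold * 1.4826 * mad

def preprocess_pupil(df: pd.DataFrame, window: int = 15) -> pd.DataFrame:
    out = df.copy()
    pupil = out["PupilDiameter"].to_numpy(dtype=float)

    # 1. Artifact removal: treat non-physiological values (e.g. blinks logged as 0 or -1)
    #    as missing; the 1.5-9.0 mm bounds are illustrative, not the authors' cut-offs.
    pupil[(pupil <= 1.5) | (pupil >= 9.0)] = np.nan

    # 2. Outlier detection with MAD, then smooth the remaining signal with a moving average.
    pupil[mad_outlier_mask(pupil)] = np.nan
    smoothed = pd.Series(pupil).rolling(window, center=True, min_periods=1).mean()

    # 3. Interpolate missing samples with PCHIP, which is shape-preserving and avoids the
    #    overshoot a plain cubic spline can produce across blink gaps.
    t = out["Timestamp"].to_numpy(dtype=float)
    valid = smoothed.notna().to_numpy()
    interp = PchipInterpolator(t[valid], smoothed.to_numpy()[valid])
    out["PupilClean"] = interp(t)

    # 4. Normalize (here: z-score against the recording's own mean and SD).
    out["PupilNorm"] = (out["PupilClean"] - out["PupilClean"].mean()) / out["PupilClean"].std()
    return out

# 5. One way to integrate with facial-expression rows: match each cleaned pupil sample
#    to the nearest facial-expression timestamp, which handles the misaligned sampling
#    rates of the two modalities (both frames must be sorted by "Timestamp").
# merged = pd.merge_asof(preprocess_pupil(pupil_df), face_df,
#                        on="Timestamp", direction="nearest")

This sketch treats one recording at a time; in a study, the same function would be applied per participant and per task segment before the normalized values are pooled for analysis.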


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5a0/12349379/2945c2bd5e75/sensors-25-04737-g001.jpg
