Dixon Philippe C, Dubeau Simon, Roy Jean-François, Fournier Pierre-Alexandre
Department of Kinesiology and Physical Activity, McGill University, Montreal, Canada.
Carré Technologies, Inc., Montreal, Canada.
Comput Biol Med. 2025 Jun;191:110192. doi: 10.1016/j.compbiomed.2025.110192. Epub 2025 Apr 15.
Coughing behavior is associated with conditions such as sleep apnea, asthma, and chronic obstructive pulmonary disease and can severely reduce quality of life. In this context, cough quantification is often important but is routinely performed via questionnaires. This approach depends on patient compliance and recall, which may limit validity and is especially difficult for nocturnal coughs. Manual review of audio recordings is potentially more accurate but raises privacy concerns because sensitive audio data are collected and reviewed by a human annotator. Machine learning approaches are increasingly used to quantify coughs; however, these algorithms often rely on microphone recordings and therefore raise the same privacy issues, especially if data are sent to a remote server for analysis. The aims of this study were to determine (1) whether a suite of sensors, excluding microphone recordings, can accurately and unobtrusively detect coughs and (2) the relative importance of each sensor type to model performance. Data from 44 healthy young adult participants performing on-demand coughs and other tasks (breathing, talking, throat clearing, laughing, sniffing) in supine and sitting conditions were collected for this observational, cross-sectional study using a multi-sensor smart-garment device. Synchronized video was used to annotate tasks. Three-dimensional acceleration, respiration (inductance plethysmography), and electrical activity (electrocardiography) signals were extracted into 1 s strips and binarized into cough and non-cough classes. Data were split into train and test sets using an inter-subject 80:20 split, ensuring that data from a given participant appeared in only one set. This procedure was repeated 10 times with different random inter-subject splits to assess the variability of results. Statistical and frequency-based features were computed and used as inputs to a Random Forest classifier to predict class (cough vs. non-cough). Model hyperparameters were tuned to maximize F1-score using five-fold cross-validation on the training set. Final model performance was assessed using F1-score, precision, and recall (sensitivity) on the test sets, with mean (standard deviation) reported. Single-sensor models based on acceleration, respiration, or electrocardiography yielded F1-scores of 92.6 (1.2)%, 88.9 (3.2)%, and 77.5 (3.4)%, respectively. Overall, the dual-sensor (acceleration and respiration) model achieved the highest performance (F1-score 93.0 (1.1)%, precision 84.2 (4.2)%, recall 95.5 (1.6)%). The multi-modal wearable device was able to distinguish coughs from other respiratory maneuvers, with the acceleration and respiration sensors providing the most valuable information. Future studies could implement this approach for remote monitoring of coughs in patients with coughing symptoms.
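A minimal sketch of the evaluation protocol described in the abstract is given below: 1 s strips are reduced to statistical and frequency-based features, data are split 80:20 at the participant level ten times, and a Random Forest classifier is tuned by five-fold cross-validation to maximize F1 before test-set scoring. The sketch assumes scikit-learn; the synthetic placeholder data, the particular feature list, the sampling rate, and the hyperparameter grid are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch of the cough-detection pipeline described in the abstract.
# Placeholder data stand in for the study's sensor strips; the feature set
# and hyperparameter grid are assumptions, not the authors' implementation.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import GridSearchCV, GroupKFold, GroupShuffleSplit


def extract_features(strips, fs=50):
    """Statistical and frequency-based features for each 1 s strip."""
    feats = []
    for x in strips:
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        feats.append([
            x.mean(), x.std(), skew(x), kurtosis(x),   # statistical
            np.ptp(x), np.sqrt(np.mean(x ** 2)),       # range, RMS
            freqs[np.argmax(spectrum)],                # dominant frequency
            spectrum.sum(),                            # spectral energy
        ])
    return np.asarray(feats)


# Placeholder 1 s strips, binary labels (cough = 1), and participant IDs.
rng = np.random.default_rng(0)
n_strips, fs = 1200, 50
strips = rng.standard_normal((n_strips, fs))
y = rng.integers(0, 2, n_strips)
groups = rng.integers(0, 44, n_strips)

X = extract_features(strips, fs=fs)
scores = {"f1": [], "precision": [], "recall": []}

# 10 random inter-subject 80:20 splits: all strips from a participant stay together.
outer = GroupShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
for train_idx, test_idx in outer.split(X, y, groups):
    # Five-fold cross-validation on the training set, tuned to maximize F1.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
        scoring="f1",
        cv=GroupKFold(n_splits=5),
    )
    search.fit(X[train_idx], y[train_idx], groups=groups[train_idx])
    y_pred = search.predict(X[test_idx])
    scores["f1"].append(f1_score(y[test_idx], y_pred))
    scores["precision"].append(precision_score(y[test_idx], y_pred))
    scores["recall"].append(recall_score(y[test_idx], y_pred))

# Report mean (standard deviation) over the 10 test sets, as in the abstract.
for name, vals in scores.items():
    print(f"{name}: {np.mean(vals) * 100:.1f} ({np.std(vals) * 100:.1f})%")
```

With real data, the placeholder arrays would be replaced by strips from each sensor (acceleration, respiration, electrocardiography), and single- versus multi-sensor models would be compared by concatenating the corresponding feature sets.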