Munoz Mario Francisco, Huy Hoang Vu, Le Thanh-Dung, Jouvet Philippe, Noumeir Rita
Electrical Engineering Department, École de Technologie Supérieure, Montréal, QC H3C 1K3, Canada.
Sainte-Justine Mother and Child University Hospital Center, Montréal, QC H3C 1K3, Canada.
IEEE Open J Eng Med Biol. 2024 Nov 20;6:176-182. doi: 10.1109/OJEMB.2024.3503499. eCollection 2025.
Remote patient monitoring has emerged as a prominent non-invasive method, using digital technologies and computer vision (CV) to replace traditional invasive monitoring. While neonatal and pediatric departments have embraced this approach, Pediatric Intensive Care Units (PICUs) face the challenge of occlusions that hinder accurate image analysis and interpretation. In this study, we propose a hybrid approach to effectively segment common occlusions encountered in remote monitoring applications within PICUs. Our approach centers on a deep-learning pipeline designed for limited-training-data scenarios. First, the well-established Google DeepLabV3+ segmentation model is combined with the transformer-based Segment Anything Model (SAM) for occlusion segmentation mask proposal and refinement. We then train and validate this pipeline on a small dataset acquired in real-world PICU settings with a Microsoft Kinect camera, achieving an Intersection-over-Union (IoU) of 85%. Both quantitative and qualitative analyses underscore the effectiveness of the proposed method. The proposed framework achieves an overall classification performance of 92.5% accuracy, 93.8% recall, 90.3% precision, and 92.0% F1-score. Consequently, the proposed method consistently improves predictions across all metrics, with an average performance gain of 2.75% over the baseline CNN-based framework. Our hybrid approach significantly enhances the segmentation of occlusions in remote patient monitoring within PICU settings. This advancement contributes to improving the quality of care for pediatric patients, addressing a critical need in clinical practice by ensuring more accurate and reliable remote monitoring.
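To make the two-stage idea concrete, the Python sketch below shows one plausible way to chain a CNN proposal stage with SAM box-prompted refinement and to score the result with IoU. It is an illustration under stated assumptions, not the authors' implementation: the occlusion class index, the SAM checkpoint path, and the use of torchvision's DeepLabV3 (rather than DeepLabV3+) as the proposal backbone are all assumptions made for the example.

```python
# Sketch of a proposal-then-refinement occlusion segmentation pipeline (illustrative only).
# Assumes torchvision and Meta's segment-anything package with a local SAM checkpoint.
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet50
from segment_anything import sam_model_registry, SamPredictor


def propose_mask(model, image_tensor, occlusion_class=1, threshold=0.5):
    """Coarse binary occlusion mask from the CNN segmentation head.

    `occlusion_class` is a hypothetical class index for illustration.
    """
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))["out"]          # (1, C, H, W)
        probs = torch.softmax(logits, dim=1)[0, occlusion_class]  # (H, W)
    return (probs > threshold).cpu().numpy().astype(np.uint8)


def refine_with_sam(predictor, image_rgb, coarse_mask):
    """Refine the coarse proposal by prompting SAM with its bounding box."""
    ys, xs = np.nonzero(coarse_mask)
    if len(xs) == 0:
        return coarse_mask                                        # nothing to refine
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])      # XYXY prompt
    predictor.set_image(image_rgb)                                # HxWx3 uint8 RGB frame
    masks, scores, _ = predictor.predict(box=box, multimask_output=True)
    return masks[int(np.argmax(scores))].astype(np.uint8)         # keep best-scoring mask


def iou(pred, target):
    """Intersection-over-Union between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0


if __name__ == "__main__":
    # torchvision ships DeepLabV3 (not V3+); it stands in here for the proposal stage.
    cnn = deeplabv3_resnet50(weights="DEFAULT")
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # checkpoint path is an assumption
    predictor = SamPredictor(sam)

    image_rgb = np.zeros((480, 640, 3), dtype=np.uint8)            # placeholder for a Kinect RGB frame
    image_tensor = torch.from_numpy(image_rgb).permute(2, 0, 1).float() / 255.0

    coarse = propose_mask(cnn, image_tensor)
    refined = refine_with_sam(predictor, image_rgb, coarse)
    print("IoU(coarse, refined) =", iou(coarse, refined))
```

In this sketch the CNN supplies a cheap, possibly imprecise proposal, and SAM's box prompt lets the transformer snap the mask to object boundaries, which is one reasonable reading of the "proposal and refinement" division of labour described in the abstract.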