Chair of Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Aachen, Germany.
Fraunhofer Institute for Microelectronic Circuits and Systems, Duisburg, Germany.
Biomed Eng Online. 2023 Mar 22;22(1):28. doi: 10.1186/s12938-023-01092-0.
Monitoring the body temperature of premature infants is vital, as it allows optimal temperature control and may provide early warning signs for severe diseases such as sepsis. Thermography may be a non-contact and wireless alternative to state-of-the-art, cable-based methods. For monitoring use in clinical practice, automatic segmentation of the different body regions is necessary due to the movement of the infant.
This work presents and evaluates algorithms for automatic segmentation of infant body parts using deep learning methods. Based on a U-Net architecture, three neural networks were developed and compared. While the first two only used one imaging modality (visible light or thermography), the third applied a feature fusion of both. For training and evaluation, a dataset containing 600 visible light and 600 thermography images from 20 recordings of infants was created and manually labeled. In addition, we used transfer learning on publicly available datasets of adults in combination with data augmentation to improve the segmentation results.
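The abstract does not detail which augmentations were used, but a key requirement of any augmentation for segmentation is that geometric transforms must be applied identically to the image and its label mask so the two stay aligned. A minimal sketch of such a paired augmentation (random flip and 90-degree rotation are illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def augment(image, mask, rng):
    """Apply the same random geometric transform to an image and its
    label mask, keeping pixels and labels aligned."""
    if rng.random() < 0.5:
        # horizontal flip of both image and mask
        image, mask = image[:, ::-1], mask[:, ::-1]
    # random rotation by k * 90 degrees, same k for both
    k = int(rng.integers(0, 4))
    image = np.rot90(image, k, axes=(0, 1))
    mask = np.rot90(mask, k, axes=(0, 1))
    return image, mask

# Toy example: mask derived from the image so alignment is checkable
img = np.arange(24).reshape(4, 6)
msk = img % 3
aug_img, aug_msk = augment(img, msk, np.random.default_rng(0))
```

Because the mask here is a deterministic function of the image, alignment after augmentation can be verified directly (`aug_img % 3 == aug_msk`).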
Individual optimization of the three deep learning models revealed that transfer learning and data augmentation improved segmentation regardless of the imaging modality. The fusion model achieved the best results in the final evaluation, with a mean Intersection-over-Union (mIoU) of 0.85, closely followed by the RGB model; only the thermography model achieved a lower accuracy (mIoU of 0.75). Per-class results showed that all body parts were well segmented; only the torso accuracy was inferior, as the models struggle when only small areas of skin are visible.
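The mIoU values above average the Intersection-over-Union over the segmentation classes: for each class, the overlap between predicted and ground-truth pixels is divided by their union, and the per-class scores are averaged. A minimal sketch of this metric on integer label maps (the evaluation details of the paper, e.g. handling of absent classes, are assumptions here):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union across classes.

    pred, target: integer label maps of identical shape.
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class not present in either map
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with two classes (0 = background, 1 = skin)
pred   = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
# mean_iou(pred, target, 2) -> (4/5 + 3/4) / 2 = 0.775
```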
The presented multi-modal neural networks represent a new approach to the problem of infant body segmentation with limited available data. Robust results were obtained by applying feature fusion, cross-modality transfer learning and classical augmentation strategies.
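One common way to realize feature fusion in a dual-branch U-Net is to concatenate the encoder feature maps of the two modalities along the channel axis before they are passed to a shared decoder. The sketch below shows only this fusion step in NumPy; the exact fusion point and architecture of the paper's model are not specified in the abstract, so this is an illustrative simplification:

```python
import numpy as np

def fuse_features(rgb_feat, thermo_feat):
    """Channel-wise concatenation of per-modality encoder features.

    rgb_feat:    (H, W, C_rgb) feature map from the visible-light branch
    thermo_feat: (H, W, C_t)   feature map from the thermography branch
    Spatial dimensions must match; a decoder would consume the fused
    (H, W, C_rgb + C_t) tensor.
    """
    assert rgb_feat.shape[:2] == thermo_feat.shape[:2]
    return np.concatenate([rgb_feat, thermo_feat], axis=-1)

# Toy feature maps from two 8x8 encoder outputs with 64 channels each
fused = fuse_features(np.zeros((8, 8, 64)), np.ones((8, 8, 64)))
```

Concatenation keeps the information of both branches intact and lets the subsequent convolutions learn how to weight the modalities, which is one rationale for fusing at the feature level rather than averaging predictions.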