Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany.
Department of Electromagnetic and Biomedical Engineering, Faculty of Electrical Engineering and Information Technology, University of Zilina, 010 26 Zilina, Slovakia.
Sensors (Basel). 2023 Jan 15;23(2):999. doi: 10.3390/s23020999.
In today's neonatal intensive care units, monitoring vital signs such as heart rate and respiration is fundamental to neonatal care. However, the attached sensors and electrodes restrict movement and, given the immature skin of preterm infants, can cause medical-adhesive-related skin injuries, which may lead to serious complications. Unobtrusive camera-based monitoring techniques, combined with deep-learning-based image processing algorithms, therefore have the potential to enable cable-free vital sign measurements. Because the accuracy of deep-learning-based methods depends on the amount of training data, and image data of neonates are scarce, proper validation of such algorithms is difficult. To enlarge such datasets, this study investigates the application of a conditional generative adversarial network for data augmentation, using edge-detection frames of neonates to create RGB images. Different edge detection algorithms were used to assess the effect of the input images on the adversarial network's generator. The state-of-the-art network architecture Pix2PixHD was adapted, and several hyperparameters were optimized. The quality of the generated RGB images was evaluated using the Fréchet inception distance (FID) and a Mechanical-Turk-like multistage survey completed by 30 volunteers. In a fake-only stage, 23% of the images were categorized as real. A direct comparison of generated and real (manually augmented) images revealed that 28% of the fake images were judged more realistic. An FID score of 103.82 was achieved. The study therefore shows promising results for the training and application of conditional generative adversarial networks to augment highly limited neonatal image datasets.
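The conditioning inputs to the generator are edge-detection frames derived from neonatal images; the paper compares several edge detectors. As a minimal illustration only, a Sobel-gradient-magnitude edge frame (a stand-in sketch, not the authors' exact preprocessing; the threshold value is a hypothetical choice) could be produced like this:

```python
import numpy as np
from scipy import ndimage

def edge_frame(gray: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Binary edge map from the normalized Sobel gradient magnitude.

    gray: 2D grayscale image; thresh: cutoff on the [0, 1]-normalized
    magnitude (hypothetical value, not from the paper).
    """
    img = gray.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal derivative
    gy = ndimage.sobel(img, axis=0)   # vertical derivative
    mag = np.hypot(gx, gy)            # gradient magnitude
    if mag.max() > 0:
        mag /= mag.max()              # normalize to [0, 1]
    return (mag > thresh).astype(np.uint8)
```

Such binary frames would then be paired with the corresponding RGB images to form the (input, target) training pairs of the image-to-image translation network.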
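The FID metric compares the mean and covariance of Inception-feature distributions of real and generated images: FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^{1/2}). A minimal sketch of this formula, assuming the feature arrays have already been extracted (the Inception-v3 feature-extraction step is omitted here):

```python
import numpy as np
from scipy import linalg

def fid_score(feat_real: np.ndarray, feat_fake: np.ndarray) -> float:
    """Fréchet inception distance between two (n_samples, n_dims) feature sets."""
    mu1, mu2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    sigma1 = np.cov(feat_real, rowvar=False)
    sigma2 = np.cov(feat_fake, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; discard tiny
    # imaginary parts introduced by numerical error.
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Lower values indicate that the generated-image distribution is closer to the real one; identical feature distributions give a score near zero.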