Chong Chak Fong, Wang Yapeng, Ng Benjamin, Luo Wuman, Yang Xu
Macao Polytechnic University, Macao Special Administrative Region of China.
Comput Biol Med. 2023 Sep;164:107277. doi: 10.1016/j.compbiomed.2023.107277. Epub 2023 Jul 19.
Automatic interpretation of chest X-ray (CXR) photos taken by smartphones at the same performance level as digital CXRs is challenging, due to the projective transformation caused by the non-ideal camera position. Existing rectification methods for other camera-captured photos (document photos, license plate photos, etc.) cannot precisely rectify the projective transformation of CXR photos, because CXR photos undergo a specific type of projective transformation. In this paper, we propose an innovative deep learning-based Projective Transformation Rectification Network (PTRN) that automatically rectifies the projective transformation of CXR photos by predicting the projective transformation matrix. Additionally, synthetic CXR photos are generated for training, taking into account the visual artifacts of natural images. The effectiveness of the proposed classification pipeline with PTRN is evaluated in the CheXphoto smartphone-captured CXR photo classification competition. It achieves first place by a large margin (AUC 0.850 vs. 0.762 for the second-best entry). Moreover, experimental results show that our approach reaches the same performance level on CXR photo classification (AUC 0.893) as on digital CXR classification (AUC 0.893).
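A minimal sketch of the rectification step described above, assuming the network outputs a 3x3 projective transformation (homography) matrix H that is then used to warp the photo before classification; this is not the authors' code, and the placeholder matrix and file name below are hypothetical.

```python
import cv2
import numpy as np

def rectify_cxr_photo(photo: np.ndarray, H: np.ndarray, out_size=(320, 320)) -> np.ndarray:
    """Warp a smartphone-captured CXR photo with a predicted projective matrix H."""
    return cv2.warpPerspective(photo, H, out_size, flags=cv2.INTER_LINEAR)

# Hypothetical usage: H would come from a PTRN-style network; here we use the
# identity matrix purely as a placeholder to show the warping call.
photo = cv2.imread("cxr_photo.jpg", cv2.IMREAD_GRAYSCALE)
H = np.eye(3, dtype=np.float32)
rectified = rectify_cxr_photo(photo, H)  # rectified image is fed to the CXR classifier
```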