Le Phuong Thi, Pham Bach-Tung, Chang Ching-Chun, Hsu Yi-Chiung, Tai Tzu-Chiang, Li Yung-Hui, Wang Jia-Ching
Department of Computer Science and Information Engineering, National Central University, Taoyuan 320, Taiwan.
Department of Biomedical Sciences and Engineering, National Central University, Taoyuan 320, Taiwan.
Diagnostics (Basel). 2023 Apr 18;13(8):1460. doi: 10.3390/diagnostics13081460.
The need for a lightweight and reliable segmentation algorithm is critical in many biomedical image-prediction applications. However, the limited quantity of available data poses a significant challenge for image segmentation, and low image quality further degrades segmentation performance. Moreover, previous deep learning models for image segmentation require large numbers of parameters and hundreds of millions of computations, resulting in high costs and long processing times. In this study, we introduce a new lightweight segmentation model, the mobile anti-aliasing attention U-Net (MAAU), which comprises an encoder path and a decoder path. The encoder combines an anti-aliasing layer with convolutional blocks to reduce the spatial resolution of input images while preserving shift equivariance. The decoder uses an attention block and a decoder module to capture the prominent features in each channel. To address the data-related problems, we applied data augmentation methods such as flipping, rotation, shearing, translation, and color distortion, which improved segmentation performance on the International Skin Imaging Collaboration (ISIC) 2018 and PH2 datasets. Our experimental results demonstrate that our approach uses only 4.2 million parameters while outperforming various state-of-the-art segmentation methods.
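The anti-aliased downsampling used in the encoder can be illustrated with a minimal numpy sketch (this is not the paper's implementation; the 3x3 binomial kernel, edge padding, and stride of 2 are assumptions): low-pass filtering before striding suppresses aliasing, so downsampled features stay stable under small input shifts, unlike naive strided subsampling.

```python
import numpy as np

def anti_aliased_downsample(x, stride=2):
    """Blur with a 3x3 binomial filter, then subsample by `stride`.

    Illustrative "blur-pool" style anti-aliasing: the normalized
    low-pass kernel removes high frequencies that would otherwise
    alias when the feature map is strided.
    """
    k1 = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1, k1)
    kernel /= kernel.sum()                 # normalized 3x3 binomial blur
    h, w = x.shape
    padded = np.pad(x, 1, mode="edge")     # replicate borders
    blurred = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return blurred[::stride, ::stride]
```

On an alternating-stripe image, naive stride-2 subsampling flips completely when the input shifts by one pixel, while the blurred-then-strided output barely changes, which is the shift-equivariance benefit the anti-aliasing layer targets.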
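The augmentation pipeline named above (flipping, rotation, translation, color distortion) can be sketched in a few lines of numpy; this is a hypothetical illustration, not the authors' code, and shearing is omitted for brevity since it requires interpolation that a real pipeline would delegate to a library.

```python
import numpy as np

def augment(img, rng):
    """Randomly apply flip, 90-degree rotation, translation, and a
    simple brightness-scaling color distortion to an HxWxC image
    with values in [0, 1]."""
    if rng.random() < 0.5:                    # horizontal flip
        img = img[:, ::-1]
    k = int(rng.integers(0, 4))               # rotate by 0/90/180/270 degrees
    img = np.rot90(img, k)
    dy, dx = rng.integers(-2, 3, size=2)      # small wrap-around translation
    img = np.roll(img, (dy, dx), axis=(0, 1))
    scale = rng.uniform(0.9, 1.1)             # brightness jitter
    return np.clip(img * scale, 0.0, 1.0)
```

Each call yields a differently transformed copy of the same labeled image, which is how augmentation compensates for the limited quantity of training data.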