Wang Xiaoyan, Yu Jianhao, Zhang Bangze, Huang Xiaojie, Shen Xiaoting, Xia Ming
School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China.
The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China.
J Appl Clin Med Phys. 2025 Feb;26(2):e14584. doi: 10.1002/acm2.14584. Epub 2024 Dec 1.
Increasing the capacity of convolutional neural networks (CNNs) can improve segmentation accuracy in medical image analysis, but it also increases network complexity and makes training harder, especially under resource limitations. Conversely, lightweight models offer efficiency but often sacrifice accuracy. This paper addresses the challenge of balancing efficiency and accuracy by proposing LightAWNet, a lightweight adaptive weighting neural network for medical image segmentation.
We designed LightAWNet with an efficient inverted bottleneck encoder block refined by spatial attention. A two-branch strategy separately extracts detail and spatial features and fuses them, enhancing the reusability of the model's feature maps. Additionally, a lightweight, optimized up-sampling operation replaces the traditional transposed convolution, and channel attention is applied in the decoder to produce more accurate outputs efficiently.
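The authors' code is not yet released, so the following is only a minimal PyTorch sketch of the kinds of building blocks the abstract describes (inverted bottleneck encoder with spatial attention, lightweight up-sampling in place of transposed convolution, and channel attention in the decoder). All class names, channel widths, and hyper-parameters are illustrative assumptions, not the published LightAWNet architecture.

```python
# Illustrative sketch only; not the authors' released LightAWNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Re-weights spatial locations using pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg_pool = x.mean(dim=1, keepdim=True)       # (B, 1, H, W)
        max_pool, _ = x.max(dim=1, keepdim=True)     # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn

class ChannelAttention(nn.Module):
    """SE-style channel re-weighting, as one might use in the decoder."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class InvertedBottleneckEncoder(nn.Module):
    """Expand -> depthwise conv -> project, refined by spatial attention."""
    def __init__(self, in_ch: int, out_ch: int, expand: int = 4):
        super().__init__()
        mid = in_ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),  # depthwise
            nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        self.attn = SpatialAttention()

    def forward(self, x):
        return self.attn(self.block(x))

class LightweightUpsample(nn.Module):
    """Bilinear up-sampling plus a 1x1 projection instead of a transposed conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return self.proj(x)

if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)
    enc = InvertedBottleneckEncoder(16, 32)
    up = LightweightUpsample(32, 16)
    dec_attn = ChannelAttention(16)
    print(dec_attn(up(enc(x))).shape)  # torch.Size([1, 16, 128, 128])
```

Parameter-light choices such as depthwise convolutions and interpolation-based up-sampling are one plausible route to the small parameter count reported below; the actual design details should be taken from the authors' repository once released.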
Experimental results on the LiTS2017, MM-WHS, ISIC2018, and Kvasir-SEG datasets demonstrate that LightAWNet achieves state-of-the-art performance with only 2.83 million parameters. Our model significantly outperforms existing methods in terms of segmentation accuracy, highlighting its effectiveness in maintaining high performance with reduced complexity.
LightAWNet successfully balances efficiency and accuracy in medical image segmentation. The innovative use of spatial attention, dual-branch feature extraction, and optimized up-sampling operations contribute to its superior performance. These findings offer valuable insights for the development of resource-efficient yet highly accurate segmentation models in medical imaging. The code will be made available at https://github.com/zjmiaprojects/lightawnet upon acceptance for publication.