Department of General Surgery, Shengjing Hospital of China Medical University, Liaoning 110004, China.
Comput Methods Programs Biomed. 2021 Aug;207:106210. doi: 10.1016/j.cmpb.2021.106210. Epub 2021 May 29.
To improve the efficiency of recognizing gastric cancer pathological slice images and segmenting cancerous regions, this paper proposes an automatic gastric cancer segmentation model based on the Deeplab v3+ neural network.
Based on 1240 gastric cancer pathological slice images, this paper proposes a multi-scale input Deeplab v3+ network and compares it with SegNet and ICNet in terms of sensitivity, specificity, accuracy, and Dice coefficient.
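As a rough illustration only (the abstract does not specify the paper's exact multi-scale design), a multi-scale input scheme around a DeepLab v3-style segmentation network could be sketched as below; the torchvision ResNet-50 backbone, the scale factors, and the score-averaging step are assumptions made for this sketch.

# A minimal sketch of multi-scale inference around a DeepLab v3-style model.
# The backbone (torchvision's deeplabv3_resnet50), the scales, and the fusion
# by score averaging are illustrative assumptions, not the paper's method.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=2)  # background vs. cancerous region
model.eval()

@torch.no_grad()
def multiscale_segment(image, scales=(0.75, 1.0, 1.25)):
    """Run the model on rescaled copies of `image` (N, 3, H, W) and average
    the class scores at the original resolution."""
    _, _, h, w = image.shape
    fused = torch.zeros(image.size(0), 2, h, w)
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False, recompute_scale_factor=True)
        logits = model(scaled)["out"]              # scores at the scaled size
        fused += F.interpolate(logits, size=(h, w),
                               mode="bilinear", align_corners=False)
    return (fused / len(scales)).argmax(dim=1)     # per-pixel class map

# Example usage: mask = multiscale_segment(torch.rand(1, 3, 512, 512))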
Deeplab v3+ achieves a sensitivity of 91.45%, a specificity of 92.31%, an accuracy of 95.76%, and a Dice coefficient of 91.66%, more than 12% higher than the SegNet and Faster-RCNN models, while the parameter scale of the model is also greatly reduced.
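For reference, these metrics follow their standard pixel-wise definitions over a binary predicted mask and a binary ground-truth mask (1 = cancerous pixel); the following minimal sketch uses illustrative variable names.

# Standard per-image segmentation metrics computed from binary masks.
import numpy as np

def segmentation_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # cancerous pixels found
    tn = np.logical_and(~pred, ~truth).sum()  # normal pixels kept normal
    fp = np.logical_and(pred, ~truth).sum()   # normal pixels marked cancerous
    fn = np.logical_and(~pred, truth).sum()   # cancerous pixels missed
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),
    }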
Our automatic gastric cancer segmentation model based on the Deeplab v3+ neural network achieves better results in improving segmentation accuracy while saving computing resources. Deeplab v3+ is worth further adoption in medical image analysis and diagnosis of gastric cancer.