IEEE J Biomed Health Inform. 2019 May;23(3):1205-1214. doi: 10.1109/JBHI.2018.2850040. Epub 2018 Jun 25.
Recent advances in deep learning have produced encouraging results for biomedical image segmentation; however, outcomes rely heavily on comprehensive annotation. In this paper, we propose a neural network architecture and a new algorithm, called overlapped region forecast, for the automatic segmentation of gastric cancer images. To the best of our knowledge, this is the first report of deep learning being applied to the segmentation of gastric cancer images. We also present a reiterative learning framework that trains a simple network on weakly annotated biomedical images and achieves superior performance without pretraining or further manual annotation. We customize the loss function so that the model converges faster while avoiding becoming trapped in local minima, and our overlapped region forecast algorithm eliminates patch boundary errors. By studying the characteristics of models trained with two different patch extraction methods, we train iteratively and integrate predictions with the weak annotations to improve the quality of the training data. With these methods, we achieve a mean Intersection over Union (IoU) of 0.883 and a mean accuracy of 91.09% on the partially labeled dataset, winning the 2017 China Big Data and Artificial Intelligence Innovation and Entrepreneurship Competition.
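The abstract does not spell out the overlapped region forecast algorithm; the sketch below only illustrates the general idea it alludes to, namely suppressing patch boundary errors by running inference on overlapping patches and averaging the probabilities wherever patches overlap. The function name `predict_with_overlap`, the patch size, and the stride are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of overlapped patch-based inference (assumed formulation).
import numpy as np

def predict_with_overlap(image, model, patch=256, stride=128):
    """Slide a window with 50% overlap and average per-pixel probabilities.

    `model(tile)` is assumed to return a (patch, patch) probability map.
    Padding the image to a multiple of `stride` is omitted for brevity.
    """
    h, w = image.shape[:2]
    prob = np.zeros((h, w), dtype=np.float32)   # accumulated probabilities
    count = np.zeros((h, w), dtype=np.float32)  # number of patches covering each pixel
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            tile = image[y:y + patch, x:x + patch]
            p = model(tile)
            prob[y:y + patch, x:x + patch] += p
            count[y:y + patch, x:x + patch] += 1.0
    # Pixels near patch borders are covered by several patches, so boundary
    # artifacts of any single patch are averaged out.
    return prob / np.maximum(count, 1.0)
```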
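Likewise, the reiterative learning framework is only summarized here; the sketch below shows one plausible way to merge model predictions with the original weak annotations to produce cleaner labels for the next training round. The merge rule (keep weak-positive pixels, add only high-confidence predicted positives) and the threshold are assumptions, not the authors' exact procedure.

```python
# Hypothetical label-refinement step for the reiterative learning loop.
import numpy as np

def refine_labels(weak_mask, pred_prob, keep_thresh=0.9):
    """Combine a weak binary annotation with a model probability map."""
    confident_pos = pred_prob >= keep_thresh            # pixels the model is confident are lesion
    refined = np.logical_or(weak_mask.astype(bool), confident_pos)
    return refined.astype(np.uint8)

# Each round: train on (image, refined) pairs, re-run predict_with_overlap,
# then call refine_labels again until the labels stabilize.
```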