Huang Wei, Chen Chang, Xiong Zhiwei, Zhang Yueyi, Chen Xuejin, Sun Xiaoyan, Wu Feng
IEEE Trans Med Imaging. 2022 Nov;41(11):3016-3028. doi: 10.1109/TMI.2022.3176050. Epub 2022 Oct 27.
Emerging deep learning-based methods have enabled great progress in automatic neuron segmentation from Electron Microscopy (EM) volumes. However, the success of existing methods relies heavily on a large number of annotations, which are often expensive and time-consuming to collect due to the dense distribution and complex structure of neurons. When the quantity of manual annotations required for learning cannot be reached, these methods become fragile. To address this issue, we propose a two-stage, semi-supervised learning method for neuron segmentation that fully extracts useful information from unlabeled data. First, we devise a proxy task that enables network pre-training by reconstructing original volumes from their perturbed counterparts. This pre-training strategy implicitly extracts meaningful information about neuron structures from unlabeled data to facilitate the next stage of learning. Second, we regularize the supervised learning process with pixel-level prediction consistency between unlabeled samples and their perturbed counterparts. This improves the generalizability of the learned model to diverse data distributions in EM volumes, especially when the number of labels is limited. Extensive experiments on representative EM datasets demonstrate the superior performance of our reinforced consistency learning compared to supervised learning, i.e., up to a 400% gain on the VOI metric with only a few available labels, on par with a model trained on ten times the amount of labeled data in a supervised manner. Code is available at https://github.com/weih527/SSNS-Net.
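The two training signals described above can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); the perturbation, the stand-in "network", and all function names here are illustrative assumptions, using plain NumPy and mean-squared-error losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(volume, noise_std=0.1):
    """A toy perturbation: add Gaussian intensity noise to an EM volume.
    (The paper's actual perturbations may differ; this is illustrative.)"""
    return volume + rng.normal(0.0, noise_std, size=volume.shape)

def reconstruction_loss(original, reconstructed):
    """Stage 1 proxy task: penalize the network's reconstruction of the
    original volume from its perturbed counterpart (here, voxel-wise MSE)."""
    return float(np.mean((original - reconstructed) ** 2))

def consistency_loss(pred_clean, pred_perturbed):
    """Stage 2 regularizer: pixel-level consistency between predictions on an
    unlabeled sample and on its perturbed counterpart."""
    return float(np.mean((pred_clean - pred_perturbed) ** 2))

# Toy 3D array standing in for an unlabeled EM sub-volume.
volume = rng.random((8, 32, 32))
perturbed = perturb(volume)

# Pre-training signal: with an identity "network", the reconstruction loss
# equals the injected noise power the real network would learn to remove.
loss_pre = reconstruction_loss(volume, perturbed)

# Consistency signal: identical predictions incur zero penalty, so the
# regularizer pushes predictions on clean and perturbed inputs to agree.
loss_cons_identical = consistency_loss(volume, volume)
```

In a real semi-supervised loop, `consistency_loss` would be computed on network outputs for unlabeled batches and added, with a weighting factor, to the supervised segmentation loss on the labeled batches.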