IEEE Trans Pattern Anal Mach Intell. 2019 Jul;41(7):1669-1680. doi: 10.1109/TPAMI.2018.2835450. Epub 2018 May 24.
We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-Net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27, 15, and 250 percent. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial-section EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of $\sim$2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
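To illustrate the agglomeration side of the pipeline, the following is a minimal, simplified sketch (not the paper's implementation): voxels connected by edges whose predicted affinity exceeds a threshold are merged into the same segment, processed Kruskal-style with a union-find structure. All names and the toy data are illustrative assumptions.

```python
# Simplified sketch of affinity-threshold agglomeration (illustrative only,
# not the authors' code): merge voxel pairs in descending affinity order
# until the affinity drops below the chosen threshold.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Path halving keeps trees shallow.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra


def segment_by_affinity(n_voxels, edges, threshold):
    """edges: list of (affinity, u, v) tuples; merge while affinity >= threshold."""
    uf = UnionFind(n_voxels)
    for aff, u, v in sorted(edges, reverse=True):
        if aff < threshold:
            break  # remaining edges are all weaker
        uf.union(u, v)
    # Return one representative label per voxel.
    return [uf.find(i) for i in range(n_voxels)]


# Toy example: 4 voxels in a chain; the weak middle affinity splits the chain
# into two segments at threshold 0.5.
labels = segment_by_affinity(4, [(0.9, 0, 1), (0.2, 1, 2), (0.8, 2, 3)], 0.5)
```

In the full method, the same idea is applied iteratively on region adjacency graphs with percentile-based edge scores rather than raw voxel affinities.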