Roberts Mike, Jeong Won-Ki, Vázquez-Reina Amelio, Unger Markus, Bischof Horst, Lichtman Jeff, Pfister Hanspeter
Harvard University, USA.
Med Image Comput Comput Assist Interv. 2011;14(Pt 1):621-8. doi: 10.1007/978-3-642-23623-5_78.
We present a novel semi-automatic method for segmenting neural processes in large, highly anisotropic electron microscopy (EM) image stacks. Our method takes advantage of sparse scribble annotations provided by the user to guide a 3D variational segmentation model, allowing it to enforce 3D geometric constraints on the segmentation in a globally optimal way. Moreover, we leverage a novel algorithm for propagating segmentation constraints through the image stack via optimal volumetric pathways, allowing our method to compute highly accurate 3D segmentations from very sparse user input. We evaluate our method by reconstructing 16 neural processes in a 1024 × 1024 × 50 nanometer-scale EM image stack of a mouse hippocampus. We demonstrate that, on average, our method is 68% more accurate than previous state-of-the-art semi-automatic methods.
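To make the core idea concrete, here is a minimal illustrative sketch, not the authors' implementation: sparse scribble labels are propagated through a 3D stack along minimum-cost voxel paths (Dijkstra's algorithm), with a higher penalty `z_weight` on inter-slice steps to model the stack's anisotropy. The paper's variational model and optimal-pathway algorithm are not reproduced here; the function name, cost terms, and parameters below are all assumptions for illustration.

```python
# Illustrative analogue (assumed, not the paper's method): propagate sparse
# scribble labels through an anisotropic 3D volume via minimum-cost paths.
import heapq

def propagate_scribble(volume, seeds, z_weight=4.0):
    """Assign each voxel the label of the seed reachable at minimum path cost.

    volume : nested list volume[z][y][x] of intensities in [0, 1]
    seeds  : dict {(z, y, x): label} -- sparse user scribbles
    z_weight penalizes steps between slices (anisotropy in z).
    """
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    best = {}    # voxel -> lowest path cost found so far
    label = {}   # voxel -> label of the cheapest reaching seed
    heap = []
    for v, lab in seeds.items():
        best[v] = 0.0
        label[v] = lab
        heapq.heappush(heap, (0.0, v, lab))
    # In-plane neighbors cost 1x; out-of-plane neighbors cost z_weight x.
    steps = [(0, 0, 1, 1.0), (0, 0, -1, 1.0),
             (0, 1, 0, 1.0), (0, -1, 0, 1.0),
             (1, 0, 0, z_weight), (-1, 0, 0, z_weight)]
    while heap:
        c, (z, y, x), lab = heapq.heappop(heap)
        if c > best.get((z, y, x), float("inf")):
            continue  # stale queue entry
        for dz, dy, dx, w in steps:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X):
                continue
            # Step cost grows with the intensity difference, so paths
            # prefer to stay inside one homogeneous neural process.
            step = w * (abs(volume[nz][ny][nx] - volume[z][y][x]) + 1e-3)
            nc = c + step
            if nc < best.get((nz, ny, nx), float("inf")):
                best[(nz, ny, nx)] = nc
                label[(nz, ny, nx)] = lab
                heapq.heappush(heap, (nc, (nz, ny, nx), lab))
    return label

if __name__ == "__main__":
    # Two slices; left half dark (one process), right half bright (another).
    vol = [[[0.1, 0.1, 0.9, 0.9]], [[0.1, 0.1, 0.9, 0.9]]]
    labels = propagate_scribble(vol, {(0, 0, 0): 1, (0, 0, 3): 2})
    print(labels[(0, 0, 1)], labels[(0, 0, 2)])
```

In this toy example, the single scribble on slice 0 suffices to label the corresponding voxels on slice 1 as well, echoing the paper's point that constraints propagated along optimal volumetric pathways let very sparse input drive a full 3D segmentation.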