Fernandez Pablo Blasco, Gopinath Karthik, Williams-Ramirez John, Herisse Rogeny, Deden-Binder Lucas J, Zemlyanker Dina, Connors Theressa, Kozanno Liana, Oakley Derek, Hyman Bradley, Young Sean I, Iglesias Juan Eugenio
ETH Zurich, Zurich, Switzerland.
Martinos Center for Biomedical Imaging, MGH & Harvard Medical School.
Mach Learn Med Imaging. 2025;15242:74-84. doi: 10.1007/978-3-031-73290-4_8. Epub 2024 Oct 23.
Parcellation of mesh models for cortical analysis is a central problem in neuroimaging. Most classical and deep learning methods impose requirements on mesh topology, accepting only inputs that are homeomorphic to a sphere (i.e., with no holes or handles). Topology correction algorithms do exist, but their computational complexity is quadratic in the size of the topological defects and can take hours, effectively precluding segmentation of topologically incorrect meshes, including those derived from imperfect segmentations or obtained from inherently noisy modalities like surface scanning. Furthermore, deep learning mesh segmentation also struggles with surface scans of brains, because these are relatively nondescript and require modeling of longer-range dependencies. Here we propose "pseudo-render-inverse-render" (PRIR), a novel perspective on cortical mesh parcellation that effectively reframes the problem as a 2D segmentation task using a direct-inverse rendering framework. Our approach: renders the mesh from a number of perspectives, projecting the three components of the face normal vectors to a three-channel image; segments these images with U-Nets; maps the 2D segmentations back to vertices (inverse rendering); and aggregates the information from multiple views, postprocessing the output with a Markov Random Field to ensure smoothness and to segment occluded areas. PRIR is not affected by mesh topology and easily captures long-range dependencies with the U-Nets. Our results demonstrate: state-of-the-art accuracy on topologically correct white matter meshes; equally accurate performance on simulated surface scans; and robust segmentation of real surface scans.
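To make the multi-view aggregation step concrete, the sketch below illustrates one way the "inverse rendering" described in the abstract could work: each rendered view provides a per-pixel face-ID map (from rasterization) and a per-pixel class-probability map (from a U-Net); probabilities are scattered back onto mesh faces and summed across views, and faces never seen from any view are left unlabeled for the MRF postprocessing. All function names and array layouts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def accumulate_face_probs(face_id_maps, prob_maps, n_faces, n_classes):
    """Aggregate per-pixel U-Net probabilities onto mesh faces across views.

    face_id_maps : list of (H, W) int arrays, -1 for background pixels
    prob_maps    : list of (H, W, n_classes) float arrays (softmax outputs)
    """
    face_probs = np.zeros((n_faces, n_classes), dtype=np.float64)
    for face_ids, probs in zip(face_id_maps, prob_maps):
        visible = face_ids >= 0                # pixels covered by the mesh
        ids = face_ids[visible]                # (P,) face index per pixel
        p = probs[visible]                     # (P, n_classes) class probabilities
        np.add.at(face_probs, ids, p)          # scatter-add votes per face
    return face_probs

def face_labels(face_probs):
    """Pick the most-voted label per face; occluded faces get label -1."""
    labels = face_probs.argmax(axis=1)
    labels[face_probs.sum(axis=1) == 0] = -1   # unseen faces: left for the MRF
    return labels

# Toy usage: two 4x4 "views" of a 3-face mesh with 2 classes.
rng = np.random.default_rng(0)
fids = [rng.integers(-1, 3, size=(4, 4)) for _ in range(2)]
probs = [rng.dirichlet(np.ones(2), size=(4, 4)) for _ in range(2)]
print(face_labels(accumulate_face_probs(fids, probs, n_faces=3, n_classes=2)))
```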