Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui, 230601, China.
SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu, 210096, China.
Bioinformatics. 2024 Nov 1;40(11). doi: 10.1093/bioinformatics/btae606.
Recent brain mapping efforts are producing large-scale whole-brain images using different imaging modalities. Accurate alignment and delineation of anatomical structures in these images are essential for numerous studies. These requirements are typically modeled as two distinct tasks: registration and segmentation. However, prevailing methods fail to fully explore and utilize the inherent correlation and complementarity between the two tasks. Furthermore, variations in brain anatomy, brightness, and texture pose another formidable challenge in designing multi-modal similarity metrics. A high-throughput approach that overcomes the bottleneck of multi-modal similarity metric design, while effectively leveraging the highly correlated and complementary nature of the two tasks, is highly desirable.
We introduce a deep learning framework for joint registration and segmentation of multi-modal brain images. Under this framework, the registration and segmentation tasks are deeply coupled and collaborate at two hierarchical layers. In the inner layer, we establish a strong feature-level coupling between the two tasks by learning a unified common latent feature representation. In the outer layer, we introduce a mutually supervised dual-branch network to decouple the latent features and facilitate task-level collaboration between registration and segmentation. Because the latent features we design are also modality-independent, the bottleneck of designing multi-modal similarity metrics is essentially resolved. Another merit of this framework is the interpretability of the latent features, which allows intuitive manipulation of feature learning and thereby further improves training efficiency and the performance of both tasks. Extensive experiments on both multi-modal and mono-modal datasets of mouse and human brains demonstrate the superiority of our method.
The code is available at https://github.com/tingtingup/DCRS.
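To make the dual-branch idea concrete, the sketch below shows a minimal shared-encoder network in PyTorch: one encoder maps images from either modality into a common latent space, a registration head predicts a displacement field from the paired latents, and a segmentation head predicts per-voxel labels. This is an illustrative toy, not the authors' DCRS implementation; all layer sizes and names (`DualBranchNet`, `reg_head`, `seg_head`) are hypothetical.

```python
import torch
import torch.nn as nn

class DualBranchNet(nn.Module):
    """Toy joint registration/segmentation network (illustrative only).

    A single shared encoder produces modality-independent latent features;
    two task heads decouple them into a deformation field and a label map.
    """

    def __init__(self, in_ch=1, feat=16, n_labels=4):
        super().__init__()
        # Shared encoder: common latent representation for both modalities.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Registration branch: 3-channel displacement field from paired latents.
        self.reg_head = nn.Conv3d(2 * feat, 3, 3, padding=1)
        # Segmentation branch: per-voxel label scores from one image's latents.
        self.seg_head = nn.Conv3d(feat, n_labels, 3, padding=1)

    def forward(self, moving, fixed):
        z_moving = self.encoder(moving)
        z_fixed = self.encoder(fixed)
        flow = self.reg_head(torch.cat([z_moving, z_fixed], dim=1))
        seg = self.seg_head(z_fixed)
        return flow, seg

# Example: two small synthetic 3D volumes standing in for two modalities.
net = DualBranchNet()
moving = torch.randn(1, 1, 16, 16, 16)
fixed = torch.randn(1, 1, 16, 16, 16)
flow, seg = net(moving, fixed)
# flow: (1, 3, 16, 16, 16) displacement field; seg: (1, 4, 16, 16, 16) label scores
```

In a real setting the two heads would supervise each other (e.g., the predicted deformation warps one image's labels to constrain the other's segmentation), which is the task-level collaboration the abstract describes.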