Zheng Weijie, An Yiping, Li Kang, Wang Jinyue, Gao Jianqing, Mu Huawei, Tang Jin, Wang Hao
AHU-IAI AI Joint Laboratory, Anhui University, Hefei, China.
Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China.
Front Neurosci. 2025 Jul 7;19:1622950. doi: 10.3389/fnins.2025.1622950. eCollection 2025.
Accurate mapping of the spatial distribution of diverse cell types is essential for understanding the cellular organization of the brain. However, cellular heterogeneity and the substantial cost of manually annotating cells in volumetric images hinder existing neural networks from achieving high-precision segmentation of multiple cell types within a unified framework.
To address this challenge, we introduce a self-supervised learning framework, the Voxelwise U-shaped Swin-Mamba network (VUSMamba), for automatic segmentation of multiple neuronal populations in 300 μm-thick brain slices. VUSMamba employs contrastive learning and pretext tasks for self-supervised learning on unlabeled data, followed by fine-tuning with minimal annotations. As a proof of concept, we applied the framework to a multi-cell-type dataset obtained using multiplexed fluorescence in situ hybridization (multi-FISH) combined with high-speed volumetric microscopy (VISoR).
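The abstract does not specify which contrastive objective VUSMamba uses; as an illustration of the pretraining stage it describes, the sketch below implements the widely used NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss in NumPy. The function name, batch shapes, and temperature value are illustrative assumptions, not details from the paper; positives are two augmented views of the same volume, and all other samples in the batch act as negatives.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of view embeddings.

    z1, z2: (N, D) embeddings of two augmentations of the same N volumes.
    Positive pairs are (z1[i], z2[i]); all other samples are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize -> cosine sim
    sim = z @ z.T / temperature                       # (2N, 2N) scaled similarities
    # Mask self-similarity so a sample is never its own negative.
    np.fill_diagonal(sim, -np.inf)
    n = z1.shape[0]
    # Index of each sample's positive partner (i <-> i + N).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Log-softmax over each row, then pick out the positive pair's log-probability.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))   # nearly identical views
loss_aligned = nt_xent_loss(z1, z2)
loss_random = nt_xent_loss(z1, rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned views yield the lower loss
```

Minimizing such a loss pulls embeddings of augmented views of the same volume together while pushing apart embeddings of different volumes, which is the mechanism that lets the encoder learn from unlabeled data before the fine-tuning stage.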
Compared to state-of-the-art baseline models, VUSMamba achieves higher segmentation accuracy with reduced computational cost. The framework enables simultaneous high-precision segmentation of glutamatergic neurons, GABAergic neurons, and nuclei.
This work presents a unified self-supervised neural network framework that offers a standardized pipeline for constructing and analyzing whole-brain cell-type atlases.