

Self-supervised learning analysis of multi-FISH labeled cell-type map in thick brain slices.

Authors

Zheng Weijie, An Yiping, Li Kang, Wang Jinyue, Gao Jianqing, Mu Huawei, Tang Jin, Wang Hao

Affiliations

AHU-IAI AI Joint Laboratory, Anhui University, Hefei, China.

Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China.

Publication

Front Neurosci. 2025 Jul 7;19:1622950. doi: 10.3389/fnins.2025.1622950. eCollection 2025.

Abstract

INTRODUCTION

Accurate mapping of the spatial distribution of diverse cell types is essential for understanding the cellular organization of the brain. However, cellular heterogeneity and the substantial cost of manually annotating cells in volumetric images prevent existing neural networks from achieving high-precision segmentation of multiple cell types within a unified framework.

METHODS

To address this challenge, we introduce a self-supervised learning framework, the Voxelwise U-shaped Swin-Mamba network (VUSMamba), for automatic segmentation of multiple neuronal populations in 300 μm-thick brain slices. VUSMamba uses contrastive learning and pretext tasks for self-supervised pretraining on unlabeled data, followed by fine-tuning with minimal annotations. As a proof of concept, we applied the framework to a multi-cell-type dataset acquired with multiplexed fluorescence in situ hybridization (multi-FISH) combined with high-speed volumetric VISoR microscopy.
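The abstract does not include implementation details, but the contrastive pretraining it describes is commonly realized with an InfoNCE-style objective: embeddings of two augmented views of the same voxel patch are pulled together while other patches in the batch serve as negatives. The sketch below illustrates that objective only; the function name `info_nce_loss`, the batch shapes, and the temperature value are illustrative assumptions, not the authors' code.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss between two augmented views.

    z_a, z_b: (N, D) embeddings of N voxel patches under two augmentations.
    Matching rows are positive pairs; all other rows act as negatives.
    """
    # L2-normalize so the dot product equals cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs lie on the diagonal; minimize their negative log-likelihood.
    return -np.mean(np.diag(log_prob))

# Toy check: identical views should score a much lower loss than unrelated ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_same = info_nce_loss(z, z)
loss_diff = info_nce_loss(z, rng.normal(size=(8, 16)))
```

In a full pipeline this loss would pretrain the encoder on unlabeled volumes, after which the network is fine-tuned on the small annotated set, as the METHODS paragraph outlines.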

RESULTS

Compared to state-of-the-art baseline models, VUSMamba achieves higher segmentation accuracy with reduced computational cost. The framework enables simultaneous high-precision segmentation of glutamatergic neurons, GABAergic neurons, and nuclei.

DISCUSSION

This work presents a unified self-supervised neural network framework that offers a standardized pipeline for constructing and analyzing whole-brain cell-type atlases.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5044/12277362/0cd4aaa65f1c/fnins-19-1622950-g0001.jpg
