LYNSU: automated 3D neuropil segmentation of fluorescent images for brains.

Author information

Hsu Kai-Yi, Shih Chi-Tin, Chen Nan-Yow, Lo Chung-Chuan

Affiliation information

Institute of Systems Neuroscience, College of Life Science, National Tsing Hua University, Hsinchu, Taiwan.

Department of Applied Physics, Tunghai University, Taichung, Taiwan.

Publication information

Front Neuroinform. 2024 Jul 29;18:1429670. doi: 10.3389/fninf.2024.1429670. eCollection 2024.

Abstract

The brain atlas, which provides information about the distribution of genes, proteins, neurons, or anatomical regions, plays a crucial role in contemporary neuroscience research. To analyze the spatial distribution of these features based on images from different brain samples, we often need to warp and register individual brain images to a standard brain template. However, the process of warping and registration may introduce spatial errors, thereby severely reducing the accuracy of the analysis. To address this issue, we developed an automated method for segmenting neuropils in fluorescence images of the brain from the database. This technique allows future brain atlas studies to be conducted accurately at the individual level without warping and aligning to a standard brain template. Our method, LYNSU (Locating by YOLO and Segmenting by U-Net), consists of two stages. In the first stage, we use the YOLOv7 model to quickly locate neuropils and extract small-scale 3D images as input for the second-stage model. This stage achieves a 99.4% accuracy rate in neuropil localization. In the second stage, we employ a 3D U-Net model to segment neuropils. LYNSU achieves high segmentation accuracy using a small training set consisting of images from merely 16 brains. We demonstrate LYNSU on six distinct neuropils or structures, achieving segmentation accuracy comparable to professional manual annotations, with a 3D Intersection-over-Union (IoU) of up to 0.869. Our method takes only about 7 s to segment a neuropil while performing at a level similar to human annotators. To demonstrate a use case of LYNSU, we applied it to all female brains from the database to investigate the asymmetry of the mushroom bodies (MBs), the learning center of fruit flies. We used LYNSU to segment the bilateral MBs and compared the left and right volumes for each individual. Notably, of 8,703 valid brain samples, 10.14% showed bilateral volume differences exceeding 10%. The study demonstrates the potential of the proposed method for high-throughput anatomical analysis and connectome construction of the brain.
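To make the reported metrics concrete, below is a minimal Python sketch (assuming binary voxel masks stored as NumPy arrays) of the 3D Intersection-over-Union used to score segmentations and of one plausible way to compute the left/right mushroom-body volume difference. The function names, the toy masks, and the normalization by the larger of the two volumes are illustrative assumptions, not taken from the LYNSU codebase.

```python
import numpy as np

def iou_3d(pred: np.ndarray, gt: np.ndarray) -> float:
    """3D Intersection-over-Union between two binary voxel masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

def bilateral_volume_difference(left_mask: np.ndarray, right_mask: np.ndarray) -> float:
    """Relative left/right volume difference (voxel counts), normalized by the
    larger volume. This normalization is an assumption for illustration."""
    v_left = int(left_mask.astype(bool).sum())
    v_right = int(right_mask.astype(bool).sum())
    larger = max(v_left, v_right)
    return abs(v_left - v_right) / larger if larger > 0 else 0.0

# Hypothetical masks; in the paper these would be the 3D U-Net outputs for the
# left and right mushroom bodies of one brain sample.
left = np.zeros((64, 64, 64), dtype=np.uint8)
left[10:40, 10:40, 10:40] = 1
right = np.zeros((64, 64, 64), dtype=np.uint8)
right[10:36, 10:40, 10:40] = 1

print(iou_3d(left, right))                              # overlap score in [0, 1]
print(bilateral_volume_difference(left, right) > 0.10)  # asymmetry flag (>10% difference)
```

Applied per sample across the database, such an asymmetry flag would reproduce the kind of count reported above (the fraction of brains whose bilateral MB volumes differ by more than 10%), though the paper's exact normalization may differ from this sketch.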

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b9dc/11317296/47dba74c2585/fninf-18-1429670-g001.jpg
