

Transfer learning improves performance in volumetric electron microscopy organelle segmentation across tissues.

Authors

Xie Ronald, Mulcahy Ben, Darbandi Ali, Marwah Sagar, Ali Fez, Lee Yuna, Parlakgul Gunes, Hotamisligil Gokhan S, Wang Bo, MacParland Sonya, Zhen Mei, Bader Gary D

Affiliations

Terrence Donnelly Centre for Cellular & Biomolecular Research, University of Toronto, Toronto, ON, M5S 3E1, Canada.

Peter Munk Cardiac Centre and Joint Department of Medical Imaging, University Health Network, Toronto, ON, M5G 2N2, Canada.

Publication

Bioinform Adv. 2025 Apr 2;5(1):vbaf021. doi: 10.1093/bioadv/vbaf021. eCollection 2025.

DOI: 10.1093/bioadv/vbaf021
PMID: 40196751
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11974384/
Abstract

MOTIVATION

Volumetric electron microscopy (VEM) enables nanoscale-resolution three-dimensional imaging of biological samples. Interpreting the image volume requires identifying and labeling organelles, cells, and other structures, but manual labeling is extremely time-consuming. Deep learning segmentation algorithms can automate this, but they traditionally require substantial manual annotation for training, and such labeled datasets are typically unavailable for new samples.

RESULTS

We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles with high performance while requiring a relatively small amount of new training data. We benchmark our method on three published VEM datasets and a new rat liver dataset we imaged over a 56×56×11 μm volume measuring 7000×7000×219 px using serial block-face scanning electron microscopy, with corresponding manually labeled mitochondria and endoplasmic reticulum structures. We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.

AVAILABILITY AND IMPLEMENTATION

Our rat liver dataset's raw image volume, manual ground truth annotation, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.
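The transfer-learning workflow described in the abstract (pretrain on abundant labeled data from source tissues, then fine-tune on a small labeled set from the target tissue) can be sketched with a toy per-voxel classifier. Everything below is an illustrative assumption, not the paper's actual pipeline: the real work uses deep segmentation networks on VEM volumes, while this sketch uses a synthetic logistic "segmenter" only to show how pretrained weights seed the fine-tuning step.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(weights, X, y, lr=0.1, epochs=200):
    # Plain logistic regression over per-voxel feature vectors,
    # starting from the given weights (zeros = from scratch).
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# "Pretraining": abundant labeled voxels from source tissues.
X_src = rng.normal(size=(1000, 8))
true_w_src = rng.normal(size=8)
y_src = (X_src @ true_w_src > 0).astype(float)
w_pre = train(np.zeros(8), X_src, y_src)

# "Fine-tuning": only a few labeled voxels from the target tissue,
# whose decision boundary is a perturbed version of the source one.
true_w_tgt = true_w_src + 0.3 * rng.normal(size=8)
X_tgt = rng.normal(size=(40, 8))
y_tgt = (X_tgt @ true_w_tgt > 0).astype(float)

w_scratch = train(np.zeros(8), X_tgt, y_tgt)  # no pretraining
w_ft = train(w_pre, X_tgt, y_tgt)             # transfer learning

# Evaluate both on held-out target-tissue voxels.
X_test = rng.normal(size=(2000, 8))
y_test = (X_test @ true_w_tgt > 0).astype(float)
acc = lambda w: float(((X_test @ w > 0) == (y_test > 0)).mean())
print(f"scratch: {acc(w_scratch):.2f}  fine-tuned: {acc(w_ft):.2f}")
```

The design point mirrored here is that the pretrained weights already encode structure shared across tissues, so the fine-tuning step needs far fewer target labels than training from scratch would.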


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/19f2/11974384/ad70a569e015/vbaf021f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/19f2/11974384/56cce3e2b8d3/vbaf021f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/19f2/11974384/3f816ca26f8f/vbaf021f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/19f2/11974384/b69d82e13c09/vbaf021f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/19f2/11974384/0a0a38a39e45/vbaf021f5.jpg

