Xie Ronald, Mulcahy Ben, Darbandi Ali, Marwah Sagar, Ali Fez, Lee Yuna, Parlakgul Gunes, Hotamisligil Gokhan S, Wang Bo, MacParland Sonya, Zhen Mei, Bader Gary D
Terrence Donnelly Centre for Cellular & Biomolecular Research, University of Toronto, Toronto, ON, M5S 3E1, Canada.
Peter Munk Cardiac Centre and Joint Department of Medical Imaging, University Health Network, Toronto, ON, M5G 2N2, Canada.
Bioinform Adv. 2025 Apr 2;5(1):vbaf021. doi: 10.1093/bioadv/vbaf021. eCollection 2025.
Volumetric electron microscopy (VEM) enables nanoscale-resolution three-dimensional imaging of biological samples. Identification and labeling of organelles, cells, and other structures in the image volume are required for image interpretation, but manual labeling is extremely time-consuming. This can be automated using deep learning segmentation algorithms, but these traditionally require substantial manual annotation for training, and such labeled datasets are typically unavailable for new samples.
We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles at high performance while requiring a relatively small amount of new training data. We benchmark our method on three published VEM datasets and on a new rat liver dataset that we imaged by serial block-face scanning electron microscopy over a 56×56×11 μm volume (7000×7000×219 px), with corresponding manually labeled mitochondria and endoplasmic reticulum structures. We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.
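Segmentation benchmarks like these are commonly scored with a voxel-overlap metric such as the Dice coefficient; the abstract does not specify the metric used here, so the following is only an illustrative sketch of how predicted and ground-truth organelle masks might be compared.

```python
def dice(pred, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|) over flattened binary voxel masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Both masks empty: define perfect agreement
    return 2 * inter / total if total else 1.0

# Toy flattened binary masks for one organelle label (e.g. mitochondria)
pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 3))  # 2*2 / (3+3) -> 0.667
```

In practice the masks would be full 3D volumes (e.g. NumPy arrays flattened with `.ravel()`), and a per-class Dice would be reported for each organelle type.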
Our rat liver dataset's raw image volume, manual ground truth annotation, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.