Xing Cheng, Xie Ronald, Bader Gary D
Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada.
The Donnelly Centre, University of Toronto, Toronto, Ontario, Canada.
PLoS Comput Biol. 2025 May 28;21(5):e1013115. doi: 10.1371/journal.pcbi.1013115. eCollection 2025 May.
Electron microscopy (EM) has revolutionized our understanding of cellular structures at the nanoscale. Analyzing EM images requires accurate image segmentation. While manual segmentation is reliable, it is labor-intensive, motivating the development of automated segmentation methods. Although deep learning-based segmentation has demonstrated expert-level performance, its performance often does not generalize across diverse EM datasets. Current approaches typically use either convolutional or transformer-based neural networks for image feature extraction. We developed the RETINA method, which combines pre-training on the large, unlabeled CEM500K EM image dataset with a hybrid neural-network architecture that integrates both local (convolutional layer) and global (transformer layer) image processing to learn from manual image annotations. RETINA outperformed existing models on cellular structure segmentation across five public EM datasets. This improvement is a step toward automated cellular structure segmentation for the EM community.
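To make the hybrid local/global idea concrete, below is a minimal PyTorch sketch of a segmentation block that extracts local features with convolutional layers and mixes global context with a transformer encoder layer. The layer sizes, module names, and overall layout are illustrative assumptions only; they are not the published RETINA architecture or its CEM500K pre-training pipeline.

```python
# Illustrative sketch: hybrid convolution + transformer segmentation block.
# All hyperparameters and names are assumptions, not the RETINA implementation.
import torch
import torch.nn as nn


class HybridSegBlock(nn.Module):
    def __init__(self, in_channels=1, embed_dim=64, num_classes=2):
        super().__init__()
        # Local feature extraction with convolutional layers (downsamples 4x).
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Global context: self-attention over the flattened feature map.
        self.transformer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True
        )
        # Per-pixel class logits, upsampled back to the input resolution.
        self.head = nn.Sequential(
            nn.Conv2d(embed_dim, num_classes, kernel_size=1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.conv(x)                       # (B, C, H/4, W/4) local features
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W/16, C) token sequence
        tokens = self.transformer(tokens)          # global attention mixing
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(feats)                    # (B, num_classes, H, W) logits


if __name__ == "__main__":
    model = HybridSegBlock()
    logits = model(torch.randn(1, 1, 64, 64))      # one 64x64 grayscale EM tile
    print(logits.shape)                            # torch.Size([1, 2, 64, 64])
```

In practice, a model of this kind would be pre-trained on a large unlabeled EM corpus (such as CEM500K) and then fine-tuned on manually annotated segmentation masks; the sketch shows only the forward pass of one hybrid block.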