Peng Boyuan, Chen Jiaju, Githinji P Bilha, Gul Ijaz, Ye Qihui, Chen Minjiang, Qin Peiwu, Huang Xingru, Yan Chenggang, Yu Dongmei, Ji Jiansong, Chen Zhenglin
Zhejiang Key Laboratory of Imaging and Interventional Medicine, Zhejiang Engineering Research Center of Interventional Medicine Engineering and Biotechnology, The Fifth Affiliated Hospital of Wenzhou Medical University, Lishui 323000, China.
Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong, China.
Comput Struct Biotechnol J. 2024 Sep 10;26:23-39. doi: 10.1016/j.csbj.2024.09.002. eCollection 2024 Dec.
Cell segmentation is essential in biomedical research for analyzing cellular morphology and behavior. Deep learning methods, particularly convolutional neural networks (CNNs), have revolutionized cell segmentation by extracting intricate features from images. However, the robustness of these methods under microscope optical aberrations remains a critical challenge. This study evaluates cell image segmentation models under the optical aberrations encountered in fluorescence and bright-field microscopy. By simulating different types of aberrations, including astigmatism, coma, spherical aberration, trefoil, and mixed aberrations, we conduct a thorough evaluation of various cell instance segmentation models using the DynamicNuclearNet (DNN) and LIVECell datasets, representing fluorescence and bright-field microscopy cell datasets, respectively. We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads (FPN, C3) and backbones (ResNet, VGG, Swin Transformer), under aberrated conditions. Additionally, we provide usage recommendations for the Cellpose 2.0 Toolbox on images of complex cells degraded by aberrations. The results indicate that the combination of FPN and SwinS demonstrates superior robustness in handling simple cell images affected by minor aberrations, whereas Cellpose 2.0 proves more effective for complex cell images under similar conditions. Furthermore, we propose the Point Spread Function Image Label Classification Model (PLCM), which can quickly and accurately identify aberration types and amplitudes from PSF images, assisting researchers without optical training. With PLCM, researchers can better apply our proposed cell segmentation guidelines. This study aims to provide guidance for the effective use of cell segmentation models in the presence of minor optical aberrations and to pave the way for future research directions.
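The aberration types named in the abstract (astigmatism, coma, spherical aberration, trefoil) are conventionally modeled as Zernike phase terms on the microscope pupil, from which an aberrated point spread function (PSF) follows by Fourier optics. The sketch below is illustrative only, not the authors' simulation code; it assumes a unit-radius circular pupil, unnormalized named Zernike modes, and an aberration amplitude given in waves.

```python
import numpy as np

def zernike_phase(rho, theta, mode, amp):
    """Phase (radians) of one named Zernike mode with amplitude `amp` in waves.

    Unnormalized polynomial forms; `mode` names follow the abstract's
    aberration list (an illustrative convention, not the paper's code).
    """
    if mode == "astigmatism":
        z = rho**2 * np.cos(2 * theta)
    elif mode == "coma":
        z = (3 * rho**3 - 2 * rho) * np.cos(theta)
    elif mode == "spherical":
        z = 6 * rho**4 - 6 * rho**2 + 1
    elif mode == "trefoil":
        z = rho**3 * np.cos(3 * theta)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return 2 * np.pi * amp * z

def aberrated_psf(mode, amp, n=128):
    """Aberrated PSF as |FFT of the pupil function|^2, normalized to unit sum."""
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.hypot(xx, yy)
    theta = np.arctan2(yy, xx)
    pupil = (rho <= 1.0).astype(float)          # circular aperture
    field = pupil * np.exp(1j * zernike_phase(rho, theta, mode, amp))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
    return psf / psf.sum()

# A degraded cell image can then be obtained by convolving a clean image
# with the PSF (e.g. via scipy.signal.fftconvolve), mimicking the
# aberrated imaging conditions evaluated in the study.
psf = aberrated_psf("coma", amp=0.5)
```

Mixed aberrations, also evaluated in the study, amount to summing several such phase terms before exponentiating.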