Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
Biomedical Optical Imaging Laboratory, Departments of Medicine and Bioengineering, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
BMC Bioinformatics. 2021 Jun 15;22(1):325. doi: 10.1186/s12859-021-04245-x.
Automated segmentation of nuclei in microscopic images has been conducted to enhance throughput in pathological diagnostics and biological research. Segmentation accuracy and speed have been significantly enhanced with the advent of convolutional neural networks. A barrier to the broad application of neural networks to nuclei segmentation is the necessity of training the network on a set of application-specific images and image labels. Previous works have attempted to create broadly trained networks for universal nuclei segmentation; however, such networks do not work on all imaging modalities, and the best results are still commonly found when the network is retrained on user-specific data. Stochastic optical reconstruction microscopy (STORM)-based super-resolution fluorescence microscopy has opened a new avenue to image nuclear architecture at nanoscale resolution. Due to the large size and discontinuous features typical of super-resolution images, automatic nuclei segmentation can be difficult. In this study, we apply commonly used networks (Mask R-CNN and UNet architectures) to the task of segmenting super-resolution images of nuclei. First, we assess whether networks broadly trained on conventional fluorescence microscopy datasets can accurately segment super-resolution images. Then, we compare the resultant segmentations with results obtained using networks trained directly on our super-resolution data. Finally, we attempt to optimize and compare segmentation accuracy using three different neural network architectures.
Results indicate that super-resolution images are not broadly compatible with neural networks trained on conventional bright-field or fluorescence microscopy images. When the networks were trained on super-resolution data, however, we attained nuclei segmentation accuracies (F1-score) in excess of 0.8, comparable to results previously reported for nuclei segmentation on conventional fluorescence microscopy images. Overall, we achieved the best results using the Mask R-CNN architecture.
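For instance segmentation of nuclei, the F1-score is typically computed by matching each predicted nucleus to a ground-truth nucleus by intersection-over-union (IoU) and counting matches above a threshold as true positives. The sketch below illustrates this standard metric; the greedy matching scheme, the 0.5 IoU threshold, and the representation of masks as sets of pixel coordinates are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of an instance-level F1-score for nuclei segmentation.
# Masks are represented as sets of (row, col) pixel coordinates; the greedy
# matching and 0.5 IoU threshold are assumptions for this example only.

def iou(mask_a, mask_b):
    """Intersection-over-union of two pixel-coordinate sets."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 0.0

def f1_score(pred_masks, true_masks, iou_threshold=0.5):
    """Greedy one-to-one matching of predicted to ground-truth nuclei."""
    unmatched = list(true_masks)
    tp = 0
    for pred in pred_masks:
        # Best remaining ground-truth match for this prediction.
        best = max(unmatched, key=lambda t: iou(pred, t), default=None)
        if best is not None and iou(pred, best) >= iou_threshold:
            tp += 1
            unmatched.remove(best)
    fp = len(pred_masks) - tp   # predictions with no accepted match
    fn = len(true_masks) - tp   # ground-truth nuclei left unmatched
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

A perfect prediction yields an F1-score of 1.0, while a prediction whose overlap with every ground-truth nucleus falls below the IoU threshold contributes only false positives and false negatives, driving the score toward 0.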
We found that convolutional neural networks are powerful tools capable of accurately and quickly segmenting localization-based super-resolution microscopy images of nuclei. While broadly trained, widely applicable segmentation algorithms are desirable for quick use with minimal input, optimal results are still obtained when the network is both trained and tested on visually similar images. We provide a set of Colab notebooks to disseminate the software to the broader scientific community ( https://github.com/YangLiuLab/Super-Resolution-Nuclei-Segmentation ).