O'Connor Owen M, Dunlop Mary J
Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America.
Biological Design Center, Boston University, Boston, Massachusetts, United States of America.
PLoS Comput Biol. 2025 May 23;21(5):e1013071. doi: 10.1371/journal.pcbi.1013071. eCollection 2025 May.
Deep learning-based methods for identifying and tracking cells within microscopy images have revolutionized the speed and throughput of data analysis. These methods for analyzing biological and medical data have capitalized on advances from the broader computer vision field. However, cell tracking can present unique challenges, with frequent cell division events and the need to track many objects with similar visual appearances complicating analysis. Existing architectures developed for cell tracking based on convolutional neural networks (CNNs) have tended to fall short in managing the spatial and global contextual dependencies that are crucial for tracking cells. To overcome these limitations, we introduce Cell-TRACTR (Transformer with Attention for Cell Tracking and Recognition), a novel deep learning model that uses a transformer-based architecture. Cell-TRACTR operates in an end-to-end manner, simultaneously segmenting and tracking cells without the need for post-processing. Alongside this model, we introduce the Cell-HOTA metric, an extension of the Higher Order Tracking Accuracy (HOTA) metric that we adapted to assess cell division. Cell-HOTA differs from standard cell tracking metrics by offering a balanced and easily interpretable assessment of detection, association, and division accuracy. We test our Cell-TRACTR model on datasets of bacteria growing within a defined microfluidic geometry and mammalian cells growing freely in two dimensions. Our results demonstrate that Cell-TRACTR exhibits strong performance in tracking and division accuracy compared to state-of-the-art algorithms, while also meeting traditional benchmarks in detection accuracy. This work establishes a new framework for employing transformer-based models in cell segmentation and tracking.