Archit Anwai, Freckmann Luca, Nair Sushmita, Khalid Nabeel, Hilt Paul, Rajashekar Vikas, Freitag Marei, Teuber Carolin, Buckley Genevieve, von Haaren Sebastian, Gupta Sagnik, Dengel Andreas, Ahmed Sheraz, Pape Constantin
Georg-August-University Göttingen, Institute of Computer Science, Göttingen, Germany.
German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
Nat Methods. 2025 Mar;22(3):579-591. doi: 10.1038/s41592-024-02580-4. Epub 2025 Feb 12.
Accurate segmentation of objects in microscopy images remains a bottleneck for many researchers, despite the many tools developed for this purpose. Here we present Segment Anything for Microscopy (μSAM), a tool for segmentation and tracking in multidimensional microscopy data. It is based on Segment Anything, a vision foundation model for image segmentation. We extend it by fine-tuning generalist models for light and electron microscopy that clearly improve segmentation quality across a wide range of imaging conditions. We also implement interactive and automatic segmentation in a napari plugin that speeds up diverse segmentation tasks and provides a unified solution for annotation across microscopy modalities. Our work demonstrates the application of vision foundation models in microscopy, laying the groundwork for solving image analysis tasks in this domain with a small set of powerful deep learning models.