

Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images.

Publication

IEEE Trans Med Imaging. 2024 Apr;43(4):1388-1399. doi: 10.1109/TMI.2023.3337253. Epub 2024 Apr 3.

DOI: 10.1109/TMI.2023.3337253
PMID: 38010933
Abstract

Fluorescence staining is an important technique in life science for labeling cellular constituents. However, it also suffers from being time-consuming, having difficulty in simultaneous labeling, etc. Thus, virtual staining, which does not rely on chemical labeling, has been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks. However, their performance relies on large-scale pretraining, hindering their development in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens. The pretraining time of our method is only 1/16 compared with the original MAE. We also design a supervised proxy task to predict stained images with multiple styles instead of masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated by different metrics, making a fair comparison difficult. Therefore, we develop a standard benchmark based on three public datasets and build a baseline for the convenience of future researchers. We conduct extensive experiments on three benchmark datasets, and the experimental results show the proposed method achieves the best performance both quantitatively and qualitatively. In addition, ablation studies are conducted, and experimental results illustrate the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
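The abstract's key efficiency trick is masking 75% of pixels via downsampling and grid sampling so the encoder sees far fewer tokens. The paper's actual implementation is in the linked repository; the snippet below is only a minimal sketch of the grid-sampling idea, assuming a stride-2 regular grid (which yields exactly the 75% masking ratio stated above). The function name `grid_sample` and the toy image are illustrative, not taken from the authors' code.

```python
import numpy as np

def grid_sample(img: np.ndarray, stride: int = 2) -> np.ndarray:
    """Keep one pixel from each stride x stride block.

    With stride=2 this retains 1/4 of the pixels, i.e. the 75%
    masking ratio described in the abstract, and shrinks the number
    of tokens the encoder must process by the same factor.
    """
    return img[::stride, ::stride]

# A toy 4x4 "image": only 4 of its 16 pixels survive grid sampling.
img = np.arange(16, dtype=np.float32).reshape(4, 4)
visible = grid_sample(img)
masked_fraction = 1.0 - visible.size / img.size  # 0.75
```

Because the kept pixels lie on a regular grid rather than at random positions (as in the original MAE), the visible subset is itself a coherent low-resolution image, which is what lets the supervised proxy task predict full stained images instead of only masked pixels.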


Similar Articles

1
Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images.
IEEE Trans Med Imaging. 2024 Apr;43(4):1388-1399. doi: 10.1109/TMI.2023.3337253. Epub 2024 Apr 3.
2
Swin MAE: Masked autoencoders for small datasets.
Comput Biol Med. 2023 Jul;161:107037. doi: 10.1016/j.compbiomed.2023.107037. Epub 2023 May 23.
3
Self-supervised learning improves robustness of deep learning lung tumor segmentation models to CT imaging differences.
Med Phys. 2025 Mar;52(3):1573-1588. doi: 10.1002/mp.17541. Epub 2024 Dec 5.
4
Transformer-based unsupervised contrastive learning for histopathological image classification.
Med Image Anal. 2022 Oct;81:102559. doi: 10.1016/j.media.2022.102559. Epub 2022 Jul 30.
5
MAE-TransRNet: An improved transformer-ConvNet architecture with masked autoencoder for cardiac MRI registration.
Front Med (Lausanne). 2023 Mar 9;10:1114571. doi: 10.3389/fmed.2023.1114571. eCollection 2023.
6
Global Pixel Transformers for Virtual Staining of Microscopy Images.
IEEE Trans Med Imaging. 2020 Jun;39(6):2256-2266. doi: 10.1109/TMI.2020.2968504. Epub 2020 Jan 21.
7
EndoViT: pretraining vision transformers on a large collection of endoscopic images.
Int J Comput Assist Radiol Surg. 2024 Jun;19(6):1085-1091. doi: 10.1007/s11548-024-03091-5. Epub 2024 Apr 3.
8
SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
9
GO-MAE: Self-supervised pre-training via masked autoencoder for OCT image classification of gynecology.
Neural Netw. 2025 Jan;181:106817. doi: 10.1016/j.neunet.2024.106817. Epub 2024 Oct 18.
10
MiM: Mask in Mask Self-Supervised Pre-Training for 3D Medical Image Analysis.
IEEE Trans Med Imaging. 2025 Apr 25;PP. doi: 10.1109/TMI.2025.3564382.

Cited By

1
H&E to IHC virtual staining methods in breast cancer: an overview and benchmarking.
NPJ Digit Med. 2025 Jul 2;8(1):384. doi: 10.1038/s41746-025-01741-9.
2
Extensible Immunofluorescence (ExIF) accessibly generates high-plexity datasets by integrating standard 4-plex imaging data.
Nat Commun. 2025 May 17;16(1):4606. doi: 10.1038/s41467-025-59592-7.