


Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images.

Affiliations

School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China.

State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China.

Publication Information

BMC Bioinformatics. 2024 Aug 20;25(1):269. doi: 10.1186/s12859-024-05894-4.

DOI: 10.1186/s12859-024-05894-4
PMID: 39164632
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11334404/
Abstract

BACKGROUND

Fluorescence microscopy (FM) is an important and widely adopted biological imaging technique. Segmentation is often the first step in quantitative analysis of FM images. Deep neural networks (DNNs) have become the state-of-the-art tools for image segmentation. However, their performance on natural images may collapse under certain image corruptions or adversarial attacks. This poses real risks to their deployment in real-world applications. Although the robustness of DNN models in segmenting natural images has been studied extensively, their robustness in segmenting FM images remains poorly understood.

RESULTS

To address this deficiency, we have developed an assay that benchmarks robustness of DNN segmentation models using datasets of realistic synthetic 2D FM images with precisely controlled corruptions or adversarial attacks. Using this assay, we have benchmarked robustness of ten representative models, including DeepLab and Vision Transformer. We find that models with good robustness on natural images may perform poorly on FM images. We also find new robustness properties of DNN models and new connections between their corruption robustness and adversarial robustness. To further assess the robustness of the selected models, we have also benchmarked them on real microscopy images of different modalities without using simulated degradation. The results are consistent with those obtained on the realistic synthetic images, confirming the fidelity and reliability of our image synthesis method as well as the effectiveness of our assay.
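The corruption side of such an assay reduces to a simple loop: apply a corruption at increasing severity levels to each image, run the segmentation model, and record how mean IoU degrades. The sketch below is a minimal, hypothetical illustration of that protocol (not the paper's actual pipeline); the Gaussian-noise corruption, the toy threshold "model", and all function names are assumptions for demonstration.

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def add_gaussian_noise(img, sigma, rng):
    """Controlled corruption: additive Gaussian noise at severity `sigma`."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def benchmark_corruption_robustness(model, images, gts, sigmas, seed=0):
    """Mean IoU of `model` over the dataset at each corruption severity."""
    rng = np.random.default_rng(seed)
    scores = []
    for sigma in sigmas:
        ious = [iou(model(add_gaussian_noise(im, sigma, rng)), gt)
                for im, gt in zip(images, gts)]
        scores.append(float(np.mean(ious)))
    return scores

# Toy "model": intensity thresholding standing in for a DNN segmenter.
model = lambda img: img > 0.5

# Toy dataset: four images whose clean intensities match the ground truth.
gts = []
for _ in range(4):
    gt = np.zeros((32, 32), dtype=bool)
    gt[8:24, 8:24] = True
    gts.append(gt)
images = [gt.astype(float) for gt in gts]

curve = benchmark_corruption_robustness(model, images, gts,
                                        sigmas=[0.0, 0.2, 0.5])
```

The resulting severity-vs-IoU curve is the robustness profile that lets models be compared under identical, precisely controlled degradations.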

CONCLUSIONS

Based on comprehensive benchmarking experiments, we have found distinct robustness properties of deep neural networks in semantic segmentation of FM images. Based on the findings, we have made specific recommendations on selection and design of robust models for FM image segmentation.
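The adversarial half of the benchmark perturbs inputs along the loss gradient rather than with random noise. As a hedged illustration only, the sketch below runs a single FGSM-style step against a per-pixel logistic segmenter, where the input gradient of the binary cross-entropy loss is analytic; the toy segmenter, its weights, and the epsilon value are all assumptions, not the models or attacks evaluated in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step on a per-pixel logistic segmenter.

    For binary cross-entropy, the input gradient is analytic:
    dL/dx = w * (sigmoid(w*x + b) - y).
    """
    grad = w * (sigmoid(w * x + b) - y)
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy scene: a bright square (intensity 0.8) on a dark background (0.2).
gt = np.zeros((32, 32), dtype=bool)
gt[8:24, 8:24] = True
x = np.where(gt, 0.8, 0.2)

w, b = 10.0, -5.0  # segmenter predicts foreground where intensity > 0.5
predict = lambda img: sigmoid(w * img + b) > 0.5

clean_iou = iou(predict(x), gt)
adv_iou = iou(predict(fgsm_attack(x, gt.astype(float), w, b, eps=0.4)), gt)
```

Comparing `clean_iou` against `adv_iou` at a fixed perturbation budget `eps` is the basic measurement behind adversarial-robustness curves, and running both this and the corruption protocol on the same models is what exposes connections between the two kinds of robustness.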


Figures 1-13 (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/a57905007389/12859_2024_5894_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/26bb980922b8/12859_2024_5894_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/79939176fb77/12859_2024_5894_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/38989cafe522/12859_2024_5894_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/85413710ed52/12859_2024_5894_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/e9ebc105863e/12859_2024_5894_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/f177ef1389d1/12859_2024_5894_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/57aca280bbd8/12859_2024_5894_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/17a4af75247b/12859_2024_5894_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/e994d3d36f36/12859_2024_5894_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/f1858a653dbe/12859_2024_5894_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/d0c174005324/12859_2024_5894_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/09e0/11334404/f93dd3ff934d/12859_2024_5894_Fig13_HTML.jpg

Similar Articles

1
Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images.
BMC Bioinformatics. 2024 Aug 20;25(1):269. doi: 10.1186/s12859-024-05894-4.
2
ROOD-MRI: Benchmarking the robustness of deep learning segmentation models to out-of-distribution and corrupted data in MRI.
Neuroimage. 2023 Sep;278:120289. doi: 10.1016/j.neuroimage.2023.120289. Epub 2023 Jul 24.
3
SUSAN: segment unannotated image structure using adversarial network.
Magn Reson Med. 2019 May;81(5):3330-3345. doi: 10.1002/mrm.27627. Epub 2018 Dec 10.
4
Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks.
Comput Methods Programs Biomed. 2023 Oct;240:107687. doi: 10.1016/j.cmpb.2023.107687. Epub 2023 Jun 24.
5
Semantic segmentation of human oocyte images using deep neural networks.
Biomed Eng Online. 2021 Apr 23;20(1):40. doi: 10.1186/s12938-021-00864-w.
6
Image generation by GAN and style transfer for agar plate image segmentation.
Comput Methods Programs Biomed. 2020 Feb;184:105268. doi: 10.1016/j.cmpb.2019.105268. Epub 2019 Dec 17.
7
Semi-Supervised Semantic Image Segmentation by Deep Diffusion Models and Generative Adversarial Networks.
Int J Neural Syst. 2024 Nov;34(11):2450057. doi: 10.1142/S0129065724500576. Epub 2024 Aug 15.
8
A deep learning segmentation strategy that minimizes the amount of manually annotated images.
F1000Res. 2021 Mar 30;10:256. doi: 10.12688/f1000research.52026.2. eCollection 2021.
9
Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
IEEE Trans Image Process. 2021;30:1291-1304. doi: 10.1109/TIP.2020.3042083. Epub 2020 Dec 23.
10
Application of convolutional neural networks towards nuclei segmentation in localization-based super-resolution fluorescence microscopy images.
BMC Bioinformatics. 2021 Jun 15;22(1):325. doi: 10.1186/s12859-021-04245-x.
