

Specular highlight removal by federated generative adversarial network with attention mechanism.

Authors

Zheng Yuanfeng, Gao Yanfei

Affiliation

Zhongshan Torch Polytechnic, Guangdong, China.

Publication information

Sci Rep. 2024 Oct 8;14(1):23472. doi: 10.1038/s41598-024-74229-3.

DOI: 10.1038/s41598-024-74229-3
PMID: 39379503
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11461817/
Abstract

Specular highlight removal ensures the acquisition of high-quality images, which finds important applications in stereo matching, text recognition and image segmentation. To prevent the leakage of images containing personal information, such as identification card (ID) photos, clients often train specular highlight removal models on local data only, which limits the precision and generalization of the trained models. To address this challenge, this paper introduces a new method for removing highlights from images using federated learning (FL) and an attention generative adversarial network (AttGAN). Specifically, the former builds a global model on the central server and updates it by aggregating the clients' model parameters; because this process never transmits image data, it preserves client privacy. The latter combines attention mechanisms with a generative adversarial network to improve the quality of highlight removal by focusing on key image regions, yielding more realistic and visually pleasing results. The proposed FL-AttGAN method is numerically evaluated on the SD1, SD2 and RD datasets. The results show that FL-AttGAN outperforms existing methods.
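The server-side aggregation the abstract describes — updating a global model from client parameters without ever transmitting images — can be sketched as a standard federated-averaging step. This is an illustration only, not the paper's code; weighting each client by its local dataset size follows common FedAvg practice and is an assumption here, as are the names `fedavg`, `client_params` and `client_sizes`.

```python
def fedavg(client_params, client_sizes):
    """Average per-client parameter dicts, weighted by local dataset size.

    Only model parameters are exchanged; raw training images stay on the
    clients, which is the privacy property the FL component relies on.
    """
    total = sum(client_sizes)
    keys = client_params[0].keys()
    return {
        k: sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
        for k in keys
    }

# Toy example: two clients, one scalar parameter "w" each.
global_params = fedavg(
    [{"w": 1.0}, {"w": 3.0}],
    client_sizes=[10, 30],
)
# weighted mean: 1.0 * 10/40 + 3.0 * 30/40 = 2.5
```

In practice each `client_params[i]` would be the state dict of a locally trained AttGAN, and the averaged result is broadcast back to the clients for the next round.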


Figures:
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0596/11461817/48fceee9b2b5/41598_2024_74229_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0596/11461817/fad9d1188ac1/41598_2024_74229_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0596/11461817/8868dbbe0348/41598_2024_74229_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0596/11461817/62ae11a1b3a9/41598_2024_74229_Fig4_HTML.jpg
Fig a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0596/11461817/31e9adba09a6/41598_2024_74229_Figa_HTML.jpg

Similar articles

1
Specular highlight removal by federated generative adversarial network with attention mechanism.
Sci Rep. 2024 Oct 8;14(1):23472. doi: 10.1038/s41598-024-74229-3.
2
Federated transfer learning for auxiliary classifier generative adversarial networks: framework and industrial application.
J Intell Manuf. 2023 May 5:1-16. doi: 10.1007/s10845-023-02126-z.
3
Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.
4
IFL-GAN: Improved Federated Learning Generative Adversarial Network With Maximum Mean Discrepancy Model Aggregation.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10502-10515. doi: 10.1109/TNNLS.2022.3167482. Epub 2023 Nov 30.
5
Fast myocardial perfusion SPECT denoising using an attention-guided generative adversarial network.
Front Med (Lausanne). 2023 Feb 3;10:1083413. doi: 10.3389/fmed.2023.1083413. eCollection 2023.
6
FedDPGAN: Federated Differentially Private Generative Adversarial Networks Framework for the Detection of COVID-19 Pneumonia.
Inf Syst Front. 2021;23(6):1403-1415. doi: 10.1007/s10796-021-10144-6. Epub 2021 Jun 15.
7
A Federated Adversarial Fault Diagnosis Method Driven by Fault Information Discrepancy.
Entropy (Basel). 2024 Aug 23;26(9):718. doi: 10.3390/e26090718.
8
A Generative Adversarial Network Fused with Dual-Attention Mechanism and Its Application in Multitarget Image Fine Segmentation.
Comput Intell Neurosci. 2021 Dec 18;2021:2464648. doi: 10.1155/2021/2464648. eCollection 2021.
9
A pavement crack synthesis method based on conditional generative adversarial networks.
Math Biosci Eng. 2024 Jan;21(1):903-923. doi: 10.3934/mbe.2024038. Epub 2022 Dec 21.
10
Federated 3D multi-organ segmentation with partially labeled and unlabeled data.
Int J Comput Assist Radiol Surg. 2024 May 8. doi: 10.1007/s11548-024-03139-6.

References cited in this article

1
Highlight Removal of Multi-View Facial Images.
Sensors (Basel). 2022 Sep 2;22(17):6656. doi: 10.3390/s22176656.
2
Federated learning and differential privacy for medical image analysis.
Sci Rep. 2022 Feb 4;12(1):1953. doi: 10.1038/s41598-022-05539-7.
3
Preparing Medical Imaging Data for Machine Learning.
Radiology. 2020 Apr;295(1):4-15. doi: 10.1148/radiol.2020192224. Epub 2020 Feb 18.