
IoT-Enabled Few-Shot Image Generation for Power Scene Defect Detection Based on Self-Attention and Global-Local Fusion.

Authors

Chen Yi, Yan Yunfeng, Wang Xianbo, Zheng Yi

Affiliations

College of Electrical Engineering, Zhejiang University, Hangzhou 310058, China.

School of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China.

Publication

Sensors (Basel). 2023 Jul 19;23(14):6531. doi: 10.3390/s23146531.

DOI: 10.3390/s23146531
PMID: 37514825
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10383857/
Abstract

Defect detection in power scenarios is a critical task that plays a significant role in ensuring the safety, reliability, and efficiency of power systems. Existing techniques need a stronger ability to learn from large volumes of data before they can achieve ideal detection results. Power scene data involve privacy and security issues, and the number of samples is imbalanced across defect categories, both of which degrade the performance of defect detection models. With the emergence of the Internet of Things (IoT), integrating IoT with machine learning offers a new direction for defect detection in power equipment. To this end, a generative adversarial network based on multi-view fusion and self-attention, named MVSA-GAN, is proposed for few-shot image generation. IoT devices capture real-time data from the power scene, which are then used to train the MVSA-GAN model, enabling it to generate realistic and diverse defect data. The designed self-attention encoder focuses on the relevant features of different parts of the image to capture the contextual information of the input image and improve its authenticity and coherence. A multi-view feature fusion module is proposed to capture the complex structure and texture of the power scene through the selective fusion of global and local features, improving the authenticity and diversity of the generated images. Experiments show that the proposed few-shot image generation method can generate realistic and diverse defect data for power scene defects. The proposed method achieved FID and LPIPS scores of 67.87 and 0.179, surpassing SOTA methods such as FIGR and DAWSON.
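The abstract names a self-attention encoder but does not specify it. As a rough illustration only, a single-head scaled dot-product self-attention over a flattened feature map can be sketched as follows (NumPy; all names and shapes are hypothetical, not the paper's architecture):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x          : (n, d) flattened feature map (n spatial positions, d channels)
    wq, wk, wv : (d, d) query/key/value projection matrices
    Each output position is a weighted mix of all positions, which is how
    attention injects global context into local features.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])        # (n, n) pairwise affinities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over positions
    return attn @ v                                # (n, d) context-mixed features

rng = np.random.default_rng(0)
n, d = 16, 8                                       # e.g. a 4x4 feature map
x = rng.standard_normal((n, d))
w = [rng.standard_normal((d, d)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)                                   # (16, 8)
```

In a real encoder this would run per attention head on learned projections inside a convolutional backbone; the sketch only shows the core mixing step.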

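The multi-view global-local fusion module is likewise only named in the abstract. One common way to fuse a global and a local feature map selectively is a learned sigmoid gate; the sketch below assumes that gating form, which is an illustration and not the paper's exact design:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(f_global, f_local, w_gate, b_gate):
    """Selective fusion of global and local features via a sigmoid gate.

    f_global, f_local : (n, d) feature maps from the two branches
    w_gate            : (2*d, d) gate projection, b_gate : (d,) bias
    The gate g in (0, 1) decides, per position and channel, how much of
    the global view versus the local view survives in the output.
    """
    g = sigmoid(np.concatenate([f_global, f_local], axis=-1) @ w_gate + b_gate)
    return g * f_global + (1.0 - g) * f_local

rng = np.random.default_rng(1)
n, d = 16, 8
fg = rng.standard_normal((n, d))     # "global" branch features
fl = rng.standard_normal((n, d))     # "local" branch features
w = rng.standard_normal((2 * d, d)) * 0.1
b = np.zeros(d)
fused = gated_fusion(fg, fl, w, b)
print(fused.shape)                   # (16, 8)
```

Because the output is an element-wise convex combination, every fused value lies between the corresponding global and local values, which is what makes the fusion "selective" rather than additive.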

Figures (g001-g007, via PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01aa/10383857/57976b3e492a/sensors-23-06531-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01aa/10383857/605b9d0a2312/sensors-23-06531-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01aa/10383857/7d3cb2dcff3c/sensors-23-06531-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01aa/10383857/6a92b4296e7b/sensors-23-06531-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01aa/10383857/6a99d1f0653d/sensors-23-06531-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01aa/10383857/e8ed07922a72/sensors-23-06531-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01aa/10383857/cbd9d053b3cd/sensors-23-06531-g007.jpg

Similar articles

1. DG-GAN: A High Quality Defect Image Generation Method for Defect Detection. Sensors (Basel). 2023 Jun 26;23(13):5922. doi: 10.3390/s23135922.
2. A Novel Adversarial Deep Learning Method for Substation Defect Image Generation. Sensors (Basel). 2024 Jul 12;24(14):4512. doi: 10.3390/s24144512.
3. Nighttime road scene image enhancement based on cycle-consistent generative adversarial network. Sci Rep. 2024 Jun 22;14(1):14375. doi: 10.1038/s41598-024-65270-3.
4. 2D facial landmark localization method for multi-view face synthesis image using a two-pathway generative adversarial network approach. PeerJ Comput Sci. 2022 Feb 16;8:e897. doi: 10.7717/peerj-cs.897. eCollection 2022.
5. MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion. Sensors (Basel). 2023 Jul 12;23(14):6322. doi: 10.3390/s23146322.
6. MFGAN: Multimodal Fusion for Industrial Anomaly Detection Using Attention-Based Autoencoder and Generative Adversarial Network. Sensors (Basel). 2024 Jan 19;24(2):637. doi: 10.3390/s24020637.
7. Anomaly Detection for Internet of Things Time Series Data Using Generative Adversarial Networks With Attention Mechanism in Smart Agriculture. Front Plant Sci. 2022 Jun 6;13:890563. doi: 10.3389/fpls.2022.890563. eCollection 2022.
8. Few-shot learning approach with multi-scale feature fusion and attention for plant disease recognition. Front Plant Sci. 2022 Sep 16;13:907916. doi: 10.3389/fpls.2022.907916. eCollection 2022.
9. Intraclass Image Augmentation for Defect Detection Using Generative Adversarial Neural Networks. Sensors (Basel). 2023 Feb 7;23(4):1861. doi: 10.3390/s23041861.

References cited in this article

1. Artificial Neural Networks for IoT-Enabled Smart Applications: Recent Trends. Sensors (Basel). 2023 May 18;23(10):4853. doi: 10.3390/s23104853.
2. Adaptive Context Caching for IoT-Based Applications: A Reinforcement Learning Approach. Sensors (Basel). 2023 May 15;23(10):4767. doi: 10.3390/s23104767.
3. A Survey on Vision Transformer. IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):87-110. doi: 10.1109/TPAMI.2022.3152247. Epub 2022 Dec 5.
4. A Flexible Memristor Model With Electronic Resistive Switching Memory Behavior and Its Application in Spiking Neural Network. IEEE Trans Nanobioscience. 2023 Jan;22(1):52-62. doi: 10.1109/TNB.2022.3152228. Epub 2022 Dec 29.
5. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics. 2020 Jan 2;21(1):6. doi: 10.1186/s12864-019-6413-7.
6. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Med Image Anal. 2019 May;54:30-44. doi: 10.1016/j.media.2019.01.010. Epub 2019 Jan 31.
7. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans Med Imaging. 2018 Jun;37(6):1310-1321. doi: 10.1109/TMI.2017.2785879.