

DQN based single-pixel imaging.

Authors

Wang Zhirun, Zhao Wenjing, Zhai Aiping, He Peng, Wang Dong

Publication

Opt Express. 2021 May 10;29(10):15463-15477. doi: 10.1364/OE.422636.

DOI: 10.1364/OE.422636
PMID: 33985246
Abstract

For orthogonal-transform-based single-pixel imaging (OT-SPI), the usual way to accelerate acquisition while sacrificing as little imaging quality as possible is to plan the sampling path by hand, optimizing the sampling strategy according to the characteristics of the orthogonal transform. Here, we propose an optimized sampling method using a Deep Q-learning Network (DQN), which treats the sampling process as decision-making and the improvement of the reconstructed image as feedback, to obtain a relatively optimal sampling strategy for an OT-SPI. We verify the effectiveness of the method through simulations and experiments. Thanks to the DQN, the proposed single-pixel imaging technique obtains an optimal sampling strategy directly, and therefore requires no manual planning of the sampling path, which eliminates the influence of imperfect sampling-path planning on imaging performance.
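The core idea — treating coefficient selection as sequential decision-making, with the improvement of the reconstruction as the reward signal — can be illustrated with a toy sketch. Note the assumptions: a tabular Q-learner stands in for the paper's deep Q-network, the full DCT spectrum serves as a reward oracle, and anti-diagonal frequency bands form the action set; none of these details come from the paper itself.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal 1-D DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

def reconstruct(coeffs, mask, C):
    """Inverse 2-D DCT using only the sampled (mask == 1) coefficients."""
    return C.T @ (coeffs * mask) @ C

N = 8                                    # toy scene size
C = dct_matrix(N)
img = np.outer(np.linspace(0, 1, N), np.linspace(0, 1, N))  # smooth stand-in scene
coeffs = C @ img @ C.T                   # full spectrum, used as a reward oracle

# Actions: sample one of the N anti-diagonal frequency bands at a time.
bands = [[(i, j) for i in range(N) for j in range(N) if i + j == d]
         for d in range(N)]

Q = np.zeros((N + 1, N))                 # state = how many bands sampled so far
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(200):
    mask = np.zeros((N, N))
    taken, state = set(), 0
    err = np.mean((img - reconstruct(coeffs, mask, C)) ** 2)
    while state < N:
        avail = [b for b in range(N) if b not in taken]
        # epsilon-greedy choice of the next frequency band to sample
        a = rng.choice(avail) if rng.random() < eps else max(avail, key=lambda b: Q[state, b])
        for (i, j) in bands[a]:
            mask[i, j] = 1.0
        new_err = np.mean((img - reconstruct(coeffs, mask, C)) ** 2)
        reward = err - new_err           # reconstruction improvement as feedback
        remaining = [b for b in avail if b != a]
        best_next = max(Q[state + 1, b] for b in remaining) if remaining else 0.0
        Q[state, a] += alpha * (reward + gamma * best_next - Q[state, a])
        taken.add(int(a)); err = new_err; state += 1

# Greedy policy after training: for a smooth scene, low-frequency bands carry
# most of the energy and tend to be sampled first.
greedy_first = int(np.argmax(Q[0]))
print("first band chosen by the learned policy:", greedy_first)
```

In the paper, a deep network generalizes the Q-function across states instead of a table, which is what makes the approach practical at real image resolutions, where the state space is far too large to enumerate.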


Similar Articles

1. DQN based single-pixel imaging.
   Opt Express. 2021 May 10;29(10):15463-15477. doi: 10.1364/OE.422636.
2. Constrained Deep Q-Learning Gradually Approaching Ordinary Q-Learning.
   Front Neurorobot. 2019 Dec 10;13:103. doi: 10.3389/fnbot.2019.00103. eCollection 2019.
3. Qualitative Measurements of Policy Discrepancy for Return-Based Deep Q-Network.
   IEEE Trans Neural Netw Learn Syst. 2020 Oct;31(10):4374-4380. doi: 10.1109/TNNLS.2019.2948892. Epub 2019 Nov 22.
4. Deep reinforcement learning for automated radiation adaptation in lung cancer.
   Med Phys. 2017 Dec;44(12):6690-6705. doi: 10.1002/mp.12625. Epub 2017 Nov 14.
5. Improving single pixel imaging performance in high noise condition by under-sampling.
   Sci Rep. 2020 Nov 10;10(1):19451. doi: 10.1038/s41598-020-76487-3.
6. A Heuristically Accelerated Reinforcement Learning-Based Neurosurgical Path Planner.
   Cyborg Bionic Syst. 2023 May 11;4:0026. doi: 10.34133/cbsystems.0026. eCollection 2023.
7. Slicing Resource Allocation Based on Dueling DQN for eMBB and URLLC Hybrid Services in Heterogeneous Integrated Networks.
   Sensors (Basel). 2023 Feb 24;23(5):2518. doi: 10.3390/s23052518.
8. Sampling Efficient Deep Reinforcement Learning Through Preference-Guided Stochastic Exploration.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):18553-18564. doi: 10.1109/TNNLS.2023.3317628. Epub 2024 Dec 2.
9. Fast and high-quality single-pixel imaging.
   Opt Lett. 2022 Mar 1;47(5):1218-1221. doi: 10.1364/OL.448658.
10. E-DQN-Based Path Planning Method for Drones in Airsim Simulator under Unknown Environment.
    Biomimetics (Basel). 2024 Apr 16;9(4):238. doi: 10.3390/biomimetics9040238.

Cited By

1. Fast autofocusing based on single-pixel moment detection.
   Commun Eng. 2024 Oct 9;3(1):140. doi: 10.1038/s44172-024-00288-z.
2. Comparison of Common Algorithms for Single-Pixel Imaging via Compressed Sensing.
   Sensors (Basel). 2023 May 11;23(10):4678. doi: 10.3390/s23104678.