Inverse binary optimization of convolutional neural network in active learning efficiently designs nanophotonic structures.

Author Information

Park Jaehyeon, Xu Zhihao, Park Gyeong-Moon, Luo Tengfei, Lee Eungkyu

Affiliations

Department of Electronic Engineering, Kyung Hee University, Yongin-si, Gyeonggi-do, 17104, Republic of Korea.

Department of Aerospace and Mechanical Engineering, University of Notre Dame, Notre Dame, IN, 46556, USA.

Publication Information

Sci Rep. 2025 Apr 30;15(1):15187. doi: 10.1038/s41598-025-99570-z.

Abstract

Binary optimization using active learning schemes has gained attention for automating the discovery of optimal designs in nanophotonic structures and material configurations. Recently, active learning has utilized factorization machines (FM), which usually are second-order models, as surrogates to approximate the hypervolume of the design space, benefiting from rapid optimization by Ising machines such as quantum annealing (QA). However, due to their second-order nature, FM-based surrogate functions struggle to fully capture the complexity of the hypervolume. In this paper, we introduce an inverse binary optimization (IBO) scheme that optimizes a surrogate function based on a convolutional neural network (CNN) within an active learning framework. The IBO method employs backward error propagation to optimize the input binary vector, minimizing the output value while maintaining fixed parameters in the pre-trained CNN layers. We conduct a benchmarking study of the CNN-based surrogate function within the CNN-IBO framework by optimizing nanophotonic designs (e.g., planar multilayer and stratified grating structure) as a testbed. Our results demonstrate that CNN-IBO achieves optimal designs with fewer actively accumulated training data than FM-QA, indicating its potential as a powerful and efficient method for binary optimization.
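The core IBO step described above (holding the pre-trained surrogate's weights fixed and back-propagating the error to the *input* binary vector) can be illustrated with a minimal sketch. The surrogate here is a small fixed random two-layer network standing in for the pre-trained CNN, and the relaxation-then-threshold strategy is an assumption for illustration, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical IBO sketch: minimize a frozen surrogate by gradient
# descent on a relaxed (continuous) copy of the binary design vector,
# then threshold back to {0, 1}. All weights below are random
# placeholders for a pre-trained surrogate.

rng = np.random.default_rng(0)
n = 16                            # length of the binary design vector
W1 = rng.normal(size=(32, n))     # frozen hidden-layer weights
b1 = rng.normal(size=32)
w2 = rng.normal(size=32)

def surrogate(x):
    """Surrogate prediction of the figure of merit (lower is better)."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return w2 @ h

def surrogate_grad(x):
    """Analytic gradient of the surrogate w.r.t. the INPUT vector.

    The network parameters stay fixed; only the input is updated,
    mirroring IBO's backward error propagation to the design vector.
    """
    pre = W1 @ x + b1
    mask = (pre > 0).astype(float)     # ReLU derivative
    return W1.T @ (w2 * mask)

x = rng.uniform(0.0, 1.0, size=n)      # relaxed design vector in [0, 1]
x0 = x.copy()                          # keep the starting point
lr = 0.05
for _ in range(200):
    x -= lr * surrogate_grad(x)        # descend on the surrogate output
    x = np.clip(x, 0.0, 1.0)           # project back onto the [0, 1] box

design = (x > 0.5).astype(int)         # threshold to a binary design
```

In the active-learning loop, the resulting `design` would be evaluated with the true (e.g., electromagnetic) simulator, appended to the training set, and the surrogate retrained before the next IBO step.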

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d9e/12043942/bcb378a5227c/41598_2025_99570_Fig1_HTML.jpg
