Department of Electrical Engineering, Stanford University, Stanford, California 94305, United States.
Nano Lett. 2019 Aug 14;19(8):5366-5372. doi: 10.1021/acs.nanolett.9b01857. Epub 2019 Jul 15.
We present a global optimizer, based on a conditional generative neural network, which can output ensembles of highly efficient topology-optimized metasurfaces operating across a range of parameters. A key feature of the network is that it initially generates a distribution of devices that broadly samples the design space and then shifts and refines this distribution toward favorable design space regions over the course of optimization. Training is performed by running forward and adjoint electromagnetic simulations on the outputted devices and using the resulting efficiency gradients for backpropagation. With metagratings operating across a range of wavelengths and deflection angles as a model system, we show that devices produced by the trained generative network have efficiencies comparable to or better than the best devices produced by adjoint-based topology optimization, at lower computational cost. Our reframing of adjoint-based optimization as the training of a generative neural network applies generally to physical systems that can utilize gradients to improve performance.
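The training scheme described above can be made concrete with a short sketch. What follows is a minimal, illustrative PyTorch implementation of the loop: sample noise and operating conditions, generate candidate device patterns, obtain efficiencies and efficiency gradients from forward and adjoint simulations, and backpropagate those gradients into the generator. Here `ConditionalGenerator`, `simulate_forward_adjoint`, and the exponential weighting of high-efficiency devices are assumptions for illustration, not the paper's exact architecture, solver, or loss; in the real method the gradients come from a Maxwell solver, not random placeholders.

```python
# Minimal sketch of generator training with injected adjoint gradients (assumed PyTorch).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Hypothetical generator: maps a noise vector plus operating conditions
    (e.g., normalized wavelength and deflection angle) to a metagrating
    refractive-index pattern with pixel values in [-1, 1]."""
    def __init__(self, noise_dim=64, cond_dim=2, device_pixels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256),
            nn.ReLU(),
            nn.Linear(256, device_pixels),
            nn.Tanh(),  # continuous patterns, thresholded to binary devices later
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=-1))

def simulate_forward_adjoint(patterns):
    """Placeholder for the electromagnetic solver. The paper performs one
    forward and one adjoint simulation per device; this stub only returns
    fake efficiencies and efficiency gradients with the right shapes."""
    eff = torch.rand(patterns.shape[0])    # deflection efficiency per device
    grad = torch.randn_like(patterns)      # d(efficiency)/d(pattern)
    return eff, grad

gen = ConditionalGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
batch, noise_dim, sigma = 32, 64, 0.5      # sigma: assumed weighting temperature

for step in range(1000):
    z = torch.randn(batch, noise_dim)
    cond = torch.rand(batch, 2)            # sampled wavelength/angle conditions
    patterns = gen(z, cond)

    # Efficiency gradients come from the solver, outside of autograd.
    with torch.no_grad():
        eff, eff_grad = simulate_forward_adjoint(patterns)

    # Weight high-efficiency devices more heavily (an assumed choice) so the
    # generated distribution shifts toward favorable design-space regions.
    weights = torch.exp(eff / sigma).unsqueeze(-1)
    loss = -(weights * eff_grad * patterns).sum() / batch

    opt.zero_grad()
    loss.backward()   # injects the adjoint gradients into generator weights
    opt.step()
```

Because `eff_grad` is held constant with respect to autograd, the gradient of this loss with respect to each pattern is proportional to the solver-supplied efficiency gradient, so a gradient-descent step on the generator is an efficiency-ascent step on the ensemble of generated devices.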