Chen Xinyu, Li Renjie, Yu Yueyao, Shen Yuanwen, Li Wenye, Zhang Yin, Zhang Zhaoyu
Shenzhen Key Laboratory of Semiconductor Lasers, School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 2001 Longxiang Ave, Shenzhen 518172, China.
Shenzhen Research Institute of Big Data (SRIBD), 2001 Longxiang Ave, Shenzhen 518172, China.
Nanomaterials (Basel). 2022 Dec 9;12(24):4401. doi: 10.3390/nano12244401.
We study a new technique for solving the fundamental challenge in nanophotonic design: fast and accurate characterization of nanoscale photonic devices with minimal human intervention. Much like the fusion of Artificial Intelligence and Electronic Design Automation (EDA), many efforts have been made to apply deep neural networks (DNNs), such as convolutional neural networks, to prototype and characterize next-generation optoelectronic devices commonly found in Photonic Integrated Circuits. However, state-of-the-art DNN models are still far from being directly applicable in the real world: e.g., DNN-produced correlation coefficients between target and predicted physical quantities are about 80%, far below what is required to generate reliable and reproducible nanophotonic designs. Recently, attention-based Transformer models have attracted extensive interest and been widely used in Computer Vision and Natural Language Processing. In this work, we propose, for the first time, a Transformer model (POViT) to efficiently design and simulate photonic crystal nanocavities with multiple objectives under consideration. Unlike the standard Vision Transformer, our model takes photonic crystals as input data and changes the activation layer from GELU to an absolute-value function. Extensive experiments show that POViT significantly improves on results reported by previous models: correlation coefficients are increased by over 12% (i.e., to 92.0%) and prediction errors are reduced by an order of magnitude, among several other key metric improvements. Our work has the potential to drive the expansion of EDA to fully automated Photonic Design Automation (PDA). The complete dataset and code will be released to promote research in the interdisciplinary field of materials science/physics and computer science.
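To make the described architectural change concrete, below is a minimal PyTorch sketch of a ViT-style feed-forward (MLP) block in which the usual GELU activation is swapped for an absolute-value function, as the abstract states. This is not the authors' released implementation; the module and parameter names (AbsActivation, MLPBlock, dim, hidden_dim) are hypothetical placeholders.

```python
# A minimal sketch (not the POViT release) of the activation swap the abstract
# describes: a standard Vision Transformer feed-forward block with the GELU
# nonlinearity replaced by an element-wise absolute value.
import torch
import torch.nn as nn


class AbsActivation(nn.Module):
    """Element-wise absolute value, used here in place of nn.GELU."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.abs(x)


class MLPBlock(nn.Module):
    """ViT-style feed-forward block: Linear -> activation -> Linear."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.act = AbsActivation()  # a standard ViT would use nn.GELU() here
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.act(self.fc1(x)))


# Usage: tokens of an embedded photonic-crystal layout, shape (batch, tokens, dim).
tokens = torch.randn(2, 64, 128)
out = MLPBlock(dim=128, hidden_dim=512)(tokens)
assert out.shape == tokens.shape
```

The rest of the block (token embedding of the photonic-crystal layout, multi-head self-attention, layer normalization) is assumed to follow the standard ViT design, per the abstract's statement that only the input data and the activation layer differ.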