Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD, 20742, United States of America.
National Institute of Standards and Technology, Gaithersburg, MD, 20899, United States of America.
PLoS One. 2018 Oct 17;13(10):e0205844. doi: 10.1371/journal.pone.0205844. eCollection 2018.
Over the past decade, machine learning techniques have revolutionized how research and science are done, from designing new materials and predicting their properties, to data mining and analysis, to drug discovery and cybersecurity. Recently, we added to this list by showing how a machine learning algorithm (a so-called learner) combined with an optimization routine can assist experimental efforts in tuning semiconductor quantum dot (QD) devices. Among other applications, semiconductor quantum dots are a candidate system for building quantum computers. To employ QDs, one needs to tune the devices into a configuration suitable for quantum computing. While current experiments adjust the control parameters heuristically, such an approach does not scale with the increasing size of the quantum dot arrays required for even near-term quantum computing demonstrations. Establishing a reliable protocol for tuning QD devices that does not rely on the gross-scale heuristics developed by experimentalists is thus of great importance.
To implement the machine learning-based approach, we constructed a dataset of simulated QD device characteristics, such as the conductance and the charge sensor response as functions of the applied electrostatic gate voltages. The gate voltages are the experimental 'knobs' for tuning the device into useful regimes. Here, we describe the methodology for generating the dataset, as well as its validation by training convolutional neural networks.
From 200 training sets sampled randomly from the full dataset, we show that the learner's accuracy in recognizing the state of a device is ≈ 96.5% when using either current-based or charge-sensor-based training. The spread in accuracy over our 200 training sets is 0.5% and 1.8% for current- and charge-sensor-based data, respectively. In addition, we introduce a tool that enables other researchers to use this approach for further research: QFlow lite, a Python-based mini software suite that uses the dataset to train neural networks to recognize the state of a device and differentiate between states in experimental data. This work serves as the definitive reference for the new dataset, which will help enable researchers to use it in their experiments or to develop new machine learning approaches and concepts.
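To make the classification task concrete, the sketch below trains a toy classifier to distinguish two device-state classes from simulated 2D maps over two gate voltages. This is not the QFlow dataset, the authors' simulator, or their convolutional network: the map generator, the two stripe patterns standing in for single- and double-dot regimes, and the simple logistic classifier are all illustrative assumptions, chosen only to show the "labelled 2D response map → state label" pipeline the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_map(state, size=16):
    """Toy stand-in for a simulated sensor response over two gate voltages.
    'single' -> stripes along one diagonal; 'double' -> crossed stripes.
    (Illustrative patterns only, not the physics of a real QD device.)"""
    v = np.linspace(0.0, 1.0, size)
    V1, V2 = np.meshgrid(v, v)
    if state == "single":
        m = np.sin(8 * np.pi * (V1 + V2))
    else:
        m = np.sin(8 * np.pi * V1) + np.sin(8 * np.pi * V2)
    return m + 0.3 * rng.standard_normal((size, size))  # add measurement noise

# Build a small labelled dataset: label 0 = 'single', label 1 = 'double'.
X = np.stack([toy_map(s) for s in ["single"] * 100 + ["double"] * 100])
y = np.array([0] * 100 + [1] * 100)
X = X.reshape(len(X), -1)  # flatten each 2D map into a feature vector

# Train a logistic-regression classifier by plain gradient descent
# (a stand-in for the convolutional networks used in the paper).
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of class 1
    g = p - y                               # gradient of the cross-entropy loss
    w -= 0.01 * (X.T @ g) / len(y)
    b -= 0.01 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Even this linear model separates the two toy patterns well; the paper's point is that with realistic simulated maps, a convolutional network trained the same way reaches ≈ 96.5% accuracy on device-state recognition.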