Chen Samuel Yen-Chi, Yoo Shinjae
Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973, USA.
Entropy (Basel). 2021 Apr 13;23(4):460. doi: 10.3390/e23040460.
Distributed training across several quantum computers could significantly improve the training time and, if the learned model rather than the data is shared, could potentially improve data privacy, since training would happen where the data are located. One potential scheme to achieve this property is federated learning (FL), in which several clients or local nodes learn on their own data and a central node aggregates the models collected from those local nodes. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present the federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to pure quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme achieves almost the same trained-model accuracy while making distributed training significantly faster, demonstrating a promising future research direction for both scaling and privacy.
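For illustration, a minimal sketch of the central node's aggregation step (federated averaging, FedAvg) is given below, assuming that only the trainable QNN parameters are communicated while the pre-trained convolutional feature extractor stays frozen on every client. The function name, the sample-size weighting, and the NumPy-based implementation are our own illustrative assumptions, not the authors' code.

```python
# Minimal FedAvg sketch (illustrative assumption, not the authors' code).
# Each client trains locally and uploads only its trainable parameters
# (e.g., the QNN rotation angles); the raw data never leaves the client.
import numpy as np

def federated_average(client_params, client_sizes):
    """Return the sample-size-weighted average of per-client parameters.

    client_params : list of 1-D np.ndarray, one per client, same shape.
    client_sizes  : list of local training-set sizes, one per client.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                       # normalize to sum to 1
    stacked = np.stack(client_params)              # (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy usage: three clients, each holding four QNN rotation angles.
local = [np.random.randn(4) for _ in range(3)]
sizes = [100, 50, 150]
global_params = federated_average(local, sizes)    # broadcast back to clients
print(global_params)
```

In each communication round the central node would broadcast the averaged parameters back to the clients, which resume local training from them; this is the standard FedAvg loop and corresponds to the client/central-node roles described in the abstract.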