Sittig D F, Orr J A
Yale Center for Medical Informatics, Yale University School of Medicine, New Haven, CT 06510.
Proc Annu Symp Comput Appl Med Care. 1991:290-4.
Various methods have been proposed in an attempt to solve problems in artifact and/or alarm identification, including expert systems, statistical signal processing techniques, and artificial neural networks (ANNs). ANNs consist of a large number of simple processing units connected by weighted links. To develop truly robust ANNs, investigators must train their networks on huge training data sets, which requires enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run-time from approximately 6.4 hours to 1.5 hours. We conclude that use of the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm.