Emmerson M D, Damper R I
Dept. of Electron. and Comput. Sci., Southampton Univ.
IEEE Trans Neural Netw. 1993;4(5):788-93. doi: 10.1109/72.248456.
We investigate empirically the performance under damage conditions of single- and multilayer perceptrons (MLPs), with various numbers of hidden units, in a representative pattern-recognition task. While some degree of graceful degradation was observed, the single-layer perceptron was considerably less fault tolerant than any of the multilayer perceptrons, including one with fewer adjustable weights. Our initial hypothesis that fault tolerance would be significantly improved for multilayer nets with larger numbers of hidden units proved incorrect; indeed, having excess hidden units appeared to be a liability. A simple technique, called augmentation, is described which successfully translated excess hidden units into improved fault tolerance. Finally, our results were supported by applying singular value decomposition (SVD) analysis to the MLPs' internal representations.
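To make the kind of experiment the abstract describes concrete, below is a minimal Python sketch of a damage study plus an SVD inspection of hidden representations. The toy task, network sizes, training procedure, and the choice to model damage as zeroing randomly selected first-layer weights are all illustrative assumptions and not details taken from the paper.

```python
# Minimal sketch (not the authors' code): train a small one-hidden-layer MLP,
# simulate damage by zeroing random weights, and apply SVD to the hidden
# activations. All sizes and the damage model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class task: 16-dimensional binary patterns labelled by a threshold.
X = rng.integers(0, 2, size=(200, 16)).astype(float)
y = (X.sum(axis=1) > 8).astype(float)

def forward(X, W1, W2):
    """One hidden layer of sigmoid units; returns hidden and output activations."""
    H = 1.0 / (1.0 + np.exp(-X @ W1))
    O = 1.0 / (1.0 + np.exp(-H @ W2))
    return H, O

def accuracy(O, y):
    return float(np.mean((O.ravel() > 0.5) == (y > 0.5)))

def train(X, y, n_hidden, epochs=2000, lr=0.5):
    """Plain full-batch backpropagation with a sigmoid/MSE objective."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        H, O = forward(X, W1, W2)
        dO = (O - t) * O * (1 - O)          # output-layer delta
        dH = (dO @ W2.T) * H * (1 - H)      # hidden-layer delta
        W2 -= lr * H.T @ dO / len(X)
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2

n_hidden = 12                               # assumed hidden-layer size
W1, W2 = train(X, y, n_hidden)

# Damage condition: zero a growing fraction of first-layer weights at random
# and measure the resulting drop in classification accuracy.
for frac in (0.0, 0.1, 0.3, 0.5):
    W1_damaged = W1.copy()
    W1_damaged[rng.random(W1.shape) < frac] = 0.0
    _, O = forward(X, W1_damaged, W2)
    print(f"damage {frac:.0%}: accuracy {accuracy(O, y):.2f}")

# SVD of the undamaged hidden representations: the singular-value spectrum
# gives a rough count of the effective dimensions the hidden layer uses.
H, _ = forward(X, W1, W2)
print("hidden-layer singular values:", np.round(np.linalg.svd(H, compute_uv=False), 2))
```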