Cliff D, Noble J
School of Cognitive and Computing Sciences, University of Sussex, Brighton, UK.
Philos Trans R Soc Lond B Biol Sci. 1997 Aug 29;352(1358):1165-75. doi: 10.1098/rstb.1997.0100.
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.