Cleeremans Axel, Timmermans Bert, Pasquali Antoine
Cognitive Science Research Unit, Université Libre de Bruxelles CP 191, 50 ave. F.-D. Roosevelt, B1050 Bruxelles, Belgium.
Neural Netw. 2007 Nov;20(9):1032-9. doi: 10.1016/j.neunet.2007.09.011. Epub 2007 Sep 12.
When one is conscious of something, one is also conscious that one is conscious. Higher-Order Thought Theory [Rosenthal, D. (1997). A theory of consciousness. In N. Block, O. Flanagan, & G. Güzeldere (Eds.), The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press] takes it that it is in virtue of the fact that one is conscious of being conscious, that one is conscious. Here, we ask what the computational mechanisms may be that implement this intuition. Our starting point is Clark and Karmiloff-Smith's [Clark, A., & Karmiloff-Smith, A. (1993). The cognizer's innards: A psychological and philosophical perspective on the development of thought. Mind and Language, 8, 487-519] point that knowledge acquired by a connectionist network always remains "knowledge in the network rather than knowledge for the network". That is, while connectionist networks may become exquisitely sensitive to regularities contained in their input-output environment, they never exhibit the ability to access and manipulate this knowledge as knowledge: The knowledge can only be expressed through performing the task upon which the network was trained; it remains forever embedded in the causal pathways that developed as a result of training. To address this issue, we present simulations in which two networks interact. The states of a first-order network trained to perform a simple categorization task become input to a second-order network trained either as an encoder or on another categorization task. Thus, the second-order network "observes" the states of the first-order network and has, in the first case, to reproduce these states on its output units, and in the second case, to use the states as cues in order to solve the secondary task. This implements a limited form of metarepresentation, to the extent that the second-order network's internal representations become re-representations of the first-order network's internal states. We conclude that this mechanism provides the beginnings of a computational account of mental attitudes, that is, of an understanding by a cognitive system of the manner in which its first-order knowledge is held (belief, hope, fear, etc.). Consciousness, in this light, involves knowledge of the geography of one's own internal representations - a geography that is itself learned over time as a result of an agent's attributing value to the various experiences it enjoys through interaction with itself, the world, and others.
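A minimal sketch of the two-network arrangement described in the abstract, not the authors' implementation: it uses PyTorch, an invented toy categorization task, and arbitrary layer sizes. A first-order network learns a simple input-to-category mapping; a second-order network then "observes" the first-order hidden states and is trained as an encoder to reproduce them on its output units.

```python
# Hypothetical sketch of the first-order / second-order network pairing.
# Task, dimensions, and training settings are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy first-order task: classify random 8-dimensional patterns into 4 categories
# (a stand-in for the paper's "simple categorization task").
X = torch.randn(256, 8)
y = (X[:, :4].sum(dim=1) > 0).long() + 2 * (X[:, 4:].sum(dim=1) > 0).long()

class FirstOrderNet(nn.Module):
    """Feedforward categorizer whose hidden states will be 'observed'."""
    def __init__(self, n_in=8, n_hidden=16, n_out=4):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))   # internal representation
        return self.out(h), h

first = FirstOrderNet()
opt1 = torch.optim.Adam(first.parameters(), lr=1e-2)
for _ in range(300):                         # train the first-order network
    logits, _ = first(X)
    loss = nn.functional.cross_entropy(logits, y)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Second-order network: an encoder over the first-order hidden states.
# Its internal representations become re-representations of those states.
with torch.no_grad():
    _, H = first(X)                          # first-order states become input

second = nn.Sequential(nn.Linear(16, 8), nn.Sigmoid(), nn.Linear(8, 16))
opt2 = torch.optim.Adam(second.parameters(), lr=1e-2)
for _ in range(300):                         # train the second-order "observer"
    recon = second(H)
    loss = nn.functional.mse_loss(recon, H)  # reproduce the observed states
    opt2.zero_grad(); loss.backward(); opt2.step()

print("reconstruction error:", nn.functional.mse_loss(second(H), H).item())
```

The second variant mentioned in the abstract, in which the second-order network uses the observed states as cues for another categorization task, would simply swap the reconstruction target for a second set of category labels.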