
Towards tacit knowledge mining within context: Visual cognitive graph model and eye movement image interpretation.

Affiliations

School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an, China; Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an, China.

School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an, China.

Publication information

Comput Methods Programs Biomed. 2022 Nov;226:107107. doi: 10.1016/j.cmpb.2022.107107. Epub 2022 Sep 6.

Abstract

Visual attention is one of the most important cognitive functions of the brain: it filters the rich information of the outside world to ensure the efficient use of limited cognitive resources. The underlying knowledge, i.e., tacit knowledge, hidden in how humans allocate attention is context-dependent and hard for experts to articulate, yet it is essential for novice operator training and interaction system design. Traditional models of visual attention allocation and their corresponding analysis methods seldom incorporate task contextual information or present tacit knowledge in an explicit, quantified way, so it is difficult to pass an expert's tacit knowledge on to a novice, or to use it to construct an interaction system, with traditional methods. Therefore, this paper first proposes a new model, the visual cognitive graph model, based on graph theory, to model visual attention allocation in association with the task context. Then, based on this graph model, we apply data mining to reveal attention patterns within context and quantitatively analyze the operator's tacit knowledge during operation tasks. We introduce three quantities derived from graph theory to describe the tacit knowledge, which can be used directly for interaction system construction or operator training: for example, to discover the essential information within the task context, the relevant information affecting critical information, and the bridge information revealing the decision-making process. We tested the proposed method on a flight operation example; comparison with the traditional eye movement graph model demonstrates that the proposed visual cognitive model can incorporate the task context, and comparison with a statistical analysis method demonstrates that our tacit knowledge mining method can reveal the underlying knowledge hidden in the visual information. Finally, we give practical applications in the examples of operator training guidance and an adaptive interaction system. Our proposed method can extract more in-depth knowledge from visual information, such as the correlations among the different pieces of information obtained and the way the operator obtains information, much of which is not noticed even by the operators themselves.
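The abstract describes building a graph over gaze data and reading off graph-theoretic quantities such as "essential" and "bridge" information. The paper does not specify its formulas here, so the following is only a minimal sketch under stated assumptions: AOI names are hypothetical, weighted transition degree is used as a stand-in for essential information, and the frequency with which an AOI mediates a length-2 scan chain is used as a crude stand-in for bridge information.

```python
from collections import Counter, defaultdict

# Hypothetical fixation sequence over areas of interest (AOIs) on a
# flight deck; the AOI names are illustrative, not from the paper.
gaze_sequence = ["airspeed", "attitude", "altitude", "attitude",
                 "heading", "attitude", "airspeed", "altitude",
                 "attitude", "heading"]

# Build a weighted, directed transition graph: edge (a, b) counts how
# often a fixation on AOI `a` is immediately followed by one on `b`.
edges = Counter(zip(gaze_sequence, gaze_sequence[1:]))

# Weighted degree (in-transitions + out-transitions) as a crude proxy
# for "essential information": heavily revisited AOIs score high.
degree = defaultdict(int)
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w

# Count how often each AOI mediates a length-2 chain a -> x -> c with
# a != c: a crude proxy for "bridge information" in the scan path.
bridge = Counter(
    x for a, x, c in zip(gaze_sequence, gaze_sequence[1:], gaze_sequence[2:])
    if a != c
)

print(max(degree, key=degree.get))  # AOI with the most transitions
print(bridge.most_common(1))        # most frequent mediating AOI
```

On this toy sequence, the attitude indicator dominates both scores, which matches the intuition that pilots repeatedly return to it between other instruments; a full implementation would compute these metrics on the paper's actual graph model.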

