Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, USA.
International Computer Science Institute (ICSI), Berkeley, CA 94704, USA.
Sensors (Basel). 2018 Sep 6;18(9):2979. doi: 10.3390/s18092979.
The working hypothesis in this project is that gaze interactions play a central role in structuring the joint control and guidance strategy of the human operator performing spatial tasks. Perceptual guidance and control is the idea that the visual and motor systems form a unified perceptuo-motor system in which the necessary information is naturally extracted by the visual system. As a consequence, the response of this system is constrained by the visual and motor mechanisms, and these effects should manifest in the behavioral data. Modeling the perceptual processes of the human operator provides the foundation necessary for a systems-based approach to the design of control and display systems used by remotely operated vehicles. This paper investigates this hypothesis using flight tasks conducted with remotely controlled miniature rotorcraft in indoor settings, which provide rich environments for investigating the key processes supporting spatial interactions. This work also applies to spatial control tasks in a range of application domains, including tele-operation, gaming, and virtual reality. The human-in-the-loop system combines the dynamics of the vehicle, environment, and human perception–action, with the response of the overall system emerging from the interplay of perception and action. The main questions to be answered in this work are as follows: (i) what is the general control and guidance strategy of the human operator, and (ii) how is information about the vehicle and environment extracted visually by the operator? The general approach uses gaze as the primary sensory mechanism, decoding the gaze patterns of the pilot to provide information for estimation, control, and guidance. This work differs from existing research by taking what have largely been conceptual ideas on action–perception and structuring them for implementation in a real-world problem.
The paper proposes a system model that captures the human pilot's perception–action loop, which delineates the main components of the pilot's perceptuo-motor system: estimation of the vehicle state and task elements based on operator gaze patterns, trajectory planning, and tracking control. The identified human visuo-motor model is then exploited to demonstrate how the perceptual and control functions can be augmented to reduce the operator workload.