Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA.
Mind/Brain Institute, Johns Hopkins University, Baltimore, USA; Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA; Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, USA.
J Neurosci Methods. 2022 Feb 15;368:109453. doi: 10.1016/j.jneumeth.2021.109453. Epub 2021 Dec 27.
Camera images encode rich visual information about an animal and its environment, enabling high-fidelity 3D reconstruction using computer vision methods. Most systems, both markerless (e.g., deep-learning-based) and marker-based, require multiple cameras to track features across multiple points of view to enable such 3D reconstruction. However, such multi-camera systems can be expensive and are challenging to set up in small-animal research apparatuses.
We present an open-source, marker-based system for tracking a rodent's head in behavioral research that requires only a single camera, potentially with a wide field of view. The system combines a lightweight visual target with computer vision algorithms to enable high-accuracy tracking of the six-degree-of-freedom position and orientation of the animal's head. Requiring only a single camera positioned above the behavioral arena, the system robustly reconstructs pose over a wide range of head angles (360° in yaw and approximately ±120° in roll and pitch).
Experiments with live animals demonstrate that the system reliably identifies rat head position and orientation. Evaluations against a commercial optical tracking device show that the system achieves accuracy rivaling that of commercial multi-camera systems.
Our solution significantly improves upon existing monocular marker-based tracking methods, both in accuracy and in allowable range of motion.
The proposed system enables the study of complex behaviors by providing robust, fine-scale measurements of rodent head motions in a wide range of orientations.
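The abstract does not detail the tracking algorithms, but monocular marker-based pose estimation generally rests on the pinhole projection model: given the marker's 3D feature points in its own frame and camera intrinsics, a 6-DOF pose (rotation and translation) maps those points to pixel coordinates, and pose estimation inverts this mapping (e.g., via perspective-n-point). The sketch below shows only the forward projection with a hypothetical square marker and assumed intrinsics; it is an illustration of the underlying geometry, not the authors' implementation.

```python
import numpy as np

def project_points(pts_3d, R, t, K):
    """Project 3D marker points (marker frame) into pixel coordinates."""
    cam = (R @ pts_3d.T).T + t          # marker frame -> camera frame
    uv = (K @ cam.T).T                  # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]       # perspective divide -> pixels

# Hypothetical 20 mm square marker, corners in the marker's own frame (mm)
marker = np.array([[-10.0, -10.0, 0.0], [10.0, -10.0, 0.0],
                   [10.0, 10.0, 0.0], [-10.0, 10.0, 0.0]])

# Assumed intrinsics: 800 px focal length, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Example pose: 30 deg yaw about the camera's optical axis,
# marker centered 500 mm below an overhead camera
yaw = np.deg2rad(30.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.0, 0.0, 500.0])

pixels = project_points(marker, R, t, K)
```

A full tracker would solve the inverse problem: detect the marker's feature points in each frame, then recover (R, t) from the 2D-3D correspondences, for which standard perspective-n-point solvers exist (e.g., `cv2.solvePnP` in OpenCV).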