Sachit Butail, Erik M. Bollt, Maurizio Porfiri
Department of Mechanical and Aerospace Engineering, Polytechnic Institute of New York University, Brooklyn, NY 11201, USA.
J Theor Biol. 2013 Nov 7;336:185-99. doi: 10.1016/j.jtbi.2013.07.029. Epub 2013 Aug 9.
In this paper, we build a framework for the analysis and classification of collective behavior using methods from generative modeling and nonlinear manifold learning. We represent an animal group with a set of finite-sized particles and vary known features of the group structure and motion via a class of generative models to position each particle on a two-dimensional plane. Particle positions are then mapped onto training images that are processed to emphasize the features of interest and to match attainable far-field videos of real animal groups. The training images serve as templates of recognizable patterns of collective behavior and are compactly represented in a low-dimensional space called the embedding manifold. Two mappings from the manifold are derived: the manifold-to-image mapping serves to reconstruct new, unseen images of the group, and the manifold-to-feature mapping allows frame-by-frame classification of raw video. We validate the combined framework on datasets of increasing complexity. Specifically, we classify artificial images from the generative model, images from an interacting self-propelled particle model, and raw overhead videos of schooling fish obtained from the literature.
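The pipeline described above can be sketched end to end in a few lines. The following is a minimal toy sketch, not the authors' implementation: it substitutes a linear PCA embedding for the paper's nonlinear manifold learning, uses a single hypothetical generative feature (`polarization`) controlling particle spread, and renders crude density images in place of processed far-field video frames. The manifold-to-feature mapping is approximated by nearest-neighbor lookup in the embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_group(polarization, n=40, size=16):
    """Toy generative model: place n particles on a 2-D plane and
    rasterize them into a small 'training image'. The single feature
    (polarization, a hypothetical stand-in) controls spatial spread."""
    spread = 1.0 - 0.8 * polarization      # tighter group at high polarization
    xy = rng.normal(0.5, 0.15 * spread, size=(n, 2)).clip(0.0, 0.999)
    img = np.zeros((size, size))
    for x, y in (xy * size).astype(int):
        img[y, x] += 1.0                   # accumulate particle density
    return img.ravel()

# Training set: images labeled by the generative feature that produced them.
features = np.linspace(0.0, 1.0, 60)
X = np.stack([render_group(p) for p in features])

# Low-dimensional embedding. Here plain PCA via SVD stands in for the
# paper's nonlinear manifold learning; keep the top 2 coordinates.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
embed = (X - mean) @ Vt[:2].T              # embedding-manifold coordinates

def classify(img):
    """Manifold-to-feature mapping: project a raw image onto the
    embedding, then read off the feature of its nearest training image."""
    z = (img - mean) @ Vt[:2].T
    return features[np.argmin(np.linalg.norm(embed - z, axis=1))]

estimate = classify(render_group(0.9))     # frame-by-frame classification
```

The manifold-to-image direction would invert this projection (reconstructing an image from embedding coordinates); with PCA that is simply `mean + z @ Vt[:2]`, whereas the paper's nonlinear setting requires an explicit out-of-sample reconstruction map.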