Spatial Intelligence Lab, Institute for Geoinformatics, University of Münster, 48149 Münster, Germany.
Sensors (Basel). 2022 May 17;22(10):3798. doi: 10.3390/s22103798.
Analysing the dynamics of social interactions in indoor spaces entails evaluating spatio-temporal variables of the event, such as location and time. Social interactions also involve invisible spaces that we unconsciously acknowledge due to social constraints, e.g., the space between people holding a conversation. Current sensor arrays, however, focus on detecting the physically occupied spaces of social interactions, i.e., areas inhabited by physically measurable objects. Our goal is to detect the socially occupied spaces: spaces not physically occupied by subjects and objects but inhabited by the interaction they sustain. We evaluate the social representation of the spatial structure between two or more active participants, the so-called F-formation for small gatherings. We propose deriving body orientation and location from skeleton joint data sets captured by depth cameras. Body orientation is computed by combining shoulder and spine joint data with head/face rotation data and spatio-temporal information from trajectories. From these physically occupied measurements, we detect the socially occupied spaces. In a user study implementing the system, we compared the capabilities and skeleton-tracking data sets of three depth camera sensors: the Kinect v2, the Azure Kinect, and the Zed 2i. We collected 32 walking patterns for individual and dyad configurations and evaluated the system's accuracy with respect to the intended and socially accepted orientations. Experimental results show accuracies of above 90% for the Kinect v2, 96% for the Azure Kinect, and 89% for the Zed 2i in assessing socially relevant body orientation. Our algorithm contributes to the anonymous and automated assessment of socially occupied spaces. The depth sensor system shows promise for detecting more complex social structures, and these findings benefit research on group interactions within complex indoor settings.
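To make the orientation step concrete, the following is a minimal sketch, not the paper's implementation, of how a socially relevant body orientation and a dyad's F-formation o-space centre could be derived from skeleton joints. It assumes joints arrive as 3-D points in metres in a camera frame (x right, y up, z depth); the function names and the 1.2 m transactional-segment reach are illustrative assumptions.

```python
import numpy as np

def body_orientation(left_shoulder, right_shoulder, face_dir=None):
    """Estimate torso facing direction on the ground plane from shoulder joints.

    The facing direction is taken as the horizontal normal of the shoulder
    line; an optional head/face direction vector resolves the front/back
    ambiguity, mirroring the paper's fusion of shoulder and face data.
    """
    shoulder_vec = np.asarray(right_shoulder, float) - np.asarray(left_shoulder, float)
    # Rotate the shoulder line by 90 degrees in the ground (x-z) plane.
    normal = np.array([-shoulder_vec[2], 0.0, shoulder_vec[0]])
    n = np.linalg.norm(normal)
    if n == 0.0:
        raise ValueError("shoulder joints coincide in the ground plane")
    normal /= n
    # Flip so the torso normal agrees with where the face points.
    if face_dir is not None and np.dot(normal, np.asarray(face_dir, float)) < 0:
        normal = -normal
    return normal

def o_space_center(positions, orientations, reach=1.2):
    """Approximate an F-formation o-space centre for a small group as the
    mean of the points each participant's transactional segment targets,
    `reach` metres (assumed value) in front of each body."""
    pts = [np.asarray(p, float) + reach * np.asarray(o, float)
           for p, o in zip(positions, orientations)]
    return np.mean(pts, axis=0)

# Example: dyad 2 m apart, facing each other across the x-axis.
A = body_orientation([0.0, 1.4, -0.2], [0.0, 1.4, 0.2], face_dir=[1.0, 0.0, 0.0])
B = body_orientation([2.0, 1.4, 0.2], [2.0, 1.4, -0.2], face_dir=[-1.0, 0.0, 0.0])
print(o_space_center([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]], [A, B]))  # ~[1.0, 0.0, 0.0]
```

In this sketch, the o-space centre lands at the dyad's midpoint, which is the expected result for a vis-à-vis F-formation; the published method additionally incorporates spine joints and trajectory information, which are omitted here.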