IEEE Trans Vis Comput Graph. 2022 Nov;28(11):3759-3766. doi: 10.1109/TVCG.2022.3203098. Epub 2022 Oct 21.
Stereoscopic AR and VR headsets have displays and lenses that are either fixed or adjustable to match a limited range of user inter-pupillary distances (IPDs). Projective geometry predicts a misperception of depth when either the displays or the virtual cameras used to render images are misaligned with the eyes. However, misalignment between the eyes and lenses might also affect binocular convergence, which could further distort perceived depth. This possibility has been largely ignored in previous studies. Here, we evaluated this phenomenon in a VR headset in which the inter-lens and inter-axial camera separations are coupled and adjustable. In a baseline condition, both were matched to observers' IPDs. In two other conditions, the inter-lens and inter-axial camera separations were set to the maximum and minimum allowed by the headset. In each condition, observers were instructed to adjust a fold created by two intersecting, textured surfaces until it appeared to have an angle of 90°. The task was performed at three randomly interleaved viewing distances, both monocularly and binocularly. In the monocular condition, observers underestimated the fold angle, and viewing distance had no effect on their settings. In the binocular conditions, when the lens and camera separations were less than the viewer's IPD, observers exhibited compression of perceived slant relative to baseline; the reverse pattern was seen when the lens and camera separations were larger than the viewer's IPD. These results were well explained by a geometric model that accounts for shifts in convergence due to lens and display misalignment with the eyes, as well as the relative contribution of monocular cues.
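The direction of the reported slant distortions follows from first-order disparity geometry. A minimal sketch, assuming only that relative disparities scale with the inter-axial camera separation and that perceived viewing distance is anchored by monocular cues (the paper's full model additionally includes convergence shifts from lens misalignment, which this sketch omits):

```python
def perceived_relief(true_relief_m: float, camera_sep_m: float, viewer_ipd_m: float) -> float:
    """Hypothetical illustration, not the authors' model.

    A depth relief rendered with inter-axial camera separation `camera_sep_m`
    produces relative disparities proportional to that separation. If the
    viewer (eye separation `viewer_ipd_m`) interprets those disparities at a
    perceived distance anchored by monocular cues, the recovered relief
    scales by camera_sep_m / viewer_ipd_m.
    """
    return true_relief_m * camera_sep_m / viewer_ipd_m


if __name__ == "__main__":
    ipd = 0.063  # example viewer IPD (63 mm); illustrative value
    # Camera/lens separation narrower than the IPD -> relief (and hence
    # slant) is compressed, matching the reported compression condition.
    print(perceived_relief(0.10, 0.055, ipd))  # < 0.10
    # Separation wider than the IPD -> relief expanded (reverse pattern).
    print(perceived_relief(0.10, 0.070, ipd))  # > 0.10
```

Under these assumptions, a matched separation (camera_sep_m == viewer_ipd_m, the baseline condition) leaves relief unchanged, and the sign of the distortion flips with the sign of the mismatch, consistent with the pattern of results described above.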