IEEE Trans Vis Comput Graph. 2018 Mar;24(3):1246-1259. doi: 10.1109/TVCG.2017.2666150. Epub 2017 Feb 9.
We present a novel algorithm that generates virtual acoustic effects in captured 3D models of real-world scenes for multimodal augmented reality. We leverage recent advances in 3D scene reconstruction to automatically compute acoustic material properties. Our technique is a two-step procedure: in the first step, a convolutional neural network (CNN) estimates the acoustic material properties, including the frequency-dependent absorption coefficients used for interactive sound propagation. In the second step, an iterative optimization algorithm adjusts the materials determined by the CNN until the virtual acoustic simulation converges to measured acoustic impulse responses. We have applied our algorithm to many reconstructed real-world indoor scenes and evaluated its fidelity for augmented reality applications.
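The second step described above can be sketched as a small optimization loop: starting from a CNN-provided absorption estimate, the coefficient is adjusted until a simulated impulse response matches a measured one. The sketch below uses a hypothetical single-band energy-decay model (`simulate_ir_energy`) as a stand-in for the paper's actual sound-propagation engine, and a simple finite-difference gradient step in place of the paper's optimizer; all names here are illustrative assumptions, not the authors' API.

```python
# Sketch of the iterative material-optimization step, under assumptions:
# a toy single-band energy-decay "simulator" stands in for the real
# sound-propagation engine, and one scalar absorption coefficient is
# optimized instead of a full frequency-dependent set per material.

def simulate_ir_energy(absorption, n_reflections=10):
    """Toy stand-in simulator: reflected energy after each reflection order.

    Each surface bounce retains a (1 - absorption) fraction of the energy.
    """
    energies = []
    e = 1.0
    for _ in range(n_reflections):
        e *= (1.0 - absorption)
        energies.append(e)
    return energies

def optimize_absorption(measured, init=0.2, lr=0.05, iters=200):
    """Adjust the absorption coefficient until the simulated energy decay
    converges to the measured one (squared-error objective)."""
    alpha = init

    def loss(a):
        sim = simulate_ir_energy(a, len(measured))
        return sum((s - m) ** 2 for s, m in zip(sim, measured))

    eps = 1e-4  # finite-difference step for the gradient estimate
    for _ in range(iters):
        grad = (loss(alpha + eps) - loss(alpha - eps)) / (2 * eps)
        # Gradient step, clamped to the physically valid range [0, 1].
        alpha = min(max(alpha - lr * grad, 0.0), 1.0)
    return alpha

# "Measurement" synthesized from a known ground-truth absorption of 0.35;
# the loop should recover a value close to it from the initial guess 0.2.
measured = simulate_ir_energy(0.35)
estimated = optimize_absorption(measured)
```

In the paper's setting the measured target is a recorded room impulse response and the simulation is a full frequency-dependent propagation model, but the convergence criterion (simulated response matching the measurement) plays the same role as the squared-error loss in this toy loop.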