Cen Yunchi, Zhang Qifan, Liang Xiaohui
School of Computer Science and Engineering, Beihang University, Beijing 100191, China.
Entropy (Basel). 2023 Sep 17;25(9):1348. doi: 10.3390/e25091348.
Realistic fluid models play an important role in computer graphics applications, yet efficiently reconstructing volumetric fluid flows from monocular videos remains challenging. In this work, we present a novel approach for reconstructing 3D flows from monocular inputs using a physics-based differentiable renderer coupled with joint density and velocity estimation. Our primary contributions are an efficient differentiable rendering framework and an improved coupled density and velocity estimation strategy. Rather than relying on automatic differentiation, we derive the differential form of the radiance transfer equation under single scattering, which allows the radiance gradient with respect to density to be computed directly and yields higher efficiency than prior work. To improve temporal coherence, the density of each subsequent frame is estimated with a coupled strategy that produces smooth, realistic fluid motion suitable for efficiency-critical applications. Experiments on synthetic and real-world data demonstrate that our method efficiently reconstructs plausible volumetric flows with smooth dynamics. Comparisons with prior work on fluid motion reconstruction from monocular video show speedups of 50-170x across multiple resolutions.
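The abstract only names the derivation, so as a reading aid, here is a minimal sketch of what a closed-form density derivative of the single-scattering transfer equation could look like. The notation is ours, not the paper's: we assume a voxelized density rho with constant extinction and scattering cross-sections (sigma-bar, sigma-bar_s), a point light of emitted radiance L_e, and a phase function p(theta).

```latex
% Single-scattering radiance along a camera ray of length D, with
% extinction \sigma_t(x) = \bar{\sigma}\,\rho(x) and
% scattering  \sigma_s(x) = \bar{\sigma}_s\,\rho(x):
L \;=\; \int_0^D e^{-\tau(0,t)}\,\sigma_s(t)\,p(\theta)\,L_e\,e^{-\tau_\ell(t)}\,\mathrm{d}t,
\qquad
\tau(a,b) \;=\; \int_a^b \sigma_t(s)\,\mathrm{d}s .

% Differentiating the integrand with respect to the density \rho_j of voxel j:
\frac{\partial L}{\partial \rho_j}
\;=\; \int_0^D e^{-\tau(0,t)-\tau_\ell(t)}\,p(\theta)\,L_e
\Bigl[\, \bar{\sigma}_s\,\mathbf{1}_j(t)
\;-\; \sigma_s(t)\,\bar{\sigma}\bigl(\Delta_j(t) + \Delta_j^{\ell}(t)\bigr) \Bigr]\,\mathrm{d}t ,
```

where 1_j(t) indicates that the sample at depth t lies in voxel j, and Delta_j(t), Delta_j^ell(t) are the path lengths through voxel j along the camera and light segments. Every factor is already available during ray marching, which is why evaluating such a derivative directly can beat taping the renderer through automatic differentiation.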
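The coupled density and velocity estimation can likewise be pictured as a per-frame loop: advect the previous frame's density with the estimated velocity to warm-start the next frame, then refine the density against the observed image using the analytic radiance gradient. The sketch below is hypothetical and not the paper's implementation; `render` and `render_grad` stand in for the differentiable renderer and its closed-form gradient, and the grid shapes are our assumptions.

```python
# Hypothetical sketch of coupled density-velocity estimation, assuming a
# density grid rho of shape (D, H, W) and a velocity field vel of shape
# (3, D, H, W). `render` / `render_grad` are placeholders for the
# differentiable renderer and its analytic density gradient.
import numpy as np
from scipy.ndimage import map_coordinates

def advect(rho, vel, dt=1.0):
    """Semi-Lagrangian advection: trace each voxel back along -vel and sample."""
    d, h, w = rho.shape
    z, y, x = np.meshgrid(np.arange(d), np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([z - dt * vel[0], y - dt * vel[1], x - dt * vel[2]])
    return map_coordinates(rho, coords, order=1, mode="nearest")

def reconstruct_frame(rho_prev, vel_prev, target_image, render, render_grad,
                      n_iters=50, lr=1e-2):
    """One frame of coupled estimation: warm-start the density from the
    advected previous frame, then refine it so the rendering matches the
    observed monocular image."""
    rho = advect(rho_prev, vel_prev)            # temporal prior from transport
    for _ in range(n_iters):
        residual = render(rho) - target_image   # per-pixel radiance error
        rho -= lr * render_grad(rho, residual)  # descent via closed-form dL/drho
        np.clip(rho, 0.0, None, out=rho)        # density stays non-negative
    return rho
```

Warm-starting each frame from the advected previous density is what ties consecutive reconstructions together: the optimizer only corrects deviations from physically transported density, which encourages the smooth, temporally coherent motion the abstract describes.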