Kumar Atul, Garg Siddharth, Dutta Soumya
IEEE Trans Vis Comput Graph. 2025 Jan;31(1):1343-1353. doi: 10.1109/TVCG.2024.3456360. Epub 2024 Nov 22.
The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques, (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, making it applicable to the analysis of non-trivial vector field data sets.
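To make the second technique concrete, the following is a minimal sketch (not the authors' code) of Monte Carlo Dropout applied to an implicit neural representation that maps spatial coordinates to vector values. The toy two-layer network, its random weights, and all names are illustrative placeholders; in practice the INR would be trained to regress the vector field, and dropout would simply be kept active at inference so that repeated stochastic forward passes yield a predictive mean and a per-point uncertainty estimate.

```python
import numpy as np

# Illustrative sketch of Monte Carlo Dropout for an implicit neural
# representation f(x, y) -> (u, v). Weights below are random placeholders,
# standing in for a trained coordinate-based network.
rng = np.random.default_rng(0)

# Toy 2-layer MLP: 2-D coordinates -> 2-D velocity vector
W1 = rng.normal(size=(2, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 2)) * 0.1
b2 = np.zeros(2)

def forward(coords, drop_p=0.2):
    """One stochastic forward pass; dropout stays active at inference."""
    h = np.maximum(coords @ W1 + b1, 0.0)      # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_p       # fresh dropout mask each call
    h = h * mask / (1.0 - drop_p)              # inverted-dropout scaling
    return h @ W2 + b2

coords = rng.uniform(-1.0, 1.0, size=(100, 2))  # query positions in the domain

# T stochastic passes -> predictive mean and a per-point uncertainty proxy
T = 50
samples = np.stack([forward(coords) for _ in range(T)])  # shape (T, 100, 2)
mean_field = samples.mean(axis=0)   # predicted vector field
uncertainty = samples.std(axis=0)   # spread across passes ~ model uncertainty
```

The standard deviation across passes can then be mapped to color or opacity in the visualization, flagging regions where the model's vector-field reconstruction is least trustworthy. A Deep Ensemble follows the same pattern, with the samples coming from independently trained networks instead of dropout masks.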