Shen Jingyi, Li Haoyu, Xu Jiayi, Biswas Ayan, Shen Han-Wei
IEEE Trans Vis Comput Graph. 2023 Jan;29(1):679-689. doi: 10.1109/TVCG.2022.3209419. Epub 2022 Dec 16.
Deep-learning-based latent representations have been widely used for numerous scientific visualization applications, such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We use spatial importance maps to represent various scientific interests and feed them into a feature transformation network that guides latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained jointly with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of the latent representations generated by our method on data from multiple scientific visualization applications.
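The abstract's importance-guided feature transformation can be pictured as a modulation of encoder features by statistics of a spatial importance map. The following toy NumPy sketch is only an illustration of that idea, not the paper's actual network: the encoder, the scale/shift rule, and all names are hypothetical, and the real method uses a learned feature transformation network and a trained entropy model.

```python
import numpy as np

# Hypothetical sketch (all names illustrative, not from the paper):
# a spatial importance map modulates the latent of a data block, so
# regions a domain expert marks as important can receive different
# treatment during latent generation.

rng = np.random.default_rng(0)

def encode(block, importance, w):
    """Toy linear encoder: flatten a data block, project it to latent
    space, then scale/shift the latent using importance statistics."""
    z = block.ravel() @ w               # linear projection to latent space
    gamma = 1.0 + importance.mean()     # scale derived from importance
    beta = importance.std()             # shift derived from importance
    return gamma * z + beta

# An 8x8 data block and an importance map highlighting one corner.
block = rng.standard_normal((8, 8))
importance = np.zeros((8, 8))
importance[:4, :4] = 1.0                # region of domain interest

w = rng.standard_normal((64, 16))       # project 64 values to 16 latent dims
z = encode(block, importance, w)
print(z.shape)                          # 16-dimensional latent vector
```

In the actual method, the modulation parameters would be produced by a trained network rather than fixed statistics, and the resulting latents would additionally pass through a lossless entropy coder trained jointly with the autoencoder.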