Han Jun, Wang Chaoli
IEEE Trans Vis Comput Graph. 2022 Jun;28(6):2445-2456. doi: 10.1109/TVCG.2020.3032123. Epub 2022 May 2.
We present SSR-TVD, a novel deep learning framework that produces coherent spatial super-resolution (SSR) of time-varying data (TVD) using adversarial learning. In scientific visualization, SSR-TVD is the first work to apply a generative adversarial network (GAN) to generate high-resolution volumes for three-dimensional time-varying data sets. The design of SSR-TVD includes a generator and two discriminators (a spatial and a temporal discriminator). The generator takes a low-resolution volume as input and outputs a synthesized high-resolution volume. To capture spatial and temporal coherence in the volume sequence, the two discriminators take the synthesized high-resolution volume(s) as input and produce a score indicating the realness of the volume(s). Our method can work in the in situ visualization setting by downscaling volumetric data from selected time steps as the simulation runs and upscaling the downsampled volumes to their original resolution during postprocessing. To demonstrate the effectiveness of SSR-TVD, we show quantitative and qualitative results on several time-varying data sets with different characteristics, and compare our method against volume upscaling using bicubic interpolation and against a solution based solely on a CNN.
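The in situ workflow described above, downscaling volumes at selected time steps during the simulation and upscaling them back afterwards, can be sketched with a conventional interpolation baseline (the kind of upscaling the paper compares against). The snippet below is a minimal illustration using `scipy.ndimage.zoom` with cubic interpolation on a hypothetical random volume; it shows the baseline pipeline only, not the SSR-TVD generator itself, and the 4x scale factor and 64^3 resolution are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical high-resolution volume from one simulation time step
# (in practice this would come from the running simulation).
hi_res = np.random.rand(64, 64, 64).astype(np.float32)

# In situ stage: downscale by 4x per dimension before storing to disk,
# reducing storage by a factor of 64.
scale = 4
lo_res = zoom(hi_res, 1.0 / scale, order=3)  # cubic interpolation

# Postprocessing stage: upscale back to the original resolution.
# SSR-TVD replaces this interpolation step with a learned generator,
# whose output is judged by spatial and temporal discriminators.
restored = zoom(lo_res, scale, order=3)

print(lo_res.shape)    # (16, 16, 16)
print(restored.shape)  # (64, 64, 64)
```

A learned model can recover high-frequency features that cubic interpolation smooths away, which is the gap the adversarial training in SSR-TVD targets.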