Han Jun, Zheng Hao, Xing Yunhao, Chen Danny Z, Wang Chaoli
IEEE Trans Vis Comput Graph. 2021 Feb;27(2):1290-1300. doi: 10.1109/TVCG.2020.3030346. Epub 2021 Jan 28.
We present V2V, a novel deep learning framework, as a general-purpose solution to the variable-to-variable (V2V) selection and translation problem for multivariate time-varying data (MTVD) analysis and visualization. V2V leverages a representation learning algorithm to identify transferable variables and utilizes Kullback-Leibler divergence to determine the source and target variables. It then uses a generative adversarial network (GAN) to learn the mapping from the source variable to the target variable via the adversarial, volumetric, and feature losses. V2V takes pairs of time steps of the source and target variables as input for training. Once trained, it can infer unseen time steps of the target variable given the corresponding time steps of the source variable. Several multivariate time-varying data sets of different characteristics are used to demonstrate the effectiveness of V2V, both quantitatively and qualitatively. We compare V2V against histogram matching and two other deep learning solutions (Pix2Pix and CycleGAN).
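The abstract does not detail how the Kullback-Leibler divergence is applied to select source and target variables. A minimal sketch of one plausible reading, assuming each variable of a time step is a 3D scalar volume whose value distribution is summarized by a histogram (the function names, bin count, and synthetic data below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence D_KL(p || q) between two discrete distributions.

    A small epsilon avoids log(0) and division by zero; both inputs
    are renormalized so they sum to 1.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def variable_histogram(volume, bins=64):
    """Normalized value histogram of a scalar volume (3D array)."""
    hist, _ = np.histogram(volume.ravel(), bins=bins)
    return hist / hist.sum()

# Hypothetical example: two variables of one time step as 3D arrays.
rng = np.random.default_rng(0)
var_a = rng.normal(0.0, 1.0, size=(16, 16, 16))  # stand-in source variable
var_b = rng.normal(0.5, 1.2, size=(16, 16, 16))  # stand-in target variable

d = kl_divergence(variable_histogram(var_a), variable_histogram(var_b))
```

Under this reading, a low divergence between two variables' distributions would mark them as a promising source-target pair for translation; the actual selection criterion used by V2V is described in the full paper, not in this abstract.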