Bredesen Center, University of Tennessee, Knoxville, Tennessee, United States.
Department of Business Analytics and Statistics, University of Tennessee, Knoxville, Tennessee, United States.
Methods Inf Med. 2023 May;62(1-02):31-39. doi: 10.1055/a-2023-9181. Epub 2023 Jan 31.
Deep generative models (DGMs) present a promising avenue for generating realistic, synthetic data to augment existing health care datasets. However, exactly how the completeness of the original dataset affects the quality of the generated synthetic data is unclear.
In this paper, we investigate the effect of data completeness on samples generated by the most common DGM paradigms.
We create both cross-sectional and panel datasets with varying missingness and subset rates and train generative adversarial networks, variational autoencoders, and autoregressive models (Transformers) on these datasets. We then compare the distributions of generated data with original training data to measure similarity.
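The abstract does not specify the missingness mechanism or the similarity metric used, but the experimental idea can be sketched in a minimal, hypothetical form: introduce missingness into a dataset at varying rates, produce "synthetic" samples from only the observed values (a stand-in for a DGM trained on the incomplete data), and measure how far the synthetic distribution drifts from the original. The not-at-random masking and the Wasserstein-1 distance below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "original" cross-sectional dataset: one continuous feature.
original = rng.normal(loc=0.0, scale=1.0, size=5000)

def introduce_missingness(data, rate, rng):
    """Mask positive values with probability `rate` (missing not at random)."""
    mask = (data > 0) & (rng.random(data.shape) < rate)
    incomplete = data.copy()
    incomplete[mask] = np.nan
    return incomplete

def w1_distance(a, b):
    """Wasserstein-1 distance for equal-size samples: mean gap of sorted values."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

# Stand-in for a DGM: resample the observed entries, mimicking a generator
# that only ever saw the non-missing portion of the training data.
distances = {}
for rate in (0.0, 0.3, 0.6):
    observed = introduce_missingness(original, rate, rng)
    observed = observed[~np.isnan(observed)]
    synthetic = rng.choice(observed, size=len(original), replace=True)
    distances[rate] = w1_distance(original, synthetic)
    print(f"missingness rate={rate:.1f}  W1 distance={distances[rate]:.4f}")
```

Because the masking here is biased (it preferentially hides positive values), the observed distribution the "generator" learns from diverges more from the original as the missingness rate rises, and the printed distances grow accordingly, mirroring the correlation the paper reports.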
We find that greater incompleteness in the training data is directly correlated with greater dissimilarity between the original samples and those generated by DGMs.
Care must be taken when using DGMs to generate synthetic data, as data completeness issues can degrade the quality of the generated data in both panel and cross-sectional datasets.