Publication
Improved Techniques for Training Tabular GANs Using Cramer's V Statistics
Melle Mendikowski; Benjamin Schindler; Thomas Schmid; Ralf Möller; Mattis Hartwig
The 36th Canadian Conference on Artificial Intelligence (AI-2023), June 5-9, 2023, Montreal, Quebec, Canada, Canadian Artificial Intelligence Association, June 2023.
Abstract
Given the growing global demand for machine learning training data, synthetic data generation is a reasonable way to address the diverse challenges of data acquisition. The Conditional Tabular Generative Adversarial Network (CTGAN), an extension of the widely used Generative Adversarial Network (GAN), is considered one of the most promising techniques for tabular data generation. Despite CTGAN's numerous successes, it has been shown to poorly preserve dependencies between categorical columns. In prior work, Cramer's V (CV), a natural measure of association between categorical variables, was proposed for hyperparameter tuning of CTGAN models. In this paper, we explore two novel strategies that directly integrate the CV statistics of data batches into CTGAN training. The first is a generator loss term that penalizes differences between the CV statistics of the original and the generated data. The second is the extraction of the CV matrix as an additional input feature for the critic. Applying our proposed methods to three benchmark datasets, we improve the average accuracy of supervised learning models trained on synthesized data by 11% compared to the original CTGAN. We also outline the impact of CV statistics on preserving dependencies between categorical data columns in terms of integrity and contingency similarity, discuss remaining challenges, and identify potential improvements.
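As a rough illustration of the two strategies described above, the sketch below shows how a pairwise Cramer's V matrix can be computed per batch of integer-encoded categorical columns and then used either as a penalty comparing real and generated statistics or as extra input for the critic. This is a minimal NumPy sketch with illustrative names, not the implementation evaluated in the paper; in particular, the actual generator loss would require a differentiable approximation of the CV statistics.

```python
# Minimal sketch (not the authors' implementation) of batch-level Cramer's V
# statistics, assuming NumPy arrays of integer-encoded categorical columns.
import numpy as np


def cramers_v(x: np.ndarray, y: np.ndarray) -> float:
    """Cramer's V between two integer-encoded categorical columns."""
    # Contingency table of joint category counts.
    table = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(table, (x, y), 1)
    n = table.sum()
    expected = np.outer(table.sum(1), table.sum(0)) / n
    chi2 = np.sum((table - expected) ** 2 / np.maximum(expected, 1e-12))
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * max(min(r - 1, c - 1), 1))))


def cv_matrix(data: np.ndarray) -> np.ndarray:
    """Pairwise Cramer's V matrix over the categorical columns of a batch."""
    k = data.shape[1]
    m = np.eye(k)
    for i in range(k):
        for j in range(i + 1, k):
            m[i, j] = m[j, i] = cramers_v(data[:, i], data[:, j])
    return m


def cv_penalty(real_batch: np.ndarray, fake_batch: np.ndarray) -> float:
    """Penalty on the difference between real and generated CV statistics
    (first strategy); a differentiable surrogate would be needed inside
    the actual generator loss."""
    return float(np.abs(cv_matrix(real_batch) - cv_matrix(fake_batch)).mean())


def critic_features(batch: np.ndarray) -> np.ndarray:
    """Second strategy: append the flattened CV matrix of the batch to each
    row as additional input features for the critic."""
    cv = np.tile(cv_matrix(batch).ravel(), (len(batch), 1))
    return np.concatenate([batch.reshape(len(batch), -1), cv], axis=1)
```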