Synthetic Data Generation for Enhancing Text Classification Performance Using Conditional Variational Autoencoders

Ömer Faruk Cebeci

Yıldız Technical University

https://orcid.org/0000-0001-8174-2046

Mehmet Fatih Amasyali

Yıldız Technical University

https://orcid.org/0000-0002-0404-5973

DOI: https://doi.org/10.56038/oprd.v5i1.581

Keywords: Text classification, Variational Autoencoder, synthetic text generation


Abstract

This study investigates the effect of generating synthetic data with a Conditional Variational Autoencoder (CVAE) on classification performance in scenarios where the available data are limited or the data sources are constrained. Experiments were conducted on datasets with varying numbers of classes, and synthetic data were produced with CVAE models using two different methods. The first method generated sentences from noise by sampling from a Gaussian distribution. The second method provided the first half of a real sentence to the model, which then completed the remaining half to produce a synthetic sentence. The synthetic datasets generated by both methods were integrated into the original training sets at various ratios, and the resulting changes in classification performance were measured. Both synthetic data generation approaches significantly improved classification performance; however, as the amount of data used to train the classifiers increased, the marginal benefit of incorporating synthetic data decreased. These findings suggest that producing and utilizing synthetic data can be an effective strategy for text classification tasks that suffer from data scarcity.
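
As a rough illustration of the two generation modes described in the abstract, the sketch below shows how a class-conditional decoder could produce (1) a sentence decoded from a Gaussian noise sample and (2) a completion of the first half of a real sentence. This is a minimal, hypothetical PyTorch example, not the model or code used in the study: the toy vocabulary, layer sizes, and helper names are assumptions, and the untrained decoder only demonstrates the interface.

```python
# Minimal sketch, assuming a GRU-based class-conditional decoder; not the
# architecture used in the paper. Vocabulary, sizes, and names are
# illustrative, and the untrained weights produce arbitrary tokens.
import torch
import torch.nn as nn

VOCAB = ["<pad>", "<bos>", "<eos>", "economy", "rates", "rise", "match", "won"]
stoi = {w: i for i, w in enumerate(VOCAB)}


class ToyCVAEDecoder(nn.Module):
    """Decodes a sentence conditioned on a latent code z and a class label."""

    def __init__(self, vocab_size=len(VOCAB), emb=16, hid=32, latent=8, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.cls_embed = nn.Embedding(n_classes, emb)
        self.init_h = nn.Linear(latent + emb, hid)  # z + class label -> initial hidden state
        self.gru = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    @torch.no_grad()
    def generate(self, z, label, prefix=None, max_len=10):
        # The initial hidden state mixes the Gaussian sample with the class
        # embedding, so every generated token is conditioned on the target class.
        h0 = torch.tanh(self.init_h(torch.cat([z, self.cls_embed(label)], dim=-1)))
        tokens = [stoi["<bos>"]] + list(prefix or [])  # optional prefix = first half of a real sentence
        for _ in range(max_len):
            emb = self.embed(torch.tensor([tokens]))          # (1, T, emb)
            out, _ = self.gru(emb, h0.unsqueeze(0))           # (1, T, hid)
            next_id = self.out(out[:, -1]).argmax(-1).item()  # greedy choice of next token
            tokens.append(next_id)
            if next_id == stoi["<eos>"]:
                break
        return [VOCAB[i] for i in tokens]


decoder = ToyCVAEDecoder()

# Method 1: decode a sentence from pure noise, z ~ N(0, I), for class 0.
print(decoder.generate(torch.randn(1, 8), torch.tensor([0])))

# Method 2: give the first half of a real sentence and let the model complete it.
half = [stoi["economy"], stoi["rates"]]
print(decoder.generate(torch.randn(1, 8), torch.tensor([1]), prefix=half))
```

In an actual experiment, the generated sentences would be paired with their conditioning class labels and added to the original training set at the desired ratio before retraining the classifier.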


References

X. Zhang, J. Zhao, and Y. LeCun, “Character-level convolutional networks for text classification,” 2016.

Z. Xie, S. I. Wang, J. Li, D. Lévy, A. Nie, D. Jurafsky, and A. Y. Ng, “Data noising as smoothing in neural network language models,” 2017.

J. Wei and K. Zou, “EDA: Easy data augmentation techniques for boosting performance on text classification tasks,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Hong Kong, China: Association for Computational Linguistics, Nov. 2019, pp. 6382–6388. DOI: https://doi.org/10.18653/v1/D19-1670

R. Sennrich, B. Haddow, and A. Birch, “Improving neural machine translation models with monolingual data,” 2016. DOI: https://doi.org/10.18653/v1/P16-1009

Z. Hu et al., “Toward controlled generation of text,” in International Conference on Machine Learning. PMLR, 2017, pp. 1587–1596.

T. Zhao, R. Zhao, and M. Eskenazi, “Learning discourse-level diversity for neural dialog models using conditional variational autoencoders,” 2017. DOI: https://doi.org/10.18653/v1/P17-1061

T. Wang and X. Wan, “T-cvae: Transformer-based conditioned variational autoencoder for story completion,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. International Joint Conferences on Artificial Intelligence Organization, 7 2019, pp. 5233–5239. DOI: https://doi.org/10.24963/ijcai.2019/727

D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” 2014.

S. R. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio, “Generating sentences from a continuous space,” in Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 10–21. DOI: https://doi.org/10.18653/v1/K16-1002

S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997. DOI: https://doi.org/10.1162/neco.1997.9.8.1735

M. Ş. Bilici and M. F. Amasyali, “Transformers as neural augmentors: Class conditional sentence generation via variational Bayes,” arXiv preprint arXiv:2205.09391, 2022.

T. Sezer, “TS TimeLine News Category Dataset (Version 001)” [Data set], TS Corpus, 2021. DOI: https://doi.org/10.57672/P23D-B492

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp. 4171–4186. DOI: https://doi.org/10.18653/v1/N19-1423
