Research Article

Understanding the mathematical background of Generative Adversarial Networks (GANs)

Year 2023, Volume: 3 Issue: 3, 234 - 255, 30.09.2023
https://doi.org/10.53391/mmnsa.1327485

Abstract

Generative Adversarial Networks (GANs) have gained widespread attention since their introduction, leading to numerous extensions and applications of the original GAN idea. A thorough understanding of GANs' mathematical foundations is necessary to use and build upon these techniques. However, most studies on GANs are presented from a computer science or engineering perspective, which can be challenging for beginners to understand fully. Therefore, this paper aims to provide an overview of the mathematical background of GANs, including detailed proofs of optimal solutions for vanilla GANs and boundaries for $f$-GANs that minimize a variational approximation of the $f$-divergence between two distributions. These contributions will enhance the understanding of GANs for those with a mathematical background and pave the way for future research.
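For orientation, the two objects named in the abstract can be stated briefly; the formulas below are the standard results from Goodfellow et al. (2014) and Nowozin et al. (2016), both listed in the references, written in generic notation rather than the paper's own. The vanilla GAN is the minimax game

\[ \min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right], \]

whose inner maximization, for a fixed generator $G$, is solved by the optimal discriminator

\[ D_G^{*}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}, \]

so that minimizing over $G$ at this optimum amounts to minimizing the Jensen-Shannon divergence between $p_{\mathrm{data}}$ and the generator distribution $p_g$. The $f$-GAN replaces the Jensen-Shannon divergence by a general $f$-divergence, which it estimates through the variational lower bound

\[ D_f(P \,\|\, Q) \;\ge\; \sup_{T} \Bigl( \mathbb{E}_{x \sim P}\left[T(x)\right] - \mathbb{E}_{x \sim Q}\left[f^{*}(T(x))\right] \Bigr), \]

where $f^{*}$ is the Fenchel conjugate of $f$ and $T$ ranges over a class of critic functions.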

Supporting Institution

German Federal Ministry of Education and Research (BMBF)

Project Number

05M20UKA

References

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S. et al. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, (2014).
  • de Meer Pardo, F. Enriching financial datasets with generative adversarial networks. MS thesis, Delft University of Technology, The Netherlands, (2019).
  • Wang, Y. A mathematical introduction to generative adversarial nets (GAN). ArXiv Preprint, arXiv:2009.00169, (2020).
  • Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. ArXiv Preprint, arXiv:1701.04862, (2017).
  • Ali, S.M. and Silvey, S.D. A general class of coefficients of divergence of one distribution from another. Journal of the Royal Statistical Society: Series B (Methodological), 28(1), 131-142, (1966).
  • Nguyen, X., Wainwright, M.J. and Jordan, M.I. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11), 5847-5861, (2010).
  • Nowozin, S., Cseke, B. and Tomioka, R. f-GAN: Training generative neural samplers using variational divergence minimization. Advances in Neural Information Processing Systems 29 (NIPS), (2016, December).
  • Arjovsky, M., Chintala, S. and Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning (PMLR), pp. 214-223, (2017, July).
  • Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V. and Courville, A.C. Improved training of Wasserstein GANs. Advances in Neural Information Processing Systems 30 (NIPS), (2017, December).
  • Ni, H., Szpruch, L., Wiese, M., Liao, S. and Xiao, B. Conditional Sig-Wasserstein GANs for time series generation. ArXiv Preprint, arXiv:2006.05421, (2020).
  • Wiese, M., Bai, L., Wood, B. and Buehler, H. Deep hedging: learning to simulate equity option markets. ArXiv Preprint, arXiv:1911.01700, (2019).
  • Wiese, M., Knobloch, R., Korn, R. and Kretschmer, P. Quant GANs: deep generation of financial time series. Quantitative Finance, 20(9), 1419-1440, (2020).
  • Ni, H., Szpruch, L., Sabate-Vidales, M., Xiao, B., Wiese, M. and Liao, S. Sig-Wasserstein GANs for time series generation. In Proceedings of the Second ACM International Conference on AI in Finance (ICAIF), pp. 1-8, (2021, November).
  • Yoon, J., Jarrett, D. and Van der Schaar, M. Time-series generative adversarial networks. Advances in Neural Information Processing Systems 32 (NeurIPS), (2019, December).
  • Donahue, C., McAuley, J. and Puckette, M. Adversarial audio synthesis. ArXiv Preprint, arXiv:1802.04208, (2018).
  • Schreyer, M., Sattarov, T., Reimer, B. and Borth, D. Adversarial learning of deepfakes in accounting. ArXiv Preprint, arXiv:1910.03810, (2019).
There are 16 citations in total.

Details

Primary Language English
Subjects Experimental Mathematics, Financial Mathematics, Applied Mathematics (Other)
Journal Section Research Articles
Authors

Bilgi Yılmaz (ORCID: 0000-0002-9646-2757)

Ralf Korn (ORCID: 0000-0002-9123-3883)

Project Number 05M20UKA
Publication Date September 30, 2023
Submission Date July 14, 2023
Published in Issue Year 2023 Volume: 3 Issue: 3

Cite

APA Yılmaz, B., & Korn, R. (2023). Understanding the mathematical background of Generative Adversarial Networks (GANs). Mathematical Modelling and Numerical Simulation With Applications, 3(3), 234-255. https://doi.org/10.53391/mmnsa.1327485


Articles published in MMNSA are licensed under a Creative Commons Attribution 4.0 International License.