Research Article

Elementary proof of Funahashi's theorem

Year 2024, Volume 7, Issue 2, 30–44, 15.06.2024
https://doi.org/10.33205/cma.1466429

Abstract

Funahashi established that the space of two-layer feedforward neural networks is dense in the space of all continuous functions on a compact subset of $n$-dimensional Euclidean space. The purpose of this short survey is to reexamine the proof of Theorem 1 in Funahashi \cite{Funahashi}. The Tietze extension theorem, whose proof is given in the appendix, will be used. The argument draws on harmonic analysis, real analysis, and Fourier analysis; however, the intended audience is researchers who do not specialize in these fields of mathematics. Some fundamental facts that are used without proof are collected after the notation is fixed.
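To give a concrete feel for the statement being surveyed (not the paper's proof, which proceeds via Fourier-analytic arguments), the sketch below numerically approximates a continuous function on a compact interval by a two-layer sigmoid network. Differences of steeply shifted sigmoids act as approximate indicator functions of grid cells, so the network output mimics a fine piecewise-constant approximation. All names, grid sizes, and the steepness parameter `k` are illustrative choices, not anything prescribed in the paper.

```python
import numpy as np

def sigmoid(t):
    # Clip the argument to avoid overflow warnings in np.exp.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -500.0, 500.0)))

def two_layer_approx(f, a, b, n=200, k=2000.0):
    """Return a two-layer sigmoid network approximating f on [a, b].

    The hidden layer has 2n units; pairs of units form near-indicator
    bumps sigma(k(x - x_i)) - sigma(k(x - x_{i+1})) over the grid cells,
    and the output layer weights each bump by f at the cell's left end.
    """
    xs = np.linspace(a, b, n + 1)
    weights = f(xs[:-1])

    def g(x):
        x = np.asarray(x, dtype=float)
        bumps = (sigmoid(k * (x[..., None] - xs[:-1]))
                 - sigmoid(k * (x[..., None] - xs[1:])))
        return bumps @ weights

    return g

# Approximate sin on [0, pi] and measure the uniform error on a fine grid.
g = two_layer_approx(np.sin, 0.0, np.pi)
x = np.linspace(0.0, np.pi, 1000)
err = np.max(np.abs(np.sin(x) - g(x)))
```

With 200 cells and a steep sigmoid, the uniform error is on the order of the grid spacing; refining the grid and steepening the sigmoid drives it to zero, which is the density phenomenon Funahashi's theorem makes precise.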

References

  • D. Beniaguev, I. Segev and M. London: Single cortical neurons as deep artificial neural networks, Neuron, 109 (17) (2021), 2727–2739. e2723.
  • T. M. Cover: Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition, IEEE Transactions on Electronic Computers, 14 (3) (1965), 326–334.
  • K. Funahashi: On the approximate realization of continuous mappings by neural networks, Neural Networks, 2 (1989), 183–192.
  • A. Gidon, T. A. Zolnik, P. Fidzinski, F. Bolduan, A. Papoutsi, P. Poirazi and M. E. Larkum: Dendritic action potentials and computation in human layer 2/3 cortical neurons, Science, 367 (6473) (2020), 83–87.
  • N. Hatano, M. Ikeda, I. Ishikawa and Y. Sawano: A Global Universality of Two-Layer Neural Networks with ReLU Activations, Journal of Function Spaces, 2021 (2021), Article ID 6637220.
  • N. Hatano, M. Ikeda, I. Ishikawa and Y. Sawano: Global universality of the two-layer neural network with the k-rectified linear unit, Journal of Function Spaces, 2024 (2024), Article ID 3262798.
  • A. Krizhevsky, I. Sutskever and G. E. Hinton: ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, 25 (2012).
  • W. S. McCulloch, W. Pitts: A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biology, 52 (1990), 99–115.
  • M. Minsky, S. Papert: Perceptrons: An introduction to computational geometry, MIT Press, Cambridge, MA (1969).
  • A. B. Novikoff: On convergence proofs on perceptrons, in: Proceedings of the Symposium on the Mathematical Theory of Automata (1962).
  • F. Rosenblatt: The perceptron: a probabilistic model for information storage and organization in the brain, Psychological Review, 65 (6) (1958), 386.
  • W. Rudin: Real and Complex Analysis (Third Edition), McGraw-Hill, New York (1987).
  • D. E. Rumelhart, G. E. Hinton and R. J. Williams: Learning representations by back-propagating errors, Nature, 323 (6088) (1986), 533–536.
  • T. J. Sejnowski: The unreasonable effectiveness of deep learning in artificial intelligence, Proceedings of the National Academy of Sciences, 117 (48) (2020), 30033–30038.

Details

Primary Language English
Subjects Approximation Theory and Asymptotic Methods
Journal Section Articles
Authors

Mitsuo Izuki

Takahiro Noi

Yoshihiro Sawano (ORCID: 0000-0003-2844-8053)

Hirokazu Tanaka

Early Pub Date May 10, 2024
Publication Date June 15, 2024
Submission Date April 7, 2024
Acceptance Date May 1, 2024
Published in Issue Year 2024

Cite

APA Izuki, M., Noi, T., Sawano, Y., & Tanaka, H. (2024). Elementary proof of Funahashi’s theorem. Constructive Mathematical Analysis, 7(2), 30-44. https://doi.org/10.33205/cma.1466429
AMA Izuki M, Noi T, Sawano Y, Tanaka H. Elementary proof of Funahashi’s theorem. CMA. June 2024;7(2):30-44. doi:10.33205/cma.1466429
Chicago Izuki, Mitsuo, Takahiro Noi, Yoshihiro Sawano, and Hirokazu Tanaka. “Elementary Proof of Funahashi’s Theorem”. Constructive Mathematical Analysis 7, no. 2 (June 2024): 30-44. https://doi.org/10.33205/cma.1466429.
EndNote Izuki M, Noi T, Sawano Y, Tanaka H (June 1, 2024) Elementary proof of Funahashi’s theorem. Constructive Mathematical Analysis 7 2 30–44.
IEEE M. Izuki, T. Noi, Y. Sawano, and H. Tanaka, “Elementary proof of Funahashi’s theorem”, CMA, vol. 7, no. 2, pp. 30–44, 2024, doi: 10.33205/cma.1466429.
ISNAD Izuki, Mitsuo et al. “Elementary Proof of Funahashi’s Theorem”. Constructive Mathematical Analysis 7/2 (June 2024), 30-44. https://doi.org/10.33205/cma.1466429.
JAMA Izuki M, Noi T, Sawano Y, Tanaka H. Elementary proof of Funahashi’s theorem. CMA. 2024;7:30–44.
MLA Izuki, Mitsuo et al. “Elementary Proof of Funahashi’s Theorem”. Constructive Mathematical Analysis, vol. 7, no. 2, 2024, pp. 30-44, doi:10.33205/cma.1466429.
Vancouver Izuki M, Noi T, Sawano Y, Tanaka H. Elementary proof of Funahashi’s theorem. CMA. 2024;7(2):30-44.