Research Article

Elementary proof of Funahashi's theorem

Year 2024, Volume 7, Issue 2, 30–44, 15.06.2024
https://doi.org/10.33205/cma.1466429

Abstract

Funahashi established that the space of two-layer feedforward neural networks is dense in the space of all continuous functions defined over compact sets in $n$-dimensional Euclidean space. The purpose of this short survey is to reexamine the proof of Theorem 1 in Funahashi \cite{Funahashi}. The Tietze extension theorem, whose proof is contained in the appendix, will be used. The paper draws on harmonic analysis, real analysis, and Fourier analysis; however, the intended audience is researchers who do not specialize in these fields of mathematics. Some fundamental facts that are used without proof are collected after the notation is introduced.
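The density statement can be illustrated numerically. The sketch below is not Funahashi's Fourier-analytic argument; it is the standard constructive illustration in which steep sigmoid units act as smooth step functions, so a finite sum $\sum_k a_k\,\sigma(c(x - x_k))$ forms a staircase that tracks a given continuous function on a compact interval. All function names and parameter values here are illustrative choices, not taken from the paper.

```python
import numpy as np

def sigmoid(t):
    # Clip the argument so np.exp never overflows for very steep units.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))

def two_layer_net(f, a=0.0, b=1.0, n=400, steepness=1e4):
    """Build a two-layer network approximating a continuous f on [a, b].

    Each hidden unit is a steep sigmoid that 'switches on' at the midpoint
    of one subinterval; its output weight is the increment of f across that
    subinterval, so the network output is a smoothed staircase for f.
    """
    knots = np.linspace(a, b, n + 1)
    vals = f(knots)
    weights = np.diff(vals)                    # output-layer coefficients
    centers = 0.5 * (knots[:-1] + knots[1:])   # where each unit activates

    def net(x):
        x = np.asarray(x, dtype=float)[..., None]
        return vals[0] + np.sum(weights * sigmoid(steepness * (x - centers)),
                                axis=-1)
    return net

f = lambda x: np.sin(2.0 * np.pi * x) + x ** 2
net = two_layer_net(f)
xs = np.linspace(0.0, 1.0, 1000)
print(f"sup-norm error on [0,1]: {np.max(np.abs(net(xs) - f(xs))):.4f}")
```

With 400 units the sup-norm error is governed by the step width times the Lipschitz constant of $f$, so it shrinks as the number of hidden units grows, in line with the density assertion.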

References

  • D. Beniaguev, I. Segev and M. London: Single cortical neurons as deep artificial neural networks, Neuron, 109 (17) (2021), 2727–2739. e2723.
  • T. M. Cover: Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition, IEEE Transactions on Electronic Computers, 14 (3) (1965), 326–334.
  • K. Funahashi: On the approximate realization of continuous mappings by neural networks, Neural Networks, 2 (1989), 183–192.
  • A. Gidon, T. A. Zolnik, P. Fidzinski, F. Bolduan, A. Papoutsi, P. Poirazi and M. E. Larkum: Dendritic action potentials and computation in human layer 2/3 cortical neurons, Science, 367 (6473) (2020), 83–87.
  • N. Hatano, M. Ikeda, I. Ishikawa and Y. Sawano: A Global Universality of Two-Layer Neural Networks with ReLU Activations, Journal of Function Spaces, 2021 (2021), Article ID 6637220.
  • N. Hatano, M. Ikeda, I. Ishikawa and Y. Sawano: Global universality of the two-layer neural network with the k-rectified linear unit, Journal of Function Spaces, 2024 (2024), Article ID 3262798.
  • A. Krizhevsky, I. Sutskever and G. E. Hinton: ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, 25 (2012).
  • W. S. McCulloch, W. Pitts: A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biology, 52 (1990), 99–115.
  • M. Minsky, S. Papert: Perceptrons: An Introduction to Computational Geometry, MIT Press, Cambridge, Mass. (1969).
  • A. B. Novikoff: On convergence proofs on perceptrons, Paper presented at the Proceedings of the Symposium on the Mathematical Theory of Automata (1962).
  • F. Rosenblatt: The perceptron: a probabilistic model for information storage and organization in the brain, Psychological Review, 65 (6) (1958), 386.
  • W. Rudin: Real and Complex Analysis (Third Edition), McGraw-Hill, New York (1987).
  • D. E. Rumelhart, G. E. Hinton and R. J. Williams: Learning representations by back-propagating errors, Nature, 323 (6088) (1986), 533–536.
  • T. J. Sejnowski: The unreasonable effectiveness of deep learning in artificial intelligence, Proceedings of the National Academy of Sciences, 117 (48) (2020), 30033–30038.
There are 14 references in total.

Details

Primary Language: English
Subjects: Approximation Theory and Asymptotic Methods
Section: Articles
Authors

Mitsuo Izuki

Takahiro Noi

Yoshihiro Sawano 0000-0003-2844-8053

Hirokazu Tanaka

Early View Date: May 10, 2024
Publication Date: June 15, 2024
Submission Date: April 7, 2024
Acceptance Date: May 1, 2024
Published in Issue: Year 2024, Volume 7, Issue 2

How to Cite

APA Izuki, M., Noi, T., Sawano, Y., Tanaka, H. (2024). Elementary proof of Funahashi’s theorem. Constructive Mathematical Analysis, 7(2), 30-44. https://doi.org/10.33205/cma.1466429
AMA Izuki M, Noi T, Sawano Y, Tanaka H. Elementary proof of Funahashi’s theorem. CMA. June 2024;7(2):30-44. doi:10.33205/cma.1466429
Chicago Izuki, Mitsuo, Takahiro Noi, Yoshihiro Sawano, and Hirokazu Tanaka. “Elementary Proof of Funahashi’s Theorem”. Constructive Mathematical Analysis 7, no. 2 (June 2024): 30-44. https://doi.org/10.33205/cma.1466429.
EndNote Izuki M, Noi T, Sawano Y, Tanaka H (June 1, 2024) Elementary proof of Funahashi’s theorem. Constructive Mathematical Analysis 7 2 30–44.
IEEE M. Izuki, T. Noi, Y. Sawano, and H. Tanaka, “Elementary proof of Funahashi’s theorem”, CMA, vol. 7, no. 2, pp. 30–44, 2024, doi: 10.33205/cma.1466429.
ISNAD Izuki, Mitsuo et al. “Elementary Proof of Funahashi’s Theorem”. Constructive Mathematical Analysis 7/2 (June 2024), 30-44. https://doi.org/10.33205/cma.1466429.
JAMA Izuki M, Noi T, Sawano Y, Tanaka H. Elementary proof of Funahashi’s theorem. CMA. 2024;7:30–44.
MLA Izuki, Mitsuo et al. “Elementary Proof of Funahashi’s Theorem”. Constructive Mathematical Analysis, vol. 7, no. 2, 2024, pp. 30-44, doi:10.33205/cma.1466429.
Vancouver Izuki M, Noi T, Sawano Y, Tanaka H. Elementary proof of Funahashi’s theorem. CMA. 2024;7(2):30-44.