We present a simplified computational rule for the back-propagation formulas of artificial neural networks: a generic two-step rule for the back-propagation algorithm, written in matrix notation. The rule incorporates both the forward and backward phases of the computations involved in learning, and its recursive form propagates the changes to all synaptic weights in the network efficiently, layer by layer. In particular, we use it to compute both the up and down partial derivatives of the cost function with respect to all the connections feeding into the output layer.
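To make the two-phase, layer-by-layer structure concrete, the following is a minimal sketch of backpropagation in matrix notation. It assumes a fully-connected network with sigmoid activations and a squared-error cost; the function names, the 3-4-2 architecture, and the learning rate are illustrative choices, not the paper's own notation or method.

```python
# A minimal sketch of two-phase backpropagation in matrix notation:
# a forward phase that stores per-layer activations, then a backward
# phase that propagates the error delta layer by layer. Assumes
# sigmoid activations and a squared-error cost (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(weights, x, y):
    # Forward phase: store the activation of every layer.
    a, activations = x, [x]
    for W in weights:
        a = sigmoid(W @ a)
        activations.append(a)

    # Backward phase. Output-layer error for C = 0.5 * ||a - y||^2,
    # using sigmoid'(z) = a * (1 - a).
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    grads = [None] * len(weights)
    grads[-1] = np.outer(delta, activations[-2])

    # Propagate delta backwards, one layer at a time.
    for l in range(len(weights) - 2, -1, -1):
        sp = activations[l + 1] * (1 - activations[l + 1])
        delta = (weights[l + 1].T @ delta) * sp
        grads[l] = np.outer(delta, activations[l])  # dC/dW_l
    return grads

# Usage: one gradient-descent step for a 3-4-2 network on a single pair.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
x, y = rng.standard_normal(3), np.array([0.0, 1.0])
for W, g in zip(weights, backprop(weights, x, y)):
    W -= 0.1 * g
```

Each gradient matrix is the outer product of the layer's error vector with the activations feeding into it, which is what lets a single recursion over layers recover every partial derivative of the cost.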