Research Article

Random Heterogeneous Neurochaos Learning Architecture for Data Classification

Year 2025, Volume: 7 Issue: 1, 10 - 30
https://doi.org/10.51537/chaos.1578830

Abstract

Inspired by the structure and function of the human brain, Artificial Neural Networks (ANNs) were developed for data classification. However, existing neural networks, including Deep Neural Networks, do not mimic the brain's rich structure: they lack key features such as randomness and neuronal heterogeneity, and their neurons do not exhibit the inherently chaotic firing behavior of biological neurons. Neurochaos Learning (NL), a chaos-based neural network, recently employed one-dimensional chaotic maps such as the Generalized Lüroth Series (GLS) and the Logistic map as neurons. For the first time, we propose a random heterogeneous extension of NL, in which different chaotic neurons are randomly placed in the input layer, mimicking the randomness and heterogeneity of human brain networks. We evaluated the performance of the newly proposed Random Heterogeneous Neurochaos Learning (RHNL) architectures combined with traditional Machine Learning (ML) methods. On public datasets, RHNL outperformed both homogeneous NL and fixed heterogeneous NL architectures in nearly all classification tasks. RHNL achieved high F1 scores on the Wine dataset (1.0), the Bank Note Authentication dataset (0.99), the Breast Cancer Wisconsin dataset (0.99), and the Free Spoken Digit Dataset (FSDD) (0.98). These results are among the best reported in the literature for these datasets. We also investigated RHNL performance on image datasets, where it outperformed stand-alone ML classifiers. In low training sample regimes, RHNL performed best among stand-alone ML methods. Our architecture bridges the gap between existing ANN architectures and the chaotic, random, and heterogeneous properties of the human brain. We foresee the development of several novel learning algorithms centered around Random Heterogeneous Neurochaos Learning in the near future.
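The input-layer mechanism summarized in the abstract can be sketched minimally. This is an illustrative sketch, not the authors' implementation: the specific maps (logistic and a skew-tent/GLS map), the initial neural activity `q`, the neighbourhood width `eps`, and the firing-time feature are assumptions drawn from the NL/ChaosNet literature cited in the references.

```python
import numpy as np

rng = np.random.default_rng(42)

def logistic(x, a=4.0):
    # classic logistic map on [0, 1]
    return a * x * (1.0 - x)

def skew_tent(x, b=0.499):
    # skew-tent (GLS) map on [0, 1]
    return x / b if x < b else (1.0 - x) / (1.0 - b)

MAPS = [logistic, skew_tent]

def firing_time(stimulus, chaotic_map, q=0.34, eps=0.01, max_iter=10_000):
    """Iterate the chaotic neuron from initial activity q until its
    trajectory enters the eps-neighbourhood of the normalised stimulus;
    the iteration count is the neuron's 'firing time' feature."""
    x = q
    for n in range(max_iter):
        if abs(x - stimulus) < eps:
            return n
        x = chaotic_map(x)
    return max_iter

def rhnl_features(sample, neuron_maps):
    # each input feature drives its own randomly assigned chaotic neuron
    return np.array([firing_time(s, m) for s, m in zip(sample, neuron_maps)])

# one randomly heterogeneous input layer for 4-dimensional data:
# the map type of each neuron is drawn at random, mimicking
# the heterogeneity of biological networks
n_features = 4
neuron_maps = [MAPS[i] for i in rng.integers(0, len(MAPS), n_features)]
sample = np.array([0.2, 0.55, 0.71, 0.43])  # features scaled to (0, 1)
print(rhnl_features(sample, neuron_maps))
```

The resulting firing-time vector would then be passed to a downstream classifier (e.g. an ML method, as the abstract describes); the random draw of `neuron_maps` is what distinguishes a random heterogeneous layer from a homogeneous or fixed heterogeneous one.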

References

  • Aihara, K., T. Takabe, and M. Toyoda, 1990 Chaotic neural networks. Physics letters A 144: 333–340.
  • Alligood, K. T., T. D. Sauer, J. A. Yorke, and D. Chillingworth, 1998 Chaos: an introduction to dynamical systems. SIAM Review 40: 732–732.
  • AS, R. A., N. Harikrishnan, and N. Nagaraj, 2023 Analysis of logistic map based neurons in neurochaos learning architectures for data classification. Chaos, Solitons & Fractals 170: 113347.
  • Asif, S., M. Zhao, F. Tang, and Y. Zhu, 2023 An enhanced deep learning method for multi-class brain tumor classification using deep transfer learning. Multimedia Tools and Applications pp. 1–28.
  • Balakrishnan, H. N., A. Kathpalia, S. Saha, and N. Nagaraj, 2019 Chaosnet: A chaos based artificial neural network architecture for classification. Chaos: An Interdisciplinary Journal of Nonlinear Science 29: 113125.
  • Berrar, D., 2018 Bayes’ theorem and naive bayes classifier. Encyclopedia of bioinformatics and computational biology: ABC of bioinformatics, pp. 403–412.
  • Boser, B. E., I. M. Guyon, and V. N. Vapnik, 1992 A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, pp. 144–152.
  • Breiman, L., 2001 Random forests. Machine learning 45: 5–32.
  • Chakrabarty, N., Accessed: 2019-10-09 Brain mri images for brain tumor detection. https://www.kaggle.com/datasets/navoneel/brain-mri-images-for-brain-tumor-detection/code.
  • Cover, T. and P. Hart, 1967 Nearest neighbor pattern classification. IEEE transactions on information theory 13: 21–27.
  • Delahunt, C. B. and J. N. Kutz, 2019 Putting a bug in ml: The moth olfactory network learns to read mnist. Neural Networks 118: 54–64.
  • Dua, D., C. Graff, et al., 2017 Uci machine learning repository.
  • Fisher, R. A., 1936 The use of multiple measurements in taxonomic problems. Annals of eugenics 7: 179–188.
  • Gillich, E. and V. Lohweg, 2010 Banknote authentication. 1. Jahreskolloquium Bild. Der Autom pp. 1–8.
  • Graves, A., A.-r. Mohamed, and G. Hinton, 2013 Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pp. 6645–6649, IEEE.
  • Haberman, S. J., 1973 The analysis of residuals in cross-classified tables. Biometrics pp. 205–220.
  • Harikrishnan, J., A. Sudarsan, A. Sadashiv, and R. A. Ajai, 2019 Vision-face recognition attendance monitoring system for surveillance using deep learning technology and computer vision. In 2019 international conference on vision towards emerging trends in communication and networking (ViTECoN), pp. 1–5, IEEE.
  • Harikrishnan, N. and N. Nagaraj, 2020 Neurochaos inspired hybrid machine learning architecture for classification. In 2020 International Conference on Signal Processing and Communications (SPCOM), pp. 1–5, IEEE.
  • Harikrishnan, N. and N. Nagaraj, 2021 When noise meets chaos: Stochastic resonance in neurochaos learning. Neural Networks 143: 425–435.
  • Harikrishnan, N., R. Vinayakumar, and K. Soman, 2018 A machine learning approach towards phishing email detection. In Proceedings of the Anti-Phishing Pilot at ACM International Workshop on Security and Privacy Analytics (IWSPA AP), volume 2013, pp. 455–468.
  • Jackson, Z., C. Souza, J. Flaks, Y. Pan, H. Nicolas, et al., 2018 Jakobovski/free-spoken-digit-dataset: v1.0.8. Zenodo, August.
  • Korn, H. and P. Faure, 2003 Is there chaos in the brain? ii. experimental evidence and related models. Comptes rendus biologies 326: 787–840.
  • Krishna, S. and A. R. Ajai, 2019 Analysis of three point checklist and abcd methods for the feature extraction of dermoscopic images to detect melanoma. In 2019 9th International Symposium on Embedded Computing and System Design (ISED), pp. 1–5, IEEE.
  • Nagaraj, N., 2022 The unreasonable effectiveness of the chaotic tent map in engineering applications. Chaos Theory and Applications 4: 197–204.
  • NB, H., A. Kathpalia, and N. Nagaraj, 2022 Causality preserving chaotic transformation and classification using neurochaos learning. Advances in Neural Information Processing Systems 35: 2046–2058.
  • Perez-Nieves, N., V. C. Leung, P. L. Dragotti, and D. F. Goodman, 2021 Neural heterogeneity promotes robust learning. Nature communications 12: 5791.
  • Phatak, S. and S. S. Rao, 1995 Logistic map: A possible random number generator. Physical review E 51: 3670.
  • Planet Labs Inc., Accessed: 2019-10-09 Planet Imagery and Archive. https://www.planet.com.
  • Quinlan, J. R., 1986 Induction of decision trees. Machine learning 1: 81–106.
  • Ramachandran, V., S. Blakeslee, and R. J. Dolan, 1998 Phantoms in the brain: probing the mysteries of the human mind. Nature 396: 639–640.
  • Remya Ajai, A. and S. Gopalan, 2020 Analysis of active contours without edge-based segmentation technique for brain tumor classification using svm and knn classifiers. In Advances in Communication Systems and Networks: Select Proceedings of ComNet 2019, pp. 1–10, Springer.
  • Schapire, R. E., 2013 Explaining adaboost. In Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik, pp. 37–52, Springer.
  • Sebe, N., 2005 Machine learning in computer vision, volume 29. Springer Science & Business Media.
  • Sethi, D., N. Nagaraj, and N. Harikrishnan, 2023 Neurochaos feature transformation for machine learning. Integration.
  • Sigillito, V. G., S. P. Wing, L. V. Hutton, and K. B. Baker, 1989 Classification of radar returns from the ionosphere using neural networks. Johns Hopkins APL Technical Digest 10: 262–266.
  • Sridharan, A., R. A. AS, and S. Gopalan, 2020 A novel methodology for the classification of debris scars using discrete wavelet transform and support vector machine. Procedia computer science 171: 609–616.
  • Street, W. N., W. H. Wolberg, and O. L. Mangasarian, 1993 Nuclear feature extraction for breast tumor diagnosis. In Biomedical image processing and biomedical visualization, volume 1905, pp. 861–870, SPIE.
  • Vandeginste, B., 1990 Parvus: An extendable package of programs for data exploration, classification and correlation, m. forina, r. leardi, c. armanino and s. lanteri, elsevier, amsterdam, 1988, isbn 0-444-43012-1. Journal of Chemometrics 4: 191–193.
  • Weis, S., M. Sonnberger, A. Dunzinger, E. Voglmayr, M. Aichholzer, et al., 2019 Histological Constituents of the Nervous System, pp. 225–265, Springer Vienna.
There are 35 citations in total.

Details

Primary Language English
Subjects Dynamical Systems in Applications
Journal Section Research Articles
Authors

Remya Ajai A S 0000-0002-7920-3374

Nithin Nagaraj 0000-0003-0097-4131

Publication Date
Submission Date November 4, 2024
Acceptance Date January 1, 2025
Published in Issue Year 2025 Volume: 7 Issue: 1

Cite

APA Ajai A S, R., & Nagaraj, N. (n.d.). Random Heterogeneous Neurochaos Learning Architecture for Data Classification. Chaos Theory and Applications, 7(1), 10-30. https://doi.org/10.51537/chaos.1578830

Chaos Theory and Applications in Applied Sciences and Engineering: An interdisciplinary journal of nonlinear science

Articles published in CHTA are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.