Review

A Review of Optimization Methods Used in Training Artificial Neural Networks and an Evaluation on the Blood Transfusion Service Center Data Set

Year 2020, Volume: 5, Issue: 2, 99-113, 01.12.2020

Abstract

Artificial Neural Networks (ANNs) are widely used in many application areas, including machine learning. Training an ANN requires updating and optimizing its weight values. In this weight optimization, the network's performance, that is, its learning success, depends on the training method and the learning process used. The aim of this study is to investigate how ANNs, which are modeled on the operation of biological neural networks, are optimized with different techniques. Various studies on this topic were reviewed. The results show that ANNs can be optimized with metaheuristic, hybrid, and special-purpose methods, and that interdisciplinary work is also being carried out in this area. Within the scope of this study, statistical or experimental methods, one of the approaches used to optimize neural networks, were applied: five experiments were conducted using the network parameters batch size, number of epochs, learning rate, momentum, weight initialization, neuron activation function, and number of neurons in the hidden layer. The experimental results show that the performance of the neural network improved.
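The statistical/experimental tuning approach described above can be sketched in code. This is a minimal illustration, not the authors' exact setup: it varies one MLP hyperparameter at a time (hidden-layer size, then initial learning rate) and compares mean cross-validated accuracy, using scikit-learn and synthetic data standing in for the four-feature Blood Transfusion Service Center data set.

```python
# Hedged sketch of one-factor-at-a-time hyperparameter experiments for an MLP.
# The synthetic data below is an assumption; the paper uses the UCI Blood
# Transfusion Service Center data set (4 features, binary target).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

def evaluate(**params):
    """Mean 5-fold cross-validated accuracy for an MLP with given settings."""
    clf = MLPClassifier(max_iter=500, random_state=0, **params)
    return cross_val_score(clf, X, y, cv=5).mean()

# Experiment 1: number of neurons in the single hidden layer.
hidden = {n: evaluate(hidden_layer_sizes=(n,)) for n in (2, 8, 32)}

# Experiment 2: initial learning rate.
lrs = {lr: evaluate(learning_rate_init=lr) for lr in (0.001, 0.01, 0.1)}

best_hidden = max(hidden, key=hidden.get)
best_lr = max(lrs, key=lrs.get)
print(best_hidden, best_lr)
```

Each remaining parameter named in the abstract (batch size via `batch_size`, epochs via `max_iter`, momentum with `solver='sgd'`, weight initialization via `random_state`, and `activation`) can be screened the same way before combining the best-performing values.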

References

  • Yang S, Chen Y. “An evolutionary constructive and pruning algorithm for artificial neural networks and its prediction applications”. Neurocomputing, 86, 140-149, 2012.
  • Kaytan M, Hanbay D. “Effective Classification of Phishing Web Pages Based on New Rules by Using Extreme Learning Machines”. Journal of Computer Sciences, 2(1), 15-36, 2017.
  • Aydilek İB, Aslan A. “A Novel Hybrid Approach to Estimating Missing Values in Databases Using K-Nearest Neighbors and Neural Networks”. International Journal of Innovative Computing, Information and Control, 8(7(A)), 4705-4717, 2012.
  • Prieto A, Prieto B, Ortigosa EM, Ros E, Pelayo F, Ortega J, Rojas I. “Neural networks: An overview of early research, current frameworks and new challenges”. Neurocomputing, 214, 242-268, 2016.
  • Benardos PG, Vosniakos GC. “Optimizing feedforward artificial neural network architecture”. Engineering Applications of Artificial Intelligence, 20(3), 365-382, 2007.
  • Soltoggio A, Stanley KO, Risi S. “Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks”. Neural Networks, 108, 48-67, 2018.
  • Barrett DG, Morcos AS, Macke JH. “Analyzing biological and artificial neural networks: challenges with opportunities for synergy?”. Current Opinion in Neurobiology, 55, 55-64, 2019.
  • Abiodun OI, Jantan A, Omolara AE, Dada KV, Mohamed NA, Arshad H. “State-of-the-art in artificial neural network applications: A survey”. Heliyon, 4(11), 1-41, 2018.
  • Tavanaei A, Ghodrati M, Kheradpisheh SR, Masquelier T, Maida A. “Deep learning in spiking neural networks”. Neural Networks, 111, 47-63, 2019.
  • Yang HH, Murata M, Amari S. “Statistical inference: learning in artificial neural networks”. Trends in Cognitive Sciences, 2(1), 4-10, 1998.
  • Schmidhuber J. “Deep learning in neural networks: An overview”. Neural Networks, 61, 85-117, 2015.
  • Paliwal M, Kumar UA. “Neural networks and statistical techniques: A review of applications”. Expert Systems with Applications, 36(1), 2-17, 2009.
  • Whittington JCR, Bogacz R. “Theories of Error Back-Propagation in the Brain”. Trends in Cognitive Sciences, 23(3), 235-250, 2019.
  • Lampinen J, Vehtari A. “Bayesian approach for neural networks—review and case studies”. Neural Networks, 14(3), 257-274, 2001.
  • Tsoi AC, Back A. “Discrete time recurrent neural network architectures: A unifying review”. Neurocomputing, 15(3-4), 183-223, 1997.
  • Heo S, Lee JH. “Parallel neural networks for improved nonlinear principal component analysis”. Computers & Chemical Engineering, 127, 1-10, 2019.
  • Bornholt J, Lopez R, Carmean DM, Ceze L, Seelig G, Strauss K. “A DNA-Based Archival Storage System”. Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems, Atlanta, Georgia, USA, 2-6 April 2016.
  • Abas AR. “Using incremental general regression neural network for learning mixture models from incomplete data”. Egyptian Informatics Journal, 12(3), 185-196, 2011.
  • Kumar R, Chen T, Hardt M, Beymer D, Brannon K, Syeda-Mahmood T. “Multiple Kernel Completion and its application to cardiac disease discrimination”. IEEE 10th International Symposium on Biomedical Imaging: From Nano to Macro, San Francisco, CA, USA, 7-11 April 2013.
  • He H, Garcia EA. “Learning from Imbalanced Data”. IEEE Transactions on Knowledge and Data Engineering, 21(9), 1263-1284, 2009.
  • Frasca M, Bertoni A, Re M, Valentini G. “A neural network algorithm for semi-supervised node label learning from unbalanced data”. Neural Networks, 43, 84-98, 2013.
  • Kohavi R. “A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection”. Fourteenth International Joint Conference on Artificial Intelligence, Montreal, Quebec, Canada, 20-25 August 1995.
  • Bashiri M, Geranmayeh AF. “Tuning the parameters of an artificial neural network using central composite design and genetic algorithm”. Scientia Iranica, 18(6), 1600-1608, 2011.
  • Benjamin BV, Gao P, McQuinn E, Choudhary S, Chandrasekaran AR, Bussat J, Alvarez-Icaza R, Arthur JV, Merolla PA, Boahen K. “Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations”. Proceedings of the IEEE, 102(5), 699-716, 2014.
  • Naveros F, Luque NR, Garrido JA, Carrillo RR, Anguita M, Ros E. “A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study”. IEEE Transactions on Neural Networks and Learning Systems, 26(7), 1567-1574, 2015.
  • Shepherd GM, Mirsky JS, Healy MD, Singer MS, Skoufos E, Hines MS, Nadkarni PM, Miller PL. “The Human Brain Project: neuroinformatics tools for integrating, searching and modeling multidisciplinary neuroscience data”. Trends in Neurosciences, 21(11), 460-468, 1998.
  • Sarikan SS, Ozbayoglu AM, Zilci O. “Automated Vehicle Classification with Image Processing and Computational Intelligence”. Procedia Computer Science, 114, 515-522, 2017.
  • Castro LND. “Fundamentals of natural computing: an overview”. Physics of Life Reviews, 4(1), 1-36, 2007.
  • Brunton BW, Beyeler M. “Data-driven models in human neuroscience and neuroengineering”. Current Opinion in Neurobiology, 58, 21-29, 2019.
  • Buccelli S, Bornat Y, Colombi I, Ambroise M, Martines L, Pasquale V, Bisio M, Tessadori J, Nowak P, Grassia F, Averna A, Tedesco M, Bonifazi P, Difato F, Massobrio P, Levi T, Chiappalone M. “A Neuromorphic Prosthesis to Restore Communication in Neuronal Networks”. iScience, 19, 402-414, 2019.
  • Rosenblatt F. “The perceptron: A probabilistic model for information storage and organization in the brain”. Psychological Review, 65(6), 386-408, 1958.
  • Willshaw DJ, Malsburg CVD. “How patterned neural connections can be set up by self-organization”. Proceedings of the Royal Society of London - Biological Sciences, 194(1117), 431-445, 1976.
  • Amari S. “Topographic organization of nerve fields”. Bulletin of Mathematical Biology, 42(3), 339-364, 1980.
  • Kohonen T. “Self-Organized Formation of Topologically Correct Feature Maps”. Biological Cybernetics, 43(1), 59-69, 1982.
  • Hornik K, Stinchcombe M, White H. “Multilayer feedforward networks are universal approximators”. Neural Networks, 2(5), 359-366, 1989.
  • Jordan MI, Rumelhart DE. “Forward models: Supervised learning with a distal teacher”. Cognitive Science, 16(3), 307-354, 1992.
  • Wang J, Wang H, Chen Y, Liu C. “A constructive algorithm for unsupervised learning with incremental neural network”. Journal of Applied Research and Technology, 13(2), 188-196, 2015.
  • Plasencia A, Shichkina Y, Suárez I, Ruiz Z. “Open Source Robotic Simulators Platforms for Teaching Deep Reinforcement Learning Algorithms”. Procedia Computer Science, 150, 162-170, 2019.
  • Kiranyaz S, Ince T, Yildirim A, Gabbouj M. “Evolutionary artificial neural networks by multi-dimensional particle swarm optimization”. Neural Networks, 22(10), 1448-1462, 2009.
  • Castellani M, Rowlands H. “Evolutionary Artificial Neural Network Design and Training for wood veneer classification”. Engineering Applications of Artificial Intelligence, 22(4-5), 732-741, 2009.
  • Rashid TA, Fattah P, Awla DK. “Using Accuracy Measure for Improving the Training of LSTM with Metaheuristic Algorithms”. Procedia Computer Science, 140, 324-333, 2018.
  • Castillo PA, Merelo JJ, Prieto A, Rivas V, Romero G. “G-Prop: Global optimization of multilayer perceptrons using GAs”. Neurocomputing, 35(1-4), 149-163, 2000.
  • Yao X, Liu Y. “Towards designing artificial neural networks by evolution”. Applied Mathematics and Computation, 91(1), 83-90, 1998.
  • Jiang X, Wah AHKS. “Constructing and training feed-forward neural networks for pattern classification”. Pattern Recognition, 36(4), 853-867, 2003.
  • Balkin SD, Ord JK. “Automatic neural network modeling for univariate time series”. International Journal of Forecasting, 16(4), 509-515, 2000.
  • Leski J, Czogola E. “A new artificial neural network based fuzzy inference system with moving consequents in if–then rules and selected applications”. Fuzzy Sets and Systems, 108(3), 289-297, 1999.
  • Benardos PG, Vosniakos GC. “Prediction of surface roughness in CNC face milling using neural networks and Taguchi's design of experiments”. Robotics and Computer-Integrated Manufacturing, 18(5-6), 343-354, 2002.
  • Maier HR, Dandy GC. “Understanding the behaviour and optimising the performance of back-propagation neural networks: an empirical study”. Environmental Modelling & Software, 13(2), 179-191, 1998.
  • Wang Y, Fariello G. “On Neuroinformatics: Mathematical Models of Neuroscience and Neurocomputing”. Journal of Advanced Mathematics and Applications, 1(2), 206-217, 2012.
  • Branitskiy A, Kotenko I. “Hybridization of computational intelligence methods for attack detection in computer networks”. Journal of Computational Science, 23, 145-156, 2017.
  • Pal SK, Meher SK. “Title Paper: Natural computing: A problem solving paradigm with granular information processing”. Applied Soft Computing, 13(9), 3944-3955, 2013.
  • Lozo P. “Selective Attention Adaptive Resonance Theory (SAART) Neural Network for Neuro-engineering of Robust ATR Systems”. International Conference on Neural Networks, Perth, WA, Australia, 27 November-1 December 1995.
  • Disney A, Reynolds J, Schuman CD, Klibisz A, Young A, Plank JS. “DANNA: A neuromorphic software ecosystem”. Biologically Inspired Cognitive Architectures, 17, 49-56, 2016.
  • Arifovic J, Gençay R. “Using genetic algorithms to select architecture of a feedforward artificial neural network”. Physica A: Statistical Mechanics and its Applications, 289(3-4), 574-594, 2001.
  • Ma L, Khorasani K. “A new strategy for adaptively constructing multilayer feedforward neural networks”. Neurocomputing, 51, 361-385, 2003.
  • Chen J, Chang J. “Fuzzy Perceptron Neural Networks for Classifiers with Numerical Data and Linguistic Rules as Inputs”. IEEE Transactions on Fuzzy Systems, 8(6), 730-745, 2000.
  • Maier HR, Dandy GC. “The effect of internal parameters and geometry on the performance of back-propagation neural networks: an empirical study”. Environmental Modelling & Software, 13(2), 193-209, 1998.
  • School of Information and Computer Sciences, University of California, Irvine, CA, USA. “UCI Machine Learning Repository”. https://archive.ics.uci.edu/ml/index.php. (16.10.2019).
  • School of Information and Computer Sciences, University of California, Irvine, CA, USA. “UCI Machine Learning Repository: Blood Transfusion Service Center Data Set”. https://archive.ics.uci.edu/ml/datasets/Blood+Transfusion+Service+Center. (16.10.2019).
  • Yeh I, Yang K, Ting T. “Knowledge discovery on RFM model using Bernoulli sequence”. Expert Systems with Applications, 36(3), 5866-5871, 2009.
There are 60 references in total.

Details

Primary Language: Turkish
Subjects: Computer Software
Section: PAPERS
Authors

Mustafa Kaytan

Celaleddin Yeroğlu

İbrahim Berkan Aydilek 0000-0001-8037-8625

Publication Date: December 1, 2020
Submission Date: May 12, 2020
Acceptance Date: November 7, 2020
Published in Issue: Year 2020, Volume: 5, Issue: 2

Cite

APA Kaytan, M., Yeroğlu, C., & Aydilek, İ. B. (2020). Yapay Sinir Ağları Eğitiminde Kullanılan Optimizasyon Yöntemlerinin İncelenmesi ve Kan Nakli Hizmet Merkezi Veri Seti Üzerinden Değerlendirilmesi. Computer Science, 5(2), 99-113.

The Creative Commons Attribution 4.0 International License is applied to all research papers published by JCS, and a Digital Object Identifier (DOI) is assigned to each published paper.