Research Article

Effects of the stochastic and deterministic movements in the optimization processes

Year 2022, Volume: 37, Issue: 2, 949 - 966, 28.02.2022
https://doi.org/10.17341/gazimmfd.887976

Abstract

In this study, a linear function representing the iris data set is obtained using a Multivariate Linear Regression (MLR) model. The Stochastic Gradient Descent (SGD), Momentum, Adagrad, RMSProp, Adadelta, and Adam optimization algorithms are used to find the optimum values of the coefficients of this function. For these coefficients, which are usually initialized with a fixed or random value in MLR, an initialization method based on an initial population is proposed. The Integral of the Absolute Value of the Error (IAE), Integral of the Time-Weighted Absolute Error (ITAE), Mean Square of the Error (MSE), and Integral of the Square Error (ISE) error functions serve as objective functions in the MLR model. First, the initial populations of the applications are generated between the lower and upper bounds, using both the proposed deterministic method and the classical stochastic initialization method. The stochastically initialized application is run many times, as is customary in the literature, and the mean values are taken; the deterministically initialized application, in contrast, is run only once. The results show that the coefficients and iteration counts obtained by the two initialization methods are close to each other; nevertheless, the deterministically initialized application achieves a very large reduction in runtime. The results of the two applications are compared and analyzed, and the comparisons show that the linear model obtained with the Adadelta optimization algorithm and the MSE objective function reaches the result in the shortest time.
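The deterministic-versus-stochastic initialization comparison can be sketched as follows. This is a minimal illustration on synthetic data, with plain gradient descent and an MSE objective standing in for the six optimizers and four error functions compared in the paper; all names, bounds, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: fit a multivariate linear regression (MLR) by gradient
# descent, comparing a single deterministic start against many random starts.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the iris regression task: 150 samples, 4 features.
X = rng.normal(size=(150, 4))
true_w = np.array([0.3, -1.2, 0.8, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=150)

def mse(w):
    """Mean Square of the Error (MSE) objective."""
    e = X @ w - y
    return float(np.mean(e ** 2))

def grad(w):
    """Gradient of the MSE objective with respect to the coefficients."""
    return 2.0 / len(y) * X.T @ (X @ w - y)

def gradient_descent(w0, lr=0.05, steps=500):
    w = w0.astype(float).copy()
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Stochastic initialization: many random starts in [-2, 2], averaged as in
# the literature (20 restarts is an arbitrary choice here).
runs = [gradient_descent(rng.uniform(-2.0, 2.0, size=4)) for _ in range(20)]
w_stochastic = np.mean(runs, axis=0)

# Deterministic initialization: one fixed, evenly spaced start -> one run,
# so the wall-clock cost is roughly 1/20th of the stochastic protocol.
w_deterministic = gradient_descent(np.linspace(-2.0, 2.0, 4))

print(mse(w_stochastic), mse(w_deterministic))  # both near the noise floor
```

Because the MSE objective of a linear model is convex, every start converges to the same minimizer, which is why the deterministic single run can match the averaged stochastic runs while doing a fraction of the work.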

References

  • Eröz E., Tanyildizi E., Güncel Metasezgisel Optimizasyon Algoritmalarının Performans Karşılaştırılması, 2018 International Conference on Artificial Intelligence and Data Processing, Malatya-Türkiye, 1-16, 28-30 September, 2018.
  • Zaki S., El-Nasser A., Zaied H., Meta-heuristics Algorithms: A survey, International Journal of Engineering Trends and Technology, 67(5), 67-74, 2019.
  • Ponce-Ortega J. M., Hernández-Pérez L. G., Optimization of Process Flowsheets through Metaheuristic Techniques, Springer, UK, 2019.
  • Dokeroglu T., Sevinc E., Kucukyilmaz T., Cosar A., A survey on new generation metaheuristic algorithms, Computers & Industrial Engineering, 137, 106040-106069, 2019.
  • Nocedal J., Wright S. J., Numerical Optimization, Springer Science & Business Media, USA, 2006.
  • Chong E. K. P., Zak S. H., An introduction to optimization, Wiley, USA, 2013.
  • Türkay M., Optimizasyon Modelleri ve Çözüm Metodları, New Frontiers in Total Quality and Strategic Management, 309-328, 2006.
  • Stork J., Eiben A. E., Bartz-Beielstein T., A new taxonomy of continuous global optimization algorithms, Natural Computing, arXiv, 1-24, 2020.
  • Collet P., Rennard J.-P., Stochastic Optimization Algorithms, Intelligent Information Technologies, 1121-1137, 2008.
  • Fausto F., Reyna-Orta A., Cuevas E., Andrade Á. G., Perez-Cisneros M., From ants to whales: metaheuristics for all tastes, Artificial Intelligence Review, 53, 753–810, 2020.
  • Villarrubia G., De Paz J. F., Chamoso P., De la Prieta F., Artificial neural networks used in optimization problems, Neurocomputing, 272, 1-16, 2018.
  • Kubat M., An Introduction to Machine Learning, Springer International Publishing AG, USA, 2017.
  • Shalev-Shwartz S., Ben-David S., Understanding machine learning: From theory to algorithms, Cambridge University Press, UK, 2014.
  • Deisenroth M. P., Faisal A. A., Ong C. S., Mathematics for Machine Learning, Cambridge University Press, UK, 2020.
  • Alpaydin E., Introduction to Machine Learning, MIT Press, USA, 2020.
  • Akanksha R., ML Classification vs Regression, GeeksforGeeks, https://www.geeksforgeeks.org/ml-classification-vs-regression/, Published December 2, 2019, Accessed September 15, 2020.
  • Garbade D. M. J., Regression Versus Classification Machine Learning: What’s the Difference?, Medium, https://medium.com/quick-code/regression-versus-classification-machine-learning-whats-the-difference-345c56dd15f7, Published August 11, 2018, Accessed October 8, 2020.
  • Huang X., Guan Z., Yang L., An effective hybrid algorithm for multi-objective flexible job-shop scheduling problem, Advances in Mechanical Engineering, 10(9), 1-14, 2018.
  • Ali M., Pant M., Abraham A., Unconventional initialization methods for differential evolution, Applied Mathematics and Computation, 219(9), 4474-4494, 2013.
  • Kazimipour B., Li X., Qin A. K., A review of population initialization techniques for evolutionary algorithms, 2014 IEEE Congress on Evolutionary Computation, Beijing-China, 2585–2592, 2014.
  • Kazemzadeh Azad S., Seeding the initial population with feasible solutions in metaheuristic optimization of steel trusses, Engineering Optimization, 50(1), 89-105, 2018.
  • Zhang G., Gao L., Shi Y., An effective genetic algorithm for the flexible job-shop scheduling problem, Expert Systems with Applications, 38(4), 3563-3573, 2011.
  • Tometzki T., Engell S., Systematic initialization techniques for hybrid evolutionary algorithms for solving two-stage stochastic mixed-integer programs, IEEE Transactions on Evolutionary Computation, 15(2), 196-214, 2010.
  • Dong N., Wu C. H., Ip W. H., Chen Z. Q., Chan C. Y., Yung K. L., An opposition-based chaotic GA/PSO hybrid algorithm and its application in circle detection, Computers & Mathematics with Applications, 64(6), 1886-1902, 2012.
  • Zhang Y., Saxe A. M., Advani M. S., Lee A. A., Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning, Molecular Physics, 116(21-22), 3214-3223, 2018.
  • Vani S., Rao T. V. M., An experimental approach towards the performance assessment of various optimizers on convolutional neural network, International Conference on Trends in Electronics and Informatics, Tirunelveli-India, 331-336, 2019.
  • Rajakumaran G., Venkataraman N., Mukkamala R. R., Denial of Service Attack Prediction Using Gradient Descent Algorithm, SN Computer Science, 1(1), 1-8, 2020.
  • Dogo E. M., Afolabi O. J., Nwulu N. I., Twala B., Aigbavboa C. O., A Comparative Analysis of Gradient Descent-Based Optimization Algorithms on Convolutional Neural Networks, International Conference on Computational Techniques, Electronics and Mechanical Systems, Belgaum-India, 92-99, 2018.
  • Yazan E., Talu M. F., Comparison of the stochastic gradient descent based optimization techniques, International Artificial Intelligence and Data Processing Symposium, Malatya-Türkiye, 1-5, 2017.
  • Pal S., Gauri S. K., Assessing effectiveness of the various performance metrics for multi-response optimization using multiple regression, Computers & Industrial Engineering, 59(4), 976-985, 2010.
  • Parnianifard A., Azfanizam A. S., Ariffin M. K. A., Ismail M. I. S., Design and Analysis of Computer Experiments Using Polynomial Regression and Latin Hypercube Sampling in Optimal Design of PID Controller, Journal of Applied Research on Industrial Engineering, 5(2), 156-168, 2018.
  • Ansari A. Q., Katiyar S., Application of ant colony algorithm for calculation and analysis of performance indices for adaptive control system, Innovative Applications of Computational Intelligence on Power, Energy and Controls with Their Impact on Humanity, Ghaziabad-India, 466-471, 2014.
  • Amador-Angulo L., Mendoza O., Castro J. R., Rodríguez-Díaz A., Melin P., Castillo O., Fuzzy sets in dynamic adaptation of parameters of a bee colony optimization for controlling the trajectory of an autonomous mobile robot, Sensors (Switzerland), 16(9), 1458, 2016.
  • Kakchapati S., Choonpradub C., Forecasting Tuberculosis (Mortality) in Thailand Using Multivariate Linear Regression, Journal of Health Research, 26(1), 51-54, 2012.
  • Gustafsson T., Eriksson A., Wingren C. J., Multivariate linear regression modelling of lung weight in 24,056 Swedish medico-legal autopsy cases, Journal of Forensic and Legal Medicine, 46, 20-22, 2017.
  • Nasri M., Hamdi M., LTE QoS Parameters Prediction Using Multivariate Linear Regression Algorithm, 22nd Conference on Innovation in Clouds, Internet and Networks and Workshops, Paris-France, 145-150, 2019.
  • Patel K., Vala J., Pandya J., Comparison of various classification algorithms on iris datasets using WEKA, International Journal of Advance Engineering Research Development, 1(1), 2014.
  • Shekhawat M., Dhande S. S., Building an Iris Plant Data Classifier Using Neural Network Associative Classification, International Journal of Advancement in Technology, 2(4), 491-506, 2011.
  • Tiwari M., Singh R., Comparative Investigation of K-Means and K-Medoid Algorithm on Iris Data, International Journal of Engineering Research and Development, 4(8), 69-72, 2012.
  • Demir M., Karcı A., Veri kümelemede fidan gelişim algoritmasının kullanılması, Elektrik, Elektronik, Bilgisayar, Biyomedikal Mühendisliği Ulusal Kongresi ve Fuarı, Eskişehir-Türkiye, 14-18, 2007.
  • Bin Karim R., 10 Gradient Descent Optimisation Algorithms + Cheat Sheet, KDnuggets, https://www.kdnuggets.com/2019/06/gradient-descent-algorithms-cheat-sheet.html, Published June 2019, Accessed December 22, 2020.
  • Zeiler M. D., Adadelta: An Adaptive Learning Rate Method, arXiv preprint arXiv:1212.5701, 2012.
  • Ruder S., An overview of gradient descent optimization algorithms, arXiv preprint arXiv:1609.04747, 1-14, 2016.
  • Lydia A. A., Francis F. S., Adagrad: An Optimizer for Stochastic Gradient Descent, International Journal of Information and Computing Science, 6(5), 599-568, 2019.
  • Krishnan K., Karpagam G., Comparison of PID Controller Tuning Techniques for a FOPDT System, International Journal of Current Engineering and Technology, 4(4), 2667-2670, 2014.
  • Kishnani M., Pareek S., Gupta R., Optimal tuning of DC motor via simulated annealing, International Conference on Advances in Engineering and Technology Research, Unnao-India, 1-5, 2014.
  • Allafi N. I., Zayed A. S., Daw M. S., Ahmed W. M., Investigating the effect of GA based PID controller performance indices with application to stirred-tank heater (jacket model), 17th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering, Sousse-Tunisia, 26-33, 2016.
  • Fisher R. A., The use of multiple measurements in taxonomic problems, Annals of Eugenics, 7(2), 179-188, 1936.
  • Ravi J., Machine learning-Iris classification, Medium, https://medium.com/@jebaseelanravi96/machine-learning-iris-classification-33aa18a4a983, Published August 1, 2018, Accessed October 11, 2020.


There are 49 references in total.

Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Ebubekir Seyyarer 0000-0002-8981-0266

Ali Karci 0000-0002-8489-8617

Abdullah Ateş 0000-0002-4236-6794

Publication Date: February 28, 2022
Submission Date: March 1, 2021
Acceptance Date: August 29, 2021
Published in Issue: Year 2022, Volume: 37, Issue: 2

How to Cite

APA Seyyarer, E., Karci, A., & Ateş, A. (2022). Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, 37(2), 949-966. https://doi.org/10.17341/gazimmfd.887976
AMA Seyyarer E, Karci A, Ateş A. Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri. GUMMFD. February 2022;37(2):949-966. doi:10.17341/gazimmfd.887976
Chicago Seyyarer, Ebubekir, Ali Karci, and Abdullah Ateş. “Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 37, no. 2 (February 2022): 949-66. https://doi.org/10.17341/gazimmfd.887976.
EndNote Seyyarer E, Karci A, Ateş A (01 February 2022) Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 37 2 949–966.
IEEE E. Seyyarer, A. Karci, and A. Ateş, “Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri”, GUMMFD, vol. 37, no. 2, pp. 949–966, 2022, doi: 10.17341/gazimmfd.887976.
ISNAD Seyyarer, Ebubekir et al. “Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 37/2 (February 2022), 949-966. https://doi.org/10.17341/gazimmfd.887976.
JAMA Seyyarer E, Karci A, Ateş A. Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri. GUMMFD. 2022;37:949–966.
MLA Seyyarer, Ebubekir et al. “Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, vol. 37, no. 2, 2022, pp. 949-66, doi:10.17341/gazimmfd.887976.
Vancouver Seyyarer E, Karci A, Ateş A. Stokastik ve deterministik hareketlerin optimizasyon süreçlerindeki etkileri. GUMMFD. 2022;37(2):949-66.