Research Article

Controlling a Single Tank Liquid Level System with Classical Control Methods and Reinforcement Learning Methods

Year 2024, 30 - 41, 31.05.2024
https://doi.org/10.34088/kojose.1278657

Abstract

In this study, the control of a single-tank liquid level system, a common benchmark plant in control engineering, is carried out. The system is controlled with a classical PI controller, a modified PI controller, state feedback with integral action, and two reinforcement learning methods, the Q-learning and SARSA algorithms. The tank is modelled from first principles using classical physics, and a continuous-time nonlinear model of the system is obtained. The originality of the study is that the nonlinear liquid tank system is controlled both by classical controllers and by reinforcement learning methods. To this end, the system is first modelled and then linearized around an operating point so that the classical PI, modified PI, and state feedback with integral action controllers can be designed. Q-learning and SARSA agents are then trained on the system, and the trained agents are used to control the single-tank level system. The classical controllers and the reinforcement learning controllers are compared with respect to performance criteria such as rise time, settling time, overshoot, and integral square error. The Q-learning method produced a rise time of 0.0804 s, a settling time of 0.943 s, and an integral square error of 0.574. Consequently, the Q-learning algorithm exhibited more successful results in controlling the single liquid tank system than the PI, modified PI, state feedback, and SARSA controllers.
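To give a concrete picture of the kind of setup the abstract describes, the minimal sketch below simulates a nonlinear single tank with Torricelli outflow, dh/dt = (q_in - a*sqrt(2*g*h))/A, using an explicit Euler step, and trains a tabular Q-learning agent with an epsilon-greedy policy to track a level setpoint. The tank geometry, action set, reward, and hyperparameters are illustrative assumptions, not the values used in the paper.

import numpy as np

g = 9.81          # gravitational acceleration [m/s^2]
A = 0.0154        # tank cross-sectional area [m^2] (assumed)
a = 5.0e-5        # outlet orifice area [m^2] (assumed)
dt = 0.05         # integration / control step [s]
h_max = 0.5       # maximum level [m]
h_ref = 0.25      # level setpoint [m]

n_states = 50                              # discretized level bins
actions = np.linspace(0.0, 2.0e-4, 5)      # candidate inflow rates q_in [m^3/s] (assumed)
Q = np.zeros((n_states, len(actions)))     # tabular action-value function

alpha, gamma, eps = 0.1, 0.95, 0.1         # learning rate, discount, exploration rate

def step(h, q_in):
    """Nonlinear tank dynamics with Torricelli outflow, one Euler step."""
    dh = (q_in - a * np.sqrt(2.0 * g * max(h, 0.0))) / A
    return float(np.clip(h + dt * dh, 0.0, h_max))

def to_state(h):
    """Map a continuous level to a discrete state index."""
    return min(int(h / h_max * n_states), n_states - 1)

rng = np.random.default_rng(0)
for episode in range(2000):
    h = rng.uniform(0.0, h_max)            # random initial level
    s = to_state(h)
    for _ in range(400):
        # epsilon-greedy action selection
        if rng.random() < eps:
            u = int(rng.integers(len(actions)))
        else:
            u = int(np.argmax(Q[s]))
        h = step(h, actions[u])
        s_next = to_state(h)
        r = -(h - h_ref) ** 2              # quadratic tracking penalty
        # Q-learning (off-policy) temporal-difference update
        Q[s, u] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, u])
        s = s_next

A SARSA agent would differ only in the update target: it replaces np.max(Q[s_next]) with the Q value of the action it actually selects in the next state, which makes the update on-policy.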

Supporting Institution

-

Project Number

-

Acknowledgements

-

References

  • [1] M. A. Pala, M. E. Çimen, Ö. F. Boyraz, M. Z. Yildiz, and A. Boz, “Meme Kanserinin Teşhis Edilmesinde Karar Ağacı Ve KNN Algoritmalarının Karşılaştırmalı Başarım Analizi,” Acad. Perspect. Procedia, vol. 2, no. 3, 2019, doi: 10.33793/acperpro.02.03.47.
  • [2] S. Tajjour, S. Garg, S. S. Chandel, and D. Sharma, “A novel hybrid artificial neural network technique for the early skin cancer diagnosis using color space conversions of original images,” Int. J. Imaging Syst. Technol., vol. 33, no. 1, pp. 276–286, 2023.
  • [3] D. Şengür, “EEG, EMG and ECG based determination of psychosocial risk levels in teachers based on wavelet extreme learning machine autoencoders,” Politek. Derg., vol. 25, no. 3, pp. 85–989, 2021, doi: 10.2339/politeknik.886593.
  • [4] F. Bayram, “Derin öğrenme tabanlı otomatik plaka tanıma,” Politek. Derg., vol. 23, no. 4, pp. 955–960, 2020.
  • [5] J. Lisowski, “Artificial Intelligence Methods in Safe Ship Control Based on Marine Environment Remote Sensing,” Remote Sens., vol. 15, no. 1, p. 203, 2023, doi: 10.3390/rs15010203.
  • [6] A. Kurani, P. Doshi, A. Vakharia, and M. Shah, “A comprehensive comparative study of artificial neural network (ANN) and support vector machines (SVM) on stock forecasting,” Ann. Data Sci., vol. 10, no. 1, pp. 183–208, 2023, doi: https://doi.org/10.1007/s40745-021-00344-x.
  • [7] M. S. Ünlü, “Teknik Analiz Ve Derin Pekiştirmeli Öğrenme İle Kriptopara Alım-Satımı,” Okan Üniversitesi, 2019.
  • [8] X. Huang, D. Zou, G. Cheng, X. Chen, and H. Xie, “Trends, research issues and applications of artificial intelligence in language education,” Educ. Technol. Soc., vol. 26, no. 1, pp. 112–131, 2023, doi: 10.30191/ETS.202301_26(1).0009.
  • [9] K. Souchleris, G. A. Sidiropoulos, and G. K. Papakostas, “Reinforcement Learning in Game Industry—Review, Prospects and Challenges,” Appl. Sci., vol. 13, no. 4, p. 2443, 2023, doi: 10.3390/app13042443.
  • [10] F. Candan, S. Emir, M. Doğan, and T. Kumbasar, “Takviyeli Q-Öğrenme Yöntemiyle Labirent Problemi Çözümü (Labyrinth Problem Solution with Reinforcement Q-Learning Method),” in TOK2018 Otomatik Kontrol Ulusal Toplantısı, 2018.
  • [11] B. Mbuwir, F. Ruelens, F. Spiessens, and G. Deconinck, “Reinforcement learning-based battery energy management in a solar microgrid,” Energy-Open, vol. 2, no. 4, p. 36, 2017.
  • [12] M. Harmon and S. Harmon, “Reinforcement Learning: A Tutorial,” 1997. [Online]. Available: https://apps.dtic.mil/sti/pdfs/ADA323194.pdf
  • [13] S. A. Ğ. Reyhan and Z. H. Tuğcu, “Akıllı Şebeke Uygulamalarında Derin Öğrenme Tekniklerinin Kullanımına İlişkin Kısa Bir İnceleme,” EMO Bilim. Dergi, vol. 13, no. 1, pp. 41–61, 2022, [Online]. Available: https://dergipark.org.tr/en/pub/emobd/issue/75563/1196333
  • [14] I. Tunc and M. T. Soylemez, “Fuzzy logic and deep Q learning based control for traffic lights,” Alexandria Eng. J., vol. 67, pp. 343–359, 2023, doi: 10.1016/j.aej.2022.12.028.
  • [15] A. Leite, M. Candadai, and E. J. Izquierdo, “Reinforcement learning beyond the Bellman equation: Exploring critic objectives using evolution,” in Artificial Life Conference Proceedings 32, 2020, pp. 441–449.
  • [16] I. C. Dolcetta and M. Falcone, “Discrete dynamic programming and viscosity solutions of the Bellman equation,” in Annales de l’Institut Henri Poincaré C, Analyse non linéaire, 1989, pp. 161–183.
  • [17] C. Boutilier, R. Reiter, and B. Price, “Symbolic dynamic programming for first-order MDPs,” in IJCAI International Joint Conference on Artificial Intelligence, 2001, vol. 1, pp. 690–697. doi: 10.1609/aaai.v24i1.7747.
  • [18] W. B. Powell, Approximate Dynamic Programming: Solving the curses of dimensionality. John Wiley, 2007.
  • [19] D. Michie, “Experiments on the mechanization of game-learning Part I. Characterization of the model and its parameters,” Comput. J., vol. 6, no. 3, pp. 232–236, 1963.
  • [20] M. L. Minsky, Theory of neural-analog reinforcement systems and its application to the brain-model problem. Princeton University, 1954.
  • [21] J. Karlsson, “Learning to solve multiple goals,” University of Rochester, 1997.
  • [22] A. L. Samuel, “Some studies in machine learning using the game of checkers. II—Recent progress,” Annu. Rev. Autom. Program., vol. 6, pp. 1–36, 1969.
  • [23] C. J. C. H. Watkins, “Learning from delayed rewards,” King’s College UK, 1989.
  • [24] C. J. Watkins and P. Dayan, “Q-learning,” Mach. Learn., vol. 8, pp. 279–292, 1992.
  • [25] G. Tesauro, “Neurogammon: A neural-network backgammon program,” in IJCNN International Joint Conference on Neural Networks, IEEE, 1990, pp. 33–39. doi: 10.1109/IJCNN.1990.137821.
  • [26] G. Tesauro, “Practical issues in temporal difference learning,” Adv. neural Inf. Process. Syst., vol. 4, 1991.
  • [27] C. Ozan, “İyileştirilmiş pekiştirmeli öğrenme yöntemi ve dinamik yükleme ile kentiçi ulaşım ağlarının tasarımı,” Pamukkale Üniversitesi, 2012.
  • [28] S. Çalışır and M. K. Pehlivanoğlu, “Model-free reinforcement learning algorithms: A survey,” in 27th Signal Processing and Communications Applications Conference (SIU), 2019, pp. 1–4.
  • [29] A. O. Köroğlu, A. E. Edem, S. N. Akmeşe, Ö. Elmas, I. Tunc, and M. T. Soylemez, “Agent-Based Route Planning with Deep Q Learning,” in 13th International Conference on Electrical and Electronics Engineering (ELECO), 2021, pp. 403–407.
  • [30] Y. Li, “Deep reinforcement learning: An overview,” arXiv Prepr. arXiv1701.07274, 2017, doi: https://doi.org/10.48550/arXiv.1701.07274.
  • [31] A. Bir and M. Kacar, Pioneers of Automatic Control Systems. 2006.
  • [32] M. E. Çimen, Z. Garip, M. Emekl, and A. F. Boz, “Fuzzy Logic PID Design using Genetic Algorithm under Overshoot Constrained Conditions for Heat Exchanger Control,” J. Inst. Sci. Technol., vol. 12, no. 1, pp. 164–181, 2022, doi: 10.21597/jist.980726.
  • [33] M. E. Çimen and Y. Yalçın, “A novel hybrid firefly–whale optimization algorithm and its application to optimization of MPC parameters,” Soft Comput., vol. 26, no. 4, pp. 1845–1872, 2022, doi: 10.1007/s00500-021-06441-6.
  • [34] I. Mizumoto, D. Ikeda, T. Hirahata, and Z. Iwai, “Design of discrete time adaptive PID control systems with parallel feedforward compensator,” Control Eng. Pract., vol. 18, no. 2, 2010, doi: 10.1016/j.conengprac.2009.09.003.
  • [35] D. Taler, T. Sobota, M. Jaremkiewicz, and J. Taler, “Control of the temperature in the hot liquid tank by using a digital PID controller considering the random errors of the thermometer indications,” Energy, 2022, doi: https://doi.org/10.1016/j.energy.2021.122771.
  • [36] R. E. Samin, L. M. Jie, and M. A. Zawawi, “PID implementation of heating tank in mini automation plant using Programmable Logic Controller (PLC),” in International Conference on Electrical, Control and Computer Engineering 2011 (InECCE), 2011.
  • [37] G. Yüksek, A. N. Mete, and A. Alkaya, “PID parametrelerinin LQR ve GA tabanlı optimizasyonu: sıvı seviye kontrol uygulaması,” Politek. Derg., vol. 23, no. 4, pp. 1111–1119, 2020, doi: 10.2339/politeknik.603344.
  • [38] N. A. Selamat, F. S. Daud, H. I. Jaafar, and N. H. Shamsudin, “Comparison of LQR and PID Controller Tuning Using PSO for Coupled Tank System,” in 11th International Colloquium on Signal Processing & Its Applications (CSPA), 2015.
  • [39] D. Sastry, K. Mohan, M. Naidu, and N. M. Rao, “An Implementation of Different Non Linear PID Controllers on a Single Tank level Control using Matlab,” Int. J. Comput. Appl., vol. 54, no. 1, 2012.
  • [40] L. Wei, F. Fang, and Y. S., “Adaptive backstepping-based composite nonlinear feedback water level control for the nuclear U-tube steam generator,” IEEE Trans. Control Syst. Technol., vol. 22, no. 1, 2013, doi: 10.1109/TCST.2013.2250504.
  • [41] Q. Xiao, D. Zou, and P. Wei, “Fuzzy Adaptive PID Control Tank Level,” in International Conference on Multimedia Communications, 2010. doi: 10.1109/MEDIACOM.2010.10.
  • [42] C. Esakkiappan, “Soft Computing Based Tuning of PI Controller With Cuckoo Search Optimization For Level Control of Hopper Tank System,” Res. Sq., 2021, doi: https://doi.org/10.21203/rs.3.rs-920228/v1.
  • [43] N. N. Son, “Level Control of Quadruple Tank System Based on Adaptive Inverse Evolutionary Neural Controller,” Int. J. Control. Autom. Syst., vol. 18, no. 9, 2020, doi: 10.1007/s12555-019-0504-8.
  • [44] C. Urrea and F. Páez, “Design and Comparison of Strategies for Level Control in a Nonlinear Tank,” Processes, vol. 9, 2021, doi: 10.3390/pr9050735.
  • [45] R. S. Sutton and G. A. Barto, Reinforcement Learning: An Introduction. Cambridge: MIT Press, 1998.
  • [46] G. A. Rummery and M. Niranjan, On-line Q-learning Using Connectionist Systems. Cambridge, UK: University of Cambridge, 1994.
  • [47] C. J. Watkins and P. Dayan, “Q-Learning,” Mach. Learn., vol. 8, pp. 279–292, 1992, doi: 10.1007/BF00992698.
  • [48] H. Wang, M. Emmerich, and A. Plaat, “Monte Carlo Q-learning for General Game Playing,” arXiv Prepr. arXiv:1802.05944, 2018, doi: 10.48550/arXiv.1802.05944.
  • [49] L. Keviczky, R. Bars, J. Hetthéssy, and C. Bányász, Control Engineering. Springer, 2019. doi: 10.1007/978-981-10-8297-9.
  • [50] J. Cimbala and Y. Cengel, Fluid mechanics: fundamentals and applications. McGraw-Hill Higher Education, 2006.

Details

Primary Language English
Subjects Artificial Intelligence
Section Articles
Authors

Murat Erhan Çimen 0000-0002-1793-485X

Zeynep Garip 0000-0002-0420-8541

Project Number -
Early View Date May 31, 2024
Publication Date May 31, 2024
Acceptance Date August 19, 2023
Published in Issue Year 2024

Cite

APA Çimen, M. E., & Garip, Z. (2024). Controlling a Single Tank Liquid Level System with Classical Control Methods and Reinforcement Learning Methods. Kocaeli Journal of Science and Engineering, 7(1), 30-41. https://doi.org/10.34088/kojose.1278657