Research Article

HAZARD AND OBSTACLE DETECTION IN SMART ROBOT SWEEPERS WITH ARTIFICIAL INTELLIGENCE-BASED IMAGE PROCESSING

Year 2024, Volume: 8 Issue: 2, 154 - 163, 31.12.2024
https://doi.org/10.62301/usmtd.1568320

Abstract

With the rapid development of technology, artificial intelligence has come into frequent use across many interdisciplinary fields, including health, education, and engineering. One important area of application is mechatronics engineering, an interdisciplinary field that combines mechanical, electrical-electronic, and computer systems and in which robotic artificial intelligence algorithms are widely employed. In this study, artificial intelligence-based image processing techniques were applied to the obstacle and hazard detection function of robot vacuum cleaners. Because traditional sensor-based systems suffer from high costs and limited detection accuracy, a camera-based, artificial intelligence-supported alternative was developed so that robot vacuum cleaners can detect inanimate objects and hazardous areas in the home environment and clean safely. For this purpose, a data set consisting of chair, armchair, toy, and slipper objects was created and trained with the VGG-19, AlexNet, and MobileNet V2 deep learning architectures. Among the three architectures, the MobileNet V2 model proved the most successful, with an accuracy rate of 97.87%. Compared with sensor-based systems, the deep learning approach implemented in the study offers a more advantageous solution in terms of cost effectiveness and environmental awareness.
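To make the described approach concrete, the sketch below shows one common way such a four-class obstacle classifier could be fine-tuned with an ImageNet-pretrained MobileNet V2 backbone in TensorFlow/Keras. This is not the authors' code: the directory layout, hyperparameters, and training schedule are illustrative assumptions; only the four class names and the choice of MobileNet V2 come from the abstract.

    # Minimal transfer-learning sketch, assuming a folder of labeled images
    # with one subdirectory per class (paths and hyperparameters hypothetical).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    IMG_SIZE = (224, 224)  # MobileNetV2's default input resolution
    # Classes named in the abstract; image_dataset_from_directory infers
    # labels alphabetically from subfolder names.
    CLASSES = ["armchair", "chair", "slipper", "toy"]

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset/train", image_size=IMG_SIZE, batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset/val", image_size=IMG_SIZE, batch_size=32)

    # ImageNet-pretrained backbone with the classification head removed.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the backbone; train only the new head

    model = models.Sequential([
        layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(len(CLASSES), activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10)

Freezing the pretrained backbone and training only a small classification head is the standard low-cost recipe for small domestic datasets like this one; the paper's reported 97.87% validation accuracy would be read off the `accuracy` metric above, though the authors' exact preprocessing and training setup may differ.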

References

  • H. He, J. Gray, A. Cangelosi, Q. Meng, T.M. McGinnity, J. Mehnen, The challenges and opportunities of artificial intelligence for trustworthy robots and autonomous systems, in: Proceedings of the 2020 3rd International Conference on Intelligent Robotic and Control Engineering (IRCE), IEEE, August 2020, pp. 68–74.
  • A. Pandey, A. Kaushik, A.K. Jha, G. Kapse, A technological survey on autonomous home cleaning robots, Int. J. Sci. Res. Publ. 4 (4) (2014) 1–7.
  • N. Lopac, I. Jurdana, A. Brnelić, T. Krljan, Application of laser systems for detection and ranging in the modern road transportation and maritime sector, Sensors 22 (16) (2022) 5946.
  • Y.H. Lee, T.S. Leung, G. Medioni, Real-time staircase detection from a wearable stereo system, in: Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), IEEE, November 2012, pp. 3770–3773.
  • G.A. Affonso, A.L. De Menezes, R.B. Nunes, D. Almonfrey, Using artificial intelligence for anomaly detection using security cameras, in: Proceedings of the 2021 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), IEEE, October 2021, pp. 1–5.
  • J. Vélez, W. McShea, H. Shamon, P.J. Castiblanco-Camacho, M.A. Tabak, C. Chalmers, et al., An evaluation of platforms for processing camera-trap data using artificial intelligence, Methods Ecol. Evol. 14 (2) (2023) 459–477.
  • X. Gao, J. Liu, W. Chen, Deep learning-based obstacle detection and classification for autonomous vacuum robots, J. Artif. Intell. Res. 75 (2022) 45–58, https://doi.org/10.1016/j.jair.2022.06.001.
  • Y. Zhu, T. Wang, X. Li, Real-time object recognition for robotic vacuum cleaners using convolutional neural networks, Robot. Auton. Syst. 125 (2023) 102420, https://doi.org/10.1016/j.robot.2023.102420.
  • K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, in: Proceedings of the International Conference on Learning Representations, 2015.
  • K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  • C. Szegedy, W. Liu, Y. Jia, et al., Going deeper with convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
  • S. Han, J. Pool, J. Tran, W. Dally, Learning both weights and connections for efficient neural networks, Adv. Neural Inf. Process. Syst. 28 (2015) 1135–1143.
  • J. Yosinski, J. Clune, Y. Bengio, H. Lipson, How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst. 27 (2014) 3320–3328.
  • G. Litjens, T. Kooi, B.E. Bejnordi, et al., A survey on deep learning in medical image analysis, Med. Image Anal. 42 (2017) 60–88.
  • M. Tan, Q.V. Le, EfficientNet: rethinking model scaling for convolutional neural networks, in: Proceedings of the 36th International Conference on Machine Learning, 2019, pp. 6105–6114.
  • A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, Adv. Neural Inf. Process. Syst. 25 (2012) 1097–1105.
  • Y. Bengio, Learning deep architectures for AI, Found. Trends Mach. Learn. 2 (1) (2009) 1–127.
  • M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in: Proceedings of the European Conference on Computer Vision, 2014, pp. 818–833.
  • V. Nair, G.E. Hinton, Rectified linear units improve restricted Boltzmann machines, in: Proceedings of the 27th International Conference on Machine Learning, 2010, pp. 807–814.
  • N. Srivastava, G. Hinton, A. Krizhevsky, et al., Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res. 15 (1) (2014) 1929–1958.
  • R. Raina, A. Madhavan, A.Y. Ng, Large-scale deep unsupervised learning using graphics processors, in: Proceedings of the International Conference on Machine Learning, 2009, pp. 873–881.
  • Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436–444.
  • R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587.
  • M. Oquab, L. Bottou, I. Laptev, J. Sivic, Learning and transferring mid-level image representations using convolutional neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1717–1724.
  • M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.-C. Chen, MobileNetV2: inverted residuals and linear bottlenecks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.
  • A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, et al., MobileNets: efficient convolutional neural networks for mobile vision applications, arXiv preprint arXiv:1704.04861, 2017.
  • F.N. Iandola, S. Han, M.W. Moskewicz, et al., SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, arXiv preprint arXiv:1602.07360, 2016.
  • Z. Huang, X. Wang, L. Wu, Lightweight MobileNetV2 based on depthwise separable convolution for mobile device, Int. J. Comput. Intell. Syst. 11 (1) (2018) 1100–1108.
  • Y. Chen, T. Yang, X. Zhang, et al., DetNAS: backbone search for object detection, Adv. Neural Inf. Process. Syst. 31 (2018) 6638–6648.
  • Z.-Q. Zhao, P. Zheng, S.-T. Xu, X. Wu, Object detection with deep learning: a review, IEEE Trans. Neural Netw. Learn. Syst. 30 (11) (2019) 3212–3232.
  • D.M.W. Powers, Evaluation: from precision, recall and F-measure to ROC, informedness, markedness & correlation, J. Mach. Learn. Technol. 2 (1) (2011) 37–63.
  • R. Kohavi, A study of cross-validation and bootstrap for accuracy estimation and model selection, in: Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1995, pp. 1137–1143.
  • M. Sokolova, G. Lapalme, A systematic analysis of performance measures for classification tasks, Inf. Process. Manag. 45 (4) (2009) 427–435.
  • J. Davis, M. Goadrich, The relationship between precision-recall and ROC curves, in: Proceedings of the 23rd International Conference on Machine Learning, 2006, pp. 233–240.
  • C.J. Van Rijsbergen, Information Retrieval, 2nd ed., Butterworth-Heinemann, 1979.
  • C.D. Manning, P. Raghavan, H. Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008.
  • N. Chinchor, MUC-4 evaluation metrics, in: Proceedings of the 4th Message Understanding Conference, 1992, pp. 22–29.
  • J. Opitz, S. Burst, Macro F1 and macro F1, arXiv preprint arXiv:1911.03347, 2019.
  • H. He, E.A. Garcia, Learning from imbalanced data, IEEE Trans. Knowl. Data Eng. 21 (9) (2009) 1263–1284.
  • N. Japkowicz, S. Stephen, The class imbalance problem: a systematic study, Intell. Data Anal. 6 (5) (2002) 429–442.


Thanks

This study was presented as an abstract at the 6th International Conference on Artificial Intelligence and Applied Mathematics in Engineering (ICAIAME 2024).


Details

Primary Language Turkish
Subjects Software Engineering (Other)
Journal Section Research Articles
Authors

Mustafa Melikşah Özmen 0000-0003-3585-0518

Muzaffer Eylence 0000-0001-7299-8525

Bekir Aksoy 0000-0001-8052-9411

Publication Date December 31, 2024
Submission Date October 16, 2024
Acceptance Date December 5, 2024
Published in Issue Year 2024 Volume: 8 Issue: 2

Cite

APA Özmen, M. M., Eylence, M., & Aksoy, B. (2024). YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA. Uluslararası Sürdürülebilir Mühendislik Ve Teknoloji Dergisi, 8(2), 154-163. https://doi.org/10.62301/usmtd.1568320
AMA Özmen MM, Eylence M, Aksoy B. YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA. Uluslararası Sürdürülebilir Mühendislik ve Teknoloji Dergisi. December 2024;8(2):154-163. doi:10.62301/usmtd.1568320
Chicago Özmen, Mustafa Melikşah, Muzaffer Eylence, and Bekir Aksoy. “YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA”. Uluslararası Sürdürülebilir Mühendislik Ve Teknoloji Dergisi 8, no. 2 (December 2024): 154-63. https://doi.org/10.62301/usmtd.1568320.
EndNote Özmen MM, Eylence M, Aksoy B (December 1, 2024) YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA. Uluslararası Sürdürülebilir Mühendislik ve Teknoloji Dergisi 8 2 154–163.
IEEE M. M. Özmen, M. Eylence, and B. Aksoy, “YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA”, Uluslararası Sürdürülebilir Mühendislik ve Teknoloji Dergisi, vol. 8, no. 2, pp. 154–163, 2024, doi: 10.62301/usmtd.1568320.
ISNAD Özmen, Mustafa Melikşah et al. “YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA”. Uluslararası Sürdürülebilir Mühendislik ve Teknoloji Dergisi 8/2 (December 2024), 154-163. https://doi.org/10.62301/usmtd.1568320.
JAMA Özmen MM, Eylence M, Aksoy B. YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA. Uluslararası Sürdürülebilir Mühendislik ve Teknoloji Dergisi. 2024;8:154–163.
MLA Özmen, Mustafa Melikşah et al. “YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA”. Uluslararası Sürdürülebilir Mühendislik Ve Teknoloji Dergisi, vol. 8, no. 2, 2024, pp. 154-63, doi:10.62301/usmtd.1568320.
Vancouver Özmen MM, Eylence M, Aksoy B. YAPAY ZEKA TABANLI GÖRÜNTÜ İŞLEME İLE AKILLI ROBOT SÜPÜRGELERDE TEHLİKE VE ENGEL ALGILAMA. Uluslararası Sürdürülebilir Mühendislik ve Teknoloji Dergisi. 2024;8(2):154-63.