Research Article

Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control

Year 2024, Volume 12, Issue 4, pp. 2370–2386, 23.10.2024
https://doi.org/10.29130/dubited.1535271

Abstract

This paper presents a novel approach to improving the performance and interpretability of Iterative Learning Control (ILC) systems through the integration of Explainable Artificial Intelligence (XAI) techniques. ILC is a powerful method used across various domains, including robotics, process control, and traffic management, where it iteratively refines control inputs based on past performance to minimize errors in system output. However, traditional ILC methods often operate as "black boxes," making it difficult for users to understand the decision-making process. To address this challenge, we incorporate XAI, specifically SHapley Additive exPlanations (SHAP), into the ILC framework to provide transparent and interpretable insights into the algorithm's behavior. The study begins by detailing the evolution of ILC, highlighting key advancements such as predictive optimal control and adaptive schemes, and then transitions into the methodology for integrating XAI into ILC. The integrated system was evaluated through extensive simulations, focusing on robotic arm trajectory tracking and traffic flow management scenarios. Results indicate that the XAI-enhanced ILC not only achieved rapid convergence and high control accuracy but also maintained robustness in the face of external disturbances. SHAP analyses revealed that parameters such as the proportional gain (Kp) and derivative gain (Kd) were critical in driving system performance, with detailed visualizations providing actionable insights for system refinement. A crucial metric for control precision was the root mean square error (RMSE), which was reduced to as low as 0.02 radians in the robotic arm case, indicating extremely precise tracking of the intended route. Similarly, the ILC algorithm effectively maintained the ideal traffic density within the predetermined bounds in the traffic management scenario, resulting in a 40% reduction in congestion compared to baseline control measures. 
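The PD-type update law summarized above can be sketched in a few lines. This is a minimal illustration under assumed dynamics, not the paper's simulation setup: the first-order plant, the gain values, and the sinusoidal reference are placeholders chosen only so the iteration contracts and the RMSE trend is visible.

```python
import math

# Hypothetical first-order plant (illustrative, not the paper's model):
# y[t+1] = A*y[t] + B*u[t]
A, B = 0.3, 1.0

def simulate(u, y0=0.0):
    """Roll the plant forward over one trial and return its outputs."""
    y, out = y0, []
    for ut in u:
        y = A * y + B * ut
        out.append(y)
    return out

def rmse(errors):
    """Root mean square error, the precision metric used in the abstract."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def pd_ilc(reference, iterations=30, kp=0.7, kd=0.2):
    """PD-type ILC: u_{k+1}(t) = u_k(t) + Kp*e_k(t) + Kd*(e_k(t+1) - e_k(t))."""
    n = len(reference)
    u = [0.0] * n
    history = []
    for _ in range(iterations):
        e = [r - y for r, y in zip(reference, simulate(u))]
        history.append(rmse(e))
        # Update the whole input trajectory from this trial's error signal.
        u = [u[t] + kp * e[t]
             + kd * ((e[t + 1] - e[t]) if t + 1 < n else 0.0)
             for t in range(n)]
    return u, history

# Track one period of a sinusoidal reference; RMSE shrinks trial by trial.
ref = [math.sin(2 * math.pi * t / 40) for t in range(40)]
_, hist = pd_ilc(ref)
```

With these assumed gains the trial-to-trial error map is a contraction, so the RMSE history decays toward zero, mirroring the qualitative convergence behavior the abstract reports.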
The resilience of the ILC algorithm was also examined by introducing changes to the system model, external disturbances, and sensor noise. The algorithm demonstrated a high degree of stability and accuracy in the face of these disruptions. For instance, in the robotic arm case, adding noise to the sensor readings had a negligible effect on the algorithm's performance, increasing the RMSE by less than 5%. This integration of XAI into ILC addresses a significant gap in control system design by offering both high performance and transparency, particularly in safety-critical applications. The findings suggest that future research could further enhance this approach by exploring additional XAI techniques and applying the integrated system to more complex, real-world scenarios.
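The SHAP attribution idea underlying the gain analysis can likewise be illustrated with an exact Shapley computation over a tiny feature set. The `tracking_error` surrogate and the gain values below are hypothetical stand-ins for the paper's simulated performance metric; only the Shapley weighting scheme itself is standard.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f over a small feature vector x.
    v(S) evaluates f with features in S at their actual values and the
    remaining features held at the baseline."""
    n = len(x)

    def v(S):
        z = list(baseline)
        for i in S:
            z[i] = x[i]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n-|S|-1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Hypothetical surrogate: tracking error as a function of (Kp, Kd).
def tracking_error(g):
    kp, kd = g
    return (1.0 - kp) ** 2 + 0.5 * (0.3 - kd) ** 2 + 0.2 * kp * kd

x = [0.8, 0.3]       # illustrative tuned gains
base = [0.0, 0.0]    # reference point (gains switched off)
phi = shapley_values(tracking_error, x, base)
```

By the efficiency property, the attributions sum exactly to `tracking_error(x) - tracking_error(base)`; on this toy surrogate the Kp attribution dominates, echoing the abstract's finding that the proportional gain is the primary driver of performance.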

References

  • [1] S. Arimoto, “A brief history of iterative learning control,” Iterative learning control: Analysis, design, integration and applications, pp. 3–7, 1998.
  • [2] S.-R. Oh, Z. Bien, and I. H. Suh, “An iterative learning control method with application to robot manipulators,” IEEE Journal on Robotics and Automation, vol. 4, no. 5, pp. 508–514, 1988.
  • [3] H.-S. Lee and Z. Bien, “Study on robustness of iterative learning control with non-zero initial error,” Int J Control, vol. 64, no. 3, pp. 345–359, 1996.
  • [4] N. Amann, D. H. Owens, and E. Rogers, “Predictive optimal iterative learning control,” Int J Control, vol. 69, no. 2, pp. 203–226, 1998.
  • [5] K. Zhang, P. Xu, and J. Zhang, “Explainable AI in deep reinforcement learning models: A SHAP method applied in power system emergency control,” in 2020 IEEE 4th Conference on Energy Internet and Energy System Integration (EI2), IEEE, 2020, pp. 711–716.
  • [6] Y. Xie, N. Pongsakornsathien, A. Gardi, and R. Sabatini, “Explanation of machine-learning solutions in air-traffic management,” Aerospace, vol. 8, no. 8, p. 224, 2021.
  • [7] R.-K. Sheu and M. S. Pardeshi, “A survey on medical explainable AI (XAI): recent progress, explainability approach, human interaction and scoring system,” Sensors, vol. 22, no. 20, p. 8068, 2022.
  • [8] P. Kang, J. Li, S. Jiang, and P. B. Shull, “Reduce system redundancy and optimize sensor disposition for EMG–IMU multimodal fusion human–machine interfaces with XAI,” IEEE Trans Instrum Meas, vol. 72, pp. 1–9, 2022.
  • [9] A. Dobrovolskis, E. Kazanavičius, and L. Kižauskienė, “Building XAI-Based Agents for IoT Systems,” Applied Sciences, vol. 13, no. 6, p. 4040, 2023.
  • [10] W. Maxwell and B. Dumas, “Meaningful XAI based on user-centric design methodology,” arXiv preprint arXiv:2308.13228, 2023.
  • [11] D. Doran, S. Schulz, and T. R. Besold, “What does explainable AI really mean? A new conceptualization of perspectives,” arXiv preprint arXiv:1710.00794, 2017.
  • [12] T. Miller, P. Howe, and L. Sonenberg, “Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences,” arXiv preprint arXiv:1712.00547, 2017.
  • [13] A. Krajna, M. Kovac, M. Brcic, and A. Šarčević, “Explainable artificial intelligence: An updated perspective,” in 2022 45th Jubilee International Convention on Information, Communication and Electronic Technology (MIPRO), IEEE, 2022, pp. 859–864.
  • [14] L. Bacco, A. Cimino, F. Dell’Orletta, and M. Merone, “Extractive Summarization for Explainable Sentiment Analysis using Transformers,” in DeepOntoNLP/X-SENTIMENT@ESWC, 2021, pp. 62–73.
  • [15] N. Scarpato et al., “Evaluating Explainable Machine Learning Models for Clinicians,” Cognit Comput, pp. 1–11, 2024.
  • [16] P. Chotikunnan, B. Panomruttanarug, and P. Manoonpong, “Dual design iterative learning controller for robotic manipulator application,” Journal of Control Engineering and Applied Informatics, vol. 24, no. 3, pp. 76–85, 2022.
  • [17] R. Jena et al., “Earthquake spatial probability and hazard estimation using various explainable AI (XAI) models at the Arabian Peninsula,” Remote Sens Appl, vol. 31, p. 101004, 2023.
  • [18] X. Tang et al., “Explainable multi-task learning for multi-modality biological data analysis,” Nat Commun, vol. 14, no. 1, p. 2546, 2023.
  • [19] K. Prag, M. Woolway, and T. Celik, “Toward data-driven optimal control: A systematic review of the landscape,” IEEE Access, vol. 10, pp. 32190–32212, 2022.
  • [20] M. A. Hessami, E. Bowles, J. N. Popp, and A. T. Ford, “Indigenizing the North American model of wildlife conservation,” Facets, vol. 6, no. 1, pp. 1285–1306, 2021.
  • [21] Y. Li, Y. Chen, and H. Ahn, “Fractional‐order iterative learning control for fractional‐order linear systems,” Asian J Control, vol. 13, no. 1, pp. 54–63, 2011.
  • [22] K. Patan, Robust and Fault-Tolerant Control. Springer, 2019.
  • [23] K. Hamamoto and T. Sugie, “An iterative learning control algorithm within prescribed input–output subspace,” Automatica, vol. 37, no. 11, pp. 1803–1809, 2001.
  • [24] C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nat Mach Intell, vol. 1, no. 5, pp. 206–215, 2019.
  • [25] N. Amann, D. H. Owens, and E. Rogers, “Predictive optimal iterative learning control,” Int J Control, vol. 69, no. 2, pp. 203–226, 1998.
  • [26] Z. Hou, J.-X. Xu, and H. Zhong, “Freeway traffic control using iterative learning control-based ramp metering and speed signaling,” IEEE Trans Veh Technol, vol. 56, no. 2, pp. 466–477, 2007.
  • [27] N. Amann, D. H. Owens, and E. Rogers, “Predictive optimal iterative learning control,” Int J Control, vol. 69, no. 2, pp. 203–226, 1998.
  • [28] K. Hamamoto and T. Sugie, “An iterative learning control algorithm within prescribed input–output subspace,” Automatica, vol. 37, no. 11, pp. 1803–1809, 2001.
  • [29] Z. Hou, J.-X. Xu, and H. Zhong, “Freeway traffic control using iterative learning control-based ramp metering and speed signaling,” IEEE Trans Veh Technol, vol. 56, no. 2, pp. 466–477, 2007.
  • [30] A. Tayebi, “Adaptive iterative learning control for robot manipulators,” Automatica, vol. 40, no. 7, pp. 1195–1203, 2004.
  • [31] T. Miller, P. Howe, and L. Sonenberg, “Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences,” arXiv preprint arXiv:1712.00547, 2017.
  • [32] B. Ding and Y. Yang, Model predictive control. John Wiley & Sons, 2024.
  • [33] S. Kundu, M. Singh, and A. K. Giri, “Adaptive control approach-based isolated microgrid system with alleviating power quality problems,” Electric Power Components and Systems, vol. 52, no. 7, pp. 1219–1234, 2024.
  • [34] C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nat Mach Intell, vol. 1, no. 5, pp. 206–215, 2019.
There are 34 references in total.

Details

Primary Language: English
Subjects: Machine Learning Algorithms, Control Engineering, Mechatronics and Robotics (Other)
Section: Articles
Authors

Mustafa Çağrı Kutlu 0000-0003-1663-2523

Mohammed Mansour 0000-0001-9672-0106

Publication Date: 23 October 2024
Submission Date: 30 August 2024
Acceptance Date: 23 September 2024
Published in Issue: Year 2024

How to Cite

APA Kutlu, M. Ç., & Mansour, M. (2024). Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control. Duzce University Journal of Science and Technology, 12(4), 2370-2386. https://doi.org/10.29130/dubited.1535271
AMA Kutlu MÇ, Mansour M. Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control. DÜBİTED. October 2024;12(4):2370-2386. doi:10.29130/dubited.1535271
Chicago Kutlu, Mustafa Çağrı, and Mohammed Mansour. “Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control”. Duzce University Journal of Science and Technology 12, no. 4 (October 2024): 2370-86. https://doi.org/10.29130/dubited.1535271.
EndNote Kutlu MÇ, Mansour M (01 October 2024) Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control. Duzce University Journal of Science and Technology 12 4 2370–2386.
IEEE M. Ç. Kutlu and M. Mansour, “Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control”, DÜBİTED, vol. 12, no. 4, pp. 2370–2386, 2024, doi: 10.29130/dubited.1535271.
ISNAD Kutlu, Mustafa Çağrı - Mansour, Mohammed. “Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control”. Duzce University Journal of Science and Technology 12/4 (October 2024), 2370-2386. https://doi.org/10.29130/dubited.1535271.
JAMA Kutlu MÇ, Mansour M. Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control. DÜBİTED. 2024;12:2370–2386.
MLA Kutlu, Mustafa Çağrı and Mohammed Mansour. “Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control”. Duzce University Journal of Science and Technology, vol. 12, no. 4, 2024, pp. 2370-86, doi:10.29130/dubited.1535271.
Vancouver Kutlu MÇ, Mansour M. Towards Transparent Control Systems: The Role of Explainable AI in Iterative Learning Control. DÜBİTED. 2024;12(4):2370-86.