Research Article

Control of Emergency Vehicles with Deep Q-Learning

Year 2024, Volume: 1 Issue: 1, 33 - 38, 20.07.2024

Abstract

Traffic congestion has become a pressing concern that affects a broad spectrum of society, and for emergency vehicles, particularly ambulances, the problem is even more critical. This study addresses a research effort aimed at mitigating traffic risks in emergency situations. The primary objective is to employ Deep Q-Learning to ensure that ambulances transport patients to hospitals along the quickest and most suitable routes. Factors such as urgency levels, traffic density, and the distances between patients and ambulances are modeled as state vectors. The Deep Q-Learning algorithm uses these vectors to select the most effective actions, determining the most efficient routes for ambulances to transport patients. The reward function is formulated as a penalty function that prioritizes patients according to their waiting times. The study evaluates the learning outcomes of the agent built with Deep Q-Learning, demonstrating that the learning process completes successfully. This method represents a significant step toward optimizing the intra-city mobility of emergency vehicles.
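
The setup the abstract describes follows the standard Deep Q-Learning pattern: a state vector encoding urgency level, traffic density, and patient–ambulance distance; an action set of candidate routes; and a waiting-time penalty in place of a positive reward. The sketch below illustrates that pattern in PyTorch. The state layout, action count, network sizes, and penalty form are illustrative assumptions, not the authors' implementation.

```python
# Minimal Deep Q-Learning sketch matching the abstract's description.
# All dimensions, names, and hyperparameters are assumptions.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 3   # assumed layout: [urgency level, traffic density, distance to patient]
N_ACTIONS = 4   # assumed: candidate route choices at each decision point

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per candidate route."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def penalty(waiting_times):
    """Penalty-style reward: longer patient waits yield a more
    negative signal, so the agent learns to minimize waiting."""
    return -sum(waiting_times)

q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # stores (state, action, reward, next_state, done)
GAMMA, EPSILON = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy choice over the Q-network's route values."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32):
    """One standard DQN update from a replay-buffer minibatch."""
    if len(replay) < batch_size:
        return
    s, a, r, s2, done = zip(*random.sample(replay, batch_size))
    s = torch.as_tensor(s, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    r = torch.as_tensor(r, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64)
    done = torch.as_tensor(done, dtype=torch.float32)
    # Q(s, a) for the actions actually taken
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bellman target from the slowly-updated target network
        target = r + GAMMA * target_net(s2).max(1).values * (1.0 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the signal is a penalty, maximizing expected return is equivalent to minimizing cumulative patient waiting time, which matches the prioritization by waiting time described in the abstract.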

References

  • K. Shingate, K. Jagdale, and Y. Dias, “Adaptive traffic control system using reinforcement learning,” International Journal of Engineering Research and Technology, vol. 9, 2020.
  • İ. Tunç, Ö. Elmas, A. Edem, A. Köroğlu, S. Akmeşe, and M. Söylemez, “Derin Q öğrenme tekniği ile trafik ışık sinyalizasyonu [Traffic light signalization with the deep Q-learning technique],” 2023.
  • M. Vardhana, N. Arunkumar, E. Abdulhay, et al., “IoT based real time traffic control using cloud computing,” Cluster Computing, vol. 22, no. Suppl 1, pp. 2495–2504, 2019.
  • X. Yin, G. Wu, J. Wei, Y. Shen, H. Qi, and B. Yin, “Deep learning on traffic prediction: Methods, analysis, and future directions,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 6, pp. 4927–4943, 2022. DOI: 10.1109/TITS.2021.3054840.
  • M. Abdoos, N. Mozayani, and A. Bazzan, “Traffic light control in non-stationary environments based on multi agent Q-learning,” in IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, Oct. 2011.
  • D. Fan and P. Shi, “Improvement of Dijkstra’s algorithm and its application in route planning,” in 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery, vol. 4, 2010, pp. 1901–1904.
  • G. Han, Q. Zheng, L. Liao, P. Tang, Z. Li, and Y. Zhu, “Deep reinforcement learning for intersection signal control considering pedestrian behavior,” Electronics (Basel), vol. 11, no. 21, p. 3519, 2022.
  • S. A. El-Tantawy, H. Abdelgawad, and R. A. Ramadan, “Cooperative deep Q-learning for traffic signal control,” Transportation Research Part C: Emerging Technologies, vol. 71, pp. 1–16, 2016.
  • M. Behrisch, L. Bieker, J. Erdmann, D. Krajzewicz, and C. Rössel, “SUMO – Simulation of Urban Mobility: An overview,” International Journal on Advances in Systems and Measurements, vol. 4, no. 3&4, pp. 308–316, 2011.
  • Z. Shen, K. Yang, W. Du, X. Zhao, and J. Zou, “DeepAPP: A deep reinforcement learning framework for mobile application usage prediction,” Nov. 2019, pp. 153–165, ISBN: 978-1-4503-6950-3. DOI: 10.1145/3356250.3360038.
  • T. Pan, “Traffic light control with reinforcement learning,” pp. 4–5, Aug. 2023.
There are 11 citations in total.

Details

Primary Language English
Subjects Autonomous Agents and Multiagent Systems
Journal Section Research Articles
Authors

Hasan Yıldız

Furkan Güney

İlhan Tunç (ORCID: 0000-0003-2239-0954)

Mehmet Turan Söylemez (ORCID: 0000-0002-7600-0707)

Publication Date July 20, 2024
Submission Date January 22, 2024
Acceptance Date May 1, 2024
Published in Issue Year 2024 Volume: 1 Issue: 1

Cite

IEEE H. Yıldız, F. Güney, İ. Tunç, and M. T. Söylemez, “Control of Emergency Vehicles with Deep Q-Learning”, ITU Computer Science AI and Robotics, vol. 1, no. 1, pp. 33–38, 2024.

ITU Computer Science AI and Robotics