Research Article

Turkish Traffic Sign Recognition: Comparison of Training Step Numbers and Lighting Conditions

Year 2021, Issue: 28, 1469 - 1475, 30.11.2021
https://doi.org/10.31590/ejosat.1015972

Abstract

With the ever-increasing number of vehicles on the roads, traffic signs are becoming more important with every passing day. Although traffic signs are simple and easy to understand, drivers may miss them in congested traffic. Considering that even milliseconds can make a substantial difference in preventing accidents, a system that assists the driver with traffic signs would clearly be of great benefit, and this requires a traffic sign recognition system to be implemented. Accordingly, this study aims to develop a Turkish traffic sign detection and recognition system based on the Faster R-CNN algorithm. The proposed solution uses the TensorFlow framework and, specifically, the Faster R-CNN Inception-v2-COCO model to train the object detector. For training purposes, a new dataset was created containing 10,842 Turkish traffic sign images across 54 classes. The model was trained twice, for 51,217 and 200,000 training steps, respectively. The two resulting models were then used to detect signs in 10 Turkish traffic sign images captured during both daytime and nighttime. The results indicate that the model trained for 51,217 steps achieves an average precision of 67.2% and an average recall of 78.3%, whereas training for 200,000 steps increases the average precision to 76% and the average recall to 82.8%.
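
For context, the sketch below illustrates how a frozen Faster R-CNN Inception-v2 graph exported with the TensorFlow Object Detection API (TensorFlow 1.x style, accessed through tf.compat.v1) is typically loaded and run on a single test image. It is a minimal sketch under stated assumptions: the graph path, the detect_signs helper, and the 0.5 score threshold are illustrative choices, not details taken from the paper.

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Path to the exported frozen graph (hypothetical; not from the paper).
    PATH_TO_FROZEN_GRAPH = "inference_graph/frozen_inference_graph.pb"

    # Load the frozen detection graph produced by the Object Detection API exporter.
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        graph_def = tf.compat.v1.GraphDef()
        with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    def detect_signs(image_path, score_threshold=0.5):
        """Run the detector on one image; keep detections above the threshold."""
        # The exported graph expects a uint8 batch of shape [1, height, width, 3].
        image = np.expand_dims(np.array(Image.open(image_path).convert("RGB")), axis=0)
        # One session per image keeps the sketch simple (not optimized for throughput).
        with tf.compat.v1.Session(graph=detection_graph) as sess:
            # Standard tensor names in graphs exported by the Object Detection API.
            image_tensor = detection_graph.get_tensor_by_name("image_tensor:0")
            boxes = detection_graph.get_tensor_by_name("detection_boxes:0")
            scores = detection_graph.get_tensor_by_name("detection_scores:0")
            classes = detection_graph.get_tensor_by_name("detection_classes:0")
            out_boxes, out_scores, out_classes = sess.run(
                [boxes, scores, classes], feed_dict={image_tensor: image})
        keep = out_scores[0] >= score_threshold
        return out_boxes[0][keep], out_scores[0][keep], out_classes[0][keep].astype(int)

From the detections kept above the threshold, per-image precision and recall follow the usual definitions, precision = TP / (TP + FP) and recall = TP / (TP + FN); averaging these over the 10 test images yields figures comparable to those reported in the abstract.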

References

  • Davis, S., & Boundy, R. G. (2020). Transportation energy data book: Edition 38.2 (No. ORNL/TM-2019/1333). Oak Ridge National Laboratory (ORNL), Oak Ridge, TN, United States.
  • Bucsuházy, K., Matuchová, E., Zůvala, R., Moravcová, P., Kostíková, M., & Mikulec, R. (2020). Human factors contributing to the road traffic accident occurrence. Transportation research procedia, 45, 555-561.
  • Stallkamp, J., Schlipsing, M., Salmen, J., & Igel, C. (2011, July). The German traffic sign recognition benchmark: a multi-class classification competition. In The 2011 international joint conference on neural networks (pp. 1453-1460). IEEE.
  • Houben, S., Stallkamp, J., Salmen, J., Schlipsing, M., & Igel, C. (2013, August). Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. In The 2013 international joint conference on neural networks (IJCNN) (pp. 1-8). IEEE.
  • Yaliç, H. Y., & Can, A. B. (2011, September). Automatic recognition of traffic signs. In 2011 7th International Symposium on Image and Signal Processing and Analysis (ISPA) (pp. 361-366). IEEE.
  • Gündüz, H., Kaplan, S., Günal, S., & Akınlar, C. (2013, April). Circular traffic sign recognition empowered by circle detection algorithm. In 2013 21st Signal Processing and Communications Applications Conference (SIU) (pp. 1-4). IEEE.
  • Cinar, I., Taspinar, Y. S., Saritas, M. M., & Koklu, M. (2020). Feature extraction and recognition on traffic sign images. Selçuk-Teknik Dergisi, 19(4), 282-292.
  • Kilic, I., & Aydin, G. (2020, September). Traffic sign detection and recognition using TensorFlow’s object detection API with a new benchmark dataset. In 2020 International Conference on Electrical Engineering (ICEE) (pp. 1-5). IEEE.
  • Çetinkaya, M., & Acarman, T. (2021, May). Traffic sign detection by image preprocessing and deep learning. In 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS) (pp. 1165-1170). IEEE.
  • De La Escalera, A., Armingol, J. M., Pastor, J. M., & Rodríguez, F. J. (2004). Visual sign information extraction and identification by deformable models for intelligent vehicles. IEEE transactions on intelligent transportation systems, 5(2), 57-68.
  • Garcia-Garrido, M. A., Sotelo, M. A., & Martin-Gorostiza, E. (2006, September). Fast traffic sign detection and recognition under changing lighting conditions. In 2006 IEEE Intelligent Transportation Systems Conference (pp. 811-816). IEEE.
  • Maldonado-Bascón, S., Lafuente-Arroyo, S., Gil-Jimenez, P., Gómez-Moreno, H., & López-Ferreras, F. (2007). Road-sign detection and recognition based on support vector machines. IEEE transactions on intelligent transportation systems, 8(2), 264-278.
  • Dalal, N., & Triggs, B. (2005, June). Histograms of oriented gradients for human detection. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05) (Vol. 1, pp. 886-893). IEEE.
  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 1097-1105.
  • Khan, S., Rahmani, H., Shah, S. A. A., & Bennamoun, M. (2018). A guide to convolutional neural networks for computer vision. Synthesis Lectures on Computer Vision, 8(1), 1-207.
  • Phung, V. H., & Rhee, E. J. (2018). A deep learning approach for classification of cloud image patches on small datasets. Journal of information and communication convergence engineering, 16(3), 173-178.
  • Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 580-587).
  • Uijlings, J. R., Van De Sande, K. E., Gevers, T., & Smeulders, A. W. (2013). Selective search for object recognition. International journal of computer vision, 104(2), 154-171.
  • Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE international conference on computer vision (pp. 1440-1448).
  • Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 91-99.
  • Bisong, E. (2019). Building machine learning and deep learning models on Google cloud platform: A comprehensive guide for beginners. Apress.
  • Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., ... & Murphy, K. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7310-7311).
  • Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2818-2826).
  • Tzutalin. (2015). LabelImg [Computer software]. GitHub. Retrieved August 30, 2021, from https://github.com/tzutalin/labelImg

Details

Primary Language: English
Subjects: Engineering
Journal Section: Articles
Authors

Kaan Kocakanat (ORCID: 0000-0002-5906-7969)

Tacha Serif (ORCID: 0000-0003-1819-4926)

Publication Date: November 30, 2021
Published in Issue: Year 2021, Issue 28

Cite

APA: Kocakanat, K., & Serif, T. (2021). Turkish traffic sign recognition: Comparison of training step numbers and lighting conditions. Avrupa Bilim ve Teknoloji Dergisi, (28), 1469-1475. https://doi.org/10.31590/ejosat.1015972