Research Article

Performance Comparison of Deep Learning Lane Detection Models for Autonomous Vehicles

Year 2024, Volume: 39, Issue: 4, 861-871, 25.12.2024
https://doi.org/10.21605/cukurovaumfd.1605865

Abstract

Recent advancements in the field of deep learning have significantly improved the driving capabilities of autonomous vehicles. This study focuses on the lane detection abilities of autonomous vehicles and examines the use of deep learning-based approaches in this context. The research compares the lane detection performance of several deep learning models, including U-Net, SCNN, ENet, and ENet-SAD, on the TuSimple dataset. The models were evaluated using quantitative metrics such as accuracy, precision, sensitivity, F1 score, and IoU. Extensive experiments showed that the U-Net model achieved the highest accuracy, at 98.3%. The SCNN model, on the other hand, stood out in terms of precision, sensitivity, F1 score, and IoU. In terms of inference time, U-Net was the fastest lane detection model, at 20.12 ms per frame. These results indicate that the U-Net model is particularly suitable for real-time systems with limited computational power. Additionally, a qualitative assessment of lane detection revealed that the SCNN and U-Net models detected lane pixels more accurately, whereas the ENet and ENet-SAD models were more prone to false-negative errors.
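The pixel-wise metrics named above can all be derived from the confusion counts of a predicted lane mask against a ground-truth mask. The sketch below is an illustrative NumPy implementation, not the authors' evaluation code (the TuSimple benchmark ships its own official scoring script); the function name and the toy masks are assumptions for demonstration.

```python
import numpy as np

def lane_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-wise segmentation metrics for binary lane masks.

    `pred` and `gt` are boolean arrays of the same shape,
    True where a pixel is labelled as lane.
    """
    tp = np.logical_and(pred, gt).sum()      # lane predicted, lane present
    tn = np.logical_and(~pred, ~gt).sum()    # background correctly rejected
    fp = np.logical_and(pred, ~gt).sum()     # lane predicted, none present
    fn = np.logical_and(~pred, gt).sum()     # lane missed (false negative)
    eps = 1e-9  # guards against division by zero on empty masks
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)            # a.k.a. sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall + eps),
        "iou": tp / (tp + fp + fn + eps),    # intersection over union
    }

# Toy 2x3 masks: the prediction overlaps ground truth on two pixels,
# adds one spurious lane pixel, and misses one true lane pixel.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
m = lane_metrics(pred, gt)   # tp=2, fp=1, fn=1 -> iou = 2/4 = 0.5
```

The IoU penalizes both false positives and false negatives, which is why the qualitative observation above (ENet and ENet-SAD missing lane pixels) also depresses their quantitative scores.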

References

  • 1. Sevim, M.A., Kircova, İ., Çuhadar, E., 2019. Yerel yönetimlerde akıllı şehir vizyonu: şehir yönetim araçları ve trendleri. Strategic Public Management Journal, 5(9), 109-126.
  • 2. Richter, M.A., Hagenmaier, M., Bandte, O., Parida, V., Wincent, J., 2022. Smart cities, urban mobility and autonomous vehicles: How different cities needs different sustainable investment strategies. Technological Forecasting and Social Change, 184, 121857.
  • 3. De-Las-Heras, G., Sanchez-Soriano, J., Puertas, E., 2021. Advanced driver assistance systems (ADAS) based on machine learning techniques for the detection and transcription of variable message signs on roads. Sensors, 21(17), 5866.
  • 4. Marti, E., De Miguel, M.A., Garcia, F., Perez, J., 2019. A review of sensor technologies for perception in automated driving. IEEE Intelligent Transportation Systems Magazine, 11(4), 94-108.
  • 5. Zakaria, N.J., Shapiai, M.I., Ghani, R.A., Yasin, M.N.M., Ibrahim, M.Z., Wahid, N., 2023. Lane detection in autonomous vehicles: A systematic review. IEEE Access.
  • 6. Dillmann, J., den Hartigh, R.J.R., Kurpiers, C.M., Pelzer, J., Raisch, F.K., Cox, R.F.A., de Waard, D., 2021. Keeping the driver in the loop through semi-automated or manual lane changes in conditionally automated driving. Accident Analysis & Prevention, 162, 106397.
  • 7. Haas, R.E., Bhattacharjee, S., Möller, D.P., 2020. Advanced driver assistance systems. Smart Technologies: Scope and Applications, 345-371.
  • 8. Kukkala, V.K., Tunnell, J., Pasricha, S., Bradley, T., 2018. Advanced driver-assistance systems: A path toward autonomous vehicles. IEEE Consumer Electronics Magazine, 7(5), 18-25.
  • 9. https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries, Accessed: 08.11.2023.
  • 10. Kuyumcu, Z.Ç., Aslan, H., Yose, M.A., Ahadi, S., 2020. Türkiye’de trafik kazaları ve sürücülerin kazalardaki payı. Academic Perspective Procedia, 3(1), 694-702.
  • 11. Novikov, A., Shevtsova, A., Vasilieva, V., 2020. Development of approach to reduce number of accidents caused by drivers. Transportation Research Procedia, 50, 491-498.
  • 12. Mukhopadhyay, A., Murthy, L.R.D., Mukherjee, I., Biswas, P., 2022. A hybrid lane detection model for wild road conditions. IEEE Transactions on Artificial Intelligence.
  • 13. Ying, Z., Li, G., Zang, X., Wang, R., Wang, W., 2016. A novel shadow-free feature extractor for real-time road detection. In Proceedings of the 24th ACM international conference on Multimedia, 611-615.
  • 14. Khan, H.U., Ali, A.R., Hassan, A., Ali, A., Kazmi, W., Zaheer, A., 2020. Lane detection using lane boundary marker network with road geometry constraints. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1834-1843.
  • 15. Li, C., Shi, J., Wang, Y., Cheng, G., 2022. Reconstruct from top view: A 3d lane detection approach based on geometry structure prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4370-4379.
  • 16. Giesemann, F., Payá-Vayá, G., Blume, H., Limmer, M., Ritter, W.R., 2017. Deep learning for advanced driver assistance systems. In Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems, 105. River Publishers.
  • 17. De-Las-Heras, G., Sanchez-Soriano, J., Puertas, E., 2021. Advanced driver assistance systems (ADAS) based on machine learning techniques for the detection and transcription of variable message signs on roads. Sensors, 21(17), 5866.
  • 18. TuSimple, 2024. TuSimple dataset. https://github.com/TuSimple/tusimple-benchmark/wiki/
  • 19. Zhang, X., Zhou, X., Lin, M., Sun, J., 2018. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6848-6856.
  • 20. Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234-241. Springer International Publishing.
  • 21. Pan, X., Shi, J., Luo, P., Wang, X., Tang, X., 2018. Spatial as deep: Spatial cnn for traffic scene understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, 32, 1.
  • 22. Hou, Y., Ma, Z., Liu, C., Loy, C.C. 2019. Learning lightweight lane detection cnns by self attention distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1013-1021.
  • 23. Paszke, A., Chaurasia, A., Kim, S., Culurciello, E., 2016. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv Preprint arXiv: 1606.02147.
  • 24. Khoshdeli, M., Winkelmaier, G., Parvin, B., 2018. Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes. BMC Bioinformatics, 19(1), 1-11.
  • 25. Hou, Y., Ma, Z., Liu, C., Loy, C.C., 2019. Learning to steer by mimicking features from heterogeneous auxiliary networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 8433-8440.
  • 26. Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., Tang, X., 2017. Residual attention network for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3156-3164.
  • 27. Zagoruyko, S., Komodakis, N., 2016. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. arXiv Preprint arXiv: 1612.03928.
  • 28. Yang, W.J., Cheng, Y.T., Chung, P.C., 2019. Improved lane detection with multilevel features in branch convolutional neural networks. IEEE Access, 7, 173148-173156.


Details

Primary Language: English
Subjects: Computer Vision
Section: Articles
Authors

Muhammed Said Ataş 0009-0007-6572-9010

Yahya Doğan 0000-0003-1529-6118

Cüneyt Özdemir 0000-0002-9252-5888

Publication Date: 25 December 2024
Submission Date: 19 February 2024
Acceptance Date: 23 December 2024
Published Issue: Year 2024, Volume: 39, Issue: 4

How to Cite

APA Ataş, M. S., Doğan, Y., & Özdemir, C. (2024). Performance Comparison of Deep Learning Lane Detection Models for Autonomous Vehicles. Çukurova Üniversitesi Mühendislik Fakültesi Dergisi, 39(4), 861-871. https://doi.org/10.21605/cukurovaumfd.1605865