TY - JOUR
T1 - Not all fog removers are equal: Unmasking the impact of dehazing on object detection
TT - Tüm sis gidericiler aynı değildir: Sis gideriminin nesne tespiti üzerindeki etkisinin ortaya çıkarılması
AU - Bozkır, Ahmet Selman
AU - Özenç, Nurçiçek
PY - 2025
DA - June
Y2 - 2024
JF - Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi
PB - Pamukkale University
WT - DergiPark
SN - 2147-5881
SP - 373
EP - 383
VL - 31
IS - 3
LA - en
AB - Dehazing is an important branch of computational photography that aims to enhance image clarity by removing atmospheric haze and scattering effects, which is crucial for improving visibility in applications such as unmanned aerial vehicles, traffic control, and autonomous driving. However, most studies in this field lack an assessment of the developed algorithm in the context of object detection (OD). In this study, we aim to quantify and evaluate the contribution of several state-of-the-art dehazing methods (C2PNet, D4, Dehamer, gUNet) to OD using YOLOv8, known for its superior performance. For this purpose, we used the test portion of the VisDrone-DET dataset, comprising 548 haze-free aerial images, as the data source. For a more comprehensive assessment, we evaluated these approaches to object detection under different haze levels and resolutions. Since it is inherently impossible to obtain hazy and clean images simultaneously, we (1) generated synthetically hazed images with varying haze densities and (2) resized them to 640p and 1280p resolutions. Next, we used YOLOv8 and YOLOv10 models to evaluate OD performance on (i) the haze-free ground truth, (ii) three differently hazed versions, and (iii) their dehazed counterparts, through several metrics. Our experiments showed that the gUNet approach, which incorporates a variant of the U-Net model inspired by GCANet and GridDehazeNet, outperformed the others in terms of OD performance. Surprisingly, Dehamer negatively affected OD performance due to the artifacts it produced.
This assessment not only provides valuable insights into the effectiveness of these methods but also sheds light on how to benefit from them when it comes to object detection under hazy atmospheric conditions.
KW - Object detection
KW - YOLO
KW - Image dehazing
KW - Synthetic haze
N2 - Sis giderimi insansız hava araçları, trafik kontrolü ve otonom sürüş gibi uygulamalarda hayati önemdeki görünürlüğü iyileştirmek amacıyla atmosferik pus ve saçılım etkilerini ortadan kaldırmayı hedefleyen hesaplamalı fotoğrafçılığın önemli bir dalıdır. Ancak bu alandaki çalışmaların birçoğu geliştirilen algoritmanın nesne tespiti (NT) bağlamında değerlendirilmesinden yoksundur. Bu çalışmada üstün performansıyla bilinen YOLOv8 üzerinden son teknoloji ürünü çeşitli sis giderici yöntemlerin (C2PNet, D4, Dehamer, gUNet) katkısının NT bağlamında ölçülmesi ve değerlendirilmesi amaçlanmıştır. Bu amaçla veri kaynağı olarak VisDrone-DET veri kümesinin 548 sissiz hava görüntüsü içeren test kısmından faydalandık. Daha kapsamlı bir değerlendirme için farklı sis seviyeleri ve çözünürlükler altında NT bağlamında bu yaklaşımları değerlendirdik. Sisli ve temiz imgeleri doğal olarak aynı anda elde etmek mümkün olmadığından, (1) değişen sis yoğunlukları içeren sentetik sisli imgeler oluşturduk ve (2) bunları 640p ve 1280p çözünürlüklerinde yeniden boyutlandırdık. Ardından (i) sissiz kesin referans, (ii) üç farklı sislendirilmiş sürüm ve (iii) bunların sisi giderilmiş muadillerinde YOLOv8 ve YOLOv10 modellerini kullanarak NT performansını çeşitli ölçütler üzerinden değerlendirdik. Deneylerimiz GCANet ile GridDehazeNet'ten esinlenen ve U-Net modelinin bir varyantını içeren gUNet yaklaşımının NT performansı açısından diğerlerinden daha iyi başarım gösterdiğini ortaya koymuştur. Dehamer yöntemi şaşırtıcı şekilde ürettiği artifaktlar nedeniyle NT başarımını olumsuz etkilemiştir.
Bu değerlendirme ilgili yöntemlerin etkinliği hakkında değerli bulgular sunmakla kalmayıp sisli hava koşullarında NT söz konusu olduğunda bu yöntemlerden nasıl faydalanılacağına da ışık tutmaktadır.
CR - [1] Yang Y, Wang C, Liu R, Zhang L, Guo X, Tao D. “Self-augmented unpaired image dehazing via density and depth decomposition”. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19-24 June 2022.
CR - [2] Li B, Ren W, Fu D, Tao D, Feng D, Zeng W, Wang Z. “Benchmarking single-image dehazing and beyond”. IEEE Transactions on Image Processing, 28(1), 492-505, 2019.
CR - [3] Chahal KS, Dey K. “A survey of modern object detection literature using deep learning”. arXiv, 2018. https://arxiv.org/pdf/1808.07256
CR - [4] Medium. “Synthesize Hazy/Foggy Images using Monodepth and Atmospheric Scattering Model”. https://towardsdatascience.com/synthesize-hazy-foggy-image-using-monodepth-and-atmospheric-scattering-model-9850c721b74e (08.08.2024).
CR - [5] Tran LA, Do TD, Park DC, Le MH. “Robustness enhancement of object detection in advanced driver assistance systems (ADAS)”. arXiv, 2021. https://arxiv.org/pdf/2105.01580
CR - [6] Song Y, He Z, Qian H, Du X. “Vision transformers for single image dehazing”. IEEE Transactions on Image Processing, 32, 1927-1941, 2023.
CR - [7] Song Y, Zhou Y, Qian H, Du X. “Rethinking performance gains in image dehazing networks”. arXiv, 2022. https://arxiv.org/pdf/2209.11448
CR - [8] Thakur N, Nagrath P, Jain R, Saini D, Sharma N, Hemanth J. “Object detection in deep surveillance”. Research Square, 2021. https://doi.org/10.21203/rs.3.rs-901583/v1
CR - [9] Ali S, Abdullah Athar A, Ali M, Hussain A, Kim HC. “Computer vision-based military tank recognition using object detection technique: an application of the YOLO framework”. 1st International Conference on Advanced Innovations in Smart Cities, Jeddah, Saudi Arabia, 23-25 January 2023.
CR - [10] Rahadianti L, Azizah AY, Deborah H.
“Evaluation of the quality indicators in dehazed images: color, contrast, naturalness, and visual pleasingness”. Heliyon, 7(9), 1-12, 2021.
CR - [11] Wu H, Qu Y, Lin S, Zhou JJ, Qiao R, Zhang Z, Xie Y, Ma L. “Contrastive learning for compact single image dehazing”. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19-25 June 2021.
CR - [12] Yang Y, Wang C, Liu R, Zhang L, Guo X, Tao D. “Self-augmented unpaired image dehazing via density and depth decomposition”. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19-24 June 2022.
CR - [13] Guo C, Yan Q, Anwar S, Cong R, Ren W, Li C. “Image dehazing transformer with transmission-aware 3D position embedding”. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19-24 June 2022.
CR - [14] Zheng Y, Zhan J, He S, Dong J, Du Y. “Curricular contrastive regularization for physics-aware single image dehazing”. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 18-22 June 2023.
CR - [15] He K, Sun J, Tang X. “Single image haze removal using dark channel prior”. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20-25 June 2009.
CR - [16] Berman D, Treibitz T, Avidan S. “Non-local image dehazing”. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June-1 July 2016.
CR - [17] Li B, Peng X, Wang Z, Xu J, Feng D. “AOD-Net: all-in-one dehazing network”. IEEE International Conference on Computer Vision, Venice, Italy, 22-29 October 2017.
CR - [18] Ancuti CO, Ancuti C. “Single image dehazing by multi-scale fusion”. IEEE Transactions on Image Processing, 22(8), 3271-3282, 2013.
CR - [19] Ultralytics. “VisDrone”. https://docs.ultralytics.com/tr/datasets/detect/visdrone/#citations-and-acknowledgments (08.02.2024).
CR - [20] GitHub. “VisDrone/VisDrone-Dataset”.
https://github.com/VisDrone/VisDrone-Dataset (08.07.2024).
CR - [21] GitHub. “tranleanh/haze-synthesis”. https://github.com/tranleanh/haze-synthesis (09.05.2024).
CR - [22] Wang A, Chen H, Liu L, Chen K, Lin Z, Han J, Ding G. “YOLOv10: real-time end-to-end object detection”. arXiv, 2024. https://arxiv.org/pdf/2405.14458
CR - [23] Hussain M. “YOLO-v1 to YOLO-v8, the rise of YOLO and its complementary nature toward digital manufacturing and industrial defect detection”. Machines, 11(7), 677, 2023.
CR - [24] Roboflow Blog. “Your Comprehensive Guide to the YOLO Family of Models”. https://blog.roboflow.com/guide-to-yolo-models/ (08.02.2024).
CR - [25] Ghosh A. “YOLOv10: The Dual-Head OG of YOLO Series”. https://learnopencv.com/yolov10/ (01.07.2024).
CR - [26] Marium A, Srinivasan DG, Shetty SA. “Literature survey on object detection using YOLO”. International Research Journal of Engineering and Technology, 7(6), 3082-3088, 2020.
CR - [27] Jiang P, Ergu D, Liu F, Cai Y, Ma B. “A review of YOLO algorithm developments”. Procedia Computer Science, 199, 1066-1073, 2022.
CR - [28] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. “SSD: single shot multibox detector”. Computer Vision - ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11-14 October 2016.
CR - [29] Deng C, Wang M, Liu L, Liu Y, Jiang Y. “Extended feature pyramid network for small object detection”. IEEE Transactions on Multimedia, 24, 1968-1979, 2022.
CR - [30] Hnewa M, Radha H. “Multiscale domain adaptive YOLO for cross-domain object detection”. 2021 IEEE International Conference on Image Processing, Anchorage, Alaska, USA, 19-22 September 2021.
CR - [31] Sirisha U, Praveen SP, Srinivasu PN, Barsocchi P, Bhoi AK. “Statistical analysis of design aspects of various YOLO-based deep learning models for object detection”. International Journal of Computational Intelligence Systems, 16(126), 1-29, 2023.
CR - [32] GitHub. “Li-Chongyi/Dehamer”.
https://github.com/Li-Chongyi/Dehamer (08.02.2024).
CR - [33] Wu B, Xu C, Dai X, Wan A, Zhang P, Yan Z, Tomizuka M, Gonzalez J, Keutzer K, Vajda P. “Visual transformers: token-based image representation and processing for computer vision”. arXiv, 2020. https://arxiv.org/pdf/2006.03677
CR - [34] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I. “Attention is all you need”. arXiv, 2017. https://arxiv.org/pdf/1706.03762
CR - [35] Dosovitskiy A. “An image is worth 16x16 words: transformers for image recognition at scale”. arXiv, 2020. https://arxiv.org/pdf/2010.11929
CR - [36] GitHub. “IDKiro/gUNet”. https://github.com/IDKiro/gUNet (08.02.2024).
CR - [37] Shah T. “Measuring object detection models - mAP - what is mean average precision?”. https://tarangshah.com/blog/2018-01-27/what-is-map-understanding-the-statistic-of-choice-for-comparing-object-detection-models/ (08.02.2024).
CR - [38] LearnOpenCV. “Mean average precision (mAP) in object detection”. https://learnopencv.com/mean-average-precision-map-object-detection-model-evaluation-metric/ (08.02.2024).
CR - [39] Altun M, Türker M. “Vehicle detection in urban areas from very high resolution UAV color images”. Pamukkale University Journal of Engineering Sciences, 26(2), 371-384, 2020.
UR - https://dergipark.org.tr/en/pub/pajes/issue//1727971
L1 - https://dergipark.org.tr/en/download/article-file/4993950
ER -