Research Article

Multi-exposure image fusion using a convolutional neural network

Year 2023, Volume 38, Issue 3, 1439-1452, 06.01.2023
https://doi.org/10.17341/gazimmfd.1067400

Abstract

Methods that produce a single high dynamic range (HDR) image from two or more low dynamic range (LDR) images of the same scene are known as multi-exposure image fusion (MEF). In this study, a new MEF method is proposed that uses a convolutional neural network (CNN), one of the deep learning (DL) models. In the proposed method, a fusion map (fmap) is first obtained from the source images using the CNN model. A weighting operation is then applied to the fmap to eliminate the sawtooth effect in the fused images. Finally, fused images that are well exposed everywhere are produced using the weighted fmap. The proposed method was applied to MEF datasets widely used in the literature, and the resulting fused images were evaluated with quality metrics. The proposed method and other well-known image fusion methods were compared in terms of visual and quantitative evaluation. The results obtained demonstrate the applicability of the developed technique.
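The fusion step summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: the CNN that predicts the fusion map is replaced here by a hypothetical `predict_fusion_map` heuristic, and Gaussian smoothing stands in for the weighting applied to the fmap before blending.

```python
# Minimal sketch of weighted-fusion-map blending for two LDR exposures.
# Assumptions (not from the paper): the CNN is abstracted away as
# `predict_fusion_map`, and the weighting that suppresses sawtooth/seam
# artifacts is approximated by Gaussian smoothing of the binary map.
import numpy as np
from scipy.ndimage import gaussian_filter

def predict_fusion_map(under: np.ndarray, over: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the CNN: per-pixel map in [0, 1]
    indicating which exposure is better exposed (1 = take `under`)."""
    luma_u = under.mean(axis=2)   # simple well-exposedness heuristic
    luma_o = over.mean(axis=2)
    w_u = np.exp(-((luma_u - 0.5) ** 2) / (2 * 0.2 ** 2))
    w_o = np.exp(-((luma_o - 0.5) ** 2) / (2 * 0.2 ** 2))
    return (w_u >= w_o).astype(np.float64)

def fuse(under: np.ndarray, over: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Blend two exposures using a smoothed (weighted) fusion map."""
    fmap = predict_fusion_map(under, over)      # hard 0/1 decision map
    fmap = gaussian_filter(fmap, sigma=sigma)   # soften transitions
    fmap = fmap[..., None]                      # broadcast over RGB channels
    return np.clip(fmap * under + (1.0 - fmap) * over, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    under = rng.random((64, 64, 3)) * 0.4        # synthetic under-exposed frame
    over = 0.6 + rng.random((64, 64, 3)) * 0.4   # synthetic over-exposed frame
    fused = fuse(under, over)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

Smoothing the hard per-pixel decision map before blending is one way to avoid visible seams at exposure boundaries, which is the role the weighting of the fmap plays in the method described above.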

References

  • Kaur, H., Koundal, D., Kadyan, V., Image Fusion Techniques: A Survey, Arch Computat Methods Eng, vol. 28, pp. 4425–4447, 2021.
  • Karishma, C. B., Bhumika, S., A Review of Image Fusion Techniques, 2018 Second International Conference on Computing Methodologies and Communication (ICCMC), 2018.
  • Ma, J., Ma, Y., Li, C., Infrared and visible image fusion methods and applications: A survey, Information Fusion, vol. 45, pp. 153-178, 2019.
  • Aslantas, V., Bendes, E., Kurban, R., Toprak, A. N., New optimised region-based multi-scale image fusion method for thermal and visible images, IET Image Processing, pp. 289-299, 2014.
  • Maruthi, R., Lakshmi, I., Multi-Focus Image Fusion Methods – A Survey, IOSR Journal of Computer Engineering (IOSR-JCE), vol. 19, no. 4, pp. 9-25, 2017.
  • Aslantas, V., Kurban, R., Fusion of multi-focus images using differential evolution algorithm, Expert Systems with Applications, pp. 8861–8870, 2010.
  • Aslantaş, V., Kurban, R., A comparison of criterion functions for fusion of multi-focus noisy images, Optics Communications, vol. 282, pp. 3231–3242, 2009.
  • Aslantas, V., Bendes, E., A new image quality metric for image fusion: The sum of the correlations of differences, Int. J. Electron. Commun., vol. 69, pp. 1890-1896, 2015.
  • Jing, Z., Pan, H., Li, Y., Dong, P., Evaluation of Focus Measures in Multi-Focus Image Fusion, in: Non-Cooperative Target Tracking, Fusion and Control. Information Fusion and Data Science, pp. 269-281, 2018.
  • Ke, P., Jung, C., Fang, Y., Perceptual multi-exposure image fusion with overall image quality index and local saturation, Multimedia Systems, vol. 23, no. 2, pp. 239–250, 2017.
  • Singh, S., Mittal, N., Singh, H., Review of Various Image Fusion Algorithms and Image Fusion Performance Metric, Archives of Computational Methods in Engineering, vol. 28, no. 5, pp. 3645–3659, 2021.
  • Zhao, S., Wang, Y., A Novel Patch-Based Multi-Exposure Image Fusion Using Super-Pixel Segmentation, IEEE Access, vol. 8, pp. 39034-39045, 2020.
  • Yadong, X., Beibei, S., Color-compensated multi-scale exposure fusion based on physical features, Optik, vol. 223, no. 165494, 2020.
  • Bavirisetti, D. P., Dhuli, R., Multi-focus image fusion using multi-scale image decomposition and saliency detection, Ain Shams Engineering Journal, vol. 9, pp. 1103–1117, 2018.
  • Zhang, X., Benchmarking and comparing multi-exposure image fusion algorithms, Information Fusion, vol. 74, pp. 111-131, 2021.
  • Mertens, T., Kautz, J., Reeth, F. V., Exposure Fusion, 15th Pacific Conference on Computer Graphics and Applications, Maui, HI, USA, 2007.
  • Malik, M. H., Gilani, S. A. M., Anwaar-ul-Haq, Wavelet Based Exposure Fusion, Proceedings of the World Congress on Engineering, London, 2008.
  • Wang, J., Xu, D., Lang, C., Li, B., Exposure Fusion Based on Shift-Invariant Discrete Wavelet Transform, Journal of Information Science and Engineering, vol. 27, pp. 197-211, 2011.
  • Martorell, O., Sbert, C., Buades, A., Ghosting-free DCT based multi-exposure image fusion, Signal Processing: Image Communication, vol. 78, pp. 409-425, 2019.
  • Kou, F., Zhengguo, L., Changyun, W., Weihai, C., Edge-preserving smoothing pyramid based multi-scale exposure fusion, J. Vis. Commun. Image Represent., vol. 53, pp. 235–244, 2018.
  • Hayat, N., Imran, M., Ghost-free multi exposure image fusion technique using dense SIFT descriptor and guided filter, Journal of Visual Communication and Image Representation, vol. 62, pp. 295-308, 2019.
  • Qiegen, L., Leung, H., Variable augmented neural network for decolorization and multi-exposure fusion, Information Fusion, vol. 46, pp. 114-127, 2019.
  • Song, M., Tao, D., Chen, C., Bu, J., Luo, J., Zh, C., Probabilistic exposure fusion, IEEE Trans. Image Process., vol. 21, no. 1, pp. 341-357, 2012.
  • Gu, B., Li, W., Wong, J., Zhu, M., Wang, M., Gradient field multi-exposure images fusion for high dynamic range image visualization, J. Vis. Commun. Image Represent., vol. 23, no. 4, pp. 604-610, 2012.
  • Li, S., Kang, X., Fast multi-exposure image fusion with median filter and recursive filter, IEEE Trans. Consum. Electron., vol. 58, no. 2, pp. 626-632, 2012.
  • Bo, G., Wujing, L., Jiangtao, W., Minyun, Z., Minghui, W., Gradient field multi-exposure images fusion for high dynamic range image visualization, Journal of Visual Communication and Image Representation, vol. 23, no. 4, pp. 604-610, 2012.
  • Zhang, W., Cham, W., Gradient-directed multiexposure composition, IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318-2323, 2012.
  • Sujoy, P., Ioana, S. S., Panajotis, A., Multi-Exposure and Multi-Focus Image Fusion in Gradient Domain, Journal of Circuits, Systems and Computers, vol. 25, no. 10, p. 1650123, 2016.
  • Goshtasby, A. A., Fusion of multi-exposure images, Image and Vision Computing, vol. 23, pp. 611–618, 2005.
  • Kong, J., Wang, R., Lu, Y., Feng, X., Zhang, J., A Novel Fusion Approach of Multi-exposure Image, EUROCON 2007 The International Conference on “Computer as a Tool”, Warsaw, Poland, 2007.
  • Kede, M., Hui, L., Hongwei, Y., Zhou, W., Deyu, M., Lei, Z., Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach, IEEE Trans. Image Processing, vol. 26, no. 5, pp. 2519–2532, 2017.
  • Zhang, W., Hu, S., Liu, K., Patch-based correlation for deghosting in exposure fusion, Information Sciences, vol. 415–416, pp. 19-27, 2017.
  • Zhang, W., Hu, S., Liu, K., Yao, J., Motion-free exposure fusion based on inter-consistency and intra-consistency, Information Sciences, vol. 376, pp. 190-201, 2017.
  • Ma, K., Duanmu, Z., Zhu, H., Fang, Y., Wang, Z., Deep Guided Learning for Fast Multi-Exposure Image Fusion, IEEE Transactions on Image Processing, vol. 29, pp. 2808-2819, 2020.
  • Xu, H., Ma, J., Zhang, X., MEF-GAN: Multi-Exposure Image Fusion via Generative Adversarial Networks, IEEE Transactions on Image Processing, vol. 29, pp. 7203-7216, 2020.
  • Prabhakar, K. R., Srikar, V. S., Babu, R. V., A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs, International Conference on Computer Vision (ICCV), 2017.
  • Qi, Y., Zhou, S., Zhang, Z., Luo, S., Lin, X., Wang, L., Qiang, B., Deep unsupervised learning based on color un-referenced loss functions for multi-exposure image fusion, Information Fusion, vol. 66, pp. 18-39, 2021.
  • Romanuke, V. V., An infinitely scalable dataset of single-polygon grayscale images as a fast test platform for semantic image segmentation, KPI Science News, vol. 1, pp. 24-34, 2019.
  • Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Review of deep learning: concepts, CNN architectures, challenges, applications, future directions, Journal of Big Data, vol. 8, no. 53, 2021.
  • Krizhevsky, A., Sutskever, I., Hinton, G. E., ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
  • Simonyan, K., Zisserman, A., Very deep convolutional networks for large-scale image recognition, 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 2015.
  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A., Going deeper with convolutions, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • He, K., Zhang, X., Ren, S., Sun, J., Deep Residual Learning for Image Recognition, IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • Badrinarayanan, V., Kendall, A., Cipolla, R., SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481-2495, 2017.
  • Minaee, S., Boykov, Y. Y., Porikli, F., Plaza, A. J., Kehtarnavaz, N., Terzopoulos, D., Image Segmentation Using Deep Learning: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
  • Paoletti, M. E., Haut, J. M., Plaza, J., Plaza, A., Deep learning classifiers for hyperspectral imaging: A review, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 158, pp. 279-317, 2019.
  • Jagalingam, P., Hegde, A. V., A Review of Quality Metrics for Fused Image, Aquatic Procedia, pp. 133-142, 2015.
  • Nayar, S. K., Nakagawa, Y., Shape from focus, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 8, pp. 824–831, 1994.
  • Chen, Y., Blum, R. S., A new automated quality assessment algorithm for image fusion, Image and Vision Computing, pp. 1421-1432, 2009.
  • Wang, Z., Bovik, A. C., Sheikh, H. R., Simoncelli, E. P., Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
  • Hasan, M., Sohel, F., Diepeveen, D., Laga, H., Jones, M. G. K., A survey of deep learning techniques for weed detection from images, Computers and Electronics in Agriculture, vol. 184, 2021.
  • Li, H., Zhang, L., Multi-Exposure Fusion With CNN Features, 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 2018.
  • Li, H., Ma, K., Yong, H., Zhang, L., Fast multi-scale structural patch decomposition for multi-exposure image fusion, IEEE Trans. Image Process., vol. 29, pp. 5805–5816, 2020.

Details

Primary Language Turkish
Subjects Engineering
Section Articles
Authors

Harun Akbulut 0000-0002-9117-8407

Veysel Aslantaş 0000-0002-0952-0315

Publication Date January 6, 2023
Submission Date February 2, 2022
Acceptance Date June 16, 2022
Published in Issue Year 2023

How to Cite

APA Akbulut, H., & Aslantaş, V. (2023). Evrişimli sinir ağı kullanarak çoklu-pozlamalı görüntü birleştirme. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, 38(3), 1439-1452. https://doi.org/10.17341/gazimmfd.1067400
AMA Akbulut H, Aslantaş V. Evrişimli sinir ağı kullanarak çoklu-pozlamalı görüntü birleştirme. GUMMFD. January 2023;38(3):1439-1452. doi:10.17341/gazimmfd.1067400
Chicago Akbulut, Harun, and Veysel Aslantaş. “Evrişimli Sinir ağı Kullanarak çoklu-Pozlamalı görüntü birleştirme”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 38, no. 3 (January 2023): 1439-52. https://doi.org/10.17341/gazimmfd.1067400.
EndNote Akbulut H, Aslantaş V (January 01, 2023) Evrişimli sinir ağı kullanarak çoklu-pozlamalı görüntü birleştirme. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 38 3 1439–1452.
IEEE H. Akbulut and V. Aslantaş, “Evrişimli sinir ağı kullanarak çoklu-pozlamalı görüntü birleştirme”, GUMMFD, vol. 38, no. 3, pp. 1439–1452, 2023, doi: 10.17341/gazimmfd.1067400.
ISNAD Akbulut, Harun - Aslantaş, Veysel. “Evrişimli Sinir ağı Kullanarak çoklu-Pozlamalı görüntü birleştirme”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 38/3 (January 2023), 1439-1452. https://doi.org/10.17341/gazimmfd.1067400.
JAMA Akbulut H, Aslantaş V. Evrişimli sinir ağı kullanarak çoklu-pozlamalı görüntü birleştirme. GUMMFD. 2023;38:1439–1452.
MLA Akbulut, Harun and Veysel Aslantaş. “Evrişimli Sinir ağı Kullanarak çoklu-Pozlamalı görüntü birleştirme”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, vol. 38, no. 3, 2023, pp. 1439-52, doi:10.17341/gazimmfd.1067400.
Vancouver Akbulut H, Aslantaş V. Evrişimli sinir ağı kullanarak çoklu-pozlamalı görüntü birleştirme. GUMMFD. 2023;38(3):1439-52.