Research Article

ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ

Year 2020, Volume 9, Issue 2, 705-714, 07.08.2020
https://doi.org/10.28948/ngumuh.654519

Abstract

Image forgery detection has gained importance with the development of internet and computer technologies. In addition, for the techniques used in image enhancement applications to perform well, the types of attacks applied to images and the attacked regions must be detected correctly. In this study, an image forgery detection method supported by the pre-trained AlexNet and GoogLeNet convolutional neural networks is proposed to detect the types of attacks applied to images and the attacked regions. First, an image forgery detection dataset consisting of original and attacked images was created from the images in the MICC-F2000 dataset. Gaussian blurring, median filtering, Gaussian noise addition, Poisson noise addition and sharpening attacks were used to obtain the attacked images. Then, the fully connected layers of the pre-trained AlexNet and GoogLeNet networks were replaced with new fully connected layers for the six data classes in the experimental dataset. The resulting AlexNet- and GoogLeNet-based networks were trained and tested with the prepared image forgery detection dataset, and their performance was measured for different hyperparameter values. The highest accuracy achieved with the AlexNet-based networks was 99.48%, while the highest accuracy achieved with the GoogLeNet-based networks was 99.92%. In addition, the ability of the developed AlexNet- and GoogLeNet-based forgery detection method to detect attacks on images taken from the CoMoFoD dataset was evaluated. The experimental results showed that the proposed method can be used successfully for image forgery detection.
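
As a rough illustration of the dataset-construction step described above, the sketch below applies the five attack types (Gaussian blurring, median filtering, Gaussian noise, Poisson noise and sharpening) to a single input image. It is not the authors' implementation: the choice of OpenCV/NumPy, the helper names, and the kernel sizes and noise levels are illustrative assumptions.

```python
# Illustrative sketch only: generates the five attacked versions of one image.
# Kernel sizes and noise levels are assumed, not taken from the paper.
import cv2
import numpy as np

def gaussian_blur(img, ksize=5, sigma=1.0):
    return cv2.GaussianBlur(img, (ksize, ksize), sigma)

def median_filter(img, ksize=5):
    return cv2.medianBlur(img, ksize)

def add_gaussian_noise(img, sigma=10.0):
    noise = np.random.normal(0.0, sigma, img.shape)
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)

def add_poisson_noise(img):
    # Each pixel value is replaced by a Poisson sample with that value as its mean.
    return np.clip(np.random.poisson(img.astype(np.float64)), 0, 255).astype(np.uint8)

def sharpen(img):
    # Simple 3x3 sharpening kernel; an assumed choice.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(img, -1, kernel)

ATTACKS = {
    "gaussian_blur": gaussian_blur,
    "median_filter": median_filter,
    "gaussian_noise": add_gaussian_noise,
    "poisson_noise": add_poisson_noise,
    "sharpen": sharpen,
}

def build_attacked_set(image_path):
    """Return the original image plus its five attacked versions, keyed by class name."""
    img = cv2.imread(image_path)  # e.g. an image from the MICC-F2000 set
    out = {"original": img}
    for name, attack in ATTACKS.items():
        out[name] = attack(img)
    return out
```

Applying each attack once per image, plus the untouched original, yields the six classes referred to in the abstract.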

References

  • N. B. A. Warif, A. W. A. Wahab, M. Y. I. Idris, R. Ramli, R. Salleh, S. Shamshirband, and K. K. R. Choo, “Copy-move forgery detection: Survey, challenges and future directions”, Journal of Network and Computer Applications, vol. 75, pp. 259-278, 2016.
  • W. Ding, W. Yan and D. Qi, “Digital image watermarking based on Discrete Wavelet Transform”, Journal of Computer Science and Technology, vol. 17, no. 2, pp. 129-139, 2002.
  • X. Wu, “A new technique for digital image watermarking”, Journal of Computer Science and Technology, vol. 20, no. 6, pp. 843-848, 2005.
  • E. Gul and S. Ozturk, “A novel hash function based fragile watermarking method for image integrity”, Multimedia Tools and Applications, vol. 78, no. 13, pp. 17701-17718, 2019.
  • H. Zhang, C. Yang and X. Quan, “Image authentication based on digital signature and semi-fragile watermarking”, Journal of Computer Science and Technology, vol. 19, no. 6, pp. 752-759, 2004.
  • M. Alkawaz, G. Sulong, T. Saba and A. Rehman, “Detection of copy-move image forgery based on Discrete Cosine Transform”, Neural Computing and Applications, vol. 30, no. 1, pp. 183-192, 2018.
  • Y. Liu, Q. Guan and X. Zhao, “Copy-move forgery detection based on convolutional kernel network”, Multimedia Tools and Applications, vol. 77, no. 14, pp. 18269-18293, 2018.
  • J. Han, T. Park, Y. Moon and I. Eom, “Quantization-based Markov feature extraction method for image splicing detection”, Machine Vision and Applications, vol. 29, no. 3, pp. 543-552, 2018.
  • D. Cozzolino, G. Poggi and L. Verdoliva, “Recasting Residual-based Local Descriptors as convolutional neural networks”, In Proc. 5th ACM Workshop on Information Hiding and Multimedia Security - IH&MMSec '17, 2017, pp. 159–164.
  • Y. LeCun, Y. Bengio and G. Hinton, “Deep learning”, Nature, vol. 521, no. 7553, pp. 436-444, 2015.
  • J. Chen, X. Kang, Y. Liu and Z. J. Wang, “Median filtering forensics based on convolutional neural networks”, IEEE Signal Processing Letters, vol. 22, no. 11, pp. 1849-1853, 2015.
  • B. Bayar and M. C. Stamm, “A Deep learning approach to universal image manipulation detection using a new convolutional layer”, In Proc. 4th ACM Workshop on Information Hiding and Multimedia Security-IH&MMSec '16, 2016, pp. 5–10.
  • B. Bayar and M. C. Stamm, “On the robustness of constrained convolutional neural networks to JPEG post-compression for image resampling detection”, In Proc. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 2152-2156.
  • Y. Rao and J. Ni, “A deep learning approach to detection of splicing and copy-move forgeries in images”, In Proc. 2016 IEEE International Workshop on Information Forensics and Security (WIFS), 2016, pp. 1-6.
  • I. Amerini, T. Uricchio, L. Ballan and R. Caldelli, “Localization of JPEG double compression through multi-domain convolutional neural networks”, In Proc. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1865-1871.
  • Q. Wang and R. Zhang, “Double JPEG compression forensics based on a convolutional neural network”, EURASIP Journal on Information Security, vol. 2016, no. 1, 2016.
  • J. Bunk, J. H. Bappy, T. M. Mohammed, L. Nataraj, A. Flenner, B. Manjunath, S. Chandrasekaran, A. K. Roy-Chowdhury and L. Peterson, “Detection and localization of image forgeries using resampling features and deep learning”, In Proc. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1881-1889.
  • L. Bondi, S. Lameri, D. Güera, P. Bestagini, E. J. Delp and S. Tubaro, “Tampering detection and localization through clustering of camera-based CNN features”, In Proc. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1855-1864.
  • D. C. Cireşan, U. Meier and J. Schmidhuber, “Transfer learning for Latin and Chinese characters with deep neural networks”, In Proc. The 2012 International Joint Conference on Neural Networks (IJCNN), 2012, pp. 1-6.
  • A. Krizhevsky, I. Sutskever and G. E. Hinton, “ImageNet classification with deep convolutional neural networks”, In Proc. Advances in Neural Information Processing Systems 25 (NIPS 2012), 2012, pp. 1097-1105.
  • C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, “Going deeper with convolutions”, In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9.
  • B. Zou, Y. Guo, Q. He, P. Ouyang, K. Liu and Z. Chen, “3D Filtering by block matching and convolutional neural network for image denoising”, Journal of Computer Science and Technology, vol. 33, no. 4, pp. 838-848, 2018.
  • Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu and M. Lew, “Deep learning for visual understanding: A review”, Neurocomputing, vol. 187, pp. 27-48, 2016.
  • E. A. Hadhrami, M. A. Mufti, B. Taha and N. Werghi, “Transfer learning with convolutional neural networks for moving target classification with micro-Doppler radar spectrograms”, In Proc. 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD), 2018, pp. 148-154.
  • M. A. Mufti, E. A. Hadhrami, B. Taha and N. Werghi, “Automatic target recognition in SAR images: Comparison between pre-trained CNNs in a transfer learning based approach”, In Proc. 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD), 2018, pp. 160-164.
  • J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?”, In Proc. Advances in Neural Information Processing Systems 27 (NIPS 2014), 2014, pp. 3320-3328.
  • M. Mehdipour Ghazi, B. Yanikoglu and E. Aptoula, “Plant identification using deep neural networks via optimization of transfer learning parameters”, Neurocomputing, vol. 235, pp. 228-235, 2017.
  • I. Amerini, L. Ballan, R. Caldelli, A. Del Bimbo and G. Serra, “A SIFT-based forensic method for copy–move attack detection and transformation recovery”, IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1099-1110, 2011.
  • T. Ozcan and A. Basturk, “Transfer learning-based convolutional neural networks with heuristic optimization for hand gesture recognition”, Neural Computing and Applications, vol. 31, no. 12, pp. 8955-8970, 2019.
  • D. Tralic, I. Zupancic, S. Grgic, and M. Grgic, “CoMoFoD-New database for copy-move forgery detection”, In Proc. Electronics in Marine ELMAR-2013, 2013, pp. 49-54.
  • K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition”, CoRR, abs/1409.1556, 2014.
  • K. He, X. Zhang, S. Ren and J. Sun, “Deep residual learning for image recognition”, CoRR, abs/1512.03385, 2015.

PRE-TRAINED CONVOLUTIONAL NEURAL NETWORK BASED IMAGE TAMPER DETECTION METHOD

Year 2020, Volume 9, Issue 2, 705-714, 07.08.2020
https://doi.org/10.28948/ngumuh.654519

Abstract

With the development of internet and computer technologies, image forgery detection has become an important issue. In addition, in order to obtain successful performance from image enhancement techniques, the types and regions of the attacks applied to images must be determined correctly. In this study, a pre-trained AlexNet and GoogLeNet convolutional neural network based forgery detection method has been proposed to detect the types and regions of the attacks applied to images. Firstly, an image forgery detection dataset containing the original and the attacked images has been created using the images in the MICC-F2000 dataset. Gaussian blurring, median filtering, Gaussian noise adding, Poisson noise adding and sharpening attacks have been used to obtain the attacked images. Then, the fully connected layers of the pre-trained AlexNet and GoogLeNet networks have been replaced with new fully connected layers for the six classes of the created image forgery detection dataset. The modified AlexNet and GoogLeNet based networks have been trained and tested with the created image forgery detection dataset, and their performance has been evaluated for different hyperparameter values. While the highest accuracy of 99.48% has been achieved with the AlexNet based networks, the highest accuracy of 99.92% has been achieved with the GoogLeNet based networks. The proposed AlexNet and GoogLeNet based forgery detection method has also been tested on images from the CoMoFoD dataset. Experimental results show that the proposed method can be used successfully for image forgery detection.
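
The transfer-learning step summarised above (replacing the fully connected layers of pre-trained AlexNet and GoogLeNet with new layers for six classes and then fine-tuning) could look roughly like the following. The paper does not state its framework or training settings, so the PyTorch/torchvision calls, the SGD configuration and the helper names (build_alexnet, build_googlenet, train_one_epoch) are assumptions for illustration only, not the authors' code.

```python
# Minimal sketch: adapt pre-trained AlexNet and GoogLeNet to six output classes.
# Framework, optimizer and hyperparameters are assumed, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # original images + five attack types

def build_alexnet(num_classes=NUM_CLASSES):
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Replace the final fully connected layer (1000 ImageNet classes -> 6 classes).
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
    return net

def build_googlenet(num_classes=NUM_CLASSES):
    net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    # GoogLeNet exposes a single fully connected output layer named `fc`.
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

def train_one_epoch(net, loader, lr=1e-4, device="cpu"):
    """One fine-tuning epoch; the optimizer and learning rate are illustrative."""
    net.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    for images, labels in loader:  # loader yields 224x224 normalized image tensors
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(net(images), labels)
        loss.backward()
        optimizer.step()
```

Replacing only the final classification layers keeps the ImageNet-learned convolutional features intact, which is what makes pre-trained networks practical to fine-tune on a comparatively small forgery-detection dataset.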


Details

Primary Language: Turkish
Subjects: Computer Software
Section: Computer Engineering
Authors

Ertuğrul Gül (ORCID: 0000-0002-5591-3435)

Serkan Öztürk (ORCID: 0000-0002-0309-3420)

Publication Date: 7 August 2020
Submission Date: 4 December 2019
Acceptance Date: 14 May 2020
Published in Issue: Year 2020, Volume 9, Issue 2

How to Cite

APA Gül, E., & Öztürk, S. (2020). ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ. Niğde Ömer Halisdemir Üniversitesi Mühendislik Bilimleri Dergisi, 9(2), 705-714. https://doi.org/10.28948/ngumuh.654519
AMA Gül E, Öztürk S. ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ. NÖHÜ Müh. Bilim. Derg. Ağustos 2020;9(2):705-714. doi:10.28948/ngumuh.654519
Chicago Gül, Ertuğrul, ve Serkan Öztürk. “ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ”. Niğde Ömer Halisdemir Üniversitesi Mühendislik Bilimleri Dergisi 9, sy. 2 (Ağustos 2020): 705-14. https://doi.org/10.28948/ngumuh.654519.
EndNote Gül E, Öztürk S (01 Ağustos 2020) ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ. Niğde Ömer Halisdemir Üniversitesi Mühendislik Bilimleri Dergisi 9 2 705–714.
IEEE E. Gül ve S. Öztürk, “ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ”, NÖHÜ Müh. Bilim. Derg., c. 9, sy. 2, ss. 705–714, 2020, doi: 10.28948/ngumuh.654519.
ISNAD Gül, Ertuğrul - Öztürk, Serkan. “ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ”. Niğde Ömer Halisdemir Üniversitesi Mühendislik Bilimleri Dergisi 9/2 (Ağustos 2020), 705-714. https://doi.org/10.28948/ngumuh.654519.
JAMA Gül E, Öztürk S. ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ. NÖHÜ Müh. Bilim. Derg. 2020;9:705–714.
MLA Gül, Ertuğrul ve Serkan Öztürk. “ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ”. Niğde Ömer Halisdemir Üniversitesi Mühendislik Bilimleri Dergisi, c. 9, sy. 2, 2020, ss. 705-14, doi:10.28948/ngumuh.654519.
Vancouver Gül E, Öztürk S. ÖN EĞİTİMLİ EVRİŞİMLİ SİNİR AĞLARI DESTEKLİ GÖRÜNTÜ SAHTECİLİK TESPİTİ YÖNTEMİ. NÖHÜ Müh. Bilim. Derg. 2020;9(2):705-14.
