Research Article

Motion-Based Object Classification on UAV Images

Year 2023, Volume: 12 Issue: 2, 93 - 104, 17.11.2023

Abstract

Motion detection from video images is used for different purposes in various fields, especially security. However, detecting moving objects can be difficult, particularly when the camera itself is moving. This study aims to detect moving objects in video images and to classify the detected objects. The combined use of two different methods is proposed for detecting the moving object. In the first method, reference points are selected on the image to separate the moving parts from the camera motion. These points are tracked with optical flow across successive frames of the video, and motion vectors are formed from the tracked points. Vectors that differ from the others in slope and length are identified as belonging to the moving object. In the second method, the transformation matrix between frames is calculated and motion is detected from the video after background stabilization. Finally, the regions marked as moving by the intersection of the two methods are classified with a pre-trained VGG16 model to which a classifier layer is added. With this approach, moving objects in the VIVID Dataset were detected and classified correctly.
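As a rough illustration of the pipeline the abstract describes, the sketch below tracks reference points with sparse optical flow, flags motion vectors whose length or angle deviates from the rest, performs a homography-based background stabilization before thresholding the frame difference, and uses a frozen VGG16 backbone with an added classifier head for the classification stage. This is a minimal sketch under assumed OpenCV/Keras APIs; the function names, thresholds, and head architecture are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions: OpenCV and Keras APIs, grayscale frames,
# hand-picked thresholds). Illustrative only, not the authors' implementation.
import cv2
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16


def motion_vectors(prev_gray, curr_gray):
    """Track Shi-Tomasi corners with Lucas-Kanade optical flow and return
    the tracked points together with their displacement vectors."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    old = pts[ok].reshape(-1, 2)
    new = nxt[ok].reshape(-1, 2)
    return old, new - old


def moving_point_mask(vecs, k=2.5):
    """Flag vectors whose length or angle deviates strongly from the median;
    these are treated as belonging to moving objects, the rest to camera motion."""
    length = np.linalg.norm(vecs, axis=1)
    angle = np.arctan2(vecs[:, 1], vecs[:, 0])

    def deviation(x):
        med = np.median(x)
        mad = np.median(np.abs(x - med)) + 1e-6
        return np.abs(x - med) / mad

    return (deviation(length) > k) | (deviation(angle) > k)


def stabilized_motion_mask(prev_gray, curr_gray, diff_thresh=25):
    """Estimate the inter-frame homography from the tracked points, warp the
    previous frame onto the current one (background stabilization), and
    threshold the absolute difference to obtain a motion mask."""
    old, vecs = motion_vectors(prev_gray, curr_gray)
    H, _ = cv2.findHomography(old, old + vecs, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev_gray, H, prev_gray.shape[::-1])
    diff = cv2.absdiff(curr_gray, warped)
    return cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)[1]


def build_classifier(num_classes):
    """Frozen, ImageNet-pretrained VGG16 backbone with an added classifier head."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    return models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```

In use, the point-level outliers from moving_point_mask and the pixel-level mask from stabilized_motion_mask would be intersected, and the resulting candidate regions cropped, resized to 224x224, and passed to the classifier; the thresholds and classifier head above are placeholders, not values reported in the paper.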

References

  • Alkanat, T., Emre T., Sinan Ö., 2015. A Real-time, Automatic Target Detection and Tracking Method for Variable Number of Targets in Airborne Imagery, In VISAPP (2) 61-69.
  • Bal M., Karakaya İ., Başeske E., 2017. Point based motion detection on UAV cameras, 2017 25th Signal Processing and Communications Applications Conference (SIU), 1–4. https://doi.org/10.1109/SIU.2017.7960468.
  • Bulut F., 2017. A new clinical decision support system with instance based ensemble classifiers, Journal of the Faculty of Engineering and Architecture of Gazi University, 32 (1) 65-76.
  • Carmona, E. J., Martínez-Cantos, J., Mira, J., 2008. A new video segmentation method of moving objects based on blob-level knowledge, Pattern Recognition Letters, 29 (3) 272-285.
  • Chapel M-N, Bouwmans T., 2020. Moving objects detection with a moving camera: A comprehensive review, Computer Science Review, 38, 100310. https://doi.org/10.1016/j.cosrev.2020.100310.
  • Chollet, F., 2018. Deep Learning with Python. Shelter Island, NY: Manning Publications.
  • Da Rocha, D. A., Ferreira, F. M. F., Peixoto, Z. M. A., 2022. Diabetic retinopathy classification using VGG16 neural network, Research on Biomedical Engineering, 38 (2) 761-772.
  • Delibaşoğlu İ., 2022. Vehicle Detection from Aerial Images with Object and Motion Detection, Turkish Journal of Mathematics and Computer Science, 14, 174-183. https://doi.org/10.47000/tjmcs.1002767.
  • Fischer, P., Dosovitskiy, A., Ilg, E., Häusser, P., Hazırbaş, C., Golkov, V., Van der Smagt, P., Cremers, D., Brox, T., 2015. FlowNet: Learning optical flow with convolutional networks, In Proceedings of the IEEE International Conference on Computer Vision.
  • Girshick, R., 2015. Fast R-CNN, In Proceedings of the IEEE International Conference on Computer Vision, 1440-1448.
  • Jodoin, P. M., Mignotte, M., 2009. Optical-flow based on an edge-avoidance procedure, Computer Vision and Image Understanding, 113 (4) 511-531.
  • Kriegel, H. P., Kroger, P., Schubert, E., Zimek, A., 2011. Interpreting and unifying outlier scores, In Proceedings of the 2011 SIAM International Conference on Data Mining, Society for Industrial and Applied Mathematics 13-24.
  • Lin, C., 2019. Introduction to Motion Estimation with Optical Flow. Available at: https://nanonets.com/blog/optical-flow/. [Accessed 11 August 2021].
  • Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., Berg, A. C., 2016. SSD: Single Shot MultiBox Detector, In European Conference on Computer Vision, Springer, Cham, 21-37.
  • Logoglu, K. B., Lezki, H., Kerim Yucel, M., Ozturk, A., Kucukkomurler, A., Karagoz, B., Erdem, E., Erdem, A., 2017. Feature-based efficient moving object detection for low-altitude aerial platforms, In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2119-2128.
  • Pu, Y., Yang, H., Ma, X., Sun, X., 2019. Recognition of Voltage Sag Sources Based on Phase Space Reconstruction and Improved VGG Transfer Learning. Entropy, 21 (10) 999.
  • Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: Unified, real-time object detection, In Proceedings of the IEEE conference on computer vision and pattern recognition 779-788.
  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., 2015. ImageNet large scale visual recognition challenge, International Journal of Computer Vision, 115 (3) 211-252.
  • Teed, Z., Deng, J., 2020. RAFT: Recurrent all-pairs field transforms for optical flow, In European Conference on Computer Vision, Springer, Cham, 402-419.
  • Zhu J., Wang Z., Wang S., Chen S., 2020. Moving Object Detection Based on Background Compensation and Deep Learning, Symmetry, 12 (12) 1965. https://doi.org/10.3390/sym12121965.

Details

Primary Language English
Subjects Image Processing
Journal Section Research Articles
Authors

Ahmet Er (ORCID: 0000-0002-6624-9306)

Abdullah Yavuz

Early Pub Date September 30, 2023
Publication Date November 17, 2023
Published in Issue Year 2023 Volume: 12 Issue: 2

Cite

APA Er, A., & Yavuz, A. (2023). Motion-Based Object Classification on UAV Images. Gaziosmanpaşa Bilimsel Araştırma Dergisi, 12(2), 93-104.
AMA Er A, Yavuz A. Motion-Based Object Classification on UAV Images. GBAD. November 2023;12(2):93-104.
Chicago Er, Ahmet, and Abdullah Yavuz. “Motion-Based Object Classification on UAV Images”. Gaziosmanpaşa Bilimsel Araştırma Dergisi 12, no. 2 (November 2023): 93-104.
EndNote Er A, Yavuz A (November 1, 2023) Motion-Based Object Classification on UAV Images. Gaziosmanpaşa Bilimsel Araştırma Dergisi 12 2 93–104.
IEEE A. Er and A. Yavuz, “Motion-Based Object Classification on UAV Images”, GBAD, vol. 12, no. 2, pp. 93–104, 2023.
ISNAD Er, Ahmet - Yavuz, Abdullah. “Motion-Based Object Classification on UAV Images”. Gaziosmanpaşa Bilimsel Araştırma Dergisi 12/2 (November 2023), 93-104.
JAMA Er A, Yavuz A. Motion-Based Object Classification on UAV Images. GBAD. 2023;12:93–104.
MLA Er, Ahmet and Abdullah Yavuz. “Motion-Based Object Classification on UAV Images”. Gaziosmanpaşa Bilimsel Araştırma Dergisi, vol. 12, no. 2, 2023, pp. 93-104.
Vancouver Er A, Yavuz A. Motion-Based Object Classification on UAV Images. GBAD. 2023;12(2):93-104.