Research Article

Object Detection on Camera-Trap Images with Local Features

Year 2019, Volume: 9, Issue: 4, 633-644, 15.10.2019
https://doi.org/10.17714/gumusfenbil.510717

Abstract

Camera traps are devices widely used to monitor the behavior of animals living in their natural habitat. Detecting objects (animals or humans) in the natural images and video recorded by these devices is a difficult problem because of cluttered backgrounds, insufficient light intensity, changes in illumination, and partially visible objects. In addition, the motion of the object makes it harder to determine its position within the image. Because the local features that have come into widespread use in recent years carry location information, they address the localization problem directly, and the invariances built into local feature transform methods, such as invariance to scale, rotation, affine transformation, and illumination change, allow more reliable detection. In this study, object detection on camera-trap images is performed by using the local feature transform methods Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Binary Robust Independent Elementary Features (BRIEF), and Oriented FAST and Rotated BRIEF (ORB) within feature matching. Percentile- and median-based outlier detection and k-nearest-neighbor feature elimination are used to discard incorrect local feature matches. The study examines the object detection accuracy obtained with each feature transform method, the number of matched features, the bounding box sizes, the number of eliminated features, and the effect of these factors on detection performance.
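
To make the pipeline concrete, the following is a minimal sketch in Python of local feature extraction and k-nearest-neighbor matching with ratio-based elimination, assuming OpenCV (cv2) is available. It illustrates the general technique, not the authors' implementation; the file names, the choice of SIFT, and the 0.75 ratio threshold are assumptions.

import cv2

# Load a reference image of the target object and a camera-trap scene.
# The file names are placeholders, not the data set used in the paper.
template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("camera_trap_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Local feature detector/descriptor (SIFT needs OpenCV >= 4.4);
# cv2.ORB_create() could be substituted for a binary descriptor.
detector = cv2.SIFT_create()
kp_t, des_t = detector.detectAndCompute(template, None)
kp_s, des_s = detector.detectAndCompute(scene, None)

# Brute-force k-nearest-neighbor matching (k=2).
matcher = cv2.BFMatcher(cv2.NORM_L2)   # use NORM_HAMMING for ORB/BRIEF
knn_matches = matcher.knnMatch(des_t, des_s, k=2)

# Ratio-style elimination: keep a match only when its best neighbor is
# clearly closer than the second best (0.75 is an assumed threshold).
good = []
for pair in knn_matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} of {len(knn_matches)} matches kept")

The surviving matches expose scene coordinates through kp_s[m.trainIdx].pt, which can then be filtered and enclosed in a bounding box as sketched next.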

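The abstract also mentions percentile- and median-based outlier elimination before the bounding box is formed. The sketch below applies an interquartile-range rule to the matched scene coordinates and returns the bounding box of the surviving points; the IQR test and the factor 1.5 are illustrative assumptions rather than the authors' exact criterion, and a median-absolute-deviation test would be the corresponding median-based variant.

import numpy as np

def outlier_filtered_box(points, k=1.5):
    """Drop outlying match coordinates with a percentile (IQR) rule and
    return the bounding box of the inliers. k=1.5 is an assumed factor."""
    pts = np.asarray(points, dtype=float)           # shape (N, 2): x, y columns
    q1, q3 = np.percentile(pts, [25, 75], axis=0)   # per-axis quartiles
    iqr = q3 - q1
    keep = np.all((pts >= q1 - k * iqr) & (pts <= q3 + k * iqr), axis=1)
    inliers = pts[keep]
    x_min, y_min = inliers.min(axis=0)
    x_max, y_max = inliers.max(axis=0)
    return inliers, (x_min, y_min, x_max, y_max)

# Example: scene coordinates of the matches kept by the ratio test above.
# scene_pts = [kp_s[m.trainIdx].pt for m in good]
# inliers, bbox = outlier_filtered_box(scene_pts)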

References

  • Andavarapu, N., & Vatsavayi, V. K. (2017). Wild-Animal Recognition in Agriculture Farms Using W-COHOG for Agro-Security. International Journal of Computational Intelligence Research, 13(9), 2247-2257.
  • Bay, H., Tuytelaars, T., & Van Gool, L. (2006, May). Surf: Speeded up robust features. In European conference on computer vision (pp. 404-417). Springer, Berlin, Heidelberg.
  • Calonder, M., Lepetit, V., Strecha, C., & Fua, P. (2010, September). Brief: Binary robust independent elementary features. In European conference on computer vision (pp. 778-792). Springer, Berlin, Heidelberg.
  • Chen, Q., Song, Z., Dong, J., Huang, Z., Hua, Y., & Yan, S. (2015). Contextualizing object detection and classification. IEEE transactions on pattern analysis and machine intelligence, 37(1), 13-27.
  • Karakuş, P., & Karabörk, H. (2014, October). Surf algoritması kullanılarak uzaktan algılama görüntülerinin geometrik kaydı. 5. Uzaktan Algılama-CBS Sempozyumu (UZAL-CBS 2014), 14-17 Ekim 2014, İstanbul.
  • Karami, E., Prasad, S., & Shehata, M. (2017). Image matching using SIFT, SURF, BRIEF and ORB: performance comparison for distorted images. arXiv preprint arXiv:1710.02726.
  • Karami, E., Shehata, M., & Smith, A. (2017). Image Identification Using SIFT Algorithm: Performance Analysis against Different Image Deformations. arXiv preprint arXiv:1710.02728.
  • Kays, R., Tilak, S., Kranstauber, B., Jansen, P. A., Carbone, C., Rowcliffe, M. J., ... & He, Z. (2010). Monitoring wild animal communities with arrays of motion sensitive camera traps. arXiv preprint arXiv:1009.5718.
  • Khorrami, P., Wang, J., & Huang, T. (2012, November). Multiple animal species detection using robust principal component analysis and large displacement optical flow. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR), Workshop on Visual Observation and Analysis of Animal and Insect Behavior (pp. 11-15).
  • Lisin, D. A., Mattar, M. A., Blaschko, M. B., Learned-Miller, E. G., & Benfield, M. C. (2005, June). Combining local and global image features for object class recognition. In Computer vision and pattern recognition-workshops, 2005. CVPR workshops. IEEE Computer society conference on (pp. 47-47). IEEE.
  • Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on (Vol. 2, pp. 1150-1157). IEEE.
  • Lowe, D. G. (2001). Local feature view clustering for 3D object recognition. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on (Vol. 1, pp. I-I). IEEE.
  • Meek, P. D., Ballard, G., Claridge, A., Kays, R., Moseby, K., O’brien, T., ... & Townsend, S. (2014). Recommended guiding principles for reporting on camera trapping research. Biodiversity and conservation, 23(9), 2321-2343.
  • Murphy, K., Torralba, A., Eaton, D., & Freeman, W. (2006). Object detection and localization using local and global features. In Toward Category-Level Object Recognition (pp. 382-400). Springer, Berlin, Heidelberg.
  • Nguyen, H., Maclagan, S. J., Nguyen, T. D., Nguyen, T., Flemons, P., Andrews, K., ... & Phung, D. (2017, October). Animal recognition and identification with deep convolutional neural networks for automated wildlife monitoring. In Data Science and Advanced Analytics (DSAA), 2017 IEEE International Conference on (pp. 40-49). IEEE.
  • Norouzzadeh, M. S., Nguyen, A., Kosmala, M., Swanson, A., Palmer, M. S., Packer, C., & Clune, J. (2018). Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences, 201719367.
  • Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011, November). ORB: An efficient alternative to SIFT or SURF. In Computer Vision (ICCV), 2011 IEEE international conference on (pp. 2564-2571). IEEE.
  • Se, S., Lowe, D., & Little, J. (2002). Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks. The International Journal of Robotics Research, 21(8), 735-758.
  • Snavely, N., Seitz, S. M., & Szeliski, R. (2008, June). Skeletal graphs for efficient structure from motion. In CVPR (Vol. 1, p. 2).
  • Tuytelaars, T., & Mikolajczyk, K. (2008). Local invariant feature detectors: a survey. Foundations and trends® in computer graphics and vision, 3(3), 177-280.
  • Yu, X., Wang, J., Kays, R., Jansen, P. A., Wang, T., & Huang, T. (2013). Automated identification of animal species in camera trap images. EURASIP Journal on Image and Video Processing, 2013(1), 52.
  • Zhang, J., Marszałek, M., Lazebnik, S., & Schmid, C. (2007). Local features and kernels for classification of texture and object categories: A comprehensive study. International journal of computer vision, 73(2), 213-238.
  • Zhang, Z., He, Z., Cao, G., & Cao, W. (2016). Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification. IEEE Transactions on Multimedia, 18(10), 2079-2092.
  • Zitnick, C. L., & Dollár, P. (2014, September). Edge boxes: Locating object proposals from edges. In European conference on computer vision (pp. 391-405). Springer, Cham.


Details

Primary Language Turkish
Subjects Engineering
Section Articles
Authors

Emrah Şimşek 0000-0002-1652-9553

Barış Özyer 0000-0003-0117-6983

Gülşah Tümüklü Özyer 0000-0002-0596-0065

Publication Date 15 October 2019
Submission Date 9 January 2019
Acceptance Date 19 June 2019
Published Issue Year 2019, Volume: 9, Issue: 4

Cite

APA Şimşek, E., Özyer, B., & Tümüklü Özyer, G. (2019). Fotokapan Görüntülerinde Yerel Öznitelikler ile Nesne Tespiti. Gümüşhane Üniversitesi Fen Bilimleri Dergisi, 9(4), 633-644. https://doi.org/10.17714/gumusfenbil.510717