Research Article

Moving Object Detection and Localization in Camera-Trap Images

Year 2019, Volume: 12 Issue: 2, 902 - 919, 31.08.2019
https://doi.org/10.18185/erzifbed.509571

Abstract



Camera-traps are imaging devices, usually placed at a fixed point in forest
land, that are used to monitor natural life. Millions of images are recorded
with camera-traps to study the natural life of animals. Computer-based
automatic methods are being developed to detect and identify animals in these
recorded images. However, problems such as background complexity, moving
backgrounds, changes in light intensity, and fragmentation of the object make
moving-object detection in camera-trap images difficult. In the literature,
classification-based methods for this purpose use model images of moving
objects, obtained manually, as prior information. Manually detecting and
cropping model images of the objects is a difficult, laborious, and
time-consuming process that requires a heavy workload. To reduce this
workload, our study aims to automatically detect moving objects and determine
their locations in camera-trap images obtained from the natural environment.
For this purpose, the proposed method applies background subtraction and
frame-difference methods to the images. Gaussian Average and Mixture of
Gaussians were used to build the background model; Gaussian blur and a median
filter were used to reduce noise and sharpen the objects; and Otsu
thresholding was used to eliminate errors in the detected foreground. On the
camera-trap data sets, the moving-object detection success was 82% and the
object localization success was 80%.
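The background-subtraction step described above can be illustrated with a minimal NumPy sketch of a running (Gaussian) average background model and per-frame differencing. The synthetic frames, learning rate `alpha`, and difference threshold below are illustrative assumptions, not the paper's actual data or parameters:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running (Gaussian) average: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Mark pixels whose absolute difference from the background model exceeds a threshold."""
    return np.abs(frame - bg) > thresh

# Synthetic sequence: flat background (gray level 50) with a bright square
# moving left to right -- a stand-in for an animal crossing the scene.
h, w = 64, 64
frames = []
for t in range(10):
    f = np.full((h, w), 50.0)
    f[20:30, 5 + 5 * t: 15 + 5 * t] = 200.0
    frames.append(f)

bg = np.full((h, w), 50.0)  # bootstrap from an (assumed) object-free frame
for f in frames:
    mask = foreground_mask(bg, f)  # detect against the model, then adapt it
    bg = update_background(bg, f)

# Localize the detection in the last frame with a bounding box.
ys, xs = np.nonzero(mask)
box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
print(box)  # -> (50, 20, 59, 29): the square's final position
```

Because the model adapts slowly (`alpha` small), ghost residue from earlier object positions stays below the threshold while the object's current position stands out.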


References

  • Kays, R., Tilak, S., Kranstauber, B., Jansen, P. A., Carbone, C., Rowcliffe, M. J., & He, Z. (2010). Monitoring wild animal communities with arrays of motion sensitive camera traps. arXiv preprint arXiv:1009.5718.
  • Yu, X., Wang, J., Kays, R., Jansen, P. A., Wang, T., & Huang, T. (2013). Automated identification of animal species in camera trap images. EURASIP Journal on Image and Video Processing, 2013(1), 52.
  • Nguyen, H., Maclagan, S. J., Nguyen, T. D., Nguyen, T., Flemons, P., Andrews, K., ... & Phung, D. (2017, October). Animal Recognition and Identification with Deep Convolutional Neural Networks for Automated Wildlife Monitoring. In Data Science and Advanced Analytics (DSAA), 2017 IEEE International Conference on (pp. 40-49). IEEE.
  • Norouzzadeh, M. S., Nguyen, A., Kosmala, M., Swanson, A., Palmer, M. S., Packer, C., & Clune, J. (2018). Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences, 201719367.
  • Meek, P. D., Ballard, G., Claridge, A., Kays, R., Moseby, K., O’Brien, T., & Townsend, S. (2014). Recommended guiding principles for reporting on camera trapping research. Biodiversity and Conservation, 23(9), 2321-2343.
  • Meek, P. D., Vernes, K., & Falzon, G. (2013). On the reliability of expert identification of small-medium sized mammals from camera trap photos. Wildlife Biology in Practice, 9(2), 1-19.
  • Kulchandani, J. S., & Dangarwala, K. J. (2015, January). Moving object detection: Review of recent research trends. In Pervasive Computing (ICPC), 2015 International Conference on (pp. 1-5). IEEE.
  • Piccardi, M. (2004, October). Background subtraction techniques: a review. In Systems, man and cybernetics, 2004 IEEE international conference on (Vol. 4, pp. 3099-3104). IEEE.
  • Jain, R., & Nagel, H. H. (1979). On the analysis of accumulative difference pictures from image sequences of real world scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, (2), 206-214.
  • Lin, K. H., Khorrami, P., Wang, J., Hasegawa-Johnson, M., & Huang, T. S. (2014, October). Foreground object detection in highly dynamic scenes using saliency. In Image Processing (ICIP), 2014 IEEE International Conference on (pp. 1125-1129). IEEE.
  • Andavarapu, N., & Vatsavayi, V. K. (2017). Wild-Animal Recognition in Agriculture Farms Using W-COHOG for Agro-Security. International Journal of Computational Intelligence Research, 13(9), 2247-2257.
  • Zhang, Z., He, Z., Cao, G., & Cao, W. (2016). Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification. IEEE Transactions on Multimedia, 18(10), 2079-2092.
  • Gonçalves, D. N., de Arruda, M. D. S., da Silva, L. A., Araujo, R. F. S., Machado, B. B., & Gonçalves, W. N. Recognition of Pantanal Animal Species using Convolutional Neural Networks.
  • He, Z., Kays, R., Zhang, Z., Ning, G., Huang, C., Han, T. X., ... & McShea, W. (2016). Visual informatics tools for supporting large-scale collaborative wildlife monitoring with citizen scientists. IEEE Circuits and Systems Magazine, 16(1), 73-86.
  • Khorrami, P., Wang, J., & Huang, T. (2012, November). Multiple animal species detection using robust principal component analysis and large displacement optical flow. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR), Workshop on Visual Observation and Analysis of Animal and Insect Behavior (pp. 11-15).
  • Chen, Q., Song, Z., Dong, J., Huang, Z., Hua, Y., & Yan, S. (2015). Contextualizing object detection and classification. IEEE transactions on pattern analysis and machine intelligence, 37(1), 13-27.
  • Hu, W., Tan, T., Wang, L., & Maybank, S. (2004). A survey on visual surveillance of object motion and behaviors. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 34(3), 334-352.
  • Sebastian, E., & Daniel, N. (2017). A Survey on Various Saliency Detection Methods. International Journal of Computer Applications, 161(5).
  • Nurhadiyatna, A., Jatmiko, W., Hardjono, B., Wibisono, A., Sina, I., & Mursanto, P. (2013, October). Background subtraction using gaussian mixture model enhanced by hole filling algorithm (gmmhf). In Systems, Man, and Cybernetics (SMC), 2013 IEEE International Conference on (pp. 4006-4011). IEEE.
  • Singla, N. (2014). Motion detection based on frame difference method. International Journal of Information & Computation Technology, 4(15), 1559-1565.
  • Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1), 62-66.
  • Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., & Tinevez, J. Y. (2012). Fiji: an open-source platform for biological-image analysis. Nature methods, 9(7), 676.
  • Suzuki, S. (1985). Topological structural analysis of digitized binary images by border following. Computer vision, graphics, and image processing, 30(1), 32-46.
  • Wren, C. R., Azarbayejani, A., Darrell, T., & Pentland, A. P. (1997). Pfinder: Real-time tracking of the human body. IEEE Transactions on Pattern Analysis & Machine Intelligence, (7), 780-785.
  • Lo, B. P. L., & Velastin, S. A. (2001). Automatic congestion detection system for underground platforms. In Intelligent Multimedia, Video and Speech Processing, 2001. Proceedings of 2001 International Symposium on (pp. 158-161). IEEE.
  • Stauffer, C., & Grimson, W. E. L. (1999, June). Adaptive background mixture models for real-time tracking. In Computer Vision and Pattern Recognition (CVPR), 1999 IEEE Computer Society Conference on (p. 2246). IEEE.
  • Elgammal, A., Harwood, D., & Davis, L. (2000, June). Non-parametric model for background subtraction. In European conference on computer vision (pp. 751-767). Springer, Berlin, Heidelberg.
  • Seki, M., Wada, T., Fujiwara, H., & Sumi, K. (2003, June). Background subtraction based on cooccurrence of image variations. In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on (Vol. 2, pp. II-II). IEEE.
  • Oliver, N. M., Rosario, B., & Pentland, A. P. (2000). A Bayesian computer vision system for modeling human interactions. IEEE transactions on pattern analysis and machine intelligence, 22(8), 831-843.
  • Zitnick, C. L., & Dollár, P. (2014, September). Edge boxes: Locating object proposals from edges. In European conference on computer vision (pp. 391-405). Springer, Cham.
  • Wang, B. (2014). Automatic Animal Species Identification Based on Camera Trapping Data (Thesis).

Foto-kapan Görüntülerinde Hareketli Nesne Tespiti ve Konumunun Belirlenmesi


Abstract

Camera-traps are imaging devices, usually placed at a fixed point in forest
land, that are used to monitor natural life. Millions of images are recorded
with camera-traps in order to study the natural life of animals.
Computer-based automatic methods are being developed to detect and identify
animals in these recorded images. However, problems such as background
complexity, moving backgrounds, changes in light intensity, and fragmentation
of the object make moving-object detection in camera-trap images difficult. In
the literature, studies for this purpose manually extract model images of
moving objects from the frames and use them as prior information in
classification-based methods. Manually detecting and cropping model images of
the objects is a difficult, laborious, and time-consuming process that
requires a heavy workload. To reduce this workload, our study automatically
detected moving objects in camera-trap images obtained from the natural
environment, without using prior information about the objects, and determined
the locations of the moving objects in the image. In the proposed method,
background subtraction and frame-difference methods were applied to the images
to detect moving objects. The Running Gaussian Average and Mixture of
Gaussians were used to build the background model; Gaussian blur and a median
filter were used to reduce noise and sharpen the objects; and Otsu
thresholding was used to eliminate errors in foreground detection. On the
camera-trap data sets, the moving-object detection success was 83% and the
object localization success was 80%.
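The Otsu thresholding and localization steps named above can be sketched in pure NumPy: Otsu's method searches for the gray level that maximizes the between-class variance of the histogram, and the resulting binary foreground yields a bounding box. The synthetic bimodal image and gray levels are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold that maximizes between-class variance (Otsu, 1979)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum()          # pixels below the candidate threshold
        w1 = total - w0              # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * hist[:t]).sum() / w0   # mean of the lower class
        m1 = (levels[t:] * hist[t:]).sum() / w1   # mean of the upper class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark background with one bright object.
img = np.full((32, 32), 40, dtype=np.uint8)
img[8:16, 10:20] = 200

thr = otsu_threshold(img)
mask = img >= thr                  # binary foreground
ys, xs = np.nonzero(mask)
box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
print(box)  # -> (10, 8, 19, 15): bounding box of the object
```

On a cleanly bimodal histogram like this one, the chosen threshold falls between the two modes, so the binarized mask isolates exactly the bright region.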


Details

Primary Language Turkish
Subjects Engineering
Journal Section Makaleler
Authors

Emrah Şimşek 0000-0002-1652-9553

Barış Özyer

Gülşah Tümüklü Özyer

Publication Date August 31, 2019
Published in Issue Year 2019 Volume: 12 Issue: 2

Cite

APA Şimşek, E., Özyer, B., & Tümüklü Özyer, G. (2019). Foto-kapan Görüntülerinde Hareketli Nesne Tespiti ve Konumunun Belirlenmesi. Erzincan University Journal of Science and Technology, 12(2), 902-919. https://doi.org/10.18185/erzifbed.509571

Cited By

Sending pictures over radio systems of trail cam in border security and directing UAVs to the right areas
Communications Faculty of Sciences University of Ankara Series A2-A3 Physical Sciences and Engineering
https://doi.org/10.33769/aupse.1438139