Research Article

Dog Behavior Recognition and Tracking based on Faster R-CNN

Year 2020, Volume: 35, Issue: 2, 819-834, 25.12.2019
https://doi.org/10.17341/gazimmfd.541677

Abstract

Recently, the detection and recognition of animal faces, body postures, behaviors, and physical movements has become an interdisciplinary field. Computer vision methods can contribute to determining the behaviors of animals and predicting their subsequent behaviors, and they can also support the domestication of animals. In this study, a deep learning based system is proposed for the detection and classification of dog behaviors. First, a dataset is created by collecting videos containing the behaviors of dogs that do not avoid contact with people. After the necessary analysis of the collected videos, the determined behaviors are extracted from the videos to build a customized dataset consisting of more meaningful sections. Key frames are selected from these customized video sections, and the behaviors are recognized with Faster R-CNN (Faster Region-based Convolutional Neural Networks). In the last stage, after the dog's behavior is recognized, the related behaviors are followed in the video by a tracker. As a result of the experimental studies, dog behaviors such as opening the mouth, sticking out the tongue, sniffing, pricking up the ears, wagging the tail, and playing are examined, and accuracy rates of 94.00%, 98.00%, 99.33%, 99.33%, 98.00%, and 98.67% are obtained for these behaviors, respectively. These results show that the proposed method, based on key frame selection and determination of regions of interest, is successful in recognizing dog behaviors.
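
As a rough illustration of the recognition stage summarized above, the sketch below samples key frames from a behavior clip and passes them through a Faster R-CNN detector. It is only a minimal Python example under stated assumptions: the fixed-step frame sampling, the behavior label list, and the COCO-pretrained torchvision model standing in for the paper's fine-tuned network are illustrative choices, not the implementation used in the study.

```python
# Hypothetical sketch: key-frame sampling + Faster R-CNN detection on the sampled frames.
# The label set and the pretrained weights below are assumptions for illustration only.
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

BEHAVIORS = ["mouth_opening", "tongue_out", "sniffing",
             "ear_pricking", "tail_wagging", "playing"]   # assumed class names

def sample_key_frames(video_path, step=15):
    """Keep every `step`-th frame as a simple key-frame heuristic."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames

# A COCO-pretrained detector stands in for the paper's fine-tuned model
# (older torchvision versions use pretrained=True instead of weights="DEFAULT").
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_regions(frames, score_thresh=0.7):
    """Run the detector on each key frame and keep confident region proposals."""
    results = []
    with torch.no_grad():
        for frame in frames:
            pred = model([to_tensor(frame)])[0]
            keep = pred["scores"] > score_thresh
            results.append({"boxes": pred["boxes"][keep],
                            "labels": pred["labels"][keep],
                            "scores": pred["scores"][keep]})
    return results
```

In a full pipeline the detector would be fine-tuned on the customized dog-behavior dataset so that each detected region is classified into one of the six behaviors listed above.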

References

  • [1] Weisbord M. and Kachanoff K., Dogs with jobs: working dogs around the world: Simon and Schuster, 2000.
  • [2] Prato-Previde E., Nicotra V., Pelosi A., and Valsecchi P., Pet dogs’ behavior when the owner and an unfamiliar person attend to a faux rival, PLoS ONE, 13, e0194577, 18 April, 2018.
  • [3] Pan Y., Landsberg G., Mougeot I., Kelly S., Xu H., Bhatnagar S., et al., Efficacy of a therapeutic diet on dogs with signs of cognitive dysfunction syndrome (CDS): A prospective double blinded placebo controlled clinical study, Frontiers in Nutrition, 5 (127), 2018.
  • [4] Lindsay S. R., Handbook of applied dog behavior and training, adaptation and learning, 1, John Wiley & Sons, 2013.
  • [5] Peterson J. C., Soulos P., Nematzadeh A., and Griffiths T. L., Learning Hierarchical Visual Representations In Deep Neural Networks Using Hierarchical Linguistic Labels, arXiv preprint arXiv:1805.07647, 19 May, 2018.
  • [6] Byosiere S.-E., Chouinard P. A., Howell T. J., and Bennett P. C., What do dogs (Canis familiaris) see? A review of vision in dogs and implications for cognition research, Psychonomic Bulletin & Review, 25 (5), 1798-1813, October, 2018.
  • [7] Aenishaenslin C., Brunet P., Lévesque F., Gouin G. G., Simon A., Saint-Charles J., et al., Understanding the Connections Between Dogs, Health and Inuit Through a Mixed-Methods Study, EcoHealth, 1-10, 14 December, 2018.
  • [8] Ladha C., Hammerla N., Hughes E., Olivier P., and Plötz T., Dog’s Life: Wearable Activity Recognition for Dogs, 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Switzerland, 415-418, 2013.
  • [9] Leos-Barajas V., Photopoulou T., Langrock R., Patterson T. A., Watanabe Y. Y., Murgatroyd M., et al., Analysis of animal accelerometer data using hidden Markov models, Methods in Ecology and Evolution, 8 (2), 161–173, 2017.
  • [10] Gerencsér L., Vásárhelyi G., Nagy M., Vicsek T., and Miklósi A., Identification of behaviour in freely moving dogs (Canis familiaris) using inertial sensors, PLoS ONE, 8 (10), e77814, 2013.
  • [11] Brugarolas R., Loftin R. T., Yang P., Roberts D. L., Sherman B., and Bozkurt A., Behavior recognition based on machine learning algorithms for a wireless canine machine interface, 2013 IEEE International Conference on Body Sensor Networks, Cambridge-Ma-Usa ,1-5, 6-9 May, 2013.
  • [12] Sağıroğlu Ş. and Koç O., Büyük Veri ve Açık Veri Analitiği: Yöntemler ve Uygulamalar, Gazi Üniversitesi Big Data Center, Ankara, Turkey, 2017.
  • [13] Huang H., Zhou H., Yang X., Zhang L., Qi L., and Zang A.-Y., Faster R-CNN for Marine Organisms Detection and Recognition Using Data Augmentation, Neurocomputing, 2019.
  • [14] Yang Q., Xiao D., and Lin S., Feeding behavior recognition for group-housed pigs with the Faster R-CNN, Computers and Electronics in Agriculture, 155, 453-460, 2018.
  • [15] Wang D., Tang J., Zhu W., Li H., Xin J., and He D., Dairy goat detection based on Faster R-CNN from surveillance video, Computers and Electronics in Agriculture, 154, 443-449, 2018.
  • [16] Zhao X., Wu Y., Song G., Li Z., Zhang Y., and Fan Y., A deep learning model integrating FCNNs and CRFs for brain tumor segmentation, Medical image analysis, 43, 98-111, 2018.
  • [17] Sharma H., Zerbe N., Klempert I., Hellwich O., and Hufnagl P., Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology, Computerized Medical Imaging and Graphics, 61, 2-13, 2017.
  • [18] Havaei M., Davy A., Warde-Farley D., Biard A., Courville A., Bengio Y., et al., Brain tumor segmentation with deep neural networks, Medical image analysis, 35, 18-31, 2017.
  • [19] Kamnitsas K., Ledig C., Newcombe V. F., Simpson J. P., Kane A. D., Menon D. K., et al., Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation, Medical image analysis, 36, 61-78, 2017.
  • [20] Pirsiavash H. and Ramanan D., "Detecting activities of daily living in first-person camera views," 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence-RI-USA, 2847-2854, 16-21 June, 2012.
  • [21] Hammerla N. Y., Halloran S., and Plötz T., Deep, convolutional, and recurrent models for human activity recognition using wearables, IJCAI'16 Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York-USA, 1533-1540, 09-15 July, 2016.
  • [22] Cho Y., Nam Y., Choi Y.-J., and Cho W.-D., SmartBuckle: human activity recognition using a 3-axis accelerometer and a wearable camera, the 2nd International Workshop on Systems and Networking Support for Health Care and Assisted Living Environments, 7, 2008.
  • [23] Fathi A., Farhadi A., and Rehg J. M., Understanding egocentric activities, IEEE International Conference on Computer Vision, Barcelona-Spain , 407-414, 6-13 November, 2011.
  • [24] Iwashita Y., Takamine A., Kurazume R., and Ryoo M. S., First-person animal activity recognition from egocentric videos, 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 4310-4315, 24-28 August, 2014.
  • [25] Dodge S. and Karam L., A Study And Comparison Of Human And Deep Learning Recognition Performance Under Visual Distortions, 26th International Conference On Computer Communication And Networks (ICCCN), Vancouver, BC, Canada, 1-7, 31 July-3 August, 2017.
  • [26] Ehsani K., Bagherinezhad H., Redmon J., Mottaghi R., and Farhadi A., Who Let The Dogs Out? Modeling Dog Behavior From Visual Data, IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City-Ut-Usa , 4051-4060, 18-23 June, 2018.
  • [27] Zhou J., Li Z., Zhi W., Liang B., Moses D., and Dawes L., Using Convolutional Neural Networks And Transfer Learning For Bone Age Classification, International Conference on Digital Image Computing: Techniques and Applications (DICTA), Sydney-NSW-Australia, 1-6, 29 November-1 December, 2017.
  • [28] Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., et al., Going Deeper With Convolutions, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston-Usa, 1-9, 7-12 June, 2015.
  • [29] Ardö H., Guzhva O., Nilsson M., and Herlin A. H., Convolutional neural network-based cow interaction watchdog, IET Computer Vision, 12 (2), 171-177, 2018.
  • [30] Kang K., Li H., Yan J., Zeng X., Yang B., Xiao T., et al., T-cnn: Tubelets with convolutional neural networks for object detection from videos, IEEE Transactions on Circuits and Systems for Video Technology, 28 (10), 2896-2907, 2018.
  • [31] Girshick R., Donahue J., Darrell T., and Malik J., Rich Feature Hierarchies For Accurate Object Detection And Semantic Segmentation, Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Washington DC-USA, 580-587, 23-28 June, 2014.
  • [32] Ren S., He K., Girshick R., and Sun J., Faster R-CNN: Towards Real-Time Object Detection With Region Proposal Networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39 (6), 91-99, 2015.
  • [33] Zheng C., Zhu X., Yang X., Wang L., Tu S., and Xue Y., Automatic recognition of lactating sow postures from depth images by deep learning detector, Computers and Electronics in Agriculture, 147, 51-63, 2018.
  • [34] Bulling A., Blanke U., and Schiele B., A tutorial on human activity recognition using body-worn inertial sensors, ACM Computing Surveys (CSUR), 46 (3), 33, 2014.
  • [35] Modern Dog Magazine. The Lifestyle Magazine for Modern Dogs and Their Companions. Available: https://moderndogmagazine.com/. Published 2012. Accessed March 10, 2019.
  • [36] LeCun Y., Bottou L., Bengio Y., and Haffner P., Gradient-based learning applied to document recognition, Proceedings of the IEEE, 86 (11), 2278-2324, 1998.
  • [37] Krizhevsky A., Sutskever I., and Hinton G. E., Imagenet classification with deep convolutional neural networks, Advances in neural information processing systems, 60 (6) , 84-90, 2017.
  • [38] Karpathy A., Toderici G., Shetty S., Leung T., Sukthankar R., and Fei-Fei L., Large-scale video classification with convolutional neural networks, 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus-OH-Usa, 1725-1732, 23-28 June, 2014.
  • [39] Koppula H. S. and Saxena A., Anticipating human activities using object affordances for reactive robotic response, IEEE Transactions on Pattern Analysis And Machine Intelligence, 38 (1), 14-29, 2016.
  • [40] Kitani K. M., Ziebart B. D., Bagnell J. A., and Hebert M., Activity forecasting, Proceedings of the 12th European conference on Computer Vision, Florence-Italy, 201-214, 7-13 October, 2012.
  • [41] Lan T., Chen T.-C., and Savarese S., A hierarchical representation for future action prediction, 13th European Conference on Computer Vision, Zurich, Switzerland, 689-704, 6-12 September, 2014.
  • [42] Liu Y. H., Feature Extraction and Image Recognition with Convolutional Neural Networks, in Journal of Physics Conference Series, 1087 (6), 062032, 2018.
  • [43] Lu Y., Yi S., Zeng N., Liu Y., and Zhang Y., Identification Of Rice Diseases Using Deep Convolutional Neural Networks, Neurocomputing, 267, 378-384, 2017.
  • [44] Ali A., Hanbay D., Bölgesel Evrişimsel Sinir Ağları Tabanlı MR Görüntülerinde Tümör Tespiti, Gazi Üniversitesi Mühendislik-Mimarlık Fakültesi Dergisi, 2018.
  • [45] Weiss K., Khoshgoftaar T. M., and Wang D., A survey of transfer learning, Journal of Big Data, 3, 9, 2016.
  • [46] CIFAR-10. (11/03/2019). Available: https://www.cs.toronto.edu/~kriz/cifar.html
  • [47] Kurt F., "Evrişimli Sinir Ağlarında Hiper Parametrelerin Etkisinin İncelenmesi," Yüksek Lisans Tezi, Hacettepe Üniversitesi, Eğitim Bilimleri Enstitüsü, Ankara, 2018.

Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi


Abstract

The detection and recognition of animal faces, body postures, behaviors, and physical movements has recently come to the fore as an interdisciplinary field. Computer vision methods can contribute to detecting the behaviors of animals, predicting their subsequent behaviors, and domesticating animals. In this study, a deep learning based system is proposed for the detection and classification of dog behaviors. First, a dataset is created by collecting videos containing the behaviors of dogs that do not avoid contact with people. After the necessary analysis of the collected videos, the determined behaviors are extracted from the videos and a customized dataset consisting of more meaningful sections is built. Key frames are selected from these meaningful video sections and the behaviors are recognized with Faster Region-based Convolutional Neural Networks (Faster R-CNN). In the last stage, after the dog's behavior is recognized, the related behaviors are tracked in the video with a tracker. As a result of the experimental studies, the mouth opening, tongue out, sniffing, ear pricking, tail wagging, and playing behaviors of dogs are examined, and accuracy rates of 94.00%, 98.00%, 99.33%, 99.33%, 98.00%, and 98.67% are obtained for these behaviors, respectively. The results obtained in the study show that the proposed method, based on key frame selection and determination of regions of interest, is successful in recognizing dog behaviors.
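
The tracking stage described above can be sketched in a similarly hedged way: once a behavior region has been detected in a key frame, a generic single-object tracker can follow that region through the rest of the clip. The CSRT tracker and the (x, y, w, h) box initialization below are illustrative assumptions, not the specific tracker used in the study.

```python
# Hypothetical sketch: follow a detected behavior region with an OpenCV single-object tracker.
import cv2

def track_behavior(video_path, init_box, start_frame=0):
    """Track a detected (x, y, w, h) region from `start_frame` to the end of the clip."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read the initial frame")

    tracker = cv2.TrackerCSRT_create()   # some builds expose cv2.legacy.TrackerCSRT_create()
    tracker.init(frame, init_box)        # init_box comes from the Faster R-CNN detection

    tracked_boxes = [tuple(init_box)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, box = tracker.update(frame)
        if not ok:
            break                        # tracking lost; a real system would re-detect
        tracked_boxes.append(tuple(int(v) for v in box))
    cap.release()
    return tracked_boxes
```

A complete system would periodically re-run the detector and re-initialize the tracker whenever the tracked region is lost or drifts.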


Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Emre Dandıl 0000-0001-6559-1399

Rukiye Polattimur 0000-0002-7939-2128

Publication Date: December 25, 2019
Submission Date: March 19, 2019
Acceptance Date: July 28, 2019
Published Issue: Year 2020, Volume: 35, Issue: 2

How to Cite

APA Dandıl, E., & Polattimur, R. (2019). Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, 35(2), 819-834. https://doi.org/10.17341/gazimmfd.541677
AMA Dandıl E, Polattimur R. Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi. GUMMFD. Aralık 2019;35(2):819-834. doi:10.17341/gazimmfd.541677
Chicago Dandıl, Emre, ve Rukiye Polattimur. “Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 35, sy. 2 (Aralık 2019): 819-34. https://doi.org/10.17341/gazimmfd.541677.
EndNote Dandıl E, Polattimur R (01 Aralık 2019) Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 35 2 819–834.
IEEE E. Dandıl ve R. Polattimur, “Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi”, GUMMFD, c. 35, sy. 2, ss. 819–834, 2019, doi: 10.17341/gazimmfd.541677.
ISNAD Dandıl, Emre - Polattimur, Rukiye. “Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 35/2 (Aralık 2019), 819-834. https://doi.org/10.17341/gazimmfd.541677.
JAMA Dandıl E, Polattimur R. Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi. GUMMFD. 2019;35:819–834.
MLA Dandıl, Emre ve Rukiye Polattimur. “Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, c. 35, sy. 2, 2019, ss. 819-34, doi:10.17341/gazimmfd.541677.
Vancouver Dandıl E, Polattimur R. Daha hızlı bölgesel evrişimsel sinir ağları ile köpek davranışlarının tanınması ve takibi. GUMMFD. 2019;35(2):819-34.