Research Article

Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması

Year 2022, Volume: 12 Issue: 3, 1250 - 1263, 01.09.2022
https://doi.org/10.21597/jist.1124674

Abstract

This study focuses on recognizing, from their vocalizations, the bird species frequently observed in the Aras River Bird Sanctuary in Iğdır. Deep learning methods were used for this purpose. Acoustic monitoring is carried out to examine and analyze biological diversity, using devices known as passive acoustic listeners/recorders. In general, various analyses are performed on the raw sound recordings collected with these devices. In this study, raw sound recordings obtained from birds were processed with methods we developed, and bird species were then classified with deep learning architectures. Classification was carried out on 22 bird species frequently observed in Aras Bird Sanctuary. The recordings were split into 10-second clips, which were then converted into one-second log mel spectrograms. Convolutional Neural Networks (CNN) and Long Short-Term Memory networks (LSTM), both deep learning architectures, were used as classifiers. In addition to these two models, transfer learning was also used: high-level feature vectors were extracted from the sounds with the pre-trained convolutional networks VGGish and YAMNet, and these vectors formed the input layers of the classifiers. Experiments yielded the accuracy rates and F1 scores of the four architectures on the recordings. The highest accuracy (acc) and F1 score, 94.2% and 92.8% respectively, were obtained with the classifier built on the VGGish model.
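As a concrete illustration of the preprocessing described in the abstract, the sketch below slices a 10-second clip into one-second frames and converts each frame into a log mel spectrogram. This is a minimal NumPy-only sketch: the sample rate (16 kHz), FFT size, hop length, and mel-band count are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

SR = 16000                           # assumed sample rate
N_FFT, HOP, N_MELS = 400, 160, 64    # assumed STFT/mel parameters

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular filters spaced evenly on the mel scale."""
    pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def log_mel(frame, sr=SR, n_fft=N_FFT, hop=HOP, n_mels=N_MELS):
    """Log mel spectrogram of a single one-second frame."""
    n_windows = 1 + (len(frame) - n_fft) // hop
    idx = hop * np.arange(n_windows)[:, None] + np.arange(n_fft)[None, :]
    windowed = frame[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(windowed, n_fft)) ** 2
    return np.log(power @ mel_filterbank(sr, n_fft, n_mels).T + 1e-6)

# A synthetic 10-second clip sliced into ten 1-second frames, each
# converted to a (time, mel) log spectrogram.
clip = np.random.default_rng(0).standard_normal(10 * SR)
frames = clip.reshape(10, SR)
spectrograms = np.stack([log_mel(f) for f in frames])
```

In practice, librosa (which the paper cites) provides this computation directly via `librosa.feature.melspectrogram` followed by `librosa.power_to_db`; the hand-rolled version above only shows what that pipeline does.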

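The transfer-learning variants described in the abstract replace the raw spectrogram input with embeddings from a pre-trained network (VGGish yields a 128-dimensional vector per roughly one-second frame). The sketch below trains a softmax classifier head on simulated embeddings; the data are synthetic and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "VGGish-style" embeddings: one 128-dim vector per frame.
# Real embeddings would come from the frozen pre-trained network; the
# class structure here is synthetic and purely illustrative.
N_CLASSES, DIM, N_PER_CLASS = 4, 128, 50
centers = 3.0 * rng.standard_normal((N_CLASSES, DIM))
X = np.vstack([c + rng.standard_normal((N_PER_CLASS, DIM)) for c in centers])
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm features
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# A single dense softmax layer on top of the frozen embeddings,
# trained with plain full-batch gradient descent.
W = np.zeros((DIM, N_CLASSES))
b = np.zeros(N_CLASSES)
onehot = np.eye(N_CLASSES)[y]
for _ in range(300):
    grad = (softmax(X @ W + b) - onehot) / len(X)
    W -= 1.0 * X.T @ grad
    b -= 1.0 * grad.sum(axis=0)

train_acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

Keeping the pre-trained network frozen and training only such a small head is the standard transfer-learning setup when, as here, the labeled dataset is far smaller than the corpus the embedding network was trained on.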
References

  • Abadi, M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado G. S, Davis A, Dean J, & Devin M. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016. arXiv preprint arXiv:1603.04467.
  • Aide T. M, Corrada-Bravo C, Campos-Cerqueira M, Milan C, Vega G, & Alvarez R. (2013). Real-time bioacoustics monitoring and automated species identification. PeerJ, 2013(1).
  • Akhtar N, & Mian A. (2018). Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. In IEEE Access (Vol. 6, pp. 14410–14430). Institute of Electrical and Electronics Engineers Inc.
  • Bardeli R, Wolff D, Kurth F, Koch M, Tauchert K. H, & Frommolt K. H. (2010). Detecting bird sounds in a complex acoustic environment and application to bioacoustic monitoring. Pattern Recognition Letters, 31(12), 1524–1534.
  • Barrowclough G. F, Cracraft J, Klicka J, & Zink R. M. (2016). How Many Kinds of Birds Are There and Why Does It Matter? PLOS ONE, 11(11), 1–15.
  • Bayat S, & Işık G. (2020). Identification of Aras Birds with Convolutional Neural Networks. 4th International Symposium on Multidisciplinary Studies and Innovative Technologies, ISMSIT 2020 - Proceedings.
  • Boersma P, & Weenink D. (2018). Praat: doing phonetics by computer [Computer program]. Version 6.0.43. Retrieved September 8, 2018.
  • Chalmers C, Fergus P, Wich S, & Longmore S. (2021). Modelling Animal Biodiversity Using Acoustic Monitoring and Deep Learning.
  • Cho K, van Merriënboer B, Bahdanau D, & Bengio Y. (2014). On the properties of neural machine translation: Encoder–decoder approaches. Proceedings of SSST 2014 - 8th Workshop on Syntax, Semantics and Structure in Statistical Translation.
  • Chollet F. (2015). Keras: The Python Deep Learning library. Keras.Io.
  • de Jong N. H, & Wempe T. (2009). Praat script to detect syllable nuclei and measure speech rate automatically. Behavior Research Methods, 41(2), 385–390.
  • Ferdiana R, Dicka W. F. & Boediman A. (2021). Cat sounds classification with convolutional neural network. International Journal on Electrical Engineering and Informatics.
  • Florentin J, Dutoit T, & Verlinden O. (2020). Detection and identification of European woodpeckers with deep convolutional neural networks. Ecological Informatics.
  • Gemmeke J. F, Ellis D. P. W, Freedman D, Jansen A, Lawrence W, Moore R. C, Plakal M, & Ritter M. (2017). Audio Set: An ontology and human-labeled dataset for audio events. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings.
  • Grill T, & Schluter J. (2017). Two convolutional neural networks for bird detection in audio signals. 25th European Signal Processing Conference, EUSIPCO 2017, 1764–1768.
  • Guo Y, Xu M, Wu Z, Wu J, & Su B. (2019). Multi-Scale Convolutional Recurrent Neural Network with Ensemble Method for Weakly Labeled Sound Event Detection. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2019.
  • He K, Zhang X, Ren S, & Sun J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
  • Hershey S, Chaudhuri S, Ellis D. P. W, Gemmeke J. F, Jansen A, Moore R. C, Plakal M, Platt D, Saurous R. A, Seybold B, Slaney M, Weiss R. J, & Wilson K. (2017). CNN architectures for large-scale audio classification. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings.
  • Hershey S, et al. (2016). Models for AudioSet: A large scale dataset of audio events. https://github.com/tensorflow/models/tree/master/research/audioset/vggish
  • Howard A. G, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, & Adam H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • Işık G, & Artuner H. (2020). Turkish Dialect Recognition Using Acoustic and Phonotactic Features in Deep Learning Architectures. Bilişim Teknolojileri Dergisi, 13, 207–216.
  • Jalal A, Salman A, Mian A, Shortis M, & Shafait F. (2020). Fish detection and species classification in underwater environments using deep learning with temporal information. Ecological Informatics.
  • Joly A, Goëau H, Glotin H, Spampinato C, Bonnet P, Vellinga W. P, Lombardo J. C, Planqué R, Palazzo S, & Müller H. (2017). LifeCLEF 2017 lab overview: Multimedia Species identification challenges. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
  • Joly A, Goëau H, Kahl S, Deneu B, Servajean M, Cole E, Picek L, Ruiz de Castañeda R, Bolon I, Durso A, Lorieul T, Botella C, Glotin H, Champ J, Eggel I, Vellinga W. P, Bonnet P, & Müller H. (2020). Overview of LifeCLEF 2020: A System-Oriented Evaluation of Automated Species Identification and Species Distribution Prediction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
  • Jung D. H, Kim N. Y, Moon S. H, Kim H. S, Lee T. S, Yang J. S, Lee J. Y, Han X, & Park S. H. (2021). Classification of Vocalization Recordings of Laying Hens and Cattle Using Convolutional Neural Network Models. Journal of Biosystems Engineering.
  • Kahl S, Wilhelm-Stein T, Hussein H, Klinck H, Kowerko D, Ritter M, & Eibl M. (2017). Large-scale bird sound classification using convolutional neural networks. CEUR Workshop Proceedings.
  • Kingma D. P, & Ba J. L. (2015). Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings.
  • Kuzeydoğa Derneği. Retrieved October 10, 2020, from https://kuzeydoga.net/
  • LeBien J, Zhong M, Campos-Cerqueira M, Velev J. P, Dodhia R, Ferres J. L, & Aide T. M. (2020). A pipeline for identification of bird and frog species in tropical soundscape recordings using a convolutional neural network. Ecological Informatics, 59.
  • Lezhenin I, Bogach N, & Pyshkin E. (2019). Urban Sound Classification using Long Short-Term Memory Neural Network.
  • Lasseck M. (2018). Acoustic bird detection with deep convolutional neural networks. DCASE2018 Challenge, Tech. Rep., September 2018.
  • Mac Aodha O, Gibb R, Barlow K. E, Browning E, Firman M, Freeman R, Harder B, Kinsey L, Mead G. R, Newson S. E, Pandourski I, Parsons S, Russ J, Szodoray-Paradi A, Szodoray-Paradi F, Tilova E, Girolami M, Brostow G, & Jones K. E. (2018). Bat detective—Deep learning tools for bat acoustic signal detection. PLOS Computational Biology, 14(3), e1005995.
  • Malfante M, Mars J. I, Dalla Mura M, & Gervaise C. (2018). Automatic fish sounds classification. The Journal of the Acoustical Society of America, 143(5), 2834–2846.
  • Mathur M, Vasudev D, Sahoo S, Jain D, & Goel N. (2020). Crosspooled FishNet: transfer learning based fish species classification model. Multimedia Tools and Applications.
  • McFee B, Raffel C, Liang D, Ellis D, Mcvicar M, Battenberg E, & Nieto O. (2015). librosa: Audio and Music Signal Analysis in Python.
  • Nguyen H, Maclagan S. J, Nguyen T. D, Nguyen T, Flemons P, Andrews K, Ritchie E. G, & Phung D. (2017). Animal recognition and identification with deep convolutional neural networks for automated wildlife monitoring. Proceedings - 2017 International Conference on Data Science and Advanced Analytics, DSAA 2017, 40–49.
  • Pacal I, & Karaboga D. (2021). A robust real-time deep learning based automatic polyp detection system. Computers in Biology and Medicine, 134, 104519.
  • Pacal I, Karaman A, Karaboga D, Akay B, Basturk A, Nalbantoglu U, & Coskun S. (2022). An efficient real-time colonic polyp detection with YOLO algorithms trained by using negative samples and large datasets. Computers in Biology and Medicine, 141, 105031.
  • Salamon J, Bello J. P, Farnsworth A, & Kelling S. (2017). Fusing shallow and deep learning for bioacoustic bird species classification. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 141–145.
  • Salamon J, Bello J. P, Farnsworth A, Robbins M, Keen S, Klinck H, & Kelling S. (2016). Towards the automatic classification of avian flight calls for bioacoustic monitoring. PLoS ONE, 11(11).
  • Simonyan K, & Zisserman A. (2015). Very deep convolutional networks for large-scale image recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings.
  • Sprengel E, Jaggi M, Kilcher Y, & Hofmann T. (2016). Audio Based Bird Species Identification Using Deep Learning Techniques. In CEUR Workshop Proceedings (Vol. 1609, pp. 547–559). CEUR-WS.
  • Stowell D, Wood M, Stylianou Y, & Glotin H. (2016). Bird detection in audio: A survey and a challenge. IEEE International Workshop on Machine Learning for Signal Processing, MLSP.
  • Szegedy C, Vanhoucke V, Ioffe S, Shlens J, & Wojna Z. (2016). Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
  • Tolkova I, Chu B, Hedman M, Kahl S, & Klinck H. (2021). Parsing Birdsong with Deep Audio Embeddings. CoRR, abs/2108.0. https://arxiv.org/abs/2108.09203
  • Vidaña-Vila E, Navarro J, Alsina-Pagès R. M, & Ramírez Á. (2020). A two-stage approach to automatically detect and classify woodpecker (Fam. Picidae) sounds. Applied Acoustics, 166.
  • xeno-canto. Retrieved October 10, 2020, from https://www.xeno-canto.org/
  • Xie J, Hu K, Zhu M, & Guo Y. (2020). Bioacoustic signal classification in continuous recordings: Syllable-segmentation vs sliding-window. Expert Systems with Applications, 152.
  • Yamashita, R, Nishio, M, Do, RKG. et al. (2018) Convolutional neural networks: an overview and application in radiology. Insights Imaging 9, 611–629.
  • Young T, Hazarika D, Poria S, & Cambria E. (2018). Recent trends in deep learning based natural language processing [Review Article]. In IEEE Computational Intelligence Magazine (Vol. 13, No. 3, pp. 55–75). Institute of Electrical and Electronics Engineers Inc.

Recognition of Aras Bird Species From Their Voices With Deep Learning Methods

There are 50 citations in total.

Details

Primary Language Turkish
Subjects Computer Software
Journal Section Bilgisayar Mühendisliği / Computer Engineering
Authors

Seda Bayat 0000-0002-8427-9971

Gültekin Işık 0000-0003-3037-5586

Early Pub Date August 26, 2022
Publication Date September 1, 2022
Submission Date June 1, 2022
Acceptance Date June 22, 2022
Published in Issue Year 2022 Volume: 12 Issue: 3

Cite

APA Bayat, S., & Işık, G. (2022). Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması. Journal of the Institute of Science and Technology, 12(3), 1250-1263. https://doi.org/10.21597/jist.1124674
AMA Bayat S, Işık G. Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması. J. Inst. Sci. and Tech. September 2022;12(3):1250-1263. doi:10.21597/jist.1124674
Chicago Bayat, Seda, and Gültekin Işık. “Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması”. Journal of the Institute of Science and Technology 12, no. 3 (September 2022): 1250-63. https://doi.org/10.21597/jist.1124674.
EndNote Bayat S, Işık G (September 1, 2022) Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması. Journal of the Institute of Science and Technology 12 3 1250–1263.
IEEE S. Bayat and G. Işık, “Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması”, J. Inst. Sci. and Tech., vol. 12, no. 3, pp. 1250–1263, 2022, doi: 10.21597/jist.1124674.
ISNAD Bayat, Seda - Işık, Gültekin. “Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması”. Journal of the Institute of Science and Technology 12/3 (September 2022), 1250-1263. https://doi.org/10.21597/jist.1124674.
JAMA Bayat S, Işık G. Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması. J. Inst. Sci. and Tech. 2022;12:1250–1263.
MLA Bayat, Seda and Gültekin Işık. “Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması”. Journal of the Institute of Science and Technology, vol. 12, no. 3, 2022, pp. 1250-63, doi:10.21597/jist.1124674.
Vancouver Bayat S, Işık G. Aras Kuş Türlerinin Ses Özellikleri Bakımından Derin Öğrenme Yöntemleriyle Tanınması. J. Inst. Sci. and Tech. 2022;12(3):1250-63.
