Research Article

Real-Time Auditory Scene Analysis using Continual Learning in Real Environments

Year 2020, Ejosat Special Issue 2020 (HORA), pp. 215-226, 15.08.2020
https://doi.org/10.31590/ejosat.779710

Abstract

Continual learning for scene analysis is an ongoing process of incrementally learning distinct events, actions, and even noise models from past experience using different sensory modalities. In this paper, an Auditory Scene Analysis (ASA) approach based on a continual learning system is developed to incrementally learn acoustic events in a dynamically changing domestic environment. The events, which are the most salient sound sources in the environment, are localized by a Sound Source Localization (SSL) method so that the signals of each localized source can be processed robustly in an auditory scene where multiple sources may co-exist. For real-time ASA, audio patterns are segmented from the acoustic signal stream of the localized source, audio features are extracted from these patterns, and a feature set is constructed for each pattern. Continual learning is carried out on these feature sets with a time-series model, the Hidden Markov Model (HMM). The learning process is investigated through a variety of experiments that evaluate the performance of Unknown Event Detection (UED), Acoustic Event Recognition (AER), and continual learning using a Hierarchical HMM algorithm. The Hierarchical HMM consists of two layers: 1) a lower layer in which AER is performed using one HMM per event together with event-wise likelihood thresholds; and 2) an upper layer in which UED is performed by a single HMM with a suspicion threshold, operating on the audio features combined with the proto symbols produced by the lower-layer HMMs. We verified that the proposed system, which is capable of continual learning, AER, and UED, meets the demands of real-time learning of multiple events in terms of False-Positive Rate, True-Positive Rate, recognition accuracy, and computation time. The effectiveness of the AER system was further confirmed by its high accuracy and short retraining time in real-time ASA with nine different sounds.
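
The two-layer decision logic described in the abstract can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical reconstruction, not the authors' implementation: it assumes Gaussian HMMs from the hmmlearn library, sequences of per-frame feature vectors (e.g. MFCCs) as input, and simple score-based heuristics standing in for the paper's event-wise and suspicion thresholds; the proto-symbol combination used in the paper's upper layer is omitted for brevity.

    # A minimal sketch of the two-layer Hierarchical HMM logic described
    # in the abstract. Names and thresholding heuristics are illustrative
    # assumptions, not taken from the paper.
    import numpy as np
    from hmmlearn import hmm

    class HierarchicalHMM:
        def __init__(self, n_states=5):
            self.n_states = n_states
            self.event_models = {}      # lower layer: one HMM per known event
            self.event_thresholds = {}  # event-wise log-likelihood thresholds
            self.upper_model = None     # upper layer: one HMM over all events
            self.suspicion_threshold = -np.inf

        def _fit(self, sequences):
            # Concatenate variable-length sequences as hmmlearn expects.
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            model = hmm.GaussianHMM(n_components=self.n_states,
                                    covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            return model

        def learn_event(self, label, sequences, all_sequences):
            # Incremental step: only the new event's HMM and the upper-layer
            # HMM are (re)fit; HMMs of previously learned events stay untouched.
            model = self._fit(sequences)
            self.event_models[label] = model
            scores = [model.score(s) / len(s) for s in sequences]
            self.event_thresholds[label] = min(scores) - 1.0  # heuristic margin
            self.upper_model = self._fit(all_sequences)
            upper = [self.upper_model.score(s) / len(s) for s in all_sequences]
            self.suspicion_threshold = min(upper) - 1.0

        def classify(self, features):
            # UED: a low normalized log-likelihood under the model of all
            # known events flags a suspicious, unknown sound.
            if self.upper_model.score(features) / len(features) < self.suspicion_threshold:
                return "unknown"
            # AER: pick the best-scoring event HMM, rejecting weak matches
            # through that event's own threshold.
            best, best_score = None, -np.inf
            for label, model in self.event_models.items():
                score = model.score(features) / len(features)
                if score > best_score:
                    best, best_score = label, score
            if best is None or best_score < self.event_thresholds[best]:
                return "unknown"
            return best

In this sketch, learning a new event refits only that event's HMM and the upper-layer model, which is consistent with the short retraining times reported for the continual learning setting.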

References

  • Salamon J, Jacoby C, Bello JP. A dataset and taxonomy for urban sound research. In: Proceedings of the 22nd ACM International Conference on Multimedia 2014, pp. 1041-1044.
  • Young SH, Scanlon MV. Robotic vehicle uses acoustic array for detection and localization in urban environments. Unmanned Ground Vehicle Technology III, International Society for Optics and Photonics 2001; 4364: pp. 264-273.
  • D. Stowell, D. Giannoulis, E. Benetos, M. Lagrange and M. D. Plumbley, Detection and Classification of Acoustic Scenes and Events, IEEE Trans. Multimedia, vol. 17, no. 10, pp. 1733-1746, 2015.
  • Wang JC, Lee HP, Wang JF, Lin CB. Robust environmental sound recognition for home automation. IEEE Transactions on Automation Science and Engineering 2008; 5 (1): 25-31.
  • Sinapov J, Weimer M, Stoytchev A. Interactive learning of the acoustic properties of objects by a robot. In: Proceedings of the RSS Workshop on Robot Manipulation: Intelligence in Human Environments 2008. doi: 10.1109/ROBOT.2009.5152802
  • Lee, CH, Han, CC, Chuang, CC. Automatic classification of bird species from their sounds using two-dimensional cepstral coefficients, IEEE Transactions on Audio, Speech and Language Processing 2008; 16(8):1541-1550.
  • R. Radhakrishnan, A. Divakaran and P. Smaragdis, Audio Analysis for Surveillance Applications, in Proc. IEEE Workshop Appl. Signal Process. Audio Acoust., pp. 158-161, 2005.
  • Carletti V, Foggia P, Percannella G, Saggese A, Strisciuglio N et al. Audio surveillance using a bag of aural words classifier. 10th IEEE International Conference on Advanced Video and Signal Based Surveillance 2013, pp. 81-86.
  • Sinapov J, Stoytchev A. From acoustic object recognition to object categorization by a humanoid robot. In: Proceedings of the RSS Workshop: Mobile Manipulation in Human Environments 2009.
  • I. Feki, A. B. Ammar and A. M. Alimi, Audio Stream Analysis for Environmental Sound Classification, In: International Conference on Multimedia Computing and Systems, 2011.
  • G. Lafay, M. Lagrange, M. Rossignol, E. Benetos and A. Roebel, A Morphological Model for Simulating Acoustic Scenes and Its Application to Sound Event Detection, IEEE/ACM Transactions on Audio, Speech and Language Processing, 2016, in press.
  • Zhou X, Zhuang X, Liu M, Tang H, Hasegawa-Johnson M et al. HMM-based acoustic event detection with AdaBoost feature selection. In: Multimodal technologies for perception of humans 2007, pp. 345-353.
  • Cowling M, Sitte R. Comparison of techniques for environmental sound recognition. Pattern recognition letters, 2003; 24 (15): 2895–2907.
  • Rabaoui A, Kadri H, Lachiri Z, Ellouze N. Using robust features with multi-class SVMs to classify noisy sounds. International Symposium on Communications, Control and Signal Processing 2008. doi: 10.1109/ISCCSP.2008.4537294
  • Atrey, P.K., Maddage, N.C., and Kankanhalli, M.S. (2006). Audio based event detection for multimedia surveillance. In Proc. of ICASSP.
  • Dennis JW. Sound event recognition in unstructured environments using spectrogram image processing. PhD, Nanyang Technological University, Nanyang, Singapore, 2014.
  • Ephraim, Y. and Malah, D. (1984). Speech enhancement using minimum mean-square error short-time spectral amplitude estimator, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP- 32, no. 6, pp. 1109–1121.
  • Nakadai, K., İnce, G., Nakamura, K., Nakajima, H. (2012). Robot Audition for Dynamic Environments, IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), pp.125-130.
  • Yamamoto, S. and Kitayama, S. (1982). An adaptive echo canceller with variable step gain method, Trans. of the IECE of Japan, vol. E65, no. 1, pp. 1–8.
  • Cohen, I. and Berdugo, B. (2001). Speech enhancement for non-stationary noise environments, Signal Processing, vol. 81, no. 2, pp. 2403–2418.
  • Wolf, M. and Nadeu, C. (2014). Channel selection measures for multi-microphone speech recognition, Speech Communication, vol. 57, pp. 170–180.
  • Löllmann, H.W., Moore, A.H., Naylor, P.A., Rafaely, B., Horaud, R., Mazel, A. and Kellermann, W. (2017). Microphone Array Signal Processing for Robot Audition, IEEE Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA), pp. 51-55.
  • Bachu, R., Kopparthi, S., Adapa, B. and Barkana, B. (2008). Separation of voiced and unvoiced using zero crossing rate and energy of the speech signal, ASEE.
  • Vozarikova E., Pleva, M., Juhar, J., Cizmar, A. (2011). Surveillance system based on the acoustic events detection, Journal of Electrical and Electronics Engineering. vol. 4, no. 1, pp. 255-258.
  • Walter TC. Auditory-based processing of communication sounds. PhD, University of Cambridge, Cambridge, UK, 2011.
  • McLoughlin I, Zhang HM, Xie ZP, Song Y, Xiao W. Robust sound event classification using deep neural networks. IEEE Transactions on Audio, Speech, and Language Processing 2015; 23 (3): 540– 552. doi: 10.1109/TASLP.2015.2389618
  • Zhang H, McLoughlin I, Song Y. Robust sound event recognition using convolutional neural networks. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015, pp. 559–563. doi: 10.1109/ICASSP.2015.7178031
  • Sangnier M, Gauthier J, Rakotomamonjy A. Early frame-based detection of acoustic scenes. In: IEEE International Workshop on Applications of Signal Processing to Audio and Acoustics 2015, pp. 1-5. doi: 10.1109/WASPAA.2015.7336884
  • Saltalı İ, Sariel S, İnce G. Scene analysis through auditory event monitoring. In: Proceedings of the International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents (DAA) 2016, pp. 1-6. doi: 10.1145/3005338.300534
  • Ntalampiras S, Potamitis I, Fakotakis N. A multidomain approach for automatic home environmental sound classification. In: Proceedings of 11th Annual Conference of the International Speech Communication Association (INTERSPEECH) 2010, pp. 2210-2213.
  • Do HM, Sheng W, Liu M. An open platform of auditory perception for home service robots. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2015, pp. 6161–6166. doi: 10.1109/IROS.2015.7354255
  • Sinapov, J., & Stoytchev, A. (2009, June). From acoustic object recognition to object categorization by a humanoid robot. In Proc. of the RSS 2009 Workshop on Mobile Manipulation, Seattle, WA.
  • Sangnier M., Gauthier J. and Rakotomamonjy A., Early Frame-based Detection of Acoustic Scenes, In: IEEE International Workshop on Applications of Signal Processing to Audio and Acoustics, 2015.
  • Xing Z, Pei J, Keogh E. A brief survey on sequence classification. ACM Sigkdd Explorations Newsletter 2010; 12 (1): 40-48. doi: 10.1145/1882471.1882478
  • Qiao Y, Xin XW, Bin Y, Ge S. Anomaly intrusion detection method based on HMM. Electronics Letters 2002; 38 (13): 663-664. doi: 10.1049/el:20020467
  • Nakamura K. et al., Intelligent sound source localization for dynamic environments, in IROS, pp. 664-669, 2009.
  • Okuno, H. G., & Nakadai, K. (2015, April). Robot audition: Its rise and perspectives. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5610-5614). IEEE.
  • James J. Kuffner. Cloud-Enabled Robots. In IEEE-RAS International Conference on Humanoid Robots, Nashville, TN, 2010.
  • Hu, G., Tay, W. P., & Wen, Y. (2012). Cloud robotics: architecture, challenges and applications. IEEE Network, 26(3), 21-28.
  • Frame, S. J., & Jammalamadaka, S. R. (2007). Generalized mixture models, semi-supervised learning, and unknown class inference. Advances in Data Analysis and Classification, 1(1), 23-38.
  • Shi, B., Sun, M., Puvvada, K. C., Kao, C. C., Matsoukas, S., & Wang, C. (2020, May). Few-Shot Acoustic Event Detection Via Meta Learning. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 76-80). IEEE.
  • Nguyen, Duong, et al. Recurrent Neural Networks with Stochastic Layers for Acoustic Novelty Detection. In: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. pp. 765-769.
  • Bayram, B., Duman, T. B., & Ince, G. (2020). Real time detection of acoustic anomalies in industrial processes using sequential autoencoders. Expert Systems, e12564.
  • Shmelkov, K., Schmid, C., & Alahari, K. (2017). Incremental learning of object detectors without catastrophic forgetting. In Proceedings of the IEEE International Conference on Computer Vision (pp. 3400-3409).
  • Ren, M., Liao, R., Fetaya, E., & Zemel, R. (2019). Incremental few-shot learning with attention attractor networks. In Advances in Neural Information Processing Systems (pp. 5275-5285).
  • Khreich W, Granger E, Miri A, Sabourin R. A survey of techniques for incremental learning of HMM parameters. Information Sciences 2012; 197: 105-130. doi: 10.1016/j.ins.2012.02.017
  • Nakadai, K., et al. An open source software system for robot audition HARK and its evaluation. In: Humanoids 2008-8th IEEE-RAS International Conference on Humanoid Robots. IEEE, 2008. pp. 561-566.

Details

Primary Language: English
Subjects: Engineering
Section: Articles
Authors

Barış Bayram (ORCID: 0000-0002-5588-577X)

Gökhan İnce (ORCID: 0000-0002-0034-030X)

Publication Date: 15 August 2020
Published in Issue: Year 2020, Ejosat Special Issue 2020 (HORA)

How to Cite

APA: Bayram, B., & İnce, G. (2020). Real-Time Auditory Scene Analysis using Continual Learning in Real Environments. Avrupa Bilim ve Teknoloji Dergisi, 215-226. https://doi.org/10.31590/ejosat.779710