Research Article

Accurate Indoor Home Location Classification through Sound Analysis: The 1D-ILQP Approach

Year 2025, Volume: 4, Issue: 1, pp. 12-29, 18.02.2025
https://doi.org/10.62520/fujece.1422119

Abstract

Detecting human activities within domestic environments is a fundamental challenge in machine learning. Conventionally, sensors and video cameras have served as the primary tools for human activity detection. Our work instead pursues the objective of identifying home locations by analyzing environmental sound signals. To this end, we compiled a comprehensive sound dataset from eight distinct locations. To enable automatic home location detection on this dataset, we employed a lightweight machine learning model designed for high accuracy and minimal computational overhead. At the core of our approach is a local feature generator, the one-dimensional Improved Local Quadruple Pattern (1D-ILQP), which generates textural features from the acoustic signals. To extract high-level textural features, we emulated a convolutional neural network (CNN) architecture, applying maximum pooling to decompose the signals. The proposed 1D-ILQP extracts textural features from each decomposed frequency band as well as from the original signal. We then selected the top 100 features using the neighborhood component analysis (NCA) technique. The final step of our model is classification, for which we employed a range of classifiers: decision trees, linear discriminant analysis, quadratic discriminant analysis, Naive Bayes, support vector machines, k-nearest neighbor, bagged trees, and artificial neural networks. All classifiers achieved classification accuracies exceeding 80%, and the k-nearest neighbor classifier delivered the highest accuracy at 99.75%. These findings demonstrate that the proposed 1D-ILQP-based sound classification model yields highly satisfactory results on the home location sound dataset.
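To make the pipeline described above concrete, the following Python sketch mirrors its four stages: max-pooling decomposition of the signal, textural feature extraction from each band, selection of the top 100 features, and k-nearest neighbor classification. This is a minimal illustrative sketch, not the authors' implementation: local_pattern_histogram is a generic local-binary-pattern-style placeholder for the 1D-ILQP operator (whose exact quadruple-pattern rule is defined in the paper, not reproduced here), a univariate F-test stands in for the NCA-based feature selection, and the variables signals and labels are hypothetical.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier


def max_pool(signal, width=2):
    # Non-overlapping 1D max pooling: halves the signal length when width=2.
    n = len(signal) // width
    return signal[: n * width].reshape(n, width).max(axis=1)


def local_pattern_histogram(signal, bins=256):
    # Placeholder textural descriptor (NOT the actual 1D-ILQP): builds an
    # 8-bit code per position by comparing each sample with its 8 nearest
    # neighbors, then returns the normalized histogram of the codes.
    codes = np.zeros(len(signal) - 8, dtype=np.int64)
    center = signal[4:-4]
    for bit in range(8):
        offset = bit if bit < 4 else bit + 1  # skip the center position
        neighbor = signal[offset : offset + len(codes)]
        codes |= (neighbor >= center).astype(np.int64) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)


def extract_features(signal, levels=3):
    # CNN-like pyramid: features from the raw signal plus each pooled level.
    bands, current = [signal], signal
    for _ in range(levels):
        current = max_pool(current)
        bands.append(current)
    return np.concatenate([local_pattern_histogram(b) for b in bands])


# Hypothetical usage on labeled recordings `signals` (1D arrays) and `labels`:
# X = np.stack([extract_features(s) for s in signals])
# X100 = SelectKBest(f_classif, k=100).fit_transform(X, labels)  # top 100
# clf = KNeighborsClassifier(n_neighbors=1).fit(X100, labels)

Under this structure, adding decomposition levels enlarges the candidate feature pool, while the top-100 selection keeps the classifier input, and hence the classification cost, fixed.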

Ethical Statement

Ethics committee approval was not required for this article. The authors declare no conflict of interest with any person or institution.


Details

Primary Language English
Subjects Computer Software
Journal Section Research Articles
Authors

Nura Abdullahi 0000-0001-6321-4880

Erhan Akbal 0000-0002-5257-7560

Sengul Dogan 0000-0001-9677-5684

Türker Tuncer 0000-0002-5126-6445

Umut Erman 0009-0007-2334-2045

Publication Date February 18, 2025
Submission Date January 19, 2024
Acceptance Date May 2, 2024
Published in Issue Year 2025 Volume: 4 Issue: 1

Cite

APA Abdullahi, N., Akbal, E., Dogan, S., Tuncer, T., & Erman, U. (2025). Accurate Indoor Home Location Classification through Sound Analysis: The 1D-ILQP Approach. Firat University Journal of Experimental and Computational Engineering, 4(1), 12-29. https://doi.org/10.62520/fujece.1422119