Vision Transformer Based Classification of Neurological Disorders from Human Speech
Year 2024, Volume 3, Issue 2, 160–174, 12.06.2024
Emel Soylu, Sema Gül, Kübra Aslan, Muammer Türkoğlu, Murat Terzi
Abstract
In this study, we present a transformer-based neural network approach for high-accuracy classification of distinct health categories, including Parkinson's disease, Multiple Sclerosis (MS), healthy individuals, and other categories, from human speech. The cornerstone of the approach is the conversion of human speech into spectrograms, which are then rendered as visual images. This transformation enables the network to capture the intricate vocal patterns and subtle nuances that are indicative of different health conditions. Experimental validation shows strong performance in differentiating Parkinson's disease, MS, healthy subjects, and other categories. These results point to potential clinical applications, offering a non-invasive diagnostic tool built on the combination of spectrogram analysis and vision transformer models.
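The pipeline described above has two stages: speech recordings are converted into spectrogram images, and those images are classified with a vision transformer. The sketch below illustrates one way such a pipeline could be assembled in Python, assuming librosa for the log-mel spectrogram, Pillow for image rendering, and a timm ViT backbone; the sample rate, mel parameters, image size, class labels, and model variant are illustrative assumptions, not the authors' reported configuration, and the classification head would need to be fine-tuned on labeled spectrogram images before its predictions are meaningful.

```python
# Minimal sketch: speech -> log-mel spectrogram -> RGB image -> ViT classifier.
# All hyperparameters, labels, and file names are assumptions for illustration.
import numpy as np
import librosa
import torch
import timm
from PIL import Image
from torchvision import transforms

CLASSES = ["Parkinson", "MS", "Healthy", "Other"]  # assumed label set

def speech_to_spectrogram_image(wav_path, sr=16000, n_mels=128, size=224):
    """Load a speech recording and return its log-mel spectrogram as a size x size RGB image."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)                      # log (dB) scale
    mel_norm = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
    img = Image.fromarray((mel_norm * 255).astype(np.uint8))           # grayscale spectrogram
    return img.convert("RGB").resize((size, size))

# Pretrained ViT backbone with a 4-class head; in practice it would be
# fine-tuned on the labeled spectrogram images before inference.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=len(CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

def classify(wav_path):
    """Predict a health category for one speech recording (untrained head: illustrative only)."""
    x = preprocess(speech_to_spectrogram_image(wav_path)).unsqueeze(0)  # (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1).squeeze(0)
    return CLASSES[int(probs.argmax())], probs.tolist()

# Example with a hypothetical file:
# label, probs = classify("sample_speech.wav")
```

Rendering the spectrogram as a standard 224×224 RGB image is what allows an off-the-shelf vision transformer, pretrained on natural images, to be reused for speech-based classification without architectural changes.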
Ethics Statement
The study protocol for this dataset was approved by the Ondokuz Mayıs University Clinical Research Ethics Committee (2022-545/2023). Written informed consent was obtained from the participants in the clinical environment, and the patient recordings were anonymized.
Acknowledgments
This study was presented at the 58th National Neurology Congress in Turkey, where it received second prize in the oral presentation category.