Research Article

Detection of Stroke (Cerebrovascular Accident) Using Machine Learning Methods

Year 2024, Volume: 13 Issue: 4, 1169 - 1180, 31.12.2024
https://doi.org/10.17798/bitlisfen.1539189

Abstract

Stroke occurs when the blood flow to the brain is suddenly interrupted. This interruption can lead to the loss of function in the affected area of the brain and cause permanent damage to the corresponding part of the body. Stroke can develop due to various factors such as age, occupation, chronic diseases, and a family history of stroke. Assessing these factors and predicting stroke risk is often a costly and time-consuming process, which can increase the risk of permanent damage for the individual. With today's technology, however, Artificial Intelligence (AI) and Machine Learning (ML) models can process millions of data points and estimate stroke risk within seconds. In this study, individual stroke risk is predicted using ML methods, namely Logistic Regression (LR), Decision Tree (DT), Support Vector Machines (SVM), and k-Nearest Neighbors (KNN), with the aim of saving time, protecting human health, and enabling early diagnosis of the disease. The highest accuracy was achieved by the DT model at 91%; the other models reached 89% (SVM), 81% (KNN), and 75% (LR).
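The comparison described in the abstract can be sketched with scikit-learn. This is a minimal illustration, not the authors' pipeline: synthetic, class-imbalanced data stands in for the Kaggle stroke dataset, and all hyperparameters shown are assumptions.

```python
# Hedged sketch of the four-classifier comparison (LR, DT, SVM, KNN).
# Synthetic data replaces the real stroke dataset; features and
# settings are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Stand-in for stroke data: 10 features, ~10% positive (stroke) class.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale features; distance- and margin-based models (KNN, SVM) need it.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=42),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
accuracies = {
    name: accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test))
    for name, m in models.items()
}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.2f}")
```

On imbalanced data such as stroke screening, accuracy alone can be misleading; the precision/recall and ROC metrics cited in the references (e.g. [47], [50]) are the usual complements.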

Ethical Statement

This study complies with research and publication ethics.

References

  • [1] S. Emon, M. M. Talukder, M. M. Rahman, and M. S. Islam, “Stroke prediction using machine learning algorithms,” in 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 2020, pp. 101–104.
  • [2] M. Singh, V. S. Chouhan, and S. Gupta, “Prediction of stroke using artificial intelligence,” in 2017 International Conference on Computing, Communication and Automation (ICCCA), Greater Noida, India, 2017, pp. 1467–1471.
  • [3] E. Sevli, “Prediction of stroke using Random Forest Classifier,” in 2020 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Sakheer, Bahrain, 2020, pp. 1–6.
  • [4] K. Revanth, G. Deepak, and K. N. V. Satya Prasad, “A comparative study on stroke prediction using various machine learning models,” in 2020 International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 2020, pp. 1385–1390.
  • [5] S. Cheon, D. A. Lee, and J. K. Kim, “Detection of stroke using deep learning algorithms,” in 2019 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 2019, pp. 1–4.
  • [6] S. Shoily, A. B. K. Singh, and M. Rahman, “Analysis of machine learning algorithms for stroke detection,” in 2019 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 2019, pp. 1–4.
  • [7] N. Pradeepa, R. Gupta, and R. Singh, “Detection of stroke symptoms using machine learning and social media analytics,” in 2020 IEEE Symposium on Signal Processing and Information Technology (ISSPIT), Bilbao, Spain, 2020, pp. 1–6.
  • [8] H. Li, J. Liu, and Y. Wang, “Stroke risk prediction with various machine learning algorithms,” in 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 2019, pp. 2148–2151.
  • [9] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
  • [10] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Jun. 2012, pp. 1106–1114.
  • [11] L. Zhang, "Machine Learning Techniques for Big Data Analysis," Ph.D. dissertation, Dept. of Computer Science, University of California, Berkeley, CA, USA, 2018.
  • [12] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Cambridge, MA: MIT Press, 2016.
  • [13] M. U. Emon, M. S. Keya, T. I. Meghla, M. M. Rahman, S. Al Mamun, and M. S. Kaiser, “Performance analysis of machine learning,” in Fourth International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2020, pp. 1464–1469.
  • [14] M. S. Singh and P. Choudhary, “Stroke prediction using artificial intelligence,” in 8th Annual Industrial Automation and Electromechanical Engineering Conference (IEMECON), Bangkok, Thailand, 2017, pp. 158–161.
  • [15] O. Sevli, “İnme (felç) riskinin MÖ kullanılarak tespiti [Detection of stroke risk using machine learning],” in 7th International Congress on Engineering, Architecture and Design, İstanbul, Türkiye, 2021, pp. 661–667.
  • [16] S. Revanth, S. Sanjay, and V. Vijagayaganth, “Stroke prediction using machine learning algorithms,” International Journal of Disaster Recovery and Business Continuity, vol. 11, no. 1, pp. 3081–3086, 2020.
  • [17] S. Cheon, J. Kim, and J. Lim, “The use of deep learning to predict stroke patient mortality,” International Journal of Environmental Research and Public Health, vol. 16, no. 11, pp. 1–12, 2019.
  • [18] T. I. Shoily, T. Islam, S. Jannat, S. A. Tanna, T. M. Alif, and R. R. Ema, “Detection of stroke disease using machine learning algorithms,” in Proceedings of the 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), IEEE, 2019, pp. 1–6.
  • [19] S. Pradeepa, K. Manjula, S. Vimal, M. S. Khan, N. Chilamkurti, and A. K. Luhach, “DRFS: Detecting risk factor of stroke disease from social media using machine learning techniques,” Neural Processing Letters, 2020, pp. 1–19.
  • [20] X. Li, D. Bian, J. Yu, M. Li, and D. Zhao, “Using machine learning models to improve stroke risk level classification methods of China national stroke screening,” BMC Medical Informatics and Decision Making, vol. 19, pp. 1–7, 2019.
  • [21] D. R. Cox, “Regression models and life-tables,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 34, no. 2, pp. 187–220, 1972.
  • [22] M. E. T. M. MacKinnon, “Logistic regression analysis for binary data,” Journal of Applied Statistics, vol. 28, no. 6, pp. 759–769, Jun. 2001.
  • [23] A. Agresti, “Categorical data analysis,” Journal of the American Statistical Association, vol. 95, no. 450, pp. 1121–1130, Dec. 2000.
  • [24] S. M. Bhandari, J. P. Lee, and K. J. Kim, “Application of logistic regression in predicting heart disease,” in Proc. IEEE International Conference on Data Mining (ICDM), Dec. 2019, pp. 1124–1130.
  • [25] S. Wang and X. Zhang, “Improving logistic regression with feature selection for high-dimensional data,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2018, pp. 1552–1560.
  • [26] L. Huang and X. Liu, “Logistic regression for multiclass classification problems,” in Proc. IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2017, pp. 987–994.
  • [27] T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, Jan. 1967.
  • [28] M. A. Hall and L. A. Smith, “Feature selection for machine learning: Comparing a correlation-based filter approach to the wrapper,” IEEE Transactions on Neural Networks, vol. 15, no. 5, pp. 103–110, Sep. 2003.
  • [29] D. Zhang, M. M. Islam, and G. Lu, “A review on automatic image annotation techniques,” Pattern Recognition, vol. 45, no. 1, pp. 346–362, Jan. 2012.
  • [30] K. Q. Weinberger and L. K. Saul, “Distance metric learning for large margin nearest neighbor classification,” in Proc. Advances in Neural Information Processing Systems (NIPS), Dec. 2006, pp. 1473–1480.
  • [31] A. Beygelzimer, S. Kakade, and J. Langford, “Cover trees for nearest neighbor,” in Proc. 23rd International Conference on Machine Learning (ICML), Jun. 2006, pp. 97–104.
  • [32] M. Muja and D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in Proc. IEEE International Conference on Computer Vision Theory and Applications (VISAPP), Feb. 2009, pp. 331–340.
  • [33] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, Sep. 1995.
  • [34] J. Weston and C. Watkins, “Support vector machines for multi-class pattern recognition,” in European Symposium on Artificial Neural Networks (ESANN), vol. 4, pp. 219–224, Apr. 1999.
  • [35] B. Schölkopf, A. Smola, R. Williamson, and P. Bartlett, “New support vector algorithms,” Neural Computation, vol. 12, no. 5, pp. 1207–1245, May 2000.
  • [36] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge: Cambridge University Press, 2000.
  • [37] B. Boser, I. Guyon, and V. Vapnik, “A training algorithm for optimal margin classifiers,” in Proc. 5th Annual ACM Workshop on Computational Learning Theory (COLT), Jul. 1992, pp. 144–152.
  • [38] A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” in Proc. International Conference on Artificial Neural Networks (ICANN), Aug. 1998, pp. 52–78.
  • [39] J. R. Quinlan, “Induction of decision trees,” Machine Learning, vol. 1, no. 1, pp. 81–106, Mar. 1986.
  • [40] L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, Oct. 2001.
  • [41] S. Murthy, “Automatic construction of decision trees from data: A multi-disciplinary survey,” Data Mining and Knowledge Discovery, vol. 2, no. 4, pp. 345–389, Dec. 1998.
  • [42] R. Kohavi, “Scaling up the accuracy of Naive-Bayes classifiers: A decision-tree hybrid,” in Proc. 2nd International Conference on Knowledge Discovery and Data Mining (KDD), Aug. 1996, pp. 202–207.
  • [43] P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” in Proc. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Sep. 2006, pp. 3–42.
  • [44] H. Blockeel and L. De Raedt, “Top-down induction of first-order logical decision trees,” in Proc. 9th International Conference on Machine Learning (ICML), Jul. 1998, pp. 55–63.
  • [45] V. Tümen and A. S. Sunar, “Predicting the work-life balance of employees based on the ensemble learning method,” Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 12, no. 2, pp. 344–353, 2023.
  • [46] A. Çalışkan, “Finding complement of inefficient feature clusters obtained by metaheuristic optimization algorithms to detect rock mineral types,” Transactions of the Institute of Measurement and Control, vol. 45, no. 10, pp. 1815–1828, 2023.
  • [47] D. M. W. Powers, “Evaluation: From precision, recall and F-measure to ROC, informedness, markedness & correlation,” Journal of Machine Learning Technologies, vol. 2, no. 1, pp. 37–63, 2011.
  • [48] M. Sokolova and G. Lapalme, “A systematic analysis of performance measures for classification tasks,” Information Processing & Management, vol. 45, no. 4, pp. 427–437, Jul. 2009.
  • [49] S. B. Çelebi and Ö. A. Karaman, “Multilayer LSTM model for wind power estimation in the SCADA system,” European Journal of Technique (EJT), vol. 13, no. 2, pp. 116–122, 2023.
  • [50] J. Davis and M. Goadrich, “The relationship between precision-recall and ROC curves,” in Proc. 23rd International Conference on Machine Learning (ICML), Pittsburgh, PA, USA, Jun. 2006, pp. 233–240.
  • [51] V. Tümen, “SpiCoNET: A hybrid deep learning model to diagnose COVID-19 and pneumonia using chest X-ray images,” Traitement du Signal, vol. 39, no. 4, pp. 1169, 2022.
  • [52] N. Chinchor, “MUC-4 evaluation metrics,” in Proc. 4th Conference on Message Understanding, McLean, VA, USA, Jun. 1992, pp. 22–29.
  • [53] A. Çalışkan, “Classification of tympanic membrane images based on VGG16 model,” Kocaeli Journal of Science and Engineering, vol. 5, no. 1, pp. 105–111, 2022.
  • [54] Kaggle, "Stroke Prediction Dataset," [Online]. Available: https://www.kaggle.com/datasets/fedesoriano/stroke-prediction-dataset. [Accessed: May 8, 2024].
There are 54 citations in total.

Details

Primary Language English
Subjects Artificial Intelligence (Other)
Journal Section Research Article (Araştırma Makalesi)
Authors

Hadice Ateş 0000-0002-0462-6997

Abidin Çalışkan 0000-0001-5039-6400

Early Pub Date December 30, 2024
Publication Date December 31, 2024
Submission Date August 27, 2024
Acceptance Date September 8, 2024
Published in Issue Year 2024 Volume: 13 Issue: 4

Cite

IEEE H. Ateş and A. Çalışkan, “Detection of Stroke (Cerebrovascular Accident) Using Machine Learning Methods”, Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 13, no. 4, pp. 1169–1180, 2024, doi: 10.17798/bitlisfen.1539189.

Bitlis Eren University
Journal of Science Editor
Bitlis Eren University Graduate Institute
Bes Minare Mah. Ahmet Eren Bulvari, Merkez Kampus, 13000 BITLIS