Research Article

ENHANCING DIABETES PREDICTION WITH INTERPRETABLE MACHINE LEARNING: A COMPARATIVE ANALYSIS OF ADDITIVE–MULTIPLICATIVE NEURAL NETWORKS AND KOLMOGOROV–ARNOLD NETWORKS

Year 2026, Volume: 15, Issue: 1, 46-65, 27.01.2026
https://doi.org/10.18036/estubtdc.1674345

Abstract

This study investigates the effectiveness of machine learning (ML) models in diagnosing diabetes and identifying the most influential predictors using the PIMA Indians Diabetes dataset. Particular emphasis is placed on novel neural network architectures, especially the Additive and Multiplicative Neurons Network (AMNN), introduced as the key innovation of this work.
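An AMNN combines conventional additive (weighted-sum) neurons with multiplicative (product) neurons in the same network. As an illustration only, here is a minimal forward pass for a network with one neuron of each kind feeding a sigmoid output; the function names, layer sizes, and combination scheme are assumptions for this sketch, not the authors' exact AMNN specification.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def amnn_forward(x, w_add, b_add, w_mul, b_mul, v, c):
    """Illustrative AMNN-style forward pass (hypothetical structure):
    additive neuron:       a = sigmoid(w_add . x + b_add)
    multiplicative neuron: m = sigmoid(prod_i (w_mul_i * x_i + b_mul_i))
    output:                sigmoid(v0*a + v1*m + c)
    """
    a = sigmoid(np.dot(w_add, x) + b_add)          # weighted sum of inputs
    m = sigmoid(np.prod(w_mul * x + b_mul))        # product across inputs
    return sigmoid(v[0] * a + v[1] * m + c)

rng = np.random.default_rng(0)
x = rng.normal(size=8)  # e.g. the 8 PIMA features after scaling
p = amnn_forward(x,
                 w_add=rng.normal(size=8), b_add=0.0,
                 w_mul=rng.normal(size=8), b_mul=rng.normal(size=8),
                 v=np.array([1.0, 1.0]), c=0.0)
print(float(p))  # a probability in (0, 1)
```

The multiplicative neuron lets the model capture interactions between features (e.g. glucose x BMI) with a single unit, which is the intuition behind mixing the two neuron types.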

The dataset underwent comprehensive preprocessing, including handling missing values, feature scaling, and addressing class imbalance via the SMOTE algorithm. To interpret the importance of predictors, five feature selection techniques (Correlation, Boruta, MRMR, RFE, Random Forest) and two explainable AI (XAI) tools (SHAP and LIME) were applied.
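SMOTE balances the classes by interpolating between a minority-class sample and one of its k nearest minority neighbors. In practice one would use an established implementation (e.g. imbalanced-learn's `SMOTE`); the NumPy-only sketch below, with hypothetical names, is just to show the interpolation step.

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic samples from minority-class rows X_min,
    SMOTE-style: pick a minority point, pick one of its k nearest minority
    neighbors, and sample a random point on the segment between them."""
    rng = np.random.default_rng(seed)
    # pairwise squared distances within the minority class
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-matches
    nn = np.argsort(d2, axis=1)[:, :k]      # k nearest neighbors per row
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))        # random minority sample
        j = nn[i, rng.integers(k)]          # one of its neighbors
        lam = rng.random()                  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_min = np.random.default_rng(1).normal(size=(20, 8))  # 20 minority rows, 8 features
X_new = smote_like_oversample(X_min, n_new=30)
print(X_new.shape)  # (30, 8)
```

Because the synthetic points lie between real minority samples, SMOTE avoids the exact duplication that plain random oversampling produces.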

A total of eight machine learning algorithms were tested and evaluated on accuracy, recall, F1-score, and AUC-ROC. Among all models, AMNN achieved the best performance, with an accuracy of 0.7576, recall of 0.7576, F1-score of 0.7618, and AUC-ROC of 0.8206. MLP-2 and XGBoost also showed competitive results. Kolmogorov-Arnold Networks (KAN), while not outperforming the other models, demonstrated moderate success and offered interpretability advantages due to their flexible activation structure.
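For reference, the four reported metrics can be computed from predictions and scores as follows; the toy labels and scores below are invented purely to exercise the formulas (the paper's figures come from its own experiments), and AUC-ROC is computed via the rank (Mann-Whitney) formulation.

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, recall, F1-score, and AUC-ROC for a binary classifier."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    # AUC-ROC = P(score of random positive > score of random negative),
    # with ties counted as 1/2
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    return accuracy, recall, f1, auc

y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.2, 0.6, 0.8, 0.4, 0.1, 0.9]
acc, rec, f1, auc = binary_metrics(y_true, y_score)
print(acc, rec, f1, auc)  # 2/3, 2/3, 2/3, 8/9 on this toy example
```

The same values are returned by scikit-learn's `accuracy_score`, `recall_score`, `f1_score`, and `roc_auc_score`, which is what one would normally use.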

Glucose, BMI, age, and pregnancy count were consistently identified as the most significant predictors across both the feature selection and XAI evaluations. These results align with established clinical insights into diabetes risk.

In conclusion, this study highlights the potential of the AMNN model as a powerful and interpretable tool for early diabetes detection. These findings suggest that AMNN offers a compelling balance between performance and interpretability, making it suitable for real-world medical applications. The integration of feature selection and XAI techniques supports model transparency, paving the way for its application in clinical decision-making. Future work should focus on enhancing generalizability through larger datasets and hybrid modeling strategies.

References

  • [1] Temurtas H, Yumusak N, Temurtas F. A comparative study on diabetes disease diagnosis using neural networks. Expert Syst Appl 2009; 36(4): 8610-8615.
  • [2] Başer BÖ, Yangın M, Sarıdaş ES. Classification of diabetes mellitus with machine learning techniques. Süleyman Demirel University, J Nat Appl Sci 2021; 25(1): 112-120.
  • [3] Kaggle. PIMA Indians diabetes database [Internet]. 2024. Accessed December 2024. Available from: https://www.kaggle.com/datasets/uciml/pima-indians-diabetes-database?select=diabetes.
  • [4] Kayaer K, Yıldırım T. Medical diagnosis on Pima Indian diabetes using general regression neural networks. In: Proceedings of the international conference on artificial neural networks and neural information processing (ICANN/ICONIP) 2003; 181–184.
  • [5] Karatsiolis S, Schizas CN. Region based Support Vector Machine algorithm for medical diagnosis on Pima Indian Diabetes dataset. In: 2012 IEEE 12th International Conference on Bioinformatics & Bioengineering (BIBE) 2012; 139-144.
  • [6] Yangın G. Application of XGBoost and decision tree based algorithms on diabetes data (Master's thesis). Institute of Science, Mimar Sinan Fine Arts University, İstanbul, Turkey, 2019; Available from: https://hdl.handle.net/20.500.14124/1152
  • [7] Sankar Ganesh PV, Sripriya P. A comparative review of prediction methods for Pima Indians Diabetes dataset. Comput Vis Bio-Inspired Comput 2019; 735-750.
  • [8] Lakhwani K, Bhargava S, Hiran KK, Bundele MM, Somwanshi D. Prediction of the onset of diabetes using artificial neural network and Pima Indians Diabetes dataset. In: 2020 5th IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE) 2020; 1-6.
  • [9] Patra R, Khuntia B. Analysis and prediction of Pima Indian Diabetes Dataset using SDKNN classifier technique. In: IOP Conference Series: Materials Science and Engineering. IOP Publishing 2021; 1070(1): 012059.
  • [10] Mousa A, Mustafa W, Marqas RB, Mohammed SH. A comparative study of diabetes detection using the Pima Indian Diabetes database. J Duhok University 2023; 26(2): 277-288.
  • [11] Chang V, Bailey J, Xu QA, Sun Z. Pima Indians diabetes mellitus classification based on machine learning (ML) algorithms. Neural Comput Appl 2023; 35(22): 16157-16173.
  • [12] Farsana KS, Poulose A. Hybrid convolutional neural networks for PIMA Indians diabetes prediction. In: 2024 Fifteenth International Conference on Ubiquitous and Future Networks (ICUFN). IEEE. 2024; 268-273.
  • [13] Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 2002; 16: 321-357.
  • [14] Douzas G, Bacao F, Last F. Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE. Inf Sci 2018; 465: 1-20.
  • [15] Kursa MB, Rudnicki WR. Feature selection with the Boruta package. J Stat Softw 2010; 36: 1-13.
  • [16] Kursa MB, Jankowski A, Rudnicki WR. Boruta–a system for feature selection. Fundam Inform 2010; 101(4): 271-285.
  • [17] Zhao Z, Anand R, Wang M. Maximum relevance and minimum redundancy feature selection methods for a marketing machine learning platform. In: 2019 IEEE International Conference on Data Science and Advanced Analytics (DSAA). IEEE. 2019; 442-452.
  • [18] Peng H, Long F, Ding C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans Pattern Anal Mach Intell 2005; 27(8): 1226-1238.
  • [19] Chen XW, Jeong JC. Enhanced recursive feature elimination. In: Proceedings of the Sixth International Conference on Machine Learning and Applications (ICMLA). IEEE. 2007; 429-435.
  • [20] Breiman L. Random forests. Mach Learn 2001; 45: 5-32.
  • [21] Agraz M. Comparison of feature selection methods in breast cancer microarray data. Med Rec 2023; 5(2): 284-9.
  • [22] Agraz M, Deng Y, Karniadakis GE, Mantzoros CS. Enhancing severe hypoglycemia prediction in type 2 diabetes mellitus through multi-view co-training machine learning model for imbalanced dataset. Sci Rep 2024; 14(1): 22741.
  • [23] Ribeiro MT, Singh S, Guestrin C. "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016; 1135-1144.
  • [24] Kuhn M, Johnson K. Applied predictive modeling. New York: Springer. 2013.
  • [25] Ho TK. Random decision forests. In: Proceedings of the 3rd International Conference on Document Analysis and Recognition 1995; 278-282.
  • [26] Tatlıdil H. Applied multivariate statistical analysis. Academy Printing House, Ankara, 2002; 167.
  • [27] Kalaycı Ş. SPSS applied multivariate statistics techniques. Ankara, Asil Publishing, 2016.
  • [28] Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1986; 323(6088): 533-536.
  • [29] Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat 2001; 1189-1232.
  • [30] Chen T, Guestrin C. XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016; 785-794.
  • [31] Geurts P, Ernst D, Wehenkel L. Extremely randomized trees. Mach Learn 2006; 63: 3-42.
  • [32] Fix E, Hodges JL. Discriminatory analysis: nonparametric discrimination, small sample performance. Air University, USAF School of Aviation Medicine 1952.
  • [33] John GH, Langley P. Estimating continuous distributions in Bayesian classifiers. In: Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence 1995; 338-345.
  • [34] Liu Z, Wang Y, Vaidya S, Ruehle F, Halverson J, Soljačić M, Hou TY, Tegmark M. KAN: Kolmogorov-Arnold networks. arXiv preprint 2024; arXiv:2404.19756.
  • [35] Higgins C. Diagnosing diabetes: blood glucose and the role of the laboratory. Br J Nurs 2001; 10(4): 230-236.
  • [36] Barber TM, Kyrou I, Randeva HS, Weickert MO. Mechanisms of insulin resistance at the crossroad of obesity with associated metabolic abnormalities and cognitive dysfunction. Int J Mol Sci 2021; 22(2): 546.
  • [37] Naito H, Kaga H, Someya Y, et al. Fat accumulation and elevated free fatty acid are associated with age-related glucose intolerance: Bunkyo Health Study. J Endocr Soc 2023; 8(2): bvad164.
  • [38] Nyakairu Doreen G. The Impact of Genetic History on the Risk of Developing Type II Diabetes. Res Output J Biol Appl Sci 2024; 4(1): 51-57.
  • [39] Diaz-Santana MV, O'Brien KM, Park YM, Sandler DP, Weinberg CR. Persistence of risk for type 2 diabetes after gestational diabetes mellitus. Diabetes Care 2022; 45(4): 864-870.
  • [40] Cubillos G, Monckeberg M, Plaza A, Morgan M, Estevez PA, Choolani M, Kemp MW, Illanes SE, Perez CA. Development of machine learning models to predict gestational diabetes risk in the first half of pregnancy. BMC Pregnancy Childbirth 2023; 23(1): 469.
  • [41] Kumar M, Ang LT, Ho C, et al. Machine Learning-Derived Prenatal Predictive Risk Model to Guide Intervention and Prevent the Progression of Gestational Diabetes Mellitus to Type 2 Diabetes: Prediction Model Development Study. JMIR Diabetes 2022; 7(3): e32366.
  • [42] Lai H, Huang H, Keshavjee K, Guergachi A, Gao X. Predictive models for diabetes mellitus using machine learning techniques. BMC Endocr Disord 2019; 19(1): 1-9.
  • [43] Agraz M, Goksuluk D, Zhang P, Choi BR, Clements RT, Choudhary G, Karniadakis GE. ML-GAP: machine learning-enhanced genomic analysis pipeline using autoencoders and data augmentation. Frontiers in Genetics 2024; 15: 1442759.
  • [44] Mak KK, Wong YH, Pichika MR. Artificial intelligence in drug discovery and development. Drug discovery and evaluation: safety and pharmacokinetic assays 2024; 1461-1498.
  • [45] Yolcu U, Egrioglu E, Aladag ÇH. A new linear & nonlinear artificial neural network model for time series forecasting. Decision Support Systems 2013; 54(3): 1340-1347.
  • [46] Valenca M, Ludermir T. Multiplicative-additive neural networks with active neurons. In IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No. 99CH36339) IEEE, 1999; 6, 3821-3823.
  • [47] Tatlı ŞD, Yakut SG. Determination of factors affecting university students' happiness levels through decision trees analysis. Journal of Awareness 2024; 9(2): 237-250.
  • [48] Kolmogorov AN. On the representations of continuous functions of many variables by superposition of continuous functions of one variable and addition. In: Dokl. Akad. Nauk USSR 1957; 953-956.
  • [49] Ong KL, Stafford LK, McLaughlin SA, Boyko EJ, Vollset SE, Smith AE, Brauer M. Global, regional, and national burden of diabetes from 1990 to 2021, with projections of prevalence to 2050: a systematic analysis for the Global Burden of Disease Study 2021. The Lancet 2023; 402(10397): 203-234.
  • [50] Topşir A, Güler F, Çetin E, Burak MF, Agraz M. Thyroid disease classification using generative adversarial networks and Kolmogorov-Arnold network for three-class classification. BMC Medical Informatics and Decision Making 2025; 25(1): 284.
  • [51] Agraz M, Mantzoros C, Karniadakis GE. ChatGPT-Enhanced ROC Analysis (CERA): a Shiny web tool for finding optimal cutoff points in biomarker analysis. PLOS ONE 2024; 19(4): e0289141.


Details

Primary Language: English
Subjects: Bioengineering (Other)
Section: Research Article
Authors

Şeyda Demirel Tatlı 0000-0002-8736-5162

Kürşad Aytekin 0000-0002-6969-1183

Melih Agraz 0000-0002-6597-7627

Submission Date: April 11, 2025
Acceptance Date: November 22, 2025
Publication Date: January 27, 2026
Published Issue: Year 2026, Volume: 15, Issue: 1

How to Cite

AMA: Demirel Tatlı Ş, Aytekin K, Agraz M. ENHANCING DIABETES PREDICTION WITH INTERPRETABLE MACHINE LEARNING: A COMPARATIVE ANALYSIS OF ADDITIVE–MULTIPLICATIVE NEURAL NETWORKS AND KOLMOGOROV–ARNOLD NETWORKS. Estuscience - Life. 2026;15(1):46-65. doi:10.18036/estubtdc.1674345