Research Article

Performance of Classification Techniques on Smaller Group Prediction

Year 2025, Volume: 16 Issue: 1, 30 - 47, 31.03.2025

Abstract

Classification techniques allow researchers to analyze data by group, either to cluster cases or to predict group membership. Because many classification methods exist, such as Linear Discriminant Analysis (LDA), Logistic Regression (LR), and Classification and Regression Trees (CART), it is important to know which techniques perform better under which conditions. In group prediction, the effect of group proportional sizes on prediction accuracy is particularly important when smaller groups are compared with larger ones. This study evaluated the small-group prediction accuracy of LDA, LR, and CART while controlling for the number of groups, the correlation among predictors, and the number of predictor variables. Results showed that CART performed best in most cases for both small-group and overall group prediction. In addition, a notable difference was observed between the two outcomes, with overall group prediction accuracy being greater than small-group prediction accuracy. Data conditions had a greater impact on LR and LDA than on CART, and in certain instances LR outperformed the other two methods. The number of groups was the most influential factor on small-group prediction, followed in decreasing order of influence by the number of predictor variables, correlation, and method. In general, overall group prediction accuracy and small-group prediction accuracy were negatively related; however, for conditions with an equal number of groups, the two were positively related.
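The abstract's central contrast, that overall accuracy can look strong while the smaller group is predicted poorly, can be made concrete with a short sketch. The data below are hypothetical and are not drawn from the study's simulation; the helper simply computes overall accuracy alongside per-group accuracy (recall per class):

```python
from collections import defaultdict

def group_accuracies(actual, predicted):
    """Return overall accuracy and per-group accuracy (recall per class)."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for a, p in zip(actual, predicted):
        totals[a] += 1
        if a == p:
            hits[a] += 1
    overall = sum(hits.values()) / len(actual)
    per_group = {g: hits[g] / totals[g] for g in totals}
    return overall, per_group

# Hypothetical sample: 90 cases in the larger group "A", 10 in the smaller group "B".
actual = ["A"] * 90 + ["B"] * 10
# A classifier that favors the larger group: 88/90 correct for A, only 2/10 for B.
predicted = ["A"] * 88 + ["B"] * 2 + ["A"] * 8 + ["B"] * 2

overall, per_group = group_accuracies(actual, predicted)
print(overall)         # 0.9  — overall accuracy looks strong
print(per_group["B"])  # 0.2  — the smaller group is predicted poorly
```

This is why the study reports small-group accuracy separately from overall accuracy: a method dominated by the larger group can score well overall while failing on the group of interest.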

References

  • Agresti, A. (2013). Categorical data analysis (3rd ed.). Wiley.
  • Barön, A. E. (1991). Misclassification among methods used for multiple group discrimination‐the effects of distributional properties. Statistics in Medicine, 10(5), 757-766. doi: https://doi.org/10.1002/sim.4780100511
  • Bates, B. E., Xie, D., Kwong, P. L., Kurichi, J. E., Ripley, D. C., & Stineman, M. G. (2014). One-year all-cause mortality after stroke: A prediction model. PM&R, 6(6), 473-483. doi: https://doi.org/10.1016/j.pmrj.2013.11.006
  • Bolin, J., & Finch, W. (2014). Supervised classification in the presence of misclassified training data: A Monte Carlo simulation study in the three-group case. Frontiers in Psychology, 5. doi: https://doi.org/10.3389/fpsyg.2014.00118
  • Breiman, L., Friedman, J., Stone, C. J., & Olshen, R. A. (1984). Classification and regression trees. CRC Press.
  • Castonguay, A. C., Zoghi, Z., Zaidat, O. O., Burgess, R. E., Zaidi, S. F., Mueller‐Kronast, N., ... & Jumaa, M. A. (2023). Predicting functional outcome using 24‐hour post‐treatment characteristics: Application of machine learning algorithms in the STRATIS registry. Annals of Neurology, 93(1), 40-49. doi: https://doi.org/10.1002/ana.26528
  • Chiang, Y. C. (2021). Evaluating the performance of classification and regression trees, random forests, and K-means clustering under controlled conditions (Doctoral dissertation, Indiana University).
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
  • Dattalo, P. (1995). A comparison of discriminant analysis and logistic regression. Journal of Social Service Research, 19(3-4), 121-144.
  • Dey, E. L., & Astin, A. W. (1993). Statistical alternatives for studying college student retention: A comparative analysis of logit, probit, and linear regression. Research in Higher Education, 34, 569-581. doi: https://doi.org/10.1007/BF00991920
  • Edwards, A. L. (1985). Experimental design in psychological research (5th ed.). NY: Harper & Row.
  • Fan, X., & Wang, L. (1999). Comparing linear discriminant function with logistic regression for the two-group classification problem. The Journal of Experimental Education, 67(3), 265-286. doi: https://doi.org/10.1080/00220979909598356
  • Ferrer, A. J. A., & Wang, L. (1999). Comparing the classification accuracy among nonparametric, parametric discriminant analysis and logistic regression methods (pp. 1-24, Rep.). Montreal: Paper presented at the Annual Meeting of the American Educational Research Association. https://eric.ed.gov/?id=ED432591
  • Finch, H. W., Bolin, J. E., & Kelley, K. (2014). Group membership prediction when known groups consist of unknown subgroups: A Monte Carlo comparison of methods. Frontiers in Psychology, 5. doi: https://doi.org/10.3389/fpsyg.2014.00337
  • Finch, H., & Schneider, M. K. (2007). Classification accuracy of neural networks vs. discriminant analysis, logistic regression, and classification and regression trees: Three- and five-group cases. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 3(2), 47-57. doi: https://doi.org/10.1027/1614-2241.3.2.47
  • Finch, W. H., & Schneider, M. K. (2006). Misclassification rates for four methods of group classification. Educational and Psychological Measurement, 66(2), 240-257. doi: https://doi.org/10.1177/0013164405278579
  • Graf, R., Zeldovich, M., & Friedrich, S. (2023). Comparing linear discriminant analysis and supervised learning algorithms for binary classification—A method comparison study. Biometrical Journal, 66(1), 2200098. doi: https://doi.org/10.1002/bimj.202200098
  • Grassi, M., Villani, S., & Marinoni, A. (2001). Classification methods for the identification of “case” in epidemiological diagnosis of asthma. European Journal of Epidemiology, 17, 19-29. doi: https://doi.org/10.1023/A:1010987521885
  • Hao, Z., Yang, B., Ruggiano, N., Ma, Y., Guo, Y., & Pan, X. (2022). Depression prediction amongst Chinese older adults with neurodegenerative diseases: A performance comparison between decision tree model and logistic regression analysis. The British Journal of Social Work, 52(1), 274-290. doi: https://doi.org/10.1093/bjsw/bcaa237
  • Hassan, H. A., Hemdan, E. E. D., El‐Shafai, W., Shokair, M., & Abd El‐Samie, F. E. (2024). Detection of attacks on software defined networks using machine learning techniques and imbalanced data handling methods. Security and Privacy, 7(2), e350. doi: https://doi.org/10.1002/spy2.350
  • Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: Data mining, inference, and prediction (2nd ed.). Springer.
  • Hess, B., Olejnik, S., & Huberty, C. J. (2001). The efficacy of two improvement-over-chance effect sizes for two-group univariate comparisons under variance heterogeneity and nonnormality. Educational and Psychological Measurement, 61(6), 909-936. doi: https://doi.org/10.1177/00131640121971572
  • Hoang, M. L., Matrella, G., & Ciampolini, P. (2025). Metrological evaluation of contactless sleep position recognition using an accelerometric smart bed and machine learning. Sensors and Actuators A: Physical, 385, 116309. doi: https://doi.org/10.1016/j.sna.2025.116309
  • Holden, J. E., Finch, W. H., & Kelley, K. (2011). A comparison of two-group classification methods. Educational and Psychological Measurement, 71(5), 870-901. doi: https://doi.org/10.1177/0013164411398357
  • Holden, J. E., & Kelley, K. (2010). The effects of initially misclassified data on the effectiveness of discriminant function analysis and finite mixture modeling. Educational and Psychological Measurement, 70(1), 36-55. doi: https://doi.org/10.1177/0013164409344533
  • Huberty, C. J. (1994). Applied discriminant analysis. John Wiley & Sons.
  • Huberty, C. J., & Olejnik, S. (2006). Applied MANOVA and discriminant analysis (Vol. 498). John Wiley & Sons.
  • IBM Corp. (2025). IBM SPSS Statistics for Windows, Version 30.0.0. Armonk, NY: IBM Corp. Retrieved from https://www.ibm.com/us-en/marketplace/statistical-analysis-and-reporting
  • Kiang, M. Y. (2003). A comparative assessment of classification methods. Decision Support Systems, 35(4), 441-454. doi: https://doi.org/10.1016/S0167-9236(02)00110-0
  • Lei, P., & Koehly, L. (2003). Linear discriminant analysis versus logistic regression: A comparison of classification errors in the two-group case. Journal of Experimental Education, 72(1), 25-49. doi: https://doi.org/10.1080/00220970309600878
  • Meshbane, A., & Morris, J. D. (1995). A method for selecting between linear and quadratic classification models in discriminant analysis. Journal of Experimental Education, 63(1), 263-273. doi: https://doi.org/10.1080/00220973.1995.9943813
  • Pai, D. R., Lawrence, K. D., Klimberg, R. K., & Lawrence, S. M. (2012). Analyzing the balancing of error rates for multi-group classification. Expert Systems with Applications, 39(17), 12869-12875. doi: https://doi.org/10.1016/j.eswa.2012.05.006
  • Paxton, P., Curran, P. J., Bollen, K. A., Kirby, J., & Chen, F. (2001). Monte Carlo Experiments: Design and implementation. Structural Equation Modeling, 8(2), 287-312. doi: https://doi.org/10.1207/S15328007SEM0802_7
  • Pierce, C. A., Block, R. A., & Aguinis, H. (2004). Cautionary note on reporting eta-squared values from multifactor ANOVA designs. Educational and Psychological Measurement, 64(6), 916-924. doi: https://doi.org/10.1177/0013164404264848
  • Pohar, M., Blas, M., & Turk, S. (2004). Comparison of logistic regression and linear discriminant analysis: A simulation study. Metodoloski Zvezki, 1(1), 143-161. http://mrvar.fdv.uni-lj.si/pub/mz/mz1.1/pohar.pdf
  • Preatoni, D. G., Nodari, M., Chirchella, R., Tosi, G., Wauters, L. A., & Martinoli, A. (2005). Identifying bats from time-expanded recordings of search calls: Comparing classification methods. Journal of Wildlife Management, 69(1), 1601-1614. doi: https://doi.org/10.2193/0022-541X(2005)69[1601:IBFTRO]2.0.CO;2
  • R Core Team (2016). R: A Language and Environment for Statistical Computing. R foundation for Statistical Computing, Vienna, Austria. Retrieved from https://www.r-project.org/
  • Rausch, J. R., & Kelley, K. (2009). A comparison of linear and mixture models for discriminant analysis under nonnormality. Behavior Research Methods, 41(1), 85-98. doi: https://doi.org/10.3758/BRM.41.1.85
  • Richardson, J. T. (2011). Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review, 6(2), 135-147. doi: https://doi.org/10.1016/j.edurev.2010.12.001
  • Ripley, B. D. (1994). Neural networks and related methods for classification. Journal of the Royal Statistical Society: Series B (Methodological), 56(3), 409-456. doi: https://doi.org/10.1111/j.2517-6161.1994.tb01990.x
  • Saboor, A., Usman, M., Ali, S., Samad, A., Abrar, M. F., & Ullah, N. (2022). A method for improving prediction of human heart disease using machine learning algorithms. Mobile Information Systems, 2022(1), 1410169. doi: https://doi.org/10.1155/2022/1410169
  • Sainani, K. L. (2014). Explanatory versus predictive modeling. PM&R, 6(9), 841-844. doi: https://doi.org/10.1016/j.pmrj.2014.08.941
  • Schumacher, M., Rossner, R., & Vach, W. (1996). Neural networks and logistic regression: Part I. Computational Statistics & Data Analysis, 21(1), 661-682. doi: https://doi.org/10.1016/0167-9473(95)00032-1
  • Selim, G. E. I., Hemdan, E. E. D., Shehata, A. M., & El-Fishawy, N. A. (2021). Anomaly events classification and detection system in critical industrial internet of things infrastructure using machine learning algorithms. Multimedia Tools and Applications, 80(8), 12619-12640. doi: https://doi.org/10.1007/s11042-020-10354-1
  • Song, G., Ai, Z., Zhang, G., Peng, Y., Wang, W., & Yan, Y. (2022). Using machine learning algorithms to multidimensional analysis of subjective thermal comfort in a library. Building and Environment, 212, 108790. doi: https://doi.org/10.1016/j.buildenv.2022.108790
  • Soureshjani, M. H., & Kimiagari, A. M. (2013). Calculating the best cut off point using logistic regression and neural network on credit scoring problem-A case study of a commercial bank. African Journal of Business Management, 7(16), 1414. doi: https://doi.org/10.5897/AJBM11.394
  • Suh, S., Suh, J., & Houston, I. (2007). Predictors of categorical at‐risk high school dropouts. Journal of Counseling & Development, 85(2), 196-203. doi: https://doi.org/10.1002/j.1556-6678.2007.tb00463.x
  • Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics. Boston: Pearson Education.
  • Williams, C. J., Lee, S. S., Fisher, R. A., & Dickerman, L. H. (1999). A comparison of statistical methods for prenatal screening for Down syndrome. Applied Stochastic Models in Business and Industry, 15(2), 89-101. doi: https://doi.org/10.1002/(SICI)1526-4025(199904/06)15:2<89::AID-ASMB366>3.0.CO;2-K
  • Zampogna, B., Torre, G., Zampoli, A., Parisi, F., Ferrini, A., Shanmugasundaram, S., ... & Papalia, R. (2024). Can machine learning predict the accuracy of preoperative planning for total hip arthroplasty, basing on patient-related factors? An explorative investigation on supervised machine learning classification models. Journal of Clinical Orthopaedics and Trauma, 53, 102470. doi: https://doi.org/10.1016/j.jcot.2024.102470
  • Zavorka, S., & Perrett, J. J. (2014). Minimum sample size considerations for two-group linear and quadratic discriminant analysis with rare populations. Communications in Statistics-Simulation and Computation, 43(7), 1726-1739. doi: https://doi.org/10.1080/03610918.2012.744041
There are 50 citations in total.

Details

Primary Language English
Subjects Statistical Analysis Methods
Journal Section Articles
Authors

Cahit Polat 0000-0002-1423-5084

Kathy Green 0000-0002-1676-3139

Publication Date March 31, 2025
Submission Date December 9, 2024
Acceptance Date March 19, 2025
Published in Issue Year 2025 Volume: 16 Issue: 1

Cite

APA Polat, C., & Green, K. (2025). Performance of Classification Techniques on Smaller Group Prediction. Journal of Measurement and Evaluation in Education and Psychology, 16(1), 30-47. https://doi.org/10.21031/epod.1598907