Research Article

An Effect Analysis of the Balancing Techniques on the Counterfactual Explanations of Student Success Prediction Models

Year 2024, , 302 - 317, 30.12.2024
https://doi.org/10.21031/epod.1526704

Abstract

In the past decade, higher education has seen a massive boom in the use of digital solutions. This boom has produced large amounts of data, enabling advanced data analysis methods that support learners and shed light on learning processes. One of the dominant research directions in learning analytics is predictive modeling of learner success using various machine learning methods. To build learners' and teachers' trust in such methods and systems, it is necessary to explore methods and methodologies that allow the relevant stakeholders to understand the underlying machine learning models in depth. In this context, counterfactual explanations from the explainable machine learning toolbox are promising. Several counterfactual generation methods hold much promise, but the features they alter must be actionable and causal to be effective. It is therefore essential to determine which counterfactual generation method best suits student success prediction models in terms of desiderata, stability, and robustness. Although a few studies on the use of counterfactual explanations in the educational sciences have been published in recent years, they have yet to establish which counterfactual generation method is most suitable for this problem. This paper analyzes the effectiveness of commonly used counterfactual generation methods, namely WhatIf Counterfactual Explanations, Multi-Objective Counterfactual Explanations, and Nearest Instance Counterfactual Explanations, after applying class balancing techniques. To demonstrate the practical usefulness of counterfactual explanations, we present a case study on the Open University Learning Analytics Dataset. The results illustrate the methods' effectiveness and describe concrete steps that could be taken to alter the model's prediction.
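The pipeline described in the abstract can be sketched in a few lines: balance an imbalanced student dataset, fit a classifier, and then search for a nearest-instance counterfactual that flips an at-risk prediction. The sketch below is a minimal pure-NumPy illustration under invented assumptions; the feature names (`clicks`, `assignments`), class sizes, and the k-NN model are all hypothetical and are not taken from the paper, whose experiments use the OULAD data and the WhatIf, MOC, and NICE methods.

```python
import numpy as np

# Hypothetical toy data: each row = (VLE clicks, assignments submitted),
# label = pass (1) / fail (0). Values are invented for illustration.
rng = np.random.default_rng(0)
X_pass = rng.normal([80, 8], [10, 1], size=(40, 2))
X_fail = rng.normal([30, 3], [10, 1], size=(8, 2))   # minority class: 40 vs 8
X = np.vstack([X_pass, X_fail])
y = np.array([1] * 40 + [0] * 8)

# One simple balancing technique: random oversampling of the minority class.
idx = rng.choice(np.where(y == 0)[0], size=32)
X_bal = np.vstack([X, X[idx]])
y_bal = np.concatenate([y, y[idx]])

def knn_predict(Xtr, ytr, x, k=5):
    """Majority vote among the k nearest training points."""
    d = np.linalg.norm(Xtr - x, axis=1)
    return int(np.round(ytr[np.argsort(d)[:k]].mean()))

def nearest_instance_cf(Xtr, ytr, x, model):
    """NICE-style idea: the closest training instance predicted in the opposite class."""
    target = 1 - model(Xtr, ytr, x)
    candidates = [z for z in Xtr if model(Xtr, ytr, z) == target]
    return min(candidates, key=lambda z: np.linalg.norm(z - x))

student = np.array([35.0, 4.0])              # an at-risk student
pred = knn_predict(X_bal, y_bal, student)    # model's prediction on balanced data
cf = nearest_instance_cf(X_bal, y_bal, student, knn_predict)
print(cf - student)                          # feature changes that would flip the prediction
```

The difference `cf - student` is the counterfactual recommendation (e.g., additional clicks and assignments needed); whether such a change is actionable and causal for a real student is exactly the desideratum the paper evaluates.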

Supporting Institution

German Federal Ministry of Education and Research

Project Number

16DHBKI045

References

  • Adnan, M., Habib, A., Ashraf, J., Mussadiq, S., Raza, A. A., Abid, M., ... & Khan, S. U. (2021). Predicting at-risk students at different percentages of course length for early intervention using machine learning models. IEEE Access, 9, 7519–7539. https://doi.org/10.1109/ACCESS.2021.3049446
  • Afrin, F., Hamilton, M., & Thevathyan, C. (2023, June). Exploring counterfactual explanations for predicting student success. In International Conference on Computational Science (pp. 413–420). Springer. https://doi.org/10.1007/978-3-031-36021-3_44
  • Arnold, K. E., & Pistilli, M. D. (2012). Course signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12) (pp. 267–270). ACM. https://doi.org/10.1145/2330601.2330666
  • Artelt, A., & Hammer, B. (2019). On the computation of counterfactual explanations: A survey. arXiv preprint arXiv:1911.07749. https://doi.org/10.48550/arXiv.1911.07749
  • Artelt, A., Vaquet, V., Velioglu, R., Hinder, F., Brinkrolf, J., Schilling, M., & Hammer, B. (2021). Evaluating the robustness of counterfactual explanations. In 2021 IEEE Symposium Series on Computational Intelligence (pp. 01–09). IEEE. https://doi.org/10.1109/SSCI50451.2021.9660058
  • Biecek, P., & Burzykowski, T. (2021). Explanatory model analysis: Explore, explain, and examine predictive models. Chapman and Hall/CRC. https://doi.org/10.1201/9780429027192
  • Brughmans, D., Leyman, P., & Martens, D. (2023). NICE: An algorithm for nearest instance counterfactual explanations. Data Mining and Knowledge Discovery, 1–39. https://doi.org/10.1007/s10618-023-00930-y
  • Carriero, A., Luijken, K., de Hond, A., Moons, K. G., van Calster, B., & van Smeden, M. (2024). The harms of class imbalance corrections for machine learning-based prediction models: A simulation study. arXiv preprint arXiv:2404.19494. https://doi.org/10.48550/arXiv.2404.19494
  • Cavus, M., & Biecek, P. (2024). An experimental study on the Rashomon effect of balancing methods in imbalanced classification. arXiv preprint arXiv:2405.01557. https://doi.org/10.48550/arXiv.2405.01557
  • Cavus, M., & Kuzilek, J. (2024). The actionable explanations for student success prediction models: A benchmark study on the quality of counterfactual methods. Joint Proceedings of the Human-Centric eXplainable AI in Education and the Leveraging Large Language Models for Next Generation Educational Technologies Workshops (HEXED-L3MNGET 2024), co-located with the 17th International Conference on Educational Data Mining (EDM 2024), 1–10. https://ceur-ws.org/Vol-3840/HEXED24_paper1.pdf
  • Cavus, M., Stando, A., & Biecek, P. (2023). Glocal explanations of expected goal models in soccer. arXiv preprint arXiv:2308.15559. https://doi.org/10.48550/arXiv.2308.15559
  • Chawla, N. V. (2010). Data mining for imbalanced datasets: An overview. In Data Mining and Knowledge Discovery Handbook (pp. 875–886). Springer. https://doi.org/10.1007/978-0-387-09823-4_45
  • Dandl, S., Hofheinz, A., Binder, M., Bischl, B., & Casalicchio, G. (2023). Counterfactuals: An R package for counterfactual explanation methods. arXiv preprint arXiv:2304.06569. https://doi.org/10.48550/arXiv.2304.06569
  • Dandl, S., Molnar, C., Binder, M., & Bischl, B. (2020). Multi-objective counterfactual explanations. In Proceedings of the International Conference on Parallel Problem Solving from Nature (pp. 448–469). Springer. https://doi.org/10.1007/978-3-030-58112-1_31
  • Drachsler, H. (2018). Trusted learning analytics. Synergie, 6, 40–43. https://doi.org/10.25657/02:19141
  • Elyan, E., Moreno-Garcia, C. F., & Jayne, C. (2021). CDSMOTE: Class decomposition and synthetic minority class oversampling technique for imbalanced-data classification. Neural Computing and Applications, 33(7), 2839–2851. https://doi.org/10.1007/s00521-020-05130-z
  • Eurostat. (2023). Glossary: Information and communication technology (ICT). Eurostat: Statistics Explained. https://tinyurl.com/eust-ict
  • Grinsztajn, L., Oyallon, E., & Varoquaux, G. (2022). Why do tree-based models still outperform deep learning on typical tabular data? Advances in Neural Information Processing Systems, 35, 507–520. https://doi.org/10.48550/arXiv.2207.08815
  • Guidotti, R. (2022). Counterfactual explanations and how to find them: Literature review and benchmarking. Data Mining and Knowledge Discovery, 1–55. https://doi.org/10.1007/s10618-022-00831-6
  • Gunonu, S., Altun, G., & Cavus, M. (2024). Explainable bank failure prediction models: Counterfactual explanations to reduce the failure risk. arXiv preprint arXiv:2407.11089. https://doi.org/10.48550/arXiv.2407.11089
  • Gu, Q., Tian, J., Li, X., & Jiang, S. (2022). A novel Random Forest integrated model for imbalanced data classification problem. Knowledge-Based Systems, 250, 109050. https://doi.org/10.1016/j.knosys.2022.109050
  • Herodotou, C., Rienties, B., Boroowa, A., et al. (2019). A large-scale implementation of predictive learning analytics in higher education: The teachers’ role and perspective. Educational Technology Research and Development, 67, 1273–1306. https://doi.org/10.1007/s11423-019-09685-0
  • Hilbert, M., & López, P. (2011). The world’s technological capacity to store, communicate, and compute information. Science, 332(6025), 60–65. https://doi.org/10.1126/science.1200970
  • Hoel, T., Griffiths, D., & Chen, W. (2017). The influence of data protection and privacy frameworks on the design of learning analytics systems. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (pp. 243–252). ACM. https://doi.org/10.1145/3027385.3027414
  • Holzinger, A., Saranti, A., Molnar, C., Biecek, P., & Samek, W. (2022). Explainable AI methods: A brief overview. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers (pp. 13–38). Springer. https://doi.org/10.1007/978-3-031-04083-2_2
  • Junior, J. D. S. F., & Pisani, P. H. (2022). Performance and model complexity on imbalanced datasets using resampling and cost-sensitive algorithms. In Fourth International Workshop on Learning with Imbalanced Domains: Theory and Applications (pp. 83–97). PMLR.
  • Karimi, A. H., Barthe, G., Balle, B., & Valera, I. (2020). Model-agnostic counterfactual explanations for consequential decisions. In Proceedings of the International Conference on Artificial Intelligence and Statistics (pp. 895–905). PMLR. https://doi.org/10.48550/arXiv.1905.11190
  • Kuzilek, J., Hlosta, M., & Zdrahal, Z. (2017). Open University learning analytics dataset. Scientific Data, 4(1), 1–8. https://doi.org/10.1038/sdata.2017.171
  • Kuzilek, J., Hlosta, M., Herrmannova, D., Zdrahal, Z., Vaclavek, J., & Wolff, A. (2015). OU student data from a MOOC environment. Data in Brief, 5, 759–761. https://doi.org/10.1016/j.dib.2015.11.014
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444. https://doi.org/10.1038/nature14539
  • Liu, J. (2022). Importance-SMOTE: A synthetic minority oversampling method for noisy imbalanced data. Soft Computing, 26(3), 1141–1163. https://doi.org/10.1007/s00500-021-06532-4
  • López-Pernas, S., Saqr, M., Conde, J., & Del-Río-Carazo, L. (2024). A broad collection of datasets for educational research training and application. In M. Saqr & S. López-Pernas (Eds.), Learning analytics methods and tutorials. Springer. https://doi.org/10.1007/978-3-031-54464-4_2
  • Molnar, C. (2020). Interpretable machine learning. Lulu.com.
  • Murphy, K. P. (2022). Probabilistic Machine Learning: An Introduction. MIT Press.
  • Papamitsiou, Z., & Economides, A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Journal of Educational Technology & Society, 17(4), 49–64. http://www.jstor.org/stable/jeductechsoci.17.4.49
  • Pinto, J. D., & Paquette, L. (2024). Towards a unified framework for evaluating explanations. arXiv preprint arXiv:2405.14016. https://doi.org/10.48550/arXiv.2405.14016
  • Siemens, G., & Baker, R. (2012). Learning analytics and educational data mining: Towards communication and collaboration. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12) (pp. 252–254). Association for Computing Machinery. https://doi.org/10.1145/2330601.2330661
  • Singer, N. (2014). InBloom student data repository to close. The New York Times, April 21, 2014. https://uhh.de/2rgnb [accessed July 11, 2018].
  • Smith, B. I., Chimedza, C., & Bührmann, J. H. (2022). Individualized help for at-risk students using model-agnostic and counterfactual explanations. Education and Information Technologies, 1–20. https://doi.org/10.1007/s10639-021-10661-6
  • Stando, A., Cavus, M., & Biecek, P. (2024, June). The effect of balancing methods on model behavior in imbalanced classification problems. In Fifth International Workshop on Learning with Imbalanced Domains: Theory and Applications (pp. 16–30). PMLR. https://doi.org/10.48550/arXiv.2307.00157
  • Tao, X., Li, Q., Guo, W., Ren, C., Li, C., Liu, R., & Zou, J. (2019). Self-adaptive cost weights-based support vector machine cost-sensitive ensemble for imbalanced data classification. Information Sciences, 487, 31–56. https://doi.org/10.1016/j.ins.2019.02.062
  • Tsiakmaki, M., & Ragos, O. (2021). A case study of interpretable counterfactual explanations for the task of predicting student academic performance. In 2021 25th International Conference on Circuits, Systems, Communications and Computers (CSCC) (pp. 120–125). IEEE. https://doi.org/10.1109/CSCC53858.2021.00029
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://doi.org/10.48550/arXiv.1706.03762
  • Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31, 841–872. https://doi.org/10.48550/arXiv.1711.00399
  • Waheed, H., Hassan, S. U., Aljohani, N. R., Hardman, J., Alelyani, S., & Nawaz, R. (2020). Predicting the academic performance of students from VLE big data using deep learning models. Computers in Human Behavior, 104, 106189. https://doi.org/10.1016/j.chb.2019.106189
  • Warren, G., Keane, M. T., Gueret, C., & Delaney, E. (2023). Explaining groups of instances counterfactually for XAI: A use case, algorithm, and user study for group-counterfactuals. arXiv preprint arXiv:2303.09297. https://doi.org/10.48550/arXiv.2303.09297
  • Wexler, J., Pushkarna, M., Bolukbasi, T., Wattenberg, M., Viégas, F., & Wilson, J. (2019). The what-if tool: Interactive probing of machine learning models. IEEE Transactions on Visualization and Computer Graphics, 26(1), 56–65. https://doi.org/10.1109/TVCG.2019.2934619
  • Yin, J., Gan, C., Zhao, K., Lin, X., Quan, Z., & Wang, Z. J. (2020). A novel model for imbalanced data classification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 04, pp. 6680–6687). https://doi.org/10.1609/aaai.v34i04.6145
  • Zhang, H., Dong, J., Lv, C., Lin, Y., & Bai, J. (2023). Visual analytics of potential dropout behavior patterns in online learning based on counterfactual explanation. Journal of Visualization, 26(3), 723–741. https://doi.org/10.1007/s12650-022-00899-8
  • Zong, W., Huang, G. B., & Chen, Y. (2013). Weighted extreme learning machine for imbalance learning. Neurocomputing, 101, 229–242. https://doi.org/10.1016/j.neucom.2012.08.010
There are 50 citations in total.

Details

Primary Language English
Subjects Testing, Assessment and Psychometrics (Other)
Journal Section Articles
Authors

Mustafa Çavuş 0000-0002-6172-5449

Jakub Kuzilek 0000-0002-8656-0599

Project Number 16DHBKI045
Publication Date December 30, 2024
Submission Date August 1, 2024
Acceptance Date November 26, 2024
Published in Issue Year 2024

Cite

APA Çavuş, M., & Kuzilek, J. (2024). An Effect Analysis of the Balancing Techniques on the Counterfactual Explanations of Student Success Prediction Models. Journal of Measurement and Evaluation in Education and Psychology, 15(Special Issue), 302-317. https://doi.org/10.21031/epod.1526704