Research Article

Harmonizing perspectives to understand attitudes: A mixed methods approach to crafting an assessment literacy attitude scale

Year 2024, Volume: 11 Issue: 3, 424 - 444, 09.09.2024
https://doi.org/10.21449/ijate.1440210

Abstract

Although assessment literacy plays a vital role in faculty effectiveness in higher education, few tools exist for measuring faculty attitudes toward it. Employing a sequential mixed-methods approach, this study used the theory of planned behavior to develop the Assessment Literacy Attitude Scale (ALAS) and evaluate its psychometric properties within the U.S. higher education context. The qualitative phase involved a review of relevant literature and existing self-report measures, interviews with stakeholders, and panel reviews to shape initial item development. After establishing this conceptual foundation and outlining the scale's construction, the study advanced to a quantitative stage that applied factor-analytic and item response theory approaches to data from 260 faculty across three public universities in the U.S. Exploratory factor analysis (EFA) was used first to obtain preliminary insights into the scale's factorial structure and dimensionality. Confirmatory factor analysis (CFA) was then applied to a separate dataset, and its results largely supported the EFA findings. Together, the exploratory and confirmatory analyses yielded 15 items loading on two factors with good model fit. Finally, we used nonparametric item response theory (IRT) techniques based on Mokken Scale Analysis (MSA) to evaluate individual items for psychometric properties that support the interpretation of ALAS scores, including monotonicity, scalability, and invariant item ordering. The newly developed scale shows promise for assessing faculty attitudes toward enhancing their assessment literacy.
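
The analysis workflow summarized above can be illustrated with a brief R sketch. This is not the authors' code: the packages (psych, lavaan, mokken) are those cited in the reference list, but the object names (alas, efa_dat, cfa_dat), the item labels (AL1-AL15), the random split into exploratory and confirmatory subsamples, the item-to-factor assignment, and the WLSMV estimator are illustrative assumptions rather than details reported in the abstract.

library(psych)    # EFA (Revelle, 2023); oblimin rotation also needs the GPArotation package
library(lavaan)   # CFA (Rosseel, 2012)
library(mokken)   # Mokken Scale Analysis (Van der Ark, 2007, 2012)

# Assume `alas` is a data frame of 15 Likert-type item responses named AL1-AL15.
# Split it into exploratory and confirmatory subsamples (hypothetical split).
set.seed(2024)
idx     <- sample(nrow(alas), size = floor(nrow(alas) / 2))
efa_dat <- alas[idx, ]
cfa_dat <- alas[-idx, ]

# 1. Exploratory factor analysis on the first subsample: two-factor solution
efa_fit <- fa(efa_dat, nfactors = 2, rotate = "oblimin", fm = "pa")
print(efa_fit$loadings, cutoff = 0.40)

# 2. Confirmatory factor analysis on the hold-out subsample
#    (the item-to-factor assignment below is for illustration only)
cfa_model <- '
  F1 =~ AL1 + AL2 + AL3 + AL4 + AL5 + AL6 + AL7 + AL8
  F2 =~ AL9 + AL10 + AL11 + AL12 + AL13 + AL14 + AL15
'
cfa_fit <- cfa(cfa_model, data = cfa_dat, estimator = "WLSMV", ordered = TRUE)
fitMeasures(cfa_fit, c("cfi", "tli", "rmsea", "srmr"))

# 3. Nonparametric IRT checks via Mokken Scale Analysis on the full sample
coefH(alas)                        # scalability coefficients (H)
summary(check.monotonicity(alas))  # manifest monotonicity
summary(check.iio(alas))           # invariant item ordering

Each step mirrors one stage named in the abstract: the fit indices requested from lavaan (CFI, TLI, RMSEA, SRMR) are those conventionally evaluated against the cutoffs discussed by Hu and Bentler (1999), and the Mokken checks follow the tutorial approach of Sijtsma and van der Ark (2017).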

References

  • Adam, S. (2004). Using learning outcomes: A consideration of the nature, role, application and implications for European education of employing “learning outcomes” at the local, national and international levels. Paper presented at the Bologna Seminar, Heriot-Watt University, Edinburgh United Kingdom. http://www.aic.lv/ace/ace_disk/Bologna/Bol_semin/Edinburgh/S_Adam_Bacgrerep_presentation.pdf Accessed on 16 November 2023.
  • Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211. https://doi.org/10.1016/0749-5978(91)90020-T
  • Ajzen, I. (2001). Nature and operation of attitudes. Annual Review of Psychology, 52(1), 27-58. https://doi.org/10.1146/annurev.psych.52.1.27
  • Ajzen, I., & Timko, C. (1986). Correspondence between health attitudes and behavior. Basic and Applied Social Psychology, 7(4), 259-276. https://doi.org/10.1207/s15324834basp0704_2
  • Archie, T., Hayward, C.N., Yoshinobu, S., & Laursen, S.L. (2022). Investigating the linkage between professional development and mathematics instructors’ use of teaching practices using the theory of planned behavior. Plos One, 17(4), e0267097. https://doi.org/10.1371/journal.pone.0267097
  • Balloo, K., Norman, M., & Winstone, N.E. (2018, January). Evaluation of a large-scale inclusive assessment intervention: a novel approach to quantifying perceptions about assessment literacy. In The Changing Shape of Higher Education-Can Excellence and Inclusion Cohabit?: Conference Programme and Book of Abstracts. University of Southern Queensland. https://srhe.ac.uk/arc/conference2018/downloads/SRHE_Conf_2018_Programme_Papers.pdf
  • Biggs, J., & Tang, C. (2011). Train-the-trainers: Implementing outcomes-based teaching and learning in Malaysian higher education. Malaysian Journal of Learning and Instruction, 8, 1-19.
  • Caspersen, J., & Smeby, J.C. (2018). The relationship among learning outcome measures used in higher education. Quality in Higher Education, 24(2), 117-135. https://doi.org/10.1080/13538322.2018.1484411
  • Chang, L. (1995). Connotatively consistent and reversed connotatively inconsistent items are not fully equivalent: Generalizability study. Educational and Psychological Measurement, 55(6), 991-997. https://doi.org/10.1177/0013164495055006007
  • Coates, H. (2016). Assessing student learning outcomes internationally: Insights and frontiers. Assessment & Evaluation in Higher Education, 41(5), 662-676. https://doi.org/10.1080/02602938.2016.1160273
  • Cochran, W.G. (1977). Sampling techniques. John Wiley & Sons.
  • Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46. https://doi.org/10.1177/001316446002000104
  • Cole, K.L., Turner, R.C., & Gitchel, W.D. (2019). A study of polytomous IRT methods and item wording directionality effects on perceived stress items. Personality and Individual Differences, 147(6), 63-72. https://doi.org/10.1016/j.paid.2019.03.046
  • Conner, M., & Armitage, C.J. (1998). Extending the theory of planned behavior: A review and avenues for further research. Journal of Applied Social Psychology, 28(15), 1429-1464. https://doi.org/10.1111/j.1559-1816.1998.tb01685.x
  • Creswell, J.W., & Clark, V.P. (2011). Mixed methods research. SAGE Publications.
  • Crick, R.D., Broadfoot, P., & Claxton, G. (2004). Developing an effective lifelong learning inventory: The ELLI project. Assessment in Education: Principles, Policy & Practice, 11(3), 247-272. https://doi.org/10.1080/0969594042000304582
  • Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334. https://doi.org/10.1007/BF02310555
  • Dann, R. (2014). Assessment as learning: blurring the boundaries of assessment and learning for theory, policy and practice. Assessment in Education: Principles, Policy & Practice, 21(2), 149-166. https://doi.org/10.1080/0969594X.2014.898128
  • Dilek, H., & Akbaş, U. (2022). Investigation of education value perception scale's psychometric properties according to CTT and IRT. International Journal of Assessment Tools in Education, 9(3), 548-564. https://doi.org/10.21449/ijate.986530
  • Dill, D. (2007). Quality assurance in higher education: Practices and issues. The 3rd International Encyclopedia of Education.
  • Dunn, R., Hattie, J., & Bowles, T. (2018). Using the Theory of Planned Behavior to explore teachers’ intentions to engage in ongoing teacher professional learning. Studies in Educational Evaluation, 59, 288-294. https://doi.org/10.1016/j.stueduc.2018.10.001
  • Eubanks, D. (2019). Reassessing the elephant, part 1. Assessment Update, 31(2), 6-7. https://doi.org/10.1002/au.30166
  • Evans, C. (2016). Enhancing assessment feedback practice in higher education: The EAT framework. University of Southampton. https://www.southampton.ac.uk/assets/imported/transforms/content-block/UsefulDownloads_Download/A0999D3AF2AF4C5AA24B5BEA08C61D8E/EAT%20Guide%20April%20FINAL1%20ALL.pdf
  • Field, A. (2003). Discovering Statistics using IBM SPSS statistics. Sage Publications.
  • Fokkema, M., & Greiff, S. (2017). How performing PCA and CFA on the same data equals trouble: Overfitting in the assessment of internal structure and some editorial thoughts on it [Editorial]. European Journal of Psychological Assessment, 33(6), 399–402. https://doi.org/10.1027/1015-5759/a000460
  • Fornell, C., & Larcker, D.F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50. https://doi.org/10.2307/3151312
  • Henseler, J., Ringle, C.M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43, 115-135. https://doi.org/10.1007/s11747-014-0403-8
  • Hines, S.R. (2009). Investigating faculty development program assessment practices: What's being done and how can it be improved?. The Journal of Faculty Development, 23(3), 5.
  • Holmboe, E.S., Ward, D.S., Reznick, R.K., Katsufrakis, P.J., Leslie, K.M., Patel, V.L., ... & Nelson, E.A. (2011). Faculty development in assessment: the missing link in competency-based medical education. Academic Medicine, 86(4), 460-467. https://doi.org/10.1097/acm.0b013e31820cb2a7
  • Hora, M.T., & Anderson, C. (2012). Perceived norms for interactive teaching and their relationship to instructional decision-making: A mixed methods study. Higher Education, 64, 573-592. https://doi.org/10.1007/s10734-012-9513-8
  • Howard, M.C. (2016). A review of exploratory factor analysis decisions and overview of current practices: What we are doing and how can we improve?. International Journal of Human-Computer Interaction, 32(1), 51-62. https://doi.org/10.1080/10447318.2015.1087664
  • Hu, L.T., & Bentler, P.M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118
  • Jankowski, N.A., & Marshall, D.W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Routledge. https://doi.org/10.4324/9781003444015
  • Kao, C.P., Lin, K.Y., & Chien, H.M. (2018). Predicting teachers’ behavioral intentions regarding web-based professional development by the theory of planned behavior. EURASIA Journal of Mathematics, Science and Technology Education, 14(5), 1887-1897. https://doi.org/10.29333/ejmste/85425
  • Kline, P. (1994). An easy guide to factor analysis. Routledge.
  • Knauder, H., & Koschmieder, C. (2019). Individualized student support in primary school teaching: A review of influencing factors using the Theory of Planned Behavior (TPB). Teaching and Teacher Education, 77, 66-76. https://doi.org/10.1016/j.tate.2018.09.012
  • Kremmel, B., & Harding, L. (2020). Towards a comprehensive, empirical model of language assessment literacy across stakeholder groups: Developing the language assessment literacy survey. Language Assessment Quarterly, 17(1), 100-120. https://doi.org/10.1080/15434303.2019.1674855
  • Landis, J.R., & Koch, G.G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174. https://doi.org/10.2307/2529310
  • Ligtvoet, R., Van der Ark, L.A., Marvelde, J.M. te, & Sijtsma, K. (2010). Investigating an invariant item ordering for polytomously scored items. Educational and Psychological Measurement, 70(4), 578-595. https://doi.org/10.1177/0013164409355697
  • Liu, O.L., Bridgeman, B., & Adler, R.M. (2012). Measuring learning outcomes in higher education: Motivation matters. Educational Researcher, 41(9), 352-362. https://doi.org/10.3102/0013189X12459679
  • Madigan, D.J., & Kim, L.E. (2021). Towards an understanding of teacher attrition: A meta-analysis of burnout, job satisfaction, and teachers’ intentions to quit. Teaching and Teacher Education, 105, 103425. https://doi.org/10.1016/j.tate.2021.103425
  • Mazza, A., Punzo, A., & McGuire, B. (2014). KernSmoothIRT: An R package for kernel smoothing in Item Response Theory. Journal of Statistical Software, 58(6). https://doi.org/10.18637/jss.v058.i06
  • McDonald, R.P. (1999). Test theory: A unified treatment. Taylor & Francis.
  • McKown, C., Kharitonova, M., Russo-Ponsaran, N.M., & Aksu-Dunya, B. (2023). Development and validation of a shortened form of SELweb EE, a web-based assessment of children’s social and emotional competence. Assessment, 30(1), 171-189. https://doi.org/10.1177/10731911211046044
  • Medland, E. (2019). ‘I’m an assessment illiterate’: Towards a shared discourse of assessment literacy for external examiners. Assessment & Evaluation in Higher Education, 44(4), 565-580. https://doi.org/10.1080/02602938.2018.1523363
  • Meijer, R.R., & Baneke, J.J. (2004). Analyzing psychopathology items: A case for Nonparametric Item Response Theory Modeling. Psychological Methods, 9(3), 354–368. https://doi.org/10.1037/1082-989X.9.3.354
  • Meijer, R.R., Tendeiro, J.N., & Wanders, R.B.K. (2015). The use of nonparametric item response theory to explore data quality. In S.P. Reise & D.A. Revicki (Eds.), Handbook of Item Response Theory Modeling: Applications to typical performance assessment (pp. 85–110). Routledge.
  • Mokken, R.J. (1971). A theory and procedure of scale analysis. De Gruyter.
  • Morell, L., & Tan, R.J.B. (2009). Validating for use and interpretation: A mixed methods contribution illustrated. Journal of Mixed Methods Research, 3(3), 242-264. https://doi.org/10.1177/1558689809335079
  • Muthén, B.O. (1993). Goodness of fit with categorical and other nonnormal variables. In K.A. Bollen, & J.S. Long (Eds.), Testing structural equation models (pp. 205-234). Sage Publishing.
  • O’Neill, G., McEvoy, E., & Maguire, T. (2023). Supporting assessment literacy in changing times. In C. Evans and M. Waring (Eds.), Research handbook on innovations in assessment and feedback in higher education. Elgar Publishing.
  • Padilla, J.L., & Leighton, J.P. (2017). Cognitive interviewing and think aloud methods. In B. Zumbo & A. Hubley (Eds.), Understanding and investigating response processes in validation research (pp. 211-228). Springer.
  • Pastore, S. (2022). Assessment Literacy in the higher education context: A critical review. Intersection: A Journal at the Intersection of Assessment and Learning, 4(1). https://doi.org/10.61669/001c.39702
  • Pastore, S., & Andrade, H.L. (2019). Teacher assessment literacy: A three-dimensional model. Teaching and Teacher Education, 84, 128-138. https://doi.org/10.1016/j.tate.2019.05.003
  • Pett, M.A., Lackey, N.R., & Sullivan, J.J. (2003). Making sense of factor analysis: The use of factor analysis for instrument development in health care research. Sage Publications.
  • Price, M., Rust, C., O’Donovan, B., Handley, K., & Bryant, R. (2012). Assessment literacy: The foundation for improving student learning. ASKe, Oxford Centre for Staff and Learning Development.
  • R Core Team (2023). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
  • Ramsay, J.O., & Silverman, B.W. (2005). Functional data analysis (2nd ed.). Springer.
  • Revelle, W. (2023). Psych: Procedures for psychological, psychometric, and personality research. Northwestern University, Evanston, Illinois. R package version 2.3.9, https://CRAN.R-project.org/package=psych
  • Rosseel, Y. (2012). lavaan: An R package for Structural Equation Modeling. Journal of Statistical Software, 48(2), 1–36. https://doi.org/10.18637/jss.v048.i02
  • Rimal, R.N., & Real, K. (2003). Understanding the influence of perceived norms on behaviors. Communication Theory, 13(2), 184-203. https://doi.org/10.1111/j.1468-2885.2003.tb00288.x
  • Sadler, D.R. (2017). Academic achievement standards and quality assurance. Quality in Higher Education, 23(2), 81-99. https://doi.org/10.1080/13538322.2017.1356614
  • Scholl, K., & Olsen, H.M. (2014). Measuring student learning outcomes using the SALG instrument. SCHOLE: A Journal of Leisure Studies and Recreation Education, 29(1), 37-50. https://doi.org/10.1080/1937156X.2014.11949710
  • Sijtsma, K., & Molenaar, I.W. (2002). Introduction to nonparametric item response theory (Vol. 5). Sage Publications.
  • Sijtsma, K., & van der Ark, L.A. (2017). A tutorial on how to do a Mokken scale analysis on your test and questionnaire data. British Journal of Mathematical and Statistical Psychology, 70(1), 137–158. https://doi.org/10.1111/bmsp.12078
  • Singh, M., & Ramya, K.R. (2011). Outcome based education. International Journal of Nursing Education, 3(2), 87-91.
  • Sliter, K.A., & Zickar, M.J. (2014). An IRT examination of the psychometric functioning of negatively worded personality items. Educational and Psychological Measurement, 74(2), 214-226. https://doi.org/10.1177/0013164413504584
  • Tabachnick, B., & Fidell, L.S. (1996). Using multivariate statistics. Harper Collins.
  • Van der Ark, L.A. (2007). Mokken scale analysis in R. Journal of Statistical Software, 20, 1-19. https://doi.org/10.18637/jss.v020.i11
  • Van der Ark, L.A. (2012). New developments in Mokken scale analysis in R. Journal of Statistical Software, 48, 1-27. https://doi.org/10.18637/jss.v048.i05
  • Velicer, W.F., Eaton, C.A., & Fava, J.L. (2000). Construct explication through factor or component analysis: A review and evaluation of alternative procedures for determining the number of factors or components. In R.D. Goffin & E. Helmes (Eds.), Problems and solutions in human assessment (pp. 41-71). Kluwer.
  • Williams, J. (2016). Quality assurance and quality enhancement: Is there a relationship?. Quality in Higher Education, 22(2), 97-102. https://doi.org/10.1080/13538322.2016.1227207
  • Wolf, R., Zahner, D., & Benjamin, R. (2015). Methodological challenges in international comparative post-secondary assessment programs: Lessons learned and the road ahead. Studies in Higher Education, 40(3), 471-481. https://doi.org/10.1080/03075079.2015.1004239
  • Wright, B.D., & Masters, G.N. (1982). Rating Scale Analysis. MESA Press.
  • Wright, S.L., & Jenkins-Guarnieri, M.A. (2023). Further validation of the social efficacy and social outcome expectations scale. Journal of Psychoeducational Assessment, 42(1), 74-88. https://doi.org/10.1177/07342829231198277
  • Xu, Y., & Brown, G.T. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149-162. https://doi.org/10.1016/j.tate.2016.05.010
  • Zhou, Y. (2019). A mixed methods model of scale development and validation analysis. Measurement: Interdisciplinary Research and Perspectives, 17(1), 38-47. https://doi.org/10.1080/15366367.2018.1479088
  • Zhu, X., & Evans, C. (2022). Enhancing the development and understanding of assessment literacy in higher education. European Journal of Higher Education, 1-21. https://doi.org/10.1080/21568235.2022.2118149
  • Zoom Video Communications, Inc. (2023). ZOOM cloud meetings (Version 5.15.5). https://zoom.com

There are 79 citations in total.

Details

Primary Language: English
Subjects: Scale Development
Journal Section: Articles
Authors:

Beyza Aksu (ORCID: 0000-0003-4994-1429)
Stefanie Wind (ORCID: 0000-0002-1599-375X)
Mehmet Can Demir (ORCID: 0000-0001-7849-7078)

Early Pub Date: August 27, 2024
Publication Date: September 9, 2024
Submission Date: February 20, 2024
Acceptance Date: April 29, 2024
Published in Issue: Year 2024, Volume: 11, Issue: 3

Cite

APA Aksu, B., Wind, S., & Demir, M. C. (2024). Harmonizing perspectives to understand attitudes: A mixed methods approach to crafting an assessment literacy attitude scale. International Journal of Assessment Tools in Education, 11(3), 424-444. https://doi.org/10.21449/ijate.1440210
