Research Article

Artificial Intelligence Literacy: An Adaptation Study

Volume: 4 Number: 2 December 31, 2023


Abstract

The purpose of this research is to adapt the Artificial Intelligence Literacy Scale (AILS) developed by Wang et al. (2022) into Turkish and to examine its validity and reliability. The scale aims to measure the artificial intelligence literacy levels of non-expert adults. The research data were gathered from 402 participants. The researchers conducted Confirmatory Factor Analysis (CFA) to test the validity of the adapted scale and used Cronbach's alpha to assess its reliability. The adapted scale consists of 12 items and 4 factors, as in the original version. CFA results indicate that χ²/df = 1.82, RMSEA = 0.04, RMR = 0.03, NFI = 0.95, CFI = 0.98, GFI = 0.96, and AGFI = 0.94. Considering the CFA results, it is concluded that the adapted scale shows a good fit. As for reliability, the internal consistency coefficients of the factors are 0.72, 0.74, 0.76, and 0.72, respectively, and α = 0.85 for the whole scale. Accordingly, the scale and its factors are adequately reliable, and the adapted scale can be used in the Turkish cultural context.

Keywords

Artificial Intelligence, AI literacy, Digital literacy, AI literacy scale

References

  1. Akkaya, B., Özkan, A., & Özkan, H. (2021). Artificial intelligence anxiety (AIA) scale: adaptation to Turkish, validity and reliability study. Alanya Academic Review, 5(2), 1125-1146. https://doi.org/10.29023/alanyaakademik.833668
  2. Alpaydın, E. (2004). Introduction to machine learning. The MIT Press.
  3. Amirrudin, M., Nasution, K., & Supahar, S. (2020). Effect of variability on Cronbach alpha reliability in research practice. Jurnal Matematika, Statistika Dan Komputasi, 17(2), 223–230. https://doi.org/10.20956/jmsk.v17i2.11655
  4. Balfe, N., Sharples, S., & Wilson, J. R. (2018). Understanding Is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System. Human Factors, 60(4), 477-495. https://doi.org/10.1177/0018720818761256
  5. Bollen, K. A. (2002). Latent variables in psychology and the social sciences. Annual Review of Psychology, 53(1), 605–634. https://doi.org/10.1146/annurev.psych.53.100901.135239
  6. Calvani, A., Cartelli, A., Fini, A., & Ranieri, M. (2008). Models and instruments for assessing digital competence at school. Journal of E-learning and Knowledge Society, 4(3), 183-193.
  7. Calvani, A., Fini, A., & Ranieri, M. (2009). Assessing digital competence in secondary education: Issues, models and instruments. In M. Leaning (Ed.), Issues in information and media literacy: Education, practice and pedagogy (pp. 153-172). Informing Science Press.
  8. Carolus, A., Koch, M. J., Straka, S., Latoschik, M. E., & Wienrich, C. (2023). MAILS - Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Computers in Human Behavior: Artificial Humans, 1(2). https://doi.org/10.1016/j.chbah.2023.100014
  9. Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen and J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park, CA: Sage.
  10. Büyüköztürk, Ş., Akgün, Ö. E., Özkahveci, Ö., & Demirel, F. (2004). The validity and reliability study of the Turkish version of the motivated strategies for learning questionnaire. Educational Sciences: Theory & Practice, 4(2), 231–237.
Çelebi, C., Yılmaz, F., Demir, U., & Karakuş, F. (2023). Artificial Intelligence Literacy: An Adaptation Study. Instructional Technology and Lifelong Learning, 4(2), 291-306. https://doi.org/10.52911/itall.1401740
