Creating multiple-choice items for testing student learning

Year 2022, Volume 9, Issue: Special Issue, 6-18, 29.11.2022
https://doi.org/10.21449/ijate.1196701

Abstract

The use of multiple-choice items for classroom testing is firmly established, for many good reasons: the content of any unit or course of study can be well sampled, test scores can be reliable (trustworthy), and the time spent administering and scoring tests can be minimized. This article provides a current review of best practices in the design and use of a variety of multiple-choice formats for classroom assessment of student learning. One of the most serious problems facing educators today is developing test items that measure more than simple factual recall; it is important to measure understanding, comprehension, critical thinking, and problem solving. These types of higher-level thinking are not only described but also illustrated with items showing how such measurement is done. These best practices are continually evolving. The objective is always to use tests to measure validly what students have learned and to help students learn what they have not yet learned; we call these summative and formative assessment, respectively. Guidelines are presented contrasting good and bad practices. What may surprise readers is the extensive variety of formats and methods for gathering or generating new test items. Readers are encouraged to experiment with these formats; some can very efficiently measure what students were supposed to learn.

References

  • Gierl, M., & Haladyna, T.M. (Eds.). (2013). Automatic item generation: Theory and practice. Routledge.
  • Haladyna, T.M. (2015). Item analysis for selected-response test items. In S. Lane, M.R. Raymond, & T.M. Haladyna (Eds.), Handbook of test development (pp. 392-409). Routledge.
  • Haladyna, T.M. (submitted for publication). How much of a threat to validity is random guessing?
  • Haladyna, T.M., Raymond, M.R., & Stevens, C. (2019). Are multiple-choice items too fat? Applied Measurement in Education, 32(4), 350-364. https://doi.org/10.1080/08957347.2019.1660348
  • Haladyna, T.M., & Rodriguez, M.C. (2013). Developing and validating test items. Routledge.
  • Haladyna, T.M., & Rodriguez, M.C. (2021). Using full-information item analysis to evaluate multiple-choice distractors. Educational Assessment, 26(3), 198-211. https://doi.org/10.1080/10627197.2021.1946390
  • Haladyna, T.M., & Shindoll, L.R. (1989). Item shells: A method for writing effective multiple-choice test items. Evaluation & the Health Professions, 12(1), 97-106. https://doi.org/10.1177/016327878901200106
  • Lane, S., Raymond, M.R., & Haladyna, T.M. (Eds.). (2015). Handbook of test development (2nd ed.). Routledge.
  • Rodriguez, M.C. (2016). Selected-response item development. In S. Lane, M.R. Raymond, & T.M. Haladyna (Eds.), Handbook of test development (pp. 259-273). Routledge.


Details

Primary Language: English
Subjects: Field Education
Section: Special Issue
Authors

Thomas Haladyna (ORCID: 0000-0003-3761-6979)

Early View Date: November 17, 2022
Publication Date: November 29, 2022
Submission Date: September 10, 2022
Published in Issue: Year 2022, Volume 9, Issue: Special Issue

Cite

APA Haladyna, T. (2022). Creating multiple-choice items for testing student learning. International Journal of Assessment Tools in Education, 9(Special Issue), 6-18. https://doi.org/10.21449/ijate.1196701
