Review Article

A Comparative Analysis of Traditional and Modern Approaches to Assessment and Evaluation in Education

Year 2024, Volume 15, Issue 1, 520–555, 28.04.2024
https://doi.org/10.51460/baebd.1386737

Abstract

Assessment and evaluation are central to education, both for understanding student achievement and for shaping educational programs. Assessment is the collection of data to monitor students' progress, while evaluation is the analysis of those data to make informed educational decisions. Improving educational outcomes requires effective assessment and evaluation methods, which allow educators to identify each student's strengths and weaknesses and to personalize instruction. This article compares traditional and modern assessment and evaluation techniques in order to draw implications for educational policy and practice. The study employs a systematic literature review in which data gathered from a range of disciplines were analyzed. The key criteria identified for the comparative analysis are efficiency, appropriateness, scalability, inclusivity, technological integration, and stakeholder acceptance, and the advantages and disadvantages of both approaches were determined against these criteria. Traditional methods, for example, are easy to implement because they are widespread and standardized, but they were found inadequate in reflecting individual differences; contemporary methods stand out for their capacity to meet diverse student needs and adapt to varying conditions, yet they are much harder to implement at scale than the readily applied traditional methods. The findings show that modern and traditional techniques share common themes such as purpose, quality assurance, participation, ethics, and continuous improvement. However, modern techniques focus more on student growth through formative strategies and continuous feedback, whereas traditional methods remain largely summative. Modern techniques also increase flexibility, technological integration, and inclusivity, while traditional methods can remain confined to a fixed format. Modern techniques reflect the growing role of technology in education and offer substantial opportunities for creating interactive, engaging learning environments; traditional methods, by contrast, focus on academic knowledge and memorization and generally neglect practical application. Modern approaches prioritize authentic practice through problem- and project-based learning and assessments built around real-world scenarios. In conclusion, combining traditional and modern assessment and evaluation techniques is recommended in order to create a more effective and inclusive educational assessment system. This study of the applicability and effectiveness of assessment and evaluation techniques opens the way for exploring new directions and methodologies in education, while also allowing existing practices to be evaluated and offering recommendations for implementation.

Ethics Statement

Since my article titled "Eğitimde Geleneksel ve Modern Ölçme ve Değerlendirme Yaklaşımlarının Karşılaştırmalı Bir Analizi" (A Comparative Analysis of Traditional and Modern Approaches to Assessment and Evaluation in Education) consists of a literature review, ethics committee approval was not required.

Acknowledgements

I would like to express my thanks to Assoc. Prof. Dr. Hadiye Küçükkaragöz for her help.

References

  • Abaya, J. (2009). Inclusive assessment and accountability: A guide to accommodations for students with diverse needs. Journal of Educational Administration, 47(5), 680–684. https://doi.org/10.1108/09578230910981152
  • AERA, APA & NCME. (2014). Standards for educational and psychological testing. New York: American Educational Research Association.
  • Airasian, P. W. (2001). Classroom assessment: Concepts and applications. McGraw-Hill.
  • All, A., Castellar, E. P. N., & Van Looy, J. (2016). Assessing the effectiveness of digital game-based learning: Best practices. Computers & Education, 92, 90–103.
  • Alquraan, M. (2012). Methods of assessing students' learning in higher education. Education Business and Society Contemporary Middle Eastern Issues, 5(2), 124–133. https://doi.org/10.1108/17537981211251160
  • Au, W. (2007). High-Stakes Testing and Curricular Control: A Qualitative Meta-Synthesis. Educational Researcher, 36(5), 258–267. https://doi.org/10.3102/0013189X07306523
  • Barrett, H. C. (2007). Researching electronic portfolios and learner engagement: The REFLECT initiative. Journal of Adolescent and Adult Literacy, 50(6), 436–449. https://doi.org/10.1598/JAAL.50.6.2
  • Bebell, D., Russell, M., & O’Dwyer, L. (2004). Measuring teachers’ technology uses: Why multiple-measures are more revealing. Journal of Research on Technology in Education, 37(1), 45–63. https://doi.org/10.1080/15391523.2004.10782425
  • Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy and Practice, 18(1), 5–25. https://doi.org/10.1080/0969594X.2010.513678
  • Bennett, R. E. (2015). The changing nature of educational assessment. Review of Research in Education, 39(1), 370–407. https://doi.org/10.3102/0091732X14554179
  • Berk, R. A. (2013). Top five flashpoints in the assessment of teaching effectiveness. Medical Teacher, 35(1), 15–26. https://doi.org/10.3109/0142159X.2012.732247
  • Berry, M., Mello‐Neto, J., Souza, M., Figueredo, C., Reher, V., & Evans, J. (2020). Effectiveness of technology‐enhanced learning to improve periodontics educational outcomes: a systematic review. Journal of Dental Education, 84(7), 830-839. https://doi.org/10.1002/jdd.12179
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–74. https://doi.org/10.1080/0969595980050102
  • Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31. https://doi.org/10.1007/s11092-008-9068-5
  • Black, P., & Wiliam, D. (2010). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 92(1), 81–90. https://doi.org/10.1177/003172171009200119
  • Black, P., Daugherty, R., Ecclestone, K., Gardner, J., Harlen, W., James, M., Sebba, J. C., & Stobart, G. (2004). A systematic review of the evidence of the impact on students, teachers, and the curriculum of the process of using assessment by teachers for summative purposes. EPPI-Centre. University of London.
  • Brookhart, S. M. (2010). How to assess higher-order thinking skills in your classroom. ASCD.
  • Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practice, 30(1), 3–12. https://doi.org/10.1111/j.1745-3992.2010.00195.x
  • Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. H. McMillan (Ed.), The SAGE handbook of research on classroom assessment (pp. 367–393). https://doi.org/10.4135/9781452218649.n21
  • Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. University of Chicago Press.
  • Chapman, C., & King, R. (2005). Differentiated assessment strategies: One tool doesn’t fit all. Corwin Press.
  • Chappuis, J., Stiggins, R. J., Chappuis, S., & Arter, J. (2012). Classroom assessment for student learning: Doing it right-using it well (p. 432). Pearson.
  • Chappuis, S., & Stiggins, R. J. (2002). Classroom assessment for learning. Educational Leadership, 60(1), 40–44.
  • Clarke-Midura, J., & Dede, C. (2010). Assessment, technology, and change. Journal of Research on Technology in Education, 42(3), 309–328. https://doi.org/10.1080/15391523.2010.10782553
  • Cook, D. A., Hatala, R., Brydges, R., Zendejas, B., Szostek, J. H., Wang, A. T., Erwin, P. J., & Hamstra, S. J. (2011). Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. JAMA, 306(9), 978–988. https://doi.org/10.1001/jama.2011.1234
  • Daniel, S. J. (2020). Education and the COVID-19 pandemic. Prospects, 49(1–2), 91–96. https://doi.org/10.1007/s11125-020-09464-3
  • Danielson, C. (2013). The Framework for Teaching Evaluation Instrument. Princeton, NJ: The Danielson Group.
  • Darling-Hammond, L. (2007). Race, inequality and educational accountability: The irony of “No Child Left Behind”. Race Ethnicity and Education, 10(3), 245–260. https://doi.org/10.1080/13613320701503207
  • Darling-Hammond, L., Adamson, F., & Abedi, J. (2010). Beyond basic skills: The role of performance assessment in achieving 21st-century standards of learning (p. 52). Stanford Center for Opportunity Policy in Education.
  • Darling-Hammond, L., Ancess, J., & Falk, B. (1995). Authentic assessment in action: Studies of schools and students at work. Teachers College Press.
  • Dart, B. C., Burnett, P. C., Purdie, N., Boulton-Lewis, G., Campbell, J., & Smith, D. (2000). Students’ conceptions of learning, the classroom environment, and approaches to learning. Journal of Educational Research, 93(4), 262–270. https://doi.org/10.1080/00220670009598715
  • Deci, E. L., Vallerand, R. J., Pelletier, L. G., & Ryan, R. M. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3–4), 325–346. https://doi.org/10.1080/00461520.1991.9653137
  • Dhawan, S. (2020). Online learning: A panacea in the time of COVID-19 crisis. Journal of Educational Technology Systems, 49(1), 5–22. https://doi.org/10.1177/0047239520934018
  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method. John Wiley & Sons.
  • Duarte, M. E., & Rossier, J. (2008). Testing and assessment in an international context: Cross- and multi-cultural issues. In J. A. Athanasou & R. Van Esbroeck (Eds.), International handbook of career guidance. Springer. https://doi.org/10.1007/978-1-4020-6230-8_24
  • Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.
  • Elwood, J., & Klenowski, V. (2002). Creating communities of shared practice: The challenges of assessment use in learning and teaching. Assessment and Evaluation in Higher Education, 27(3), 243–256. https://doi.org/10.1080/02602930220138606
  • Emelyanova, I. N., Teplyakova, O. A., & Efimova, G. Z. (2019). Modern evaluation methods at various levels of education. Education and Science Journal, 21(6), 9–28. https://doi.org/10.17853/1994-5639-2019-6-9-28
  • Ertmer, P. A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology integration? Educational Technology Research and Development, 53(4), 25–39. https://doi.org/10.1007/BF02504683
  • Erwin, T. D., & Rieppi, R. (1999). Comparing multimedia and traditional approaches in undergraduate psychology classes. Teaching of Psychology, 26(1), 58–61. https://doi.org/10.1207/s15328023top2601_18
  • Eseryel, D., Guo, Y., & Law, V. (2012). Interactivity 3 design and assessment framework for educational games to promote motivation and complex problem-solving skills. Assessment in game-based learning: Foundations, innovations, and perspectives, 257–285.
  • Fautley, M., & Savage, J. (2008). Assessment for learning and teaching in secondary schools. Learning matters.
  • Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57–72. https://doi.org/10.1002/j.2168-9830.2005.tb00829.x
  • Fuhrman, S. H. (1999). The new accountability. Consortium for Policy Research in Education.
  • Gadbury-Amyot, C. C., McCracken, M. S., Woldt, J. L., & Brennan, R. W. (2012). Implementation of portfolio assessment of student competence in two dental school populations. Journal of Dental Education, 76(12), 1559–1571. https://doi.org/10.1002/j.0022-0337.2012.76.12.tb05419.x
  • Gardner, H. E. (2011). Frames of mind: The theory of multiple intelligences. Basic Books.
  • Gee, J. P. (2003). What video games have to teach us about learning and literacy. Computers in Entertainment, 1(1), 20–20. https://doi.org/10.1145/950566.950595
  • Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers and Education, 57(4), 2333–2351. https://doi.org/10.1016/j.compedu.2011.06.004
  • Griffin, P., Wilson, M., & Care, E. (Eds.). (2018). Assessment and teaching of 21st century skills: Research and applications. Springer International Publishing.
  • Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67–86. https://doi.org/10.1007/BF02504676
  • Guskey, T. R., & Jung, L. A. (2016). Practical solutions for severe problems in standards-based grading. Corwin Press.
  • Hargreaves, A. (2003). Teaching in the knowledge society: Education in the age of insecurity. Teachers College Press.
  • Harlen, W., & Deakin Crick, R. (2003). Testing and motivation for learning. Assessment in Education: Principles, Policy and Practice, 10(2), 169–207. https://doi.org/10.1080/0969594032000121270
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
  • Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177. https://doi.org/10.1111/j.1750-8606.2008.00061.x
  • Hodges, C. B., Moore, S., Lockee, B. B., Trust, T., & Bond, M. A. (2020). The difference between emergency remote teaching and online learning. EDUCAUSE Review.
  • Ifenthaler, D. (2012). Determining the effectiveness of prompts for self-regulated learning in problem-solving scenarios. Journal of Educational Technology and Society, 15(1), 38–52.
  • Ifenthaler, D., & Widanapathirana, C. (2014). Development and validation of a learning analytics framework: Two case studies using support vector machines. Technology, Knowledge and Learning, 19(1–2), 221–240. https://doi.org/10.1007/s10758-014-9226-4
  • Ifenthaler, D., Eseryel, D., & Ge, X. (2012). Assessment for game-based learning. In Assessment in game-based learning: Foundations, innovations, and perspectives (pp. 1–8). Springer.
  • Ifenthaler, D., Eseryel, D., & Ge, X. (2012). Assessment in game-based learning. https://doi.org/10.1007/978-1-4614-3546-4
  • Ifenthaler, D., Pirnay-Dummer, P., & Seel, N. M. (2007). The role of cognitive learning strategies and intellectual abilities in mental model building processes. Technology, Instruction, Cognition & Learning, 5(4).
  • Ismail, A., Rakovshik, S., & Abdelrahim, S. (2021). A randomised controlled study of face-to-face versus internet-based teaching of CBT skills for healthcare professionals in Sudan. BJPsych International, 18(2), E7. https://doi.org/10.1192/bji.2020.60
  • Joint Committee on Standards for Educational Evaluation (JCSEE). (2010). The program evaluation standards: A guide for evaluators and evaluation users. SAGE Publications.
  • Joint Council for Qualifications (JCQ). (2023). Instructions for conducting examinations.
  • Joint Information Systems Committee (JISC). (2007). Effective practice with e-assessment. Joint Information Systems Committee.
  • Kellaghan, T., & Stufflebeam, D. L. (Eds.). (2012). International handbook of educational evaluation: Part One: Perspectives/part two: Practice, 9. Springer Science+Business Media.
  • Kember, D., McKay, J., Sinclair, K., & Wong, F. K. Y. (2008). A four‐category scheme for coding and assessing the level of reflection in written work. Assessment and Evaluation in Higher Education, 33(4), 369–379. https://doi.org/10.1080/02602930701293355
  • Killam, L., & Camargo-Plazas, P. (2021). Revisioning assessment and evaluation in nursing education through critical caring pedagogy. Advances in Nursing Science, 45(1), E15–E30. https://doi.org/10.1097/ans.0000000000000382
  • Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37. https://doi.org/10.1111/j.1745-3992.2011.00220.x
  • Koedinger, K. R., McLaughlin, E. A., Jia, J. Z., & Bier, N. L. (2016, April). Is the doer effect a causal relationship? How can we tell and why it’s important. In Proceedings of the Sixth International Conference on Learning Analytics and Knowledge (pp. 388–397).
  • Kohn, A. (2000). The case against standardized testing: Raising the scores, ruining the schools (pp. 1–25). Heinemann.
  • Koretz, D. M. (2008). Measuring up: What educational testing really tells us. Harvard University Press.
  • Kunnan, A. J. (2004). Test fairness. European Language Testing in a Global Context, 18, 27–48.
  • Kyaw, B., Posadzki, P., Dunleavy, G., Semwal, M., Divakar, U., Hervatis, V., … & Car, L. (2019). Offline digital education for medical students: Systematic review and meta-analysis by the Digital Health Education Collaboration. Journal of Medical Internet Research, 21(3), e13165. https://doi.org/10.2196/13165
  • Ladson-Billings, G. (2006). From the Achievement Gap to the Education Debt: Understanding Achievement in US Schools. Educational Researcher, 35(7), 3–12. https://doi.org/10.3102/0013189X035007003
  • Lam, R. (2015). Language assessment training in Hong Kong: Implications for language assessment literacy. Language Testing, 32(2), 169–197. https://doi.org/10.1177/0265532214554321
  • Le, A., Satkunam, L., & Yu, J. (2023). Residents’ perceptions of a novel virtual livestream cadaveric teaching series for musculoskeletal anatomy education. American Journal of Physical Medicine & Rehabilitation, 102(12), e165-e168. https://doi.org/10.1097/phm.0000000000002284
  • Lemann, N. (2000). The big test: The secret history of the American meritocracy. Macmillan.
  • Levin, J. A., & Datnow, A. (2012). The principal role in data-driven decision making: Using case-study data to develop multi-mediator models of educational reform. School Effectiveness and School Improvement, 23(2), 179–201. https://doi.org/10.1080/09243453.2011.599394
  • Linn, R. L. (2000). Assessments and accountability. Educational Researcher, 29(2), 4–16. https://doi.org/10.3102/0013189X029002004
  • Marzano, R. J., & Toth, M. D. (2013). Teacher evaluation that makes a difference: A new model for teacher growth and student achievement. ASCD.
  • McMillan, J. H. (2014). Classroom assessment: Principles and practice for effective standards-based instruction. Pearson.
  • Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record: The Voice of Scholarship In Education, 115(3), 1–47. https://doi.org/10.1177/016146811311500307
  • Mundy, K., Green, A., Lingard, B., & Verger, A. (Eds.). (2016). Handbook of global education policy. John Wiley & Sons.
  • Nagendrababu, V., Pulikkotil, S., Sultan, O., Jayaraman, J., Soh, J., & Dummer, P. (2018). Effectiveness of technology‐enhanced learning in endodontic education: a systematic review and meta‐analysis. International Endodontic Journal, 52(2), 181-192. https://doi.org/10.1111/iej.12995
  • Nichols, S. L., & Berliner, D. C. (2007). Collateral damage: How high-stakes testing corrupts America’s schools. Harvard Education Press.
  • Nichols, S. L., Glass, G. V., & Berliner, D. C. (2006). High-stakes testing and student achievement: Does accountability pressure increase student learning? Education Policy Analysis Archives/Archivos Analíticos de Políticas Educativas, 14, 1–175. https://doi.org/10.14507/epaa.v14n1.2006
  • NRC. (2012). Education for life and work: Developing transferable knowledge and skills in the 21st century. National Research Council. https://doi.org/10.17226/13398
  • O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
  • Oandasan, I., D’Amour, D., Zwarenstein, M., Barker, K., Purden, M., Beaulieu, M. D., . . . & Tregunno, D. (2004). Interdisciplinary education for collaborative, patient-centred practice: Research and findings report (pp. 41–99). Health Canada.
  • Olenick, M., Allen, L. R., & Smego, Jr., R. A. (2010). Interprofessional education: A concept analysis. Advances in Medical Education and Practice, 1, 75–84. https://doi.org/10.2147/AMEP.S13207
  • Onwuegbuzie, A. J., & Daniel, L. G. (1999). Uses and misuses of the correlation coefficient.
  • Paulson, F. L. (1991). What makes a portfolio a portfolio? Educational Leadership, 48(5), 60–63.
  • Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. National Research Council.
  • Phelps, R. P. (2005). The rich, robust research literature on testing’s achievement benefits. Defending standardized testing.
  • Piaget, J., & Cook, M. (1952). The origins of intelligence in children. International Universities Press (8, 5. 1952, 18).
  • Popham, W. J. (2001a). Teaching to the test? Educational Leadership, 58(6), 16–21.
  • Popham, W. J. (2001b). The truth about testing: An educator’s call to action. ASCD.
  • Popham, W. J. (2010a). Classroom assessment: What teachers need to know. Education Review.
  • Popham, W. J. (2010b). Everything school leaders need to know about assessment. Corwin Press.
  • Prihandoko, Y., Wahab, R., Wilujeng, I., & Kartowagiran, B. (2020). How is the perception of teachers in Indonesia about portfolio assessment for elementary school? Universal Journal of Educational Research, 8(12B), 8294–8303. https://doi.org/10.13189/ujer.2020.082634
  • Puentedura, R. (2010). SAMR and TPCK: Intro to advanced practice. http://www.hippasus.com/rrpweblog/archives/000073.html
  • Reeves, S., Boet, S., Zierler, B., & Kitto, S. (2015). Interprofessional education and practice guide no. 3: Evaluating interprofessional education. Journal of Interprofessional Care, 29(4), 305–312. https://doi.org/10.3109/13561820.2014.1003637
  • Rezaie, M. (2015). Computer Adaptive Test (CAT): Advantages and limitations. International Journal of Educational Investigations, 2, 128–137.
  • Rienties, B., Brouwer, N., & Lygo-Baker, S. (2013). The effects of online professional development on higher education teachers’ beliefs and intentions towards learning facilitation and technology. Teaching and Teacher Education, 29, 122–131. https://doi.org/10.1016/j.tate.2012.09.002
  • Russell, M., Madaus, G., & Higgins, J. (2009). The paradoxes of high stakes testing: How they affect students, their parents, teachers, principals, schools, and society. IAP.
  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. https://doi.org/10.1007/BF00117714
  • Saher, A. S., Ali, A. M. J., Amani, D., & Najwan, F. (2022). Traditional Versus Authentic Assessments in Higher Education. Pegem Journal of Education and Instruction, 12(1), 283-291. https://doi.org/10.47750/pegegog.12.01.29
  • Sahlberg, P. (2021). Finnish lessons 3.0: What can the world learn from educational change in Finland? Teachers College Press.
  • Santos, G., Leite, A., Figueiredo, P., Pimentel, N., Flores‐Mir, C., Melo, N., … & Canto, G. (2016). Effectiveness of e‐learning in oral radiology education: a systematic review. Journal of Dental Education, 80(9), 1126-1139. https://doi.org/10.1002/j.0022-0337.2016.80.9.tb06195.x
  • Saravia-Shore, M., & Garcia, E. (2008). Diverse teaching strategies for diverse learners. Educating everybody’s children: Diverse teaching strategies for diverse learners, 41–97.
  • Schön, D. A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. Jossey-Bass.
  • Schwartz, D. L., & Arena, D. (2013). Measuring what matters most: Choice-based assessments for the digital age (p. 192). The MIT Press.
  • Scriven, M. (1991). Evaluation thesaurus. SAGE.
  • Shavelson, R. J. (2003). On the integration of formative assessment in teaching and learning with implications for teacher education. Paper presented at the Biannual Meeting of the European Association for Research on Learning and Instruction, Padova, Italy.
  • Sheikhaboumasoudi, R., Bagheri, M., Hosseini, S., Ashouri, E., & Elahi, N. (2018). Improving nursing students' learning outcomes in fundamentals of nursing course through combination of traditional and e-learning methods. Iranian Journal of Nursing and Midwifery Research, 23(3), 217. https://doi.org/10.4103/ijnmr.ijnmr_79_17
  • Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. https://doi.org/10.3102/0013189X029007004
  • Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. https://doi.org/10.3102/0034654307313795
  • Shute, V. J., & Ke, F. (2012). Games, learning, and assessment. In Assessment in game-based learning: Foundations, innovations, and perspectives (pp. 43–58). Springer.
  • Shute, V. J., & Rahimi, S. (2017). Review of computer‐based assessment for learning in elementary and secondary education. Journal of Computer Assisted Learning, 33(1), 1–19. https://doi.org/10.1111/jcal.12172
  • Sieber, J. E., & Tolich, M. B. (2012). Planning ethically responsible research, 31. Sage Publications.
  • Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. https://doi.org/10.1177/0002764213479366
  • Spolsky, B. (2014). The languages of the Jews: A sociolinguistic history. Cambridge University Press.
  • Stiggins, R. (2005). From formative assessment to assessment for learning: A path to success in standards-based schools. Phi Delta Kappan, 87(4), 324–328. https://doi.org/10.1177/003172170508700414
  • Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758–765. https://doi.org/10.1177/003172170208301010
  • Stiggins, R., & Chappuis, J. (2005). Using student-involved classroom assessment to close achievement gaps. Theory into Practice, 44(1), 11–18. https://doi.org/10.1207/s15430421tip4401_3
  • Stobart, G. (2014). The Expert Learner: Challenging the myth of ability (p. 184). Open University Press. https://doi.org/10.1036/9780335247318
  • Tanaka-Matsumi, J. (2022). Counseling across cultures: A half-century assessment. Journal of Cross-Cultural Psychology, 53(7–8), 957–975. https://doi.org/10.1177/00220221221111810
  • Thompson, N. A., & Weiss, D. A. (2011). A framework for the development of computerized adaptive tests. Practical Assessment, Research and Evaluation, 16(1), 1.
  • Tyack, D. B., & Cuban, L. (1997). Tinkering toward utopia: A century of public school reform. Harvard University Press.
  • Van der Linden, W. J., & Glas, C. A. W. (2000). Computer adaptive testing: Theory and practice. Kluwer Academic Publishers.
  • Volante, L., & Beckett, D. (2011). Formative assessment and the contemporary classroom: Synergies and tensions between research and practice. Canadian Journal of Education/Revue canadienne de l'éducation, 34(2), 239–255.
  • Vygotsky, L. S., & Cole, M. (1978). Mind in society: Development of higher psychological processes. Harvard University Press.
  • Wade, R. C., & Yarbrough, D. B. (1996). Portfolios: A tool for reflective thinking in teacher education? Teaching and Teacher Education, 12(1), 63–79. https://doi.org/10.1016/0742-051X(95)00022-C
  • Wainer, H., Dorans, N. J., Flaugher, R., Green, B. F., & Mislevy, R. J. (2000). Computerized adaptive testing: A primer. Routledge.
  • Wang, J., Xu, Y., Liu, X., Xiong, W., Xie, J., & Zhao, J. (2016). Assessing the effectiveness of problem-based learning in physical diagnostics education in China: A meta-analysis. Scientific Reports, 6(1). https://doi.org/10.1038/srep36279
  • Wang, T., & Kolen, M. J. (2001). Evaluating comparability in computerized adaptive testing: Issues, criteria and an example. Journal of Educational Measurement, 38(1), 19–49. https://doi.org/10.1111/j.1745-3984.2001.tb01115.x
  • Warschauer, M. (2007). The paradoxical future of digital learning. Learning Inquiry, 1(1), 41–49. https://doi.org/10.1007/s11519-007-0001-5
  • Warschauer, M., & Matuchniak, T. (2010). New technology and digital worlds: Analyzing evidence of equity in access, use, and outcomes. Review of Research in Education, 34(1), 179–225. https://doi.org/10.3102/0091732X09349791
  • Weller, J., Naik, V., & Diego, R. (2020). Systematic review and narrative synthesis of competency-based medical education in anaesthesia. British Journal of Anaesthesia, 124(6), 748-760. https://doi.org/10.1016/j.bja.2019.10.025
  • Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. Jossey-Bass Publishers.
  • Wiggins, G. (2011). A true test: Toward more authentic and equitable assessment. Phi Delta Kappan, 92(7), 81–93. https://doi.org/10.1177/003172171109200721
  • Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14. https://doi.org/10.1016/j.stueduc.2011.03.001
  • Wiliam, D. (2011). Embedded formative assessment. Solution Tree Press.
  • Williamson, B. (2016). Digital education governance: Data visualization, predictive analytics, and “real-time” policy instruments. Journal of Education Policy, 31(2), 123–141. https://doi.org/10.1080/02680939.2015.1035758
  • Williamson, B. (2018). The hidden architecture of higher education: Building a big data infrastructure for the “smarter university”. International Journal of Educational Technology in Higher Education, 15(1). https://doi.org/10.1186/s41239-018-0094-1
  • Zeidner, M. (2007). Test anxiety in educational contexts: Concepts, findings, and future directions. In Emotion in education (pp. 165–184). Academic Press.



Details

Primary Language: English
Subjects: Field Education (Other)
Journal Section: Articles
Authors

Ruşen Meylani 0000-0002-3121-6088

Early View Date: 23 April 2024
Publication Date: 28 April 2024
Submission Date: 6 November 2023
Acceptance Date: 22 April 2024
Published in Issue: Year 2024, Volume 15, Issue 1

How to Cite

APA Meylani, R. (2024). A Comparative Analysis of Traditional and Modern Approaches to Assessment and Evaluation in Education. Batı Anadolu Eğitim Bilimleri Dergisi, 15(1), 520-555. https://doi.org/10.51460/baebd.1386737