Research Article

Investigation of Activities For Reading Comprehension Skills: A G-Theory Analysis

Year 2025, Volume: 16 Issue: 1, 48 - 58, 31.03.2025

Abstract

This study aimed to investigate the effectiveness of activities prepared to improve reading comprehension skills in terms of the number of raters, the number of evaluation criteria, and the number of activities. Twelve experts evaluated five reading comprehension activities created by the researcher. A descriptive survey method grounded in a quantitative approach was employed. The study used five reading comprehension activities of the kind commonly found in high school textbooks and a researcher-developed rubric of sixteen criteria based on the relevant literature. After reliability and validity analyses were performed on the rubric, the experts assessed the activities using this tool, and the data collected from their evaluations were analyzed through generalizability theory. The EduG program was used to estimate the variance components for the main and interaction effects, to calculate the reliability of the scores with the G and Φ (Phi) coefficients, and to conduct decision (D) studies. The findings revealed that incorporating a variety of reading comprehension activities is crucial for improving students' comprehension skills and that increased interaction with these activities leads to better skill development. It was also concluded that increasing the number of criteria in the rubric and raising the number of expert raters to fifteen would lead to a more accurate and effective evaluation of the activities.
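To illustrate how the G and Φ (Phi) coefficients and the D studies mentioned above are obtained, the following Python sketch shows the standard calculation for a fully crossed two-facet design (activities × raters × criteria). The variance components, the facet sizes, and the function name g_and_phi are hypothetical assumptions introduced here for illustration only; they are not the values or the EduG output reported in the study.

```python
# Illustrative D-study sketch for a fully crossed a x r x c design
# (activities crossed with raters and criteria). The variance components
# below are hypothetical placeholders, NOT the estimates reported in the
# study; in practice they would come from an ANOVA-based G study (e.g., EduG).

# Hypothetical variance components (object of measurement: activities)
var = {
    "a": 0.40,    # activities (universe-score variance)
    "r": 0.05,    # raters
    "c": 0.10,    # criteria
    "ar": 0.08,   # activity x rater interaction
    "ac": 0.12,   # activity x criterion interaction
    "rc": 0.03,   # rater x criterion interaction
    "arc": 0.22,  # three-way interaction confounded with residual error
}

def g_and_phi(n_r: int, n_c: int) -> tuple[float, float]:
    """Return (G, Phi) for a D study with n_r raters and n_c criteria."""
    # Relative error variance: only components involving the object of
    # measurement (activities) contribute.
    rel_err = var["ar"] / n_r + var["ac"] / n_c + var["arc"] / (n_r * n_c)
    # Absolute error variance additionally includes the facet main effects
    # and the rater x criterion interaction.
    abs_err = rel_err + var["r"] / n_r + var["c"] / n_c + var["rc"] / (n_r * n_c)
    g = var["a"] / (var["a"] + rel_err)    # generalizability (G) coefficient
    phi = var["a"] / (var["a"] + abs_err)  # dependability (Phi) coefficient
    return g, phi

# Project reliability for alternative measurement designs,
# e.g., 12 vs. 15 raters and 16 vs. 20 criteria.
for n_r in (12, 15):
    for n_c in (16, 20):
        g, phi = g_and_phi(n_r, n_c)
        print(f"raters={n_r:>2}, criteria={n_c:>2}:  G={g:.3f}  Phi={phi:.3f}")
```

In such a D study, reliability is projected for alternative combinations of raters and criteria, which is how conclusions like "fifteen raters would yield a more accurate evaluation" are reached.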


Details

Primary Language: English
Subjects: Testing, Assessment and Psychometrics (Other)
Journal Section: Articles
Authors: Gülden Kaya Uyanık (ORCID: 0000-0002-8100-6994), Serap Ataoğlu (ORCID: 0000-0002-3849-0493)
Publication Date: March 31, 2025
Submission Date: September 12, 2024
Acceptance Date: February 18, 2025
Published in Issue: Year 2025, Volume: 16, Issue: 1

Cite

APA Kaya Uyanık, G., & Ataoğlu, S. (2025). Investigation of Activities For Reading Comprehension Skills: A G-Theory Analysis. Journal of Measurement and Evaluation in Education and Psychology, 16(1), 48-58. https://doi.org/10.21031/epod.1548738