Research Article

Determining the Factors Affecting the Psychological Distance Between Categories in the Rating Scale

Year 2021, Volume: 8, Issue: 3, 178-190, 03.09.2021
https://doi.org/10.33200/ijcer.858599

Abstract

In this study, the assumption that the psychological distances between the categories of a rating scale are equal was tested under different numbers of categories and different ability distributions. Category parameters were estimated with the generalized partial credit model, and the data sets for the category-count and ability-distribution conditions were generated with the WinGen3 software. The results show that the assumption of equal psychological distance between categories was not met under any combination of ability distribution and number of categories. The number of categories influenced the psychological distance between categories, particularly for the 7-point scale: as the number of categories increases, so does the amount of deviation from the conventional category value, and the endpoints of the scale tend to move closer to its midpoint. When the converted scale values obtained under the different ability distributions were compared, the deviation from the conventional category value varied slightly for every number of categories, but these differences did not follow a systematic pattern. Overall, the degree to which the assumption is violated increases as the number of categories increases.
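
To make the estimation target concrete, the sketch below illustrates the kind of data-generation step described above: simulating responses to a single polytomous item under the generalized partial credit model for examinees drawn from a normal ability distribution. This is a minimal illustration, not the study's procedure; the item parameters (a, b) and sample size are hypothetical, and the study itself generated its data sets with WinGen3. The deliberately uneven step parameters mimic the unequal psychological distances between categories that the study investigates.

```python
import numpy as np

rng = np.random.default_rng(0)

def gpcm_probs(theta, a, b):
    """Category response probabilities for one item under the
    generalized partial credit model (GPCM).

    theta : latent trait value of the respondent
    a     : item discrimination parameter
    b     : step (category boundary) parameters, one per boundary
    """
    # Cumulative sums of a*(theta - b_v); the term for category 0 is defined as 0.
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expz = np.exp(steps - steps.max())  # subtract the max for numerical stability
    return expz / expz.sum()

# Hypothetical item: 5 categories with unevenly spaced step parameters.
a = 1.2
b = [-1.6, -0.4, 0.3, 1.5]

# Respondents sampled from a standard normal ability distribution,
# analogous to one of the simulated conditions in the study.
thetas = rng.normal(0.0, 1.0, size=1000)
responses = np.array([rng.choice(len(b) + 1, p=gpcm_probs(t, a, b)) for t in thetas])

print(np.bincount(responses, minlength=len(b) + 1))  # observed category frequencies
```

Fitting the GPCM to responses like these (for example, with an IRT package) yields step-parameter estimates whose spacing can then be compared with the equally spaced values assumed by conventional 1-2-3-4-5 scoring.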

References

  • Anderson, L. W. (1988). Likert scales. In J. P. Keeves (Ed.), Educational research, methodology, and measurement: An international handbook (pp. 227-228). Pergamon.
  • Arvidsson, R. (2019). On the use of ordinal scoring scales in social life cycle assessment. The International Journal of Life Cycle Assessment, 24(3), 604-606.
  • Atılgan, H., & Saçkes, M. (2004). Ölçeklerin ikili ve çok kategorili puanlanmasının psikometrik özelliklerinin karşılaştırılması [Comparison of psychometric properties of dual and multi-category scoring of scales]. İnönü Üniversitesi Eğitim Fakültesi Dergisi, 5(7).
  • Balcı, A. (2010). Sosyal bilimlerde araştırma yöntem, teknik ve ilkeler [Research methods, techniques and principles in social sciences]. Ankara: PegemA.
  • Bendig, A. W. (1953). The reliability of self-ratings as a function of the amount of verbal anchoring and the number of categories on the scale. Journal of Applied Psychology, 37, 38-41.
  • Bendixen, M., & Sandler, M. (1995). Converting verbal scales to interval scales using
  • Blaikie, N. (2003). Analyzing quantitative data. London: SAGE Publications.
  • Brown, G., Wilding, R. E., & Coulter, R. L. (1991). Customer evaluation of retail salespeople using the SOCO scale: A replication extension and application. Journal of the Academy of Marketing Science, 9, 347-351.
  • Chang, L. (1994). A psychometric evaluation of four-point and six-point Likert-type scales in relation to reliability and validity. Applied Psychological Measurement, 18, 205-215.
  • Cicchetti, D. V., Showalter, D., & Tyrer, P. J. (1985). The effect of number of rating scale categories on levels of inter-rater reliability: A Monte-Carlo investigation. Applied Psychological Measurement, 9, 31-36.
  • Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.). London: RoutledgeFalmer.
  • Crask, M. R., & Fox, R. J. (1987). An exploration of the interval properties of three commonly used marketing research scales: A magnitude estimation approach. Journal of the Market Research Society, 29(3), 317-339.
  • Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. New Jersey: Lawrence Erlbaum Associates.
  • Erkuş, A. (2003). Psikometri üzerine yazılar [Writings on psychometry]. Türk Psikologlar Derneği.
  • Erkuş, A. (2012). Psikolojide ölçme ve ölçek geliştirme-1, temel kavramlar ve işlemler [Measurement and scale development in psychology-1, basic concepts and operations]. Ankara: PegemA.
  • Erkuş, A., Sanlı, N., Bağlı, M. T., & Güven, K. (2000). Öğretmenliğe ilişkin tutum ölçeği geliştirilmesi [Developing an attitude scale toward teaching as a profession]. Eğitim ve Bilim, 25(116), 27-32.
  • Ferrando, P. J. (2003). A Kernel density analysis of continuous typical-response scales. Educational and Psychological Measurement, 63, 809-824.
  • Fitzpatrick, A. R., & Yen, W. M. (2001). The effects of test length and sample size on the reliability and equating of tests composed of constructed-response items. Applied Measurement in Education, 14 (1), 31-57.
  • Han, K. T. (2007). WinGen: Windows software that generates IRT parameters and item responses. Applied Psychological Measurement, 31 (5), 457-459.
  • Hansen, J. P. (2003). CAN’T MISS-Conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation. Journal for Healthcare Quality, 25(4), 19-24.
  • Harwell, M. R., & Gatti, G. G. (2001). Rescaling ordinal data to interval data in educational research. Review of Educational Research, 71(1), 105-131.
  • Kan, A. (2009). Effect of scale response format on psychometric properties in teaching self-efficacy. Eurasian Journal of Educational Research, 34, 215-228.
  • Karasar, N. (2012). Bilimsel araştırma yöntemi: kavramlar, ilkeler, teknikler [Scientific research method: concepts, principles, techniques]. Nobel Yayın Dağıtım.
  • Kim, S., & Lee, W. (2004). IRT scale linking methods for mixed-format tests (ACT Research Report 2004-5). Iowa City, IA: ACT, Inc.
  • Komorita, S. S. (1963). Attitude content, intensity, and the neutral point on a Likert scale. Journal of Social Psychology, 61, 327-334.
  • Köklü, N. (1997). Tutumların ölçülmesi ve Likert tipi ölçeklerde kullanılan seçenekler [Measuring attitudes and options used in Likert-type scales]. Ankara Üniversitesi Eğitim Bilimleri Fakültesi Dergisi, 28(2).
  • Latorraca, R. (2018). Think aloud as a tool for implementing observational learning in the translation class, Perspectives, 26(5), 708-724, DOI: 10.1080/0907676X.2017.1407804
  • Leung, S-O. (2011). A comparison of psychometric properties and normality in 4-, 5-, 6-, and 11-point Likert scales. Journal of Social Service Research, 37(4), 412-421.
  • Liou, M., Cheng, P. E., & Johnson, E. G. (1997). Standard errors of the Kernel equating methods under the common-item design. Applied Psychological Measurement, 21(4), 349-369, DOI: 10.1177/01466216970214005.
  • Lozano, L. M., García-Cueto, E., & Muñiz, J. (2008). Effect of the number of response categories on the reliability and validity of rating scales. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 4(2), 73-79. http://dx.doi.org/10.1027/1614-2241.4.2.73
  • Masters, J. R. (1974). The relationship between number of response categories and reliability of Likert-type questionnaires. Journal of Educational Measurement, 11(1), 49-53.
  • Matell, M. S., & Jacoby, J. (1971). Is there an optimal number of alternatives for Likert scale items? Study 1: Reliability and validity. Educational and Psychological Measurement, 31, 657-674.
  • Moitra, S. D. (1990). Skewness and the beta distribution. Journal of the Operational Research Society, 41(10), 953-961.
  • Munshi, J. (2014). A method for constructing Likert scales. Social Science Research Network. https://doi.org/10.2139/ssrn.2419366
  • Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. Applied Psychological Measurement, 16, 159-176.
  • Ogasawara, H. (2001). Standard errors of item response theory equating/linking by response function methods. Applied Psychological Measurement, 25(1), 53-67.
  • Oskamp, S. (1977). Attitudes and opinions. Prentice-Hall.
  • Ostini, R., & Nering, M. L. (2006). Polytomous item response theory models. Sage.
  • Paek, I., & Young, M. J. (2005). Investigation of student growth recovery in a fixed item linking procedure with a fixed-person prior distribution for mixed-format test data. Applied Measurement in Education, 18(2), 199-215.
  • Pell, G. (2005). Use and misuse of Likert scales. Medical Education, 39(9), 970. https://doi.org/10.1111/j.1365-2929.2005.02237.x
  • Penfield, R. D., & Bergeron, J. M. (2005). Applying a weighted maximum likelihood latent trait estimator to the generalized partial credit model. Applied Psychological Measurement, 29(3), 218-233.
  • Pérez, J. G., Martín, M. D. M. L., García, C. G., & Granero, M. Á. S. (2016). Project management under uncertainty beyond beta: The generalized bicubic distribution. Operations Research Perspectives, 3, 67-76.
  • Pett, M. A. (1997). Nonparametric statistics for health care research. London: SAGE Publications.
  • Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104, 1-15.
  • Schaeffer, N. C., & Presser, S. (2003). The science of asking questions. Annual Review of Sociology, 29, 65-88. https://doi.org/10.1146/annurev.soc.29.110702.110112
  • Tate, R. L., Simpson, G. K., Soo, C. A., & Lane-Brown, A. T. (2011). Participation after acquired brain injury: Clinical and psychometric considerations of the Sydney Psychosocial Reintegration Scale (SPRS). Journal of Rehabilitation Medicine, 43(7), 609-618.
  • Tavşancıl, E. (2010). Tutumların ölçülmesi ve SPSS ile veri analizi [Measuring attitudes and data analysis with SPSS] (4. baskı). Nobel.
  • Tezbaşaran, A. (1997). Likert tipi ölçek geliştirme kılavuzu [Likert type scale development guide]. Türk Psikologlar Derneği.
  • Thorndike, R. (1997). Measurement and evaluation in psychology and education. Prentice-Hall.
  • Uyumaz, G., & Çokluk, Ö. (2016). An Investigation of Item Order and Rating Differences in Likert-Type Scales in Terms of Psychometric Properties and Attitudes of Respondents. Journal of Theoretical Educational Science, 9(3), 400-425. DOI: 10.5578/keg.10011
  • Völkl, K., & Korb, C. (2018). Deskriptive Statistik [Descriptive statistics]. Wiesbaden: Springer.
  • Wakita, T. (2004). The distance between categories in rating-scale method: Applying item response model to the assessment process. Japanese Journal of Psychology, 75, 331-338.
  • Wakita, T., Ueshima, N., & Noguchi, H. (2012). Psychological distance between categories in the Likert scale: Comparing different numbers of options. Educational and Psychological Measurement, 72(4), 533-546.
  • Weng, L-J. (2004). Impact of the number of response categories and anchor labels on coefficient alpha and test-retest reliability. Educational and Psychological Measurement, 64(6), 956-972.
  • Wu, C-H. (2007). An empirical study on the transformation of Likert scale data to numerical scores. Applied Mathematical Sciences, 1(58), 2851-2862.
There are 55 references in total.

Details

Primary Language: English
Section: Articles
Authors

Gözde Sırgancı 0000-0003-4824-5413

Gizem Uyumaz 0000-0003-0792-2289

Publication Date: September 3, 2021
Published in Issue: Year 2021, Volume: 8, Issue: 3

Cite

APA Sırgancı, G., & Uyumaz, G. (2021). Determining the Factors Affecting the Psychological Distance Between Categories in the Rating Scale. International Journal of Contemporary Educational Research, 8(3), 178-190. https://doi.org/10.33200/ijcer.858599


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

IJCER (International Journal of Contemporary Educational Research) ISSN: 2148-3868