Comparison of Agreement Statistics in the Case of Multiple Raters and a Categorical Diagnostic Test: A Simulation Study
Abstract
Keywords
Details
Primary Language
Turkish
Subjects
-
Journal Section
-
Authors
E. Arzu Kanık
Gülhan Örekici Temel
Semra Erdoğan
İrem Ersöz Kaya
Publication Date
August 1, 2012
Submission Date
February 20, 2015
Acceptance Date
-
Published in Issue
Year: 2012, Volume: 19, Number: 4