Research Article

Analysis of the Multiraters Agreement with Log-Linear Models

Year 2021, Volume: 5, Issue: 2, 107-110, 30.09.2021
https://doi.org/10.30516/bilgesci.950797

Abstract

In this study, 87 digital panoramic images are classified by three raters to assess the accuracy of diagnosing peri-implant bone defects. The kappa coefficient among the three raters is 0.81, indicating almost perfect agreement. Log-linear agreement models are then applied to the data, and the best model is chosen according to model selection criteria. Using this model, the agreement parameter is estimated: the odds that the three raters make the same decision are 33 times the odds that they make different decisions. The results show that agreement coefficients express only the overall degree of agreement between raters; agreement models, on the other hand, provide a model equation for the raters, and more detailed and consistent results can be obtained by estimating both the agreement and association parameters.
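
For context, a common specification of such an agreement model, following Tanner and Young [12], adds a single agreement term to the independence log-linear model for the three-way table of classifications. The form below is only a sketch of the general structure; the paper's selected model may include additional association terms:

    \log m_{ijk} = \mu + \lambda_i^{(1)} + \lambda_j^{(2)} + \lambda_k^{(3)} + \delta \, I(i = j = k)

Here m_{ijk} is the expected number of images that raters 1, 2, and 3 assign to categories i, j, and k, and the indicator I(i = j = k) equals 1 only when all three raters choose the same category. On this reading, exp(\hat{\delta}) = 33 says that a full-agreement cell is expected to hold 33 times the count implied by the baseline terms alone.

As a minimal illustration (not the authors' code), the sketch below computes a multirater kappa and fits such a Poisson log-linear model in Python with statsmodels. Fleiss' kappa is assumed as the multirater coefficient, and the ratings are random placeholders rather than the study data:

    # Hypothetical sketch: multirater kappa and a log-linear agreement model.
    # The ratings below are random placeholders, not the 87-image study data.
    import itertools

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # One row per image, one column per rater; defect categories coded 0..2.
    rng = np.random.default_rng(0)
    ratings = rng.integers(0, 3, size=(87, 3))

    # Fleiss' kappa from the image-by-category count table.
    table, _ = aggregate_raters(ratings)
    print("kappa:", fleiss_kappa(table, method="fleiss"))

    # Collapse the ratings into a full 3x3x3 table of cell counts.
    cells = pd.DataFrame(list(itertools.product(range(3), repeat=3)),
                         columns=["r1", "r2", "r3"])
    counts = (pd.DataFrame(ratings, columns=["r1", "r2", "r3"])
              .value_counts().rename("count").reset_index())
    df = cells.merge(counts, how="left").fillna({"count": 0})

    # Poisson log-linear model: rater main effects plus an indicator for
    # exact three-way agreement (the delta term in the equation above).
    df["agree"] = ((df.r1 == df.r2) & (df.r2 == df.r3)).astype(int)
    fit = smf.glm("count ~ C(r1) + C(r2) + C(r3) + agree",
                  data=df, family=sm.families.Poisson()).fit()
    print("exp(delta):", np.exp(fit.params["agree"]))  # the paper reports 33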

References

  • [1] Agresti A. Categorical Data Analysis. New York: Wiley; 2002.
  • [2] Broemeling LD. Bayesian Biostatistics and Diagnostic Medicine. CRC Press; 2007.
  • [3] Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 1960; 20: 37-46.
  • [4] Fleiss J, Cohen J, Everitt BS. Large sample standard errors of kappa and weighted kappa. Psychological Bulletin 1969; 72: 323-327.
  • [5] Fleiss J, Cohen J. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement 1973; 33: 613-619.
  • [6] Gail MH, Benichou J. Encyclopedia of Epidemiologic Methods. 1st ed. New York: Wiley; 2000: 35-47.
  • [7] Kendall MG, Babington-Smith B. The problem of m rankings. The Annals of Mathematical Statistics 1939; 10(3): 275-287.
  • [8] Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159-174.
  • [9] Lawal HB. Categorical Data Analysis with SAS and SPSS Applications. London: Lawrence Erlbaum Associates; 2003.
  • [10] Saraçbaşı T. Agreement models for multiraters. Turkish Journal of Medical Sciences 2011; 41(5): 939-944.
  • [11] Siegel S. Nonparametric Statistics for the Behavioral Sciences. New York: McGraw-Hill; 1956.
  • [12] Tanner MA, Young MA. Modeling agreement among raters. Journal of the American Statistical Association 1985; 80: 175-180.
  • [13] Tanner MA, Young MA. Modeling ordinal scale disagreement. Psychological Bulletin 1985; 98: 408-415.
  • [14] Uebersax JS. Modeling approaches for the analysis of observer agreement. Investigative Radiology 1992; 27(9): 738-743.

Details

Primary Language: English
Section: Research Articles
Authors

Gokcen Altun (ORCID: 0000-0003-4311-6508)

Publication Date: September 30, 2021
Acceptance Date: July 8, 2021
Published Issue: Year 2021, Volume: 5, Issue: 2

Cite

APA Altun, G. (2021). Analysis of the Multiraters Agreement with Log-Linear Models. Bilge International Journal of Science and Technology Research, 5(2), 107-110. https://doi.org/10.30516/bilgesci.950797