Research Article

Nesne Tabanlı Ölçütlerle Yazılım Hata Kestirimi Örnek Bir Olay İncelemesi

Year 2022, Volume: 34 Issue: 1, 181 - 192, 20.03.2022
https://doi.org/10.35234/fumbd.997863

Abstract

Although measuring the quality of software projects makes it possible to detect problems at an early stage, it is often neglected and allotted neither time nor budget. Being able to measure software quality and express it numerically gives insight into quality attributes such as maintainability, reusability, testability, efficiency, portability, functionality, and understandability. One of the important factors affecting software quality is the number of defects in the source code, so identifying source-code defects early in development is essential. In this study, the accuracy of software defect prediction was analyzed on 50 open-source educational projects. Software quality metrics were measured with the static code analysis tool "Understand", and defect counts were determined with the tool "SpotBugs". Stepwise linear regression analysis was applied to derive the relationship between software quality metrics and defects, and the accuracy of the results was assessed using leave-one-out cross-validation (LOOCV). The prediction accuracy obtained is consistent with values reported in the literature. The RFC metric was observed to have no effect on defect prediction. The study shows that software quality metrics can be used to predict the number of software defects.
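The pipeline the abstract describes — score candidate metric subsets with leave-one-out cross-validation and grow the linear model stepwise — can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the synthetic data, the MAE scoring criterion, and the metric names (WMC, CBO, RFC from the Chidamber–Kemerer suite) are assumptions for demonstration only.

```python
import numpy as np

def loocv_mae(X, y):
    """Mean absolute error of an ordinary least-squares model under LOOCV."""
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        # Fit with an intercept column on all rows except the held-out one.
        A = np.c_[np.ones(mask.sum()), X[mask]]
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.r_[1.0, X[i]] @ coef
        errs.append(abs(pred - y[i]))
    return float(np.mean(errs))

def forward_stepwise(X, y, names):
    """Greedily add the metric that most reduces LOOCV MAE; stop when none helps."""
    selected, remaining, best = [], list(range(X.shape[1])), np.inf
    while remaining:
        score, j = min((loocv_mae(X[:, selected + [k]], y), k) for k in remaining)
        if score >= best:
            break
        best = score
        selected.append(j)
        remaining.remove(j)
    return [names[k] for k in selected], best

# Synthetic stand-in for the study's data: defects depend on WMC and CBO,
# while RFC carries no signal (mirroring the abstract's finding about RFC).
rng = np.random.default_rng(0)
wmc, cbo, rfc = rng.uniform(1, 50, (3, 50))
defects = 0.4 * wmc + 0.2 * cbo + rng.normal(0, 1, 50)
chosen, mae = forward_stepwise(np.c_[wmc, cbo, rfc], defects,
                               ["WMC", "CBO", "RFC"])
print(chosen, mae)  # RFC, carrying no signal, is typically rejected
```

Because LOOCV scores each candidate on held-out points, an uninformative metric rarely improves the criterion and so drops out of the model, which is the mechanism by which a metric such as RFC can end up with no role in the final predictor.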

References

  • [1] Raees M. Study of software quality improvement through reliability metrics models and root cause analysis program. International Journal of Computer Engineering and Information Technology 2020; 12(6): 42-47.
  • [2] Jones C. Quantifying Software: Global and Industry Perspectives. CRC Press, 2017.
  • [3] Fenton N, Neil M. A critique of software defect prediction models. IEEE Transactions on Software Engineering 1999; 25(5), 675-689.
  • [4] Delsing J. IoT Automation-Arrowhead Framework. CRC Press: Taylor & Francis Group, 2017.
  • [5] D'Ambros M, Lanza M, Robbes R. Evaluating defect prediction approaches: a benchmark and an extensive comparison. Empirical Software Engineering 2012; 17: 531-577.
  • [6] Moser R, Pedrycz W, Succi G. A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction. In: ICSE '08: Proceedings of the 30th International Conference on Software Engineering; 10-18 May, 2008; Leipzig, Germany. 181-190.
  • [7] Kim S, Zimmermann T, Whitehead E.J, Zeller A. Predicting faults from cached history. Proceedings of the 29th International Conference on Software Engineering (ICSE'07); 20-26 May, 2007; Minneapolis, MN, USA. 489-498.
  • [8] Basili V.R, Briand L.C, Melo W.L. A validation of object-oriented design metrics as quality indicators. IEEE Transactions on Software Engineering 1996; 22(10): 751-761.
  • [9] Hassan A.E. Predicting Faults Using The Complexity of Code Changes. International Conference on Software Engineering IEEE Computer Society; 16-24 May, 2009; Vancouver, BC, Canada. 78- 88.
  • [10] Nuñez-Varela A.S, Pérez-Gonzalez H.G, Martínez-Perez F.E, Soubervielle-Montalvo C. Source code metrics: a systematic mapping study. Journal of Systems and Software 2017; 128: 164-197.
  • [11] Chidamber S, Kemerer C. A metrics suite for object-oriented design. IEEE Transactions on Software Engineering 1994; 20(6): 476-493.
  • [12] Erçelebi Ayyıldız T, Koçyiğit A. Correlations between problem and solution domain measures of open source software. Journal of the Faculty of Engineering and Architecture of Gazi University 2017; 32(3): 887– 900.
  • [13] Catolino G, Palomba F, De Lucia A, Ferrucci F, Zaidman A. Enhancing change prediction models using developer-related factors. Journal of Systems and Software 2018; 143: 14-28.
  • [14] Bailey C.T, Dingee W.L. A software study using Halstead metrics. Proceedings of the 1981 ACM Workshop/Symposium on Measurement and Evaluation of Software Quality 1981; 10: 189-197.
  • [15] McCabe T. A complexity measure. IEEE Transactions on Software Engineering 1976; 2(4): 308-320.
  • [16] Sarı Ö, Kalıpsız O. Using Data Analysis Methods for Software Bug Prediction, UYMS, https://pdfs.semanticscholar.org/f6f5/23ec28440cfe66096b4d4d1d62ca8f018db9.pdf. Published 2014. Accessed July 29, 2021.
  • [17] D'Ambros M, Lanza M, Robbes R. On the relationship between change coupling and software defects. 16th Working Conference on Reverse Engineering; 13-16 October, 2009; Lille, France. 135-144.
  • [18] Gyimóthy T, Ferenc R, Siket I. Empirical validation of object-oriented metrics on open source software for fault prediction. IEEE Transactions on Software Engineering 2005; 31(10): 897-910.
  • [19] Okutan A, Yıldız O.T. Software defect prediction using Bayesian networks. Empirical Software Engineering 2014; 19(1): 154-181.
  • [20] Lamba T, Kumar D, Mishra A.K. Comparative study of bug prediction techniques on software metrics. International Conference on Computing for Sustainable Global Development; 01-03 March, 2017; New Delhi, India. 207-211.
  • [21] Erçelebi Ayyıldız T, Erkal B. The effect of object-oriented metrics on software bug prediction. Journal of Information Systems and Management Research 2019; 1(1): 1-8.
  • [22] Yılmaz N, Tarhan A. A two-dimensional method for evaluating maintainability and reliability of open source software. Journal of the Faculty of Engineering and Architecture of Gazi University 2019; 34(4): 1807-1829.
  • [23] Perez-Riverol Y, Gatto L, Wang R, Sachsenberg T, Uszkoreit J, Veiga Leprevost F, Fufezan C, Ternent T, et al. Ten simple rules for taking advantage of git and github. PLoS Computational Biology 2016; 12(7).
  • [24] Durmuş G, Soğukpınar İ. A novel approach for analyzing buffer overflow vulnerabilities in binary executables by using machine learning techniques. Journal of the Faculty of Engineering and Architecture of Gazi University 2019; 34(4): 1695-1704.
  • [25] Understand SCI Tool, https://www.scitools.com/features. Accessed July 25, 2021.
  • [26] Gezici B, Tarhan A, Chouseinoglou O. Complexity, size and internal quality in the evolution of mobile applications: An exploratory study. Journal of the Faculty of Engineering and Architecture of Gazi University 2019; 34(3): 1483-1500.
  • [27] Choudhary G.R, Kumar S., Kumar K, Mishra A, Catal C. Empirical analysis of change metrics for software fault prediction. Computers & Electrical Engineering 2018; 67: 15-24.
  • [28] Breesam K.M. Metrics for object-oriented design focusing on class inheritance metrics. 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX '07); 14-16 June, 2007; Szklarska Poręba, Poland.
  • [29] Subramanyam R, Krishnan M.S. Empirical analysis of CK metrics for object-oriented design complexity: implications for software defects. IEEE Transactions on Software Engineering 2003; 29(4): 297-310.
  • [30] Spotbugs. http://spotbugs.github.io. Accessed July 26, 2021.
  • [31] Wong T. Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation. Pattern Recognition 2015; 48(9): 2839-2846.
  • [32] Montgomery D.C, Runger G.C. Applied Statistics and Probability for Engineers. 6th Edition. John Wiley & Sons, 2014.
  • [33] Silhavy R, Silhavy P, Prokopova Z. Analysis and selection of a regression model for the use case points method using a stepwise approach. Journal of Systems and Software 2017; 125: 1-14.
  • [34] Tabachnick B.G, Fidell L.S. Using Multivariate Statistics. 2nd Edition. HarperCollins, New York, 1989.
  • [35] Conte S, Dunsmore H.E, Shen V.Y. Software Engineering Metrics and Models. Benjamin/Cummings, Menlo Park, 1986.
  • [36] Tate G, Verner J. Software Costing in Practice, The Economics of Information Systems and Software. Oxford, Butterworth-Heinemann, 1991.
  • [37] Willmott C.J. Some comments on the evaluation of model performance. Bulletin of the American Meteorological Society 1982; 63(11): 1309-1313.
  • [38] Vashisht V, Lal M, Sureshchandar G.S. Defect prediction framework using neural networks for software enhancement projects. Journal of Advances in Mathematics and Computer Science 2016; 16(5): 1-12.
  • [39] Sharaf M, Hemdan E.E.D, El-Sayed A, El-Bahnasawy N.A. StockPred: a framework for stock Price prediction. Multimedia Tools and Applications 2021; 1-32.

Details

Primary Language Turkish
Subjects Engineering
Journal Section MBD
Authors

Begüm Erkal 0000-0002-2170-2162

Tülin Erçelebi Ayyıldız 0000-0002-7372-0223

Publication Date March 20, 2022
Submission Date September 20, 2021
Published in Issue Year 2022 Volume: 34 Issue: 1

Cite

APA Erkal, B., & Erçelebi Ayyıldız, T. (2022). Nesne Tabanlı Ölçütlerle Yazılım Hata Kestirimi Örnek Bir Olay İncelemesi. Fırat Üniversitesi Mühendislik Bilimleri Dergisi, 34(1), 181-192. https://doi.org/10.35234/fumbd.997863