Research Article

Data Mining based Inferences about Software Parameters

Year 2021, Volume 3, Issue 3, 9-24, 31.12.2021
https://doi.org/10.46740/alku.985839

Abstract

To date, several criteria (software parameters) have been proposed for measuring and evaluating software development projects: productivity, engagement, attention to quality, code-base knowledge and management, adherence to coding guidelines and techniques, learning and skills, personal responsibility, and so on. However, there is no universally accepted criterion or methodology for measuring and evaluating software development projects. To address this gap in the literature, background research was carried out on "Software Development Projects", "Software Development Process", and "Software Development Measurement and Evaluation". From this literature study, a common criteria set for the measurement and evaluation of software development projects was compiled and presented. In addition, information was gathered from 105 software experts (software analysts, software developers, and managers) working at 55 different software companies, in order to evaluate how the common criteria are used in real work life and to identify criteria that are used in practice but had not appeared in earlier research. Accordingly, 12 inferences were drawn using a data mining algorithm, the "Association Rule Mining Apriori Algorithm", and based on them a measurement and evaluation criteria set (software parameters) for software development projects was created. This set consists of 10 software parameters and 6 distinct pairwise relationships. In light of these data, the designed and developed software parameters achieved a high level of validity, with an accuracy rate above 75 percent. With such a high validity rate, the study is expected to have a positive effect on software engineering by shedding light on its working domain, software development.
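For readers unfamiliar with the method named in the abstract, the sketch below illustrates how Apriori-style association rule mining derives inferences of the form X -> Y from survey responses, where confidence(X -> Y) = support(X u Y) / support(X). The transaction data, parameter names, and thresholds are invented for illustration; they are not the study's actual survey data, parameter set, or settings.

    from itertools import combinations

    # Hypothetical survey responses: each transaction lists the software
    # parameters one respondent reported using (names are illustrative,
    # not the paper's actual parameter set).
    transactions = [
        {"productivity", "engagement", "attention_to_quality"},
        {"productivity", "code_base_knowledge", "attention_to_quality"},
        {"engagement", "personal_responsibility"},
        {"productivity", "attention_to_quality", "coding_guidelines"},
        {"productivity", "engagement", "attention_to_quality", "coding_guidelines"},
    ]

    MIN_SUPPORT = 0.4      # itemset must appear in at least 40% of responses
    MIN_CONFIDENCE = 0.75  # rule must hold in at least 75% of matching responses

    def support(itemset, transactions):
        """Fraction of transactions containing every item in the itemset."""
        hits = sum(1 for t in transactions if itemset <= t)
        return hits / len(transactions)

    def apriori(transactions, min_support):
        """Return all frequent itemsets, level by level (classic Apriori)."""
        items = {frozenset([i]) for t in transactions for i in t}
        frequent = {s for s in items if support(s, transactions) >= min_support}
        all_frequent = set(frequent)
        k = 2
        while frequent:
            # Candidate generation: join frequent (k-1)-itemsets into k-itemsets.
            candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
            frequent = {c for c in candidates if support(c, transactions) >= min_support}
            all_frequent |= frequent
            k += 1
        return all_frequent

    def rules(frequent_itemsets, transactions, min_confidence):
        """Yield rules X -> Y with confidence = support(X u Y) / support(X)."""
        for itemset in frequent_itemsets:
            if len(itemset) < 2:
                continue
            for r in range(1, len(itemset)):
                for antecedent in map(frozenset, combinations(itemset, r)):
                    consequent = itemset - antecedent
                    conf = support(itemset, transactions) / support(antecedent, transactions)
                    if conf >= min_confidence:
                        yield antecedent, consequent, conf

    for lhs, rhs, conf in rules(apriori(transactions, MIN_SUPPORT), transactions, MIN_CONFIDENCE):
        print(f"{set(lhs)} -> {set(rhs)} (confidence {conf:.2f})")

In the study's setting, each transaction would correspond to one expert's reported criteria, and high-confidence rules between two parameters would correspond to the dual relationships mentioned in the abstract. Note that the 75 percent figure in the abstract is a validation accuracy rate; it is only loosely analogous to the confidence threshold used in this sketch.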

References

  • Ellis, B. (1968). Basic concepts of measurement. Cambridge, United Kingdom: Cambridge University Press.
  • Pawson, R., & Tilley, N. (1994). What works in evaluation research? The British Journal of Criminology, 34(3), 291-306.
  • Gallivan, M. J. (1998). The influence of system developers’ creative style on their attitudes toward and assimilation of a software process innovation. Thirty-First Hawaii International Conference on System Sciences, 6-9 January, Kohala Coast, 435-444.
  • Sawyer, S., & Guinan, P. J. (1998). Software development: Processes and performance. IBM Systems Journal, 37(4), 552-569.
  • Hall, T., Wilson, D., Rainer, A., & Jagielska, D. (2007). The neglected technical skill? ACM SIGMIS CPR Conference on Computer Personnel Research: The Global Information Technology Workforce, 19-21 April, St. Louis Missouri, 196-202.
  • Baggelaar, H. (2008). Evaluating programmer performance: Visualizing the impact of programmers on project goals. M.Sc. thesis, University of Amsterdam, Amsterdam.
  • Lee, K., Joshi, K., & Kim, Y. (2008). Person-job fit as a moderator of the relationship between emotional intelligence and job performance. ACM SIGMIS CPR Conference on Computer Personnel Doctoral Consortium and Research, 3-5 April, Charlottesville VA, 70-75.
  • Thing, C. (2008). The application of the function point analysis in software developers’ performance evaluation. 4th International Conference on Wireless Communications, Networking and Mobile Computing, 12-17 October, China, 1-4.
  • Zhang, S., Wang, Y., & Xiao, J. (2008). Mining individual performance indicators in collaborative development using software repositories. 15th Asia-Pacific Software Engineering Conference, 3-5 December, China, 247-254.
  • Calikli, G., & Bener, A. (2010). Empirical analyses of the factors affecting confirmation bias and the effects of confirmation bias on software developer/tester performance. 6th International Conference on Predictive Models in Software Engineering, 12-13 September, Romania, no. 10.
  • Chilton, M. A., Hardgrave, B. C., & Armstrong, D. J. (2010). Performance and strain levels of IT workers engaged in rapidly changing environments: A person-job fit perspective. ACM SIGMIS Database, 41(1), 8-35.
  • Ramler, R., Klammer, C., & Natschläger, T. (2010). The usual suspects: A case study on delivered defects per developer. ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, 16-17 September, Italy, no. 48.
  • Wang, Y., & Zhang, M. (2010). Penalty policies in professional software development practice: A multi-method field study. 32nd ACM/IEEE International Conference on Software Engineering, 1 May, Cape Town South Africa, 39-47.
  • Balijepally, V., Nerur, S., & Mahapatra, R. (2012). Effect of task mental models on software developer’s performance: An experimental investigation. 45th Hawaii International Conference on System Science, 4-7 January, Hawaii, 5442-5451.
  • Duarte, C. B., Faria, J. P., & Raza, M. (2012). PSP PAIR: Automated personal software process performance analysis and improvement recommendation. Eighth International Conference on the Quality of Information and Communications Technology, 3-6 September, Portugal, 131-136.
  • Ehrlich, K., & Cataldo, M. (2012). All-for-one and one-for-all?: A multi-level analysis of communication patterns and individual performance in geographically distributed software development. ACM 2012 Conference on Computer Supported Cooperative Work, 11-15 February, Washington, 945-954.
  • Kelly, B., & Haddad, H. M. (2012). Metric techniques for maintenance programmers in a maintenance ticket environment. Journal of Computing Sciences in Colleges, 28(2), 170-178.
  • Schröter, A., Aranda, J., Damian, D., & Kwan, I. (2012). To talk or not to talk: Factors that influence communication around changesets. ACM 2012 Conference on Computer Supported Cooperative Work, 11-15 February, Washington, 1317-1326.
  • Westermann, D. (2012). A generic methodology to derive domain-specific performance feedback for developers. 34th International Conference on Software Engineering, 2-9 June, Zurich, 1527-1530.
  • Calikli, G., & Bener, A. (2013). An algorithmic approach to missing data problem in modeling human aspects in software development. 9th International Conference on Predictive Models in Software Engineering, 9 October, Baltimore Maryland, no. 10.
  • Mining frequent itemsets - Apriori algorithm. (n.d.). Retrieved August 21, 2021, from http://software.ucv.ro/~cmihaescu/ro/teaching/AIR/docs/Lab8-Apriori.pdf
  • Hegland, M. (2007). The Apriori algorithm – a tutorial. Mathematics and Computation in Imaging Science and Information Processing, 209-262.
  • Al-Maolegi, M., & Arkok, B. (2014). An improved Apriori algorithm for association rules. International Journal on Natural Language Computing (IJNLC), 3(1), 21-29.
  • Borgelt, C., & Kruse, R. (2002). Induction of association rules: Apriori implementation. In W. Härdle & B. Rönz (Eds.), Compstat. Physica, Heidelberg. https://doi.org/10.1007/978-3-642-57489-4_59
  • Sun, D., Teng, S., Zhang, W., & Zhu, H. (2007). An algorithm to improve the effectiveness of Apriori. 6th IEEE International Conference on Cognitive Informatics, 6-8 August, Lake Tahoe, 385-390.
There are 25 references in total.

Details

Primary Language English
Subjects Engineering
Section Articles
Authors

Mustafa Batar 0000-0002-8231-6628

Kökten Birant 0000-0002-5107-6406

Publication Date December 31, 2021
Submission Date August 22, 2021
Acceptance Date December 6, 2021
Published in Issue Year 2021, Volume 3, Issue 3

Cite

APA Batar, M., & Birant, K. (2021). Data Mining based Inferences about Software Parameters. ALKÜ Fen Bilimleri Dergisi, 3(3), 9-24. https://doi.org/10.46740/alku.985839
AMA Batar M, Birant K. Data Mining based Inferences about Software Parameters. ALKÜ Fen Bilimleri Dergisi. December 2021;3(3):9-24. doi:10.46740/alku.985839
Chicago Batar, Mustafa, and Kökten Birant. “Data Mining Based Inferences about Software Parameters”. ALKÜ Fen Bilimleri Dergisi 3, no. 3 (December 2021): 9-24. https://doi.org/10.46740/alku.985839.
EndNote Batar M, Birant K (01 December 2021) Data Mining based Inferences about Software Parameters. ALKÜ Fen Bilimleri Dergisi 3 3 9–24.
IEEE M. Batar and K. Birant, “Data Mining based Inferences about Software Parameters”, ALKÜ Fen Bilimleri Dergisi, vol. 3, no. 3, pp. 9–24, 2021, doi: 10.46740/alku.985839.
ISNAD Batar, Mustafa - Birant, Kökten. “Data Mining Based Inferences about Software Parameters”. ALKÜ Fen Bilimleri Dergisi 3/3 (December 2021), 9-24. https://doi.org/10.46740/alku.985839.
JAMA Batar M, Birant K. Data Mining based Inferences about Software Parameters. ALKÜ Fen Bilimleri Dergisi. 2021;3:9–24.
MLA Batar, Mustafa, and Kökten Birant. “Data Mining Based Inferences about Software Parameters”. ALKÜ Fen Bilimleri Dergisi, vol. 3, no. 3, 2021, pp. 9-24, doi:10.46740/alku.985839.
Vancouver Batar M, Birant K. Data Mining based Inferences about Software Parameters. ALKÜ Fen Bilimleri Dergisi. 2021;3(3):9-24.