IJRCS – Volume 4 Issue 1 Paper 3


Authors: Vaijayanthi Murugan | Karthick M

Volume 04 | Issue 01 | Year 2017 | ISSN No: 2349-3828 | Pages 9-13



Abstract: Software defects, commonly known as bugs, pose a serious challenge for software developers, who must predict them in order to enhance system reliability and dependability. A defect typically manifests as an incorrect output value, an exception raised in the source code, or a failure caused by logical or syntax errors. As a program grows and comes to contain a large number of methods, bugs become more common and harder to fix, and predicting them in individual methods is time-consuming. Many techniques have therefore been developed that focus on method-level bug prediction, and several features are commonly used for it. To identify the best set of features, this work proposes Filter Based Feature Selection (FBFS) using Information Gain: an Information Gain value is calculated to estimate each individual feature, and the most relevant features are then extracted for evaluation. Method-level bug prediction is carried out using a Support Vector Machine (SVM) classifier. Finally, the performance of the bug prediction models is measured using Precision, Recall, and F-measure, and the volume of predicted bugs can be assessed from these evaluation measures.
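The two computational steps the abstract names, ranking features by Information Gain and scoring predictions with Precision/Recall/F-measure, can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: the feature names (`loc_hi`, `authors`) and the toy data are hypothetical, and a real pipeline would more likely use a library such as scikit-learn for both the feature selection and the SVM classifier.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(F) = H(Y) - sum over values v of P(F=v) * H(Y | F=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

def precision_recall_f1(y_true, y_pred):
    """Precision, Recall, and F-measure for binary predictions (1 = buggy)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical binarized method-level metrics and bug labels (toy data).
bug     = [1, 1, 0, 0, 1, 0, 0, 1]   # 1 = method is buggy
loc_hi  = [1, 1, 0, 0, 1, 0, 0, 1]   # e.g. "lines of code above median"
authors = [0, 1, 0, 1, 0, 1, 0, 1]   # e.g. "more than one author touched it"

# loc_hi matches the labels exactly, so it carries maximal information;
# authors tells us nothing about bugginess.
print(information_gain(loc_hi, bug))    # 1.0
print(information_gain(authors, bug))   # 0.0
```

In the FBFS scheme described above, features would be ranked by these Information Gain values and only the top-ranked ones passed to the SVM classifier; `precision_recall_f1` then scores the classifier's predictions against the known bug labels.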


Keywords: Bug prediction, precision, recall, F-measure, method-level, information gain, accuracy, SVM classifier.

